NIPS
Title Multi-objective Bayesian optimisation with preferences over objectives Abstract We present a multi-objective Bayesian optimisation algorithm that allows the user to express preference-order constraints on the objectives of the type “objective A is more important than objective B”. These preferences are defined based on the stability of the obtained solutions with respect to preferred objective functions. Rather than attempting to find a representative subset of the complete Pareto front, our algorithm selects those Pareto-optimal points that satisfy these constraints. We formulate a new acquisition function based on expected improvement in dominated hypervolume (EHI) to ensure that the subset of Pareto front satisfying the constraints is thoroughly explored. The hypervolume calculation is weighted by the probability of a point satisfying the constraints from a gradient Gaussian Process model. We demonstrate our algorithm on both synthetic and real-world problems. 1 Introduction In many real world problems, practitioners are required to sequentially evaluate a noisy black-box and expensive to evaluate function f with the goal of finding its optimum in some domain X. Bayesian optimisation is a well-known algorithm for such problems. There are a variety of studies such as hyperparameter tuning [27, 13, 12], expensive multi-objective optimisation for Robotics [2, 1], and experimentation optimisation in product design such as short polymer fiber materials [16]. Multi-objective Bayesian optimisation involves at least two conflicting, black-box, and expensive to evaluate objectives to be optimised simultaneously. Multi-objective optimisation usually assumes that all objectives are equally important, and solutions are found by seeking the Pareto front in the objective space [4, 5, 3]. However, in most cases, users can stipulate preferences over objectives. This information will impart on the relative importance on sections of the Pareto front. Thus using this information to preferentially sample the Pareto front will boost the efficiency of the optimiser, which is particularly advantageous when the objective functions are expensive. In this study, preferences over objectives are stipulated based on the stability of the solutions with respect to a set of objective functions. As an example, there are scenarios when investment strategists are looking for Pareto optimal investment strategies that prefer stable solutions for return (objective 1) but more diverse solutions with respect to risk (objective 2) as they can later decide their appetite for risk. As can be inferred, the stability in one objective produces more diverse solutions for the other objectives. We believe in many real-world problems our proposed method can be useful in order to reduce the cost, and improve the safety of experimental design. Whilst multi-objective Bayesian optimisation for sample efficient discovery of Pareto front is an established research track [9, 18, 8, 15], limited work has examined the incorporation of preferences. Recently, there has been a study [18] wherein given a user specified preferred region in objective space, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. ∂x have opposite signs since the weighted sum of gradients of the objectives with respect to x must be zero: sT ∂∂x f (x) = 0. In (b) we additionally require that || ∂f1(x) ∂x || > ||∂f0(x)∂x ||, so perturbation of x will cause relatively more change in f1 than f0 - i.e. 
such solutions are (relatively) stable in objective f0. (c) Shows the converse, namely ||∂f0(x)∂x || > || ∂f1(x) ∂x || favoring solutions that are (relatively) stable in objective f1 and diverse in f0. the optimiser focuses its sampling to derive the Pareto front efficiently. However, such preferences are based on the assumption of having an accurate prior knowledge about objective space and the preferred region (generally a hyperbox) for Pareto front solutions. The main contribution of this study is formulating the concept of preference-order constraints and incorporating that into a multi-objective Bayesian optimisation framework to address the unavailability of prior knowledge and boosting the performance of optimisation in such scenarios. We are formulating the preference-order constraints through ordering of derivatives and incorporating that into multi-objective optimisation using the geometry of the constraints space whilst needing no prior information about the functions. Formally, we find a representative set of Pareto-optimal solutions to the following multi-objective optimisation problem: D? ⊂ X? = argmax x∈X f (x) (1) subject to preference-order constraints - that is, assuming f = [f0, f1, . . . , fm], f0 is more important (in terms of stability) than f1 and so on. Our algorithm aims to maximise the dominated hypervolume of the solution in a way that the solutions that meet the constraints are given more weights. To formalise the concept of preference-order constraints, we first note that a point is locally Pareto optimal if any sufficiently small perturbation of a single design parameter of that point does not simultaneously increase (or decrease) all objectives. Thus, equivalently, a point is locally Pareto optimal if we can define a set of weight vectors such that, for each design parameter, the weighted sum of gradients of the objectives with respect to that design parameter is zero (see Figure 1a). Therefore, the weight vectors define the relative importance of each objective at that point. Figure 1b illustrates this concept where the blue box defines the region of stability for the function f0. Since in this section the magnitude of partial derivative for f0 is smaller compared to that of f1, the weights required to satisfy Pareto optimality would need higher weight corresponding to the gradient of f0 compared to that of f1 (see Figure 1b). Conversely, in Figure 1c, the red box highlights the section of the Pareto front where solutions have high stability in f1. To obtain samples from this section of the Pareto front, we need to make the weights corresponding to the gradient of f0 to be smaller to that of the f1. Our solution is based on understanding the geometry of the constraints in the weight space. We show that preference order constraints gives rise to a polyhedral proper cone in this space. We show that for the pareto-optimality condition, it necessitates the gradients of the objectives at pareto-optimal points to lie in a perpendicular cone to that polyhedral. We then quantify the posterior probability that any point satisfies the preference-order constraints given a set of observations. We show how these posterior probabilities may be incorporated into the EHI acquisition function [11] to steer the Bayesian optimiser toward Pareto optimal points that satisfy the preference-order constraint and away from those that do not. 2 Notation Sets are written A,B,C, . . . where R+ is the positive reals, R̄+ = R+ ∪ {0}, Z+ = {1, 2, . . .}, and Zn = {0, 1, . . . , n − 1}. 
|A| is the cardinality of the set A. Tuples (ordered sets) are denoted A,B,C, . . .. Distributions are denoted A,B, C, . . .. column vectors are bold lower case a,b, c, . . .. Matrices bold upper case A,B,C, . . .. Element i of vector a is ai, and element i, j of matrix A is Ai,j (all indexed i, j = 0, 1, . . .). The transpose is denoted aT,AT. I is the identity matrix, 1 is a vector of 1s, 0 is a vector of 0s, and ei is a vector e(i)j = δij , where δij is the Kronecker-Delta. ∇x = [ ∂∂x0 ∂ ∂x1 . . . ∂∂xn−1 ] T, sgn(x) is the sign of x (where sgn(0) = 0), and the indicator function is denoted as 1(A). 3 Background 3.1 Gaussian Processes Let X ⊂ Rn be compact. A Gaussian process [23] GP(µ,K) is a distribution on the function space f : X → R defined by mean µ : X → R (assumed zero without loss of generality) and kernel (covariance) K : X× X→ R. If f(x) ∼ GP(0,K(x,x′)) then the posterior of f given D = {(x(j), y(j)) ∈ Rn×R|y(j) = f(x(j))+ , ∼ N (0, σ2), j ∈ ZN}, f(x)|D ∼ N (µD(x), σD(x,x′)), where: µD (x) = k T (x) ( K + σ2I )−1 y σD (x,x ′) = K (x,x′)− kT (x) ( K + σ2I )−1 k (x′) (2) and y,k(x) ∈ R|D|, K ∈ R|D|×|D|, k(x)j = K(x,x(j)), Kjk = K(x(j),x(k)). Since differentiation is a linear operation, the derivative of a Gaussian process is also a Gaussian process [17, 22]. The posterior of ∇xf given D is∇xf(x)|D ∼ N (µ′D(x),σ′D(x,x′)), where: µ′D (x) = ( ∇xkT (x) ) ( K + σ2I )−1 y σ′D (x,x ′) = ∇x∇Tx′K (x,x′)− ( ∇xkT (x) ) (K + σ2i I) −1 (∇x′kT (x′))T (3) 3.2 Multi-Objective Optimisation A multi-objective optimisation problem has the form: argmax x∈X f (x) (4) where the components of f : X ⊂ Rn → Y ⊂ Rm represent the m distinct objectives fi : X→ R. X and Y are called design space and objective space, respectively. A Pareto-optimal solution is a point x? ∈ X for which it is not possible to find another solution x ∈ X such that fi(x) > fi(x ?) for all objectives f0, f1, . . . fm−1. The set of all Pareto optimal solutions is the Pareto set X? = {x? ∈ X|@x ∈ X : f (x) f (x?)} where y y′ (y dominates y′) means y 6= y′, yi ≥ y′i ∀i, and y y′ means y y′ or y = y′. Given observations D = {(x(j),y(j)) ∈ Rn × Rm|y(j) = f(x(j)) + , i ∼ N (0, σ2i )} of f the dominant set D∗ = { (x∗,y∗) ∈ D|@ (x,y) ∈ D : y y∗} is the most optimal subset of D (in the Pareto sense). The “goodness” of D is often measured by the dominated hypervolume (S-metric, [31, 10]) with respect to some reference point z ∈ Rm: S (D) = S (D∗) = ∫ y≥z 1 ( ∃y(i) ∈ D ∣∣y(i) y) dy. Thus our aim is to find the set D that maximises the hypervolume. Optimised algorithms exist for calculating hypervolume [29, 25], S(D), which is typically calculated by sorting the dominant observations along each axis in objective space to form a grid. Dominated hypervolume (with respect to z) is then the sum of the hypervolumes of the dominated cells (ck) - i.e. S (D) = ∑ k vol (ck) . 3.3 Bayesian Multi-Objective Optimisation In the multi-objective case one typically assumes that the components of f are draws from independent Gaussian processes, i.e. fi(x) ∼ GP(0,K(i)(x,x′)), and fi and fi′ are independent ∀i 6= i′. A popular acquisition function for multi-objective Bayesian optimisation is expected hypervolume improvement (EHI). The EHI acquisition function is defined by: at (x|D) = Ef(x)|D [S (D ∪ {(x, f (x))})− S (D)] (5) [26, 30] and represents the expected change in the dominated hypervolume by the set of observations based on the posterior Gaussian process. 
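To make (2) and (5) concrete, below is a minimal NumPy sketch (not the authors' implementation) of the two building blocks used throughout the paper: the Gaussian process posterior at a candidate point and a Monte Carlo estimate of EHI for two objectives under a maximisation convention. The RBF kernel, the 2-D hypervolume routine, and all function names are our own illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, ls=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, x, noise=1e-3):
    """Posterior mean and variance of a zero-mean GP at x, as in Eq. (2)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    k = rbf_kernel(X, x[None, :])                       # shape (N, 1)
    mu = (k.T @ np.linalg.solve(K, y)).item()
    var = (rbf_kernel(x[None, :], x[None, :]) - k.T @ np.linalg.solve(K, k)).item()
    return mu, max(var, 1e-12)

def hypervolume_2d(Y, z):
    """Hypervolume dominated by the rows of Y (maximisation) w.r.t. reference z."""
    Y = Y[np.all(Y > z, axis=1)]
    if len(Y) == 0:
        return 0.0
    hv, best_f1 = 0.0, z[1]
    for y0, y1 in Y[np.argsort(-Y[:, 0])]:              # sweep f0 in decreasing order
        if y1 > best_f1:                                # skip dominated points
            hv += (y0 - z[0]) * (y1 - best_f1)
            best_f1 = y1
    return hv

def ehi(x, X, Y, z, n_samples=200, seed=0):
    """Monte Carlo estimate of expected hypervolume improvement, Eq. (5)."""
    rng = np.random.default_rng(seed)
    base = hypervolume_2d(Y, z)
    stats = [gp_posterior(X, Y[:, i], x) for i in range(Y.shape[1])]
    gain = 0.0
    for _ in range(n_samples):
        y_draw = np.array([rng.normal(m, np.sqrt(v)) for m, v in stats])
        gain += hypervolume_2d(np.vstack([Y, y_draw]), z) - base
    return gain / n_samples

# Tiny usage example: three observations of a 1-D design with two objectives.
X = np.array([[0.1], [0.5], [0.9]])
Y = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
print(ehi(np.array([0.3]), X, Y, z=np.array([0.0, 0.0])))
```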
4 Problem Formulation Let f : X ⊂ Rn → Y ⊂ Rm be a vector of m independent draws fi ∼ GP(0,K(i)(x,x)) from zeromean Gaussian processes. Assume that f is expensive to evaluate. Our aim is to find a representative set of Pareto-optimal solutions to the following multi-objective optimisation problem: D? ⊂ X? = argmax x∈XI⊂X f (x) (6) subject to preference-order constraints. Specifically, we want to explore only that subset of solutions XI ⊂ X that place more importance on one objective fi0 than objective fi1 , and so on, as specified by the (ordered) preference tuple I = (i0, i1, . . . iQ|{i0, i1, . . .} ⊂ Zm, ik 6= ik′∀k 6= k′), where Q ∈ Zm is the number of defined preferences over objectives. 4.1 Preference-Order Constraints Let x? ∈ int(X)∩X? be a Pareto-optimal point in the interior ofX. Necessary (but not sufficient, local) Pareto optimality conditions require that, for all sufficiently small δx ∈ Rn, f(x? + δx) f(x), or, equivalently ( δxT∇x ) f (x?) /∈ Rm+ . A necessary (again not sufficient) equivalent condition is that, for each axis j ∈ Zn in design space, sufficiently small changes in xj do not cause all objectives to simultaneously increase (and/or remain unchanged) or decrease (and/or remain unchanged). Failure of this condition would indicate that simply changing design parameter xj could improve all objectives, and hence that x? was not in fact Pareto optimal. In summary, local Pareto optimality requires that ∀j ∈ Zn there exists s(j) ∈ R̄m+\{0} such that: sT(j) ∂ ∂xj f (x) = 0 (7) It is important to note that this is not the same as the optimality conditions that may be derived from linear scalarisation, as the optimality conditions that arrise from linear scalarisation additionally require that s(0) = s(1) = . . . = s(n−1). Moreover (7) applies to all Pareto-optimal points, whereas linear scalarisation optimisation conditions fail for Pareto points on non-convex regions [28]. Definition 1 (Preference-Order Constraints) Let I = (i0, i1, . . . iQ|{i0, i1, . . .} ⊂ Zm, ik 6= ik′∀k 6= k′) be an (ordered) preference tuple. A vector x ∈ X satisfies the associated preference-order constraint if ∃s(0), s(1), . . . , s(n−1) ∈ SI such that: sT(j) ∂ ∂xj f (x) = 0 ∀j ∈ Zn where SI , { s ∈ R̄m+\ {0} ∣∣ si0 ≥ si1 ≥ si2 ≥ . . .} . Further we define XI to be the set of all x ∈ X satisfying the preference-order constraint. Equivalently: XI = {x ∈ X| ∂∂xj f (x) ∈ S ⊥ I ∀j ∈ Zn} where S⊥I , { x ∈ X| ∃s ∈ SI, sTx = 0 } . It is noteworthy to mention that (7) and Definition 1 are the key for calculating the compliance of a recommended solution with the preference-order constraints. Having defined preference-order constraints we then calculate the posterior probability that x ∈ XI, and showing how these posterior probabilities may be incorporated into the EHI acquisition function to steer the Bayesian optimiser toward Pareto optimal points that satisfy the preference-order constraint. Before proceeding, however, it is necessary to briefly consider the geometry of SI and S⊥I . 4.2 The geometry of SI and S⊥I In the following we assume, w.l.o.g, that the preference-order constraints follows the order of indices in objective functions (reorder, otherwise), and that there is at least one constraint. We now define the preference-order constraints by assumption I = (0, 1, . . . , Q|Q ∈ Zm\{0}), where Q > 0. This defines the sets SI and S⊥I , which in turn define the constraints that must be met by the gradients of f(x) - either ∃s(0), s(1), . . . 
, s(n−1) ∈ SI such that sT(j) ∂ ∂xj f (x) = 0 ∀j ∈ Zn or, equivalently ∂∂xj f (x) ∈ S ⊥ I ∀j ∈ Zn. Next, Theorem 1 defines the representation of SI. Theorem 1 Let I = (0, 1, . . . , Q|Q ∈ Zm\{0}) be an (ordered) preference tuple. Define SI as per definition 1. Then SI is a polyhedral (finitely-generated) proper cone (excluding the origin) that may be represented using either a polyhedral representation: SI = { s ∈ Rm|aT(i)s ≥ 0∀i ∈ Zm } \ {0} (8) or a generative representation: SI = { ∑ i∈Zm ciã(i) ∣∣ c ∈ R̄m+ }\ {0} (9) where ∀i ∈ Zm: a(i) = { 1√ 2 (ei − ei+1) if i ∈ ZQ ei otherwise ã(i) = { 1√ i+1 ∑ l∈Zi+1 el if i ∈ ZQ+1 ei otherwise and e0, e1, . . . , em−1 are the Euclidean basis of Rm. Proof of Theorem 1 is available in the supplementary material. To test if a point satisfies this requirement we need to understand the geometry of the set SI. The Theorem 1 shows that SI∪{0} is a polyhedral (finitely generated) proper cone, represented either in terms of half-space constraints (polyhedral form) or as a positive span of extreme directions (generative representation). The geometrical intuition for this is given in Figure 2 for a simple, 2-objective case with a single preference order constraint. Algorithm 1 Test if v ∈ S⊥I . Input: Preference tuple I Test vector v ∈ Rm. Output: 1(v ∈ S⊥I ). // Calculate 1(v ∈ S⊥I ). Let bj = ãT(j)v ∀j ∈ Zm. if ∃i 6= k ∈ Zm : sgn(bi) 6= sgn(bk) return TRUE elseif b = 0 return TRUE else return FALSE. Algorithm 2 Preference-Order Constrained Bayesian Optimisation (MOBO-PC). Input: preference-order tuple I. Observations D = {(x(i),y(i)) ∈ X× Y}. for t = 0, 1, . . . , T − 1 do Select the test point: x = argmax x∈X aPEHIt (x|Dt). (aPEHIt is evaluated using algorithm 4). Perform Experiment y = f(x) + . Update Dt+1 := Dt ∪ {(x,y)}. end for Algorithm 3 Calculate Pr(x ∈ XI|D). Input: Observations D = {(x(i),y(i)) ∈ X× Y}. Number of Monte Carlo samples R. Test vector x ∈ X. Output: Pr(x ∈ XI|D). Let q = 0. for k = 0, 1, . . . , R− 1 do //Construct samples v(0),v(1), . . . ,v(n−1) ∈ Rm. Let v(j) = 0 ∀j ∈ Zn. for i = 0, 1, . . . ,m− 1 do Sample u ∼ N (µ′Di(x),σ′Di(x,x)) (see (3)). Let [v(0)i, v(1)i, . . . , v(n−1)i] := uT. end for //Test if v(j) ∈ S⊥I ∀j ∈ Zn. Let q := q + ∏ j∈Zn 1(v(j) ∈ S⊥I ) (see algo rithm 1). end for Return qR . Algorithm 4 Calculate aPEHIt (x|D). Input: Observations D = {(x(i),y(i)) ∈ X× Y}. Number of Monte Carlo samples R̃. Test vector x ∈ X. Output: aPEHIt (x|D). Using algorithm 3, calculate: sx = Pr (x ∈ XI|D) s(j) = Pr ( x(j) ∈ XI ∣∣D) ∀ (x(j),y(j)) ∈ D Let q = 0. for k = 0, 1, . . . , R̃− 1 do Sample yi ∼ N (µDi(x), σDi(x))) ∀i ∈ Zm (see (2)). Construct cells c0, c1, . . . from D∪ {(x,y)} by sorting along each axis in objective space to form a grid. Calculate: q = q+ sx ∑ k:y ỹck vol (ck) ∏ j∈ZN :y(j) ỹck ( 1− s(j) ) end for Return q/R̃. The subsequent corollary allows us to construct a simple algorithm (algorithm 1) to test if a vector v lies in the set S⊥I . We will use this algorithm to test if ∂ ∂xj f(x) ∈ S⊥I ∀j ∈ Zn - that is, if x satisfies the preference-order constraints. The proof of corollary 1 is available in the supplementary material. Corollary 1 Let I = (0, 1, . . . , Q|Q ∈ Zm\{0}) be an (ordered) preference tuple. Define S⊥I as per definition 1. Using the notation of Theorem 1, v ∈ S⊥I if and only if v = 0 or ∃i 6= k ∈ Zm such that sgn(ãT(i)v) 6= sgn(ã T (k)v), where sgn(0) = 0. 5 Preference Constrained Bayesian Optimisation In this section we do two things. 
First, we show how the Gaussian process models of the objectives fi (and their derivatives) may be used to calculate the posterior probability that x ∈ XI defined by I = (0, 1, . . . , Q|Q ∈ Zm\{0}). Second, we show how the EHI acquisition function may be modified and calculated to incorporate these probabilities and hence only reward points that satisfy the preference-order conditions. Finally, we give our algorithm using this acquisition function. 5.1 Calculating Posterior Probabilities Given that fi ∼ GP(0,K(i)(x,x)) are draws from independent Gaussian processes, and given observations D, we wish to calculate the posterior probability that x ∈ XI - i.e.: Pr (x ∈ XI|D) = Pr ( ∂ ∂xj f (x) ∈ S⊥I ∀j ∈ Zn ) . As fi ∼ GP(0,K(i)(x,x)) it follows that ∇xfi(x)|D ∼ Ni , N (µ′Di(x),σ′Di(x,x′)), as defined by (3). Hence: Pr (x ∈ XI|D) = Pr v(j) ∈ S⊥I ∀j ∈ Zn ∣∣∣∣∣∣∣∣ v(0)i v(1)i ... v(n−1)i ∼ Ni∀i ∈ Zm where v ∼ P (∇xf |D). We estimate it using Monte-Carlo [6] sampling as per algorithm 3. 5.2 Preference-Order Constrained Bayesian Optimisation Algorithm (MOBO-PC) Our complete Bayesian optimisation algorithm with Preference-order constraints is given in algorithm 2. The acquisition function introduced in this algorithm gives higher importance to points satisfying the preference-order constraints. Unlike standard EHI, we take expectation over both the expected experimental outcomes fi(x) ∼ N (µDi(x), σDi(x,x)), ∀i ∈ Zm, and the probability that points x(i) ∈ XI and x ∈ XI satisfy the preference-order constraints. We define our preference-based EHI acquisition function as: aPEHIt (x|D) = E [SI (D ∪ {(x, f (x))})− SI (D)|D] (10) where SI(D) is the hypervolume dominated by the observations (x,y) ∈ D satisfying the preference-order constraints. The calculation of SI(D) is illustrated in the supplementary material. The expectation of SI(D) given D is: E [SI (D)|D] = ∑ k vol (ck) Pr(∃ (x,y)∈D|y ỹck ∧ . . .x ∈ XI) . . . = ∑ k vol (ck) (1− ∏ (x,y)∈D:y ỹck (1− Pr (x ∈ XI|D))) where ỹck is the dominant corner of cell ck, vol(ck) is the hypervolume of cell ck, and the cells ck are constructed by sorting D along each axis in objective space. The posterior probabilities Pr(x ∈ XI|D) are calculated using algorithm 3. It follows that: aPEHIt (x|D) = Pr (x ∈ XI|D)E [ ∑ k:y ỹck vol (ck) ∏ j∈ZN :y(j) ỹck ( 1− Pr ( x(j) ∈ XI ∣∣D)) ∣∣∣yi ∼ . . . N (µDi (x) , σDi (x)) ∀i ∈ Zm ] where the cells ck are constructed using the set D ∪ {(x,y)} by sorting along the axis in objective space.We estimate this acquisition function using Monte-Carlo simulation shown in algorithm 4. 6 Experiments We conduct a series of experiments to test the empirical performance of our proposed method MOBO-PC and compare with other strategies. These experiments including synthetic data as well as optimizing the hyper-parameters of a feed-forward neural network. For Gaussian process, we use maximum likelihood estimation for setting hyperparameters [21]. 6.1 Baselines To the best of our knowledge there are no studies aiming to solve our proposed problem, however we are using PESMO, SMSego, SUR, ParEGO and EHI [9, 20, 19, 14, 7] to confirm the validity of the obtained Pareto front solutions. The obtained Pareto front must be in the ground-truth whilst also satisfying the preference-order constraints. We compare our results with MOBO-RS [18] by suitably specifying bounding boxes in the objective space that can replicate a preference-order constraint. 6.2 Synthetic Functions We begin with a comparison on minimising synthetic function Schaffer function N. 
1 with 2 conflicting objectives f0, f1 and 1-dimensional input. (see [24]). Figure 3a shows the ground-truth Pareto front for this function. To illustrate the behavior of our method, we impose distinct preferences. Three test cases are designed to illustrate the effects of imposing preference-order constraints on the objective functions for stability. Case (1): s0 ≈ s1, Case (2): s0 < s1 and Case (3): s0 > s1. For our method it is only required to define the preference-order constraints, however for MOBO-RS, additional information as a bounding box is obligatory. Figure 3b (case 1), shows the results of preference-order constraints SI , { s ∈ R̄m+\ {0} ∣∣ s0 ≈ s1} for our proposed method, where s0 represents the importance of stability in minimising f0 and s1 is the importance of stability in minimising f1. Due to same importance of both objectives, a balanced optimisation is expected. Higher weights are obtained for the Pareto front points in the middle region with highest stability for both objectives. Figure 3c (case 2) is based on the preference-order of s0 < s1 that implies the importance of stability in f1 is more than f0. The results show more stable Pareto points for f1 than f0. Figure 3d (case 3) shows the results of s0 > s1 preference-order constraint. As expected, we see more number of stable Pareto points for the important objective (i.e. f0 in this case). We defined two bounding boxes for MOBO-RS approach which can represent the preference-order constraints in our approach (Figure 3e and 3f). There are infinite possible bounding boxes can serve as constraints on objectives in such problems, consequently, the instability of results is expected across the various definitions of bounding boxes. We believe our method can obtain more stable Pareto front solutions especially when prior information is sparse. Also, having extra information as the weight (importance) of the Pareto front points is another advantage. Figure 4 illustrates a special test case in which s0 > s1 and s2 > s1, yet no preferences specified over f2 and f0 while minimising Viennet function. The proposed complex preference-order constraint does not form a proper cone as elaborated in Theorem 1. However, s0 > s1 independently constructs a proper cone, likewise for s2 > s1. Figure 4a shows the results of processing these two independent constraints separately, merging their results and finding the Pareto front. Figure 4b implies more stable solutions for f0 comparing to f1. Figure 4c shows the Pareto front points comply with s2 > s1. 6.3 Finding a Fast and Accurate Neural Network Next, we train a neural network with two objectives of minimising both prediction error and prediction time, as per [9]. These are conflicting objectives because reducing the prediction error generally involves larger networks and consequently longer testing time. We are using MNIST dataset and the tuning parameters include number of hidden layers (x1 ∈ [1, 3]), the number of hidden units per layer (x2 ∈ [50, 300]), the learning rate (x3 ∈ (0, 0.2]), amount of dropout (x4 ∈ [0.4, 0.8]), and the level of l1 (x5 ∈ (0, 0.1]) and l2 (x6 ∈ (0, 0.1]) regularization. For this problem we assume stability of f1(time) in minimising procedure is more important than the f0(error). For MOBO-RS method, we selected [[0.02, 0], [0.03, 2]] bounding box to represent an accurate prior knowledge (see Figure 5). The results were averaged over 5 independent runs. 
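For concreteness, the search space and preference used in this experiment could be encoded as simple box bounds plus an ordered preference tuple; the sketch below is our own illustrative assumption (the variable names and the small lower bounds used to approximate the open intervals are not from the paper).

```python
# Hypothetical encoding of the Section 6.3 search space and preference.
bounds = {
    "n_hidden_layers": (1, 3),        # x1 (integer)
    "units_per_layer": (50, 300),     # x2 (integer)
    "learning_rate":   (1e-6, 0.2),   # x3, approximating (0, 0.2]
    "dropout":         (0.4, 0.8),    # x4
    "l1_reg":          (1e-6, 0.1),   # x5, approximating (0, 0.1]
    "l2_reg":          (1e-6, 0.1),   # x6, approximating (0, 0.1]
}
# Objectives: f0 = prediction error, f1 = prediction time (both minimised).
# Preference-order constraint: stability of f1 (time) over f0 (error),
# i.e. the ordered preference tuple is I = (1, 0).
preference_order = (1, 0)
```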
Figure 5 illustrates that one can simply ask for more stable solutions with respect to the test time of a neural network (without any prior knowledge) while optimising the hyperparameters: all the solutions found with MOBO-PC lie in the (0, 5) range of test time. In addition, the proposed method appears to find a larger number of Pareto front solutions than MOBO-RS. 7 Conclusion In this paper we proposed a novel multi-objective Bayesian optimisation algorithm with preferences over objectives. We define objective preferences in terms of stability and formulate a common framework to focus on the sections of the Pareto front where the preferred objectives are more stable. We evaluate our method on both synthetic and real-world problems and show that the obtained Pareto fronts comply with the preference-order constraints. Acknowledgments This research was partially funded by the Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
1. What is the main contribution of the paper regarding multi-objective Bayesian optimization?
2. What are the weaknesses of the paper regarding its motivation and use cases?
3. How does the reviewer assess the clarity and alignment of the paper's content, particularly in the introduction and abstract?
4. What are the potential benefits of the proposed method in real-world applications, such as model search tasks?
5. How can the authors improve the paper to alleviate the reviewer's concerns?
Review
Review Summary: This paper proposes a method for multi-objective Bayesian optimization when a user has given “preference order constraints”, i.e. preferences about the importance of different objectives. For example, a user might specify that he or she wants to determine where, along the pareto front, a given objective varies significantly with respect to other objectives (which the authors term “diversity”) or when the objective is static with respect to other objectives (which they term “stability”). The authors give algorithms for this setting and show empirical results on synthetic functions and on a model search task. Comments: > My main criticism of this paper is that I am not convinced about the motivation for, and uses cases of, the described task of finding regions of the pareto front where an objective is “diverse” or “stable” as they are defined in the paper. There are two potential examples given in the introduction, but these are brief and unconvincing (another comment on these below). A real experiment is shown on a neural network model search task, but it is unclear how the method, when applied here, provides real benefits over other multi-objective optimization methods. More written discussion on the benefits and application of this method (for example in the model search task) could help alleviate this issue. > The three examples given in the introduction are: - A case where both objectives have constraints (precision>=0.8, recall>=0.7). - A case where we want diverse objective values along the pareto front. - A case where we want regions of the pareto front where a large change in one objective is required to obtain a small improvement in the other objective. Intuitively, these all seem to constrain the pareto front or prioritize regions of the pareto front over others. The abstract describes these as “constraints on the objectives of the type ‘objective A is more important than objective B’”. I feel that the introduction does not clearly describe how the description in the abstract aligns with the three examples given in the introduction. Is the argument that diversity/stability is a property that directly corresponds to the importance of an objective? It would be great if you could provide better clarity on this definition. > The dominated hypervolume is defined in section 2.2. It would be valuable to give some intuition about this quantity, in addition to the definition, in order to provide some clarity on how it will be used. ---------- Update after author response ---------- I want to thank the authors for their response. I believe the authors description of a couple real world examples are nice, but do not shed much light on the motivations for this method beyond the original submission. While appreciated, I will not change my score.
NIPS
Title Accelerated consensus via Min-Sum Splitting Abstract We apply the Min-Sum message-passing protocol to solve the consensus problem in distributed optimization. We show that while the ordinary Min-Sum algorithm does not converge, a modified version of it known as Splitting yields convergence to the problem solution. We prove that a proper choice of the tuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated convergence rates, matching the rates obtained by shift-register methods. The acceleration scheme embodied by Min-Sum Splitting for the consensus problem bears similarities with lifted Markov chains techniques and with multi-step first order methods in convex optimization. 1 Introduction Min-Sum is a local message-passing algorithm designed to distributedly optimize an objective function that can be written as a sum of component functions, each of which depends on a subset of the decision variables. Due to its simplicity, Min-Sum has emerged as canonical protocol to address large scale problems in a variety of domains, including signal processing, statistics, and machine learning. For problems supported on tree graphs, the Min-Sum algorithm corresponds to dynamic programming and is guaranteed to converge to the problem solution. For arbitrary graphs, the ordinary Min-Sum algorithm may fail to converge, or it may converge to something different than the problem solution [28]. In the case of strictly convex objective functions, there are known sufficient conditions to guarantee the convergence and correctness of the algorithm. The most general condition requires the Hessian of the objective function to be scaled diagonally dominant [28, 25]. While the Min-Sum scheme can be applied to optimization problems with constraints, by incorporating the constraints into the objective function as hard barriers, the known sufficient conditions do not apply in this case. In [34], a generalization of the traditional Min-Sum scheme has been proposed, based on a reparametrization of the original objective function. This algorithm is called Splitting, as it can be derived by creating equivalent graph representations for the objective function by “splitting” the nodes of the original graph. In the case of unconstrained problems with quadratic objective functions, where Min-Sum is also known as Gaussian Belief Propagation, the algorithm with splitting has been shown to yield convergence in settings where the ordinary Min-Sum does not converge [35]. To date, a theoretical investigation of the rates of convergence of Min-Sum Splitting has not been established. In this paper we establish rates of convergence for the Min-Sum Splitting algorithm applied to solve the consensus problem, which can be formulated as an equality-constrained problem in optimization. The basic version of the consensus problem is the network averaging problem. In this setting, each node in a graph is assigned a real number, and the goal is to design a distributed protocol that allows the nodes to iteratively exchange information with their neighbors so to arrive at consensus on the average across the network. Early work include [42, 41]. The design of distributed algorithms to solve the averaging problem has received a lot of attention recently, as consensus represents a widely-used primitive to compute aggregate statistics in a variety of fields. 
Applications include, for instance, estimation problems in sensor networks, distributed tracking and localization, multi-agents coordination, and distributed inference [20, 21, 9, 19]. Consensus is typically combined with some 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. form of local optimization over a peer-to-peer network, as in the case of iterative subgradient methods [29, 40, 17, 10, 6, 16, 39]. In large-scale machine learning, consensus is used as a tool to distribute the minimization of a loss function over a large dataset into a network of processors that can exchange and aggregate information, and only have access to a subset of the data [31, 11, 26, 3]. Classical algorithms to solve the network averaging problem involve linear dynamical systems supported on the nodes of the graph. Even when the coefficients that control the dynamics are optimized, these methods are known to suffer from a “diffusive” rate of convergence, which corresponds to the rate of convergence to stationarity exhibited by the “diffusion” random walk naturally associated to a graph [44, 2]. This rate is optimal for graphs with good expansion properties, such as complete graphs or expanders. In this case the convergence time, i.e., the number of iterations required to reach a prescribed level of error accuracy ε > 0 in the `2 norm relative to the initial condition, scales independently of the dimension of the problem, as Θ(log 1/ε). For graphs with geometry this rate is suboptimal [7], and it does not yield a convergence time that matches the lower bound Ω(D log 1/ε), where D is the graph diameter [37, 36]. For example, in both cycle graphs and in grid-like topologies the number of iterations scale like Θ(D2 log 1/ε) (if n is the number of nodes, D ∼ n in a cycle and D ∼ √ n in a two-dimensional torus). Θ(D2 log 1/ε) is also the convergence time exhibited in random geometric graphs, which represent the relevant topologies for many applications in sensor networks [9]. In [7] it was established that for a class of graphs with geometry (polynomial growth or finite doubling dimension), the mixing time of any reversible Markov chain scales at least like D2, embodying the fact that symmetric walks on these graphs take D2 steps to travel distances of orderD. Min-Sum schemes to solve the consensus problem have been previously investigated in [27]. The authors show that the ordinary Min-Sum algorithm does not converge in graphs with cycles. They investigate a modified version of it that uses a soft barrier function to incorporate the equality constrains into the objective function. In the case of d-regular graphs, upon a proper choice of initial conditions, the authors show that the algorithm they propose reduces to a linear process supported on the directed edges of the graph, and they characterize the convergence time of the algorithm in terms of the Cesàro mixing time of a Markov chain defined on the set of directed edges of the original graph. In the case of cycle graphs (i.e., d = 2), they prove that the mixing time scales like O(D), which yields the convergence time O(D/ε log 1/ε). See Theorem 4 and Theorem 5 in [27]. In the case of (d/2)-dimensional tori (D ∼ n2/d), they conjecture that the mixing time is Θ(D2(d−1)/d), but do not present bounds for the convergence time. See Conjecture 1 in [27]. For other graph topologies, they leave the mixing time (and convergence time) achieved by their method as an open question. 
In this paper we show that the Min-Sum scheme based on splitting yields convergence to the consensus solution, and we analytically establish rates of convergence for any graph topology. First, we show that a certain parametrization of the Min-Sum protocol for consensus yields a linear message-passing update for any graph and for any choice of initial conditions. Second, we show that the introduction of the splitting parameters is not only fundamental to guarantee the convergence and correctness of the Min-Sum scheme in the consensus problem, but that proper tuning of these parameters yields accelerated (i.e., “subdiffusive”) asymptotic rates of convergence. We establish a square-root improvement for the asymptotic convergence time over diffusive methods, which allows Min-Sum Splitting to scale like O(D log(D/ε)) for cycles and tori. Our results show that Min-Sum schemes are competitive and get close to the optimal rate O(D log(1/ε)) recently established for some algorithms based on Nesterov’s acceleration [30, 36]. The main tool used for the analysis involves the construction of an auxiliary linear process supported on the nodes of the original graph to track the evolution of the Min-Sum Splitting algorithm, which is instead supported on the directed edges. This construction allows us to relate the convergence time of the Min-Sum scheme to the spectral gap of the matrix describing the dynamics of the auxiliary process, which is easier to analyze than the matrix describing the dynamics on the edges as in [27]. In the literature, overcoming the suboptimal convergence rate of classical algorithms for network averaging consensus has motivated the design of several accelerated methods. Two main lines of research have been developed, and seem to have evolved independently of each others: one involves lifted Markov chains techniques, see [37] for a review, the other involves accelerated first order methods in convex optimization, see [13] for a review. Another contribution of this paper is to show that Min-Sum Splitting bears similarities with both types of accelerated methods. On the one hand, Min-Sum can be seen as a process on a lifted space, which is the space of directed edges in the original graph. Here, splitting is seen to introduce a directionality in the message exchange of the ordinary Min-Sum protocol that is analogous to the directionality introduced in non-reversible random walks on lifted graphs to achieve faster convergence to stationarity. The advantage of the Min-Sum algorithm over lifted Markov chain methods is that no lifted graph needs to be constructed. On the other hand, the directionality induced on the edges by splitting translates into a memory term for the auxiliary algorithm running on the nodes. This memory term, which allows nodes to remember previous values and incorporate them into the next update, directly relates the Min-Sum Splitting algorithm to accelerated multi-step first order methods in convex optimization. In particular, we show that a proper choice of the splitting parameters recovers the same matrix that support the evolution of shift-register methods used in numerical analysis for linear solvers, and, as a consequence, we recover the same accelerated rate of convergence for consensus [45, 4, 24]. To summarize, the main contributions of this paper are: 1. First connection of Min-Sum schemes with lifted Markov chains techniques and multi-step methods in convex optimization. 2. 
First proof of how the directionality embedded in Belief Propagation protocols can be tuned and exploited to accelerate the convergence rate towards the problem solution. 3. First analysis of convergence rates for Min-Sum Splitting. New proof technique based on the introduction of an auxiliary process to track the evolution of the algorithm on the nodes. 4. Design of a Min-Sum protocol for the consensus problem that achieves better convergence rates than the ones established (and conjectured) for the Min-Sum method in [27]. Our results motivate further studies to generalize the acceleration due to splittings to other problems. The paper is organized as follows. In Section 2 we introduce the Min-Sum Splitting algorithm in its general form. In Section 3 we describe the consensus problem and review the classical diffusive algorithms. In Section 4 we review the main accelerated methods that have been proposed in the literature. In Section 5 we specialize the Min-Sum Splitting algorithm to the consensus problem, and show that a proper parametrization yields a linear exchange of messages supported on the directed edges of the graph. In Section 6 we derive the auxiliary message-passing algorithm that allows us to track the evolution of the Min-Sum Splitting algorithm via a linear process with memory supported on the nodes of the graph. In Section 7 we state Theorem 1, which shows that a proper choice of the tuning parameters recovers the rates of shift-registers. Proofs are given in the supplementary material. 2 The Min-Sum Splitting algorithm The Min-Sum algorithm is a distributed routine to optimize a cost function that is the sum of components supported on a given graph structure. Given a simple graph G = (V,E) with n := |V | vertices and m := |E| edges, let us assume that we are given a set of functions φv : R→ R ∪ {∞}, for each v ∈ V , and φvw = φwv : R × R → R ∪ {∞}, for each {v, w} ∈ E, and that we want to solve the following problem over the decision variables x = (xv)v∈V ∈ RV : minimize ∑ v∈V φv(xv) + ∑ {v,w}∈E φvw(xv, xw). (1) The Min-Sum algorithm describes an iterative exchange of messages—which are functions of the decision variables—associated to each directed edge in G. Let E := {(v, w) ∈ V ×V : {v, w} ∈ E} be the set of directed edges associated to the undirected edges in E (each edge in E corresponds to two edges in E). In this work we consider the synchronous implementation of the Min-Sum algorithm where at any given time step s, each directed edge (v, w) ∈ E supports two messages, ξ̂svw, µ̂ s vw : R→ R ∪ {∞}. Messages are computed iteratively. Given an initial choice of messages µ̂0 = (µ̂0vw)(v,w)∈E , the Min-Sum scheme that we investigate in this paper is given in Algorithm 1. Henceforth, for each v ∈ V , let N (v) := {w ∈ V : {v, w} ∈ E} denote the neighbors of node v. The formulation of the Min-Sum scheme given in Algorithm 1, which we refer to as Min-Sum Splitting, was introduced in [34]. This formulation admits as tuning parameters the real number δ ∈ R and the symmetric matrix Γ = (Γvw)v,w∈V ∈ RV×V . Without loss of generality, we assume that the sparsity of Γ respects the structure of the graph G, in the sense that if {v, w} 6∈ E then Γvw = 0 (note that Algorithm 1 only involves summations with respect to nearest neighbors in the graph). The choice of δ = 1 and Γ = A, where A is the adjacency matrix defined as Avw := 1 if {v, w} ∈ E and Avw := 0 otherwise, yields the ordinary Min-Sum algorithm. 
For Algorithm 1: Min-Sum Splitting Input: Messages µ̂0 = (µ̂0vw)(v,w)∈E ; parameters δ ∈ R and Γ ∈ RV×V symmetric; time t ≥ 1. for s ∈ {1, . . . , t} do ξ̂swv = φv/δ − µ̂s−1wv + ∑ z∈N (v) Γzvµ̂ s−1 zv , (w, v) ∈ E ; µ̂swv = minz∈R{φvw( · , z)/Γvw + (δ − 1)ξ̂swv + δξ̂svw(z)}, (w, v) ∈ E ; µtv = φv + δ ∑ w∈N (v) Γwvµ̂ t wv, v ∈ V ; Output: xtv = arg minz∈R µtv(z), v ∈ V . an arbitrary choice of strictly positive integer parameters, Algorithm 1 can be seen to correspond to the ordinary Min-Sum algorithm applied to a new formulation of the original problem, where an equivalent objective function is obtained from the original one in (1) by splitting each term φvw into Γvw ∈ N \ {0} terms, and each term φv into δ ∈ N \ {0} terms. Namely, minimize∑ v∈V ∑δ k=1 φ k v(xv) + ∑ {v,w}∈E ∑Γvw k=1 φ k vw(xv, xw), with φ k v := φv/δ and φ k vw := φvw/Γvw. 1 Hence the reason for the name “splitting” algorithm. Despite this interpretation, Algorithm 1 is defined for any real choice of parameters δ and Γ. In this paper we investigate the convergence behavior of the Min-Sum Splitting algorithm for some choices of δ and Γ, in the case of the consensus problem that we define in the next section. 3 The consensus problem and standard diffusive algorithms Given a simple graph G = (V,E) with n := |V | nodes, for each v ∈ V let φv : R→ R ∪ {∞} be a given function. The consensus problem is defined as follows: minimize ∑ v∈V φv(xv) subject to xv = xw, {v, w} ∈ E. (2) We interpret G as a communication graph where each node represents an agent, and each edge represent a communication channel between neighbor agents. Each agent v is given the function φv , and agents collaborate by iteratively exchanging information with their neighbors in G with the goal to eventually arrive to the solution of problem (2). The consensus problem amounts to designing distributed algorithms to solve problem (2) that respect the communication constraints encoded by G. A classical setting investigated in the literature is the least-square case yielding the network averaging problem, where for a given b ∈ RV we have2 φv(z) := 12z 2 − bvz and the solution of problem (2) is b̄ := 1n ∑ v∈V bv. In this setup, each agent v ∈ V is given a number bv, and agents want to exchange information with their neighbors according to a protocol that allows each of them to eventually reach consensus on the average b̄ across the entire network. Classical algorithms to solve this problem involve a linear exchange of information of the form xt = Wxt−1 with x0 = b, for a given matrix W ∈ RV×V that respects the topology of the graph G (i.e., Wvw 6= 0 only if {v, w} ∈ E or v = w), so that W t → 11T /n for t → ∞, where 1 is the all ones vector. This linear iteration allows for a distributed exchange of information among agents, as at any iteration each agent v ∈ V only receives information from his/her neighbors N (v) via the update: xtv = Wvvx t−1 v + ∑ w∈N (v)Wvwx t−1 w . The original literature on this problem investigates the case where the matrix W has non-negative coefficients and represents the transition matrix of a random walk on the nodes of the graph G, so that Wvw is interpreted as the probability that a random walk at node v visits node w in the next time step. 
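As a concrete illustration (our own sketch, not from the paper), the following runs the classical diffusive update x^t = W x^(t-1) on a cycle graph, using the Metropolis-Hastings weights defined just below; the graph size, stopping rule, and initial values are arbitrary choices.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly-stochastic Metropolis-Hastings weight matrix W^MH for a graph."""
    deg = adj.sum(axis=1)
    dmax = deg.max()
    W = adj / (2.0 * dmax)                    # W_vw = 1/(2 d_max) on edges
    np.fill_diagonal(W, 1.0 - deg / (2.0 * dmax))
    return W

def diffusive_consensus(W, b, eps=1e-6, max_iter=100000):
    """Iterate x^t = W x^(t-1) until all entries are eps-close to the average."""
    x, avg = b.copy(), b.mean()
    for t in range(max_iter):
        if np.max(np.abs(x - avg)) < eps:
            return x, t
        x = W @ x
    return x, max_iter

n = 20
adj = np.zeros((n, n))
for v in range(n):                             # cycle graph C_n
    adj[v, (v + 1) % n] = adj[(v + 1) % n, v] = 1
b = np.random.default_rng(0).uniform(size=n)
x, iters = diffusive_consensus(metropolis_weights(adj), b)
print(iters, "iterations; max deviation from average:", np.max(np.abs(x - b.mean())))
```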
A popular choice is given by the Metropolis-Hastings method [37], which involved the doubly-stochastic matrix WMH defined as WMHvw := 1/(2dmax) if {v, w} ∈ E, WMHvw := 1− dv/(2dmax) if w = v, and WMHvw := 0 otherwise, where dv := |N (v)| is the degree of node v, and dmax := maxv∈V dv is the maximum degree of the graph G. 1As mentioned in [34], one can also consider a more general formulation of the splitting algorithm with δ → (δv)v∈V ∈ R (possibly also with time-varying parameters). The current choice of the algorithm is motivated by the fact that in the present case the output of the algorithm can be tracked by analyzing a linear system on the nodes of the graph, as we will show in Section 5. 2In the literature, the classical choice is φv(z) := 12 ∑ v∈V (z − bv) 2, which yields the same results as the quadratic function that we define in the main text, as constant terms in the objective function do not alter the optimal point of the problem but only the optimal value of the objective function. In [44], necessary and sufficient conditions are given for a generic matrixW to satisfyW t → 11T /n, namely, 1TW = 1T , W1 = 1, and ρ(W − 11T /n) < 1, where ρ(M) denotes the spectral radius of a given matrix M . The authors show that the problem of choosing the optimal symmetric matrix W that minimizes ρ(W − 11T /n) = ‖W − 11T /n‖— where ‖M‖ denotes the spectral norm of a matrix M that coincides with ρ(M) if M is symmetric — is a convex problem and it can be cast as a semi-definite program. Typically, the optimal matrix involves negative coefficients, hence departing from the random walk interpretation. However, even the optimal choice of symmetric matrix is shown to yield a diffusive rate of convergence, which is already attained by the matrix WMH [7]. This rate corresponds to the speed of convergence to stationarity achieved by the diffusion random walk, defined as the Markov chain with transition matrix diag(d)−1A, where diag(d) ∈ RV×V is the degree matrix, i.e., diagonal with diag(d)vv := dv, and A ∈ RV×V is the adjacency matrix, i.e., symmetric with Avw := 1 if {v, w} ∈ E, and Avw := 0 otherwise. For instance, the condition ‖W − 11T /n‖t ≤ ε, where ‖ · ‖ is the `2 norm, yields a convergence time that scales like t ∼ Θ(D2 log(1/ε)) in cycle graphs and tori [33], where D is the graph diameter. The authors in [7] established that for a class of graphs with geometry (polynomial growth or finite doubling dimension) the mixing time of any reversible Markov chain scales at least like D2, and it is achieved by Metropolis-Hastings [37]. 4 Accelerated algorithms To overcome the diffusive behavior typical of classical consensus algorithms, two main types of approaches have been investigated in the literature, which seem to have been developed independently. The first approach involves the construction of a lifted graph Ĝ = (V̂ , Ê) and of a linear system supported on the nodes of it, of the form x̂t = Ŵ x̂t−1, where Ŵ ∈ RV̂×V̂ is the transition matrix of a non-reversible Markov chain on the nodes of Ĝ. This approach has its origins in the work of [8] and [5], where it was observed for the first time that certain non-reversible Markov chains on properly-constructed lifted graphs yield better mixing times than reversible chains on the original graphs. For some simple graph topologies, such as cycle graphs and two-dimensional grids, the construction of the optimal lifted graphs is well-understood already from the works in [8, 5]. 
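To make the lifting idea concrete, the toy sketch below (our own illustration, following the cycle construction attributed to [8, 5]) compares a lazy diffusive walk on a cycle with a non-reversible walk on a lifted state space of (node, direction) pairs, where the walker keeps its direction and reverses only with probability 1/n; the total-variation distance of the node distribution to uniform is measured empirically, with no particular numbers guaranteed.

```python
import numpy as np

def tv_to_uniform(dist):
    """Total-variation distance between a distribution and the uniform one."""
    return 0.5 * np.abs(dist - 1.0 / dist.size).sum()

n, t = 30, 120
# Lazy diffusive walk on the cycle C_n.
P = np.zeros((n, n))
for v in range(n):
    P[v, v] = 0.5
    P[v, (v - 1) % n] = P[v, (v + 1) % n] = 0.25

# Lifted non-reversible walk on states (node, direction), direction in {+1, -1}:
# keep moving in the current direction w.p. 1 - 1/n, reverse w.p. 1/n.
Q = np.zeros((2 * n, 2 * n))
for v in range(n):
    for d, s in ((0, +1), (1, -1)):
        Q[2 * v + d, 2 * ((v + s) % n) + d] = 1.0 - 1.0 / n
        Q[2 * v + d, 2 * v + (1 - d)] = 1.0 / n

p = np.zeros(n); p[0] = 1.0                    # diffusive walk starts at node 0
q = np.zeros(2 * n); q[0] = 1.0                # lifted walk starts at (node 0, +1)
for _ in range(t):
    p, q = p @ P, q @ Q
q_nodes = q.reshape(n, 2).sum(axis=1)          # marginal of the lifted walk over nodes
print("diffusive TV:", round(tv_to_uniform(p), 3),
      " lifted TV:", round(tv_to_uniform(q_nodes), 3))
```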
A general theory of lifting in the context of Gossip algorithms has been investigated in [18, 37]. However, this construction incurs additional overhead, which yield non-optimal computational complexity, even for cycle graphs and two-dimensional grids. Typically, lifted random walks on arbitrary graph topologies are constructed on a one-by-one case, exploiting the specifics of the graph at hand. This is the case, for instance, for random geometric graphs [22, 23]. The key property that allows non-reversible lifted Markov chains to achieve subdiffusive rates is the introduction of a directionality in the process to break the diffusive nature of reversible chains. The strength of the directionality depends on global properties of the original graph, such as the number of nodes [8, 5] or the diameter [37]. See Figure 1. The second approach involves designing linear updates that are supported on the original graph G and keep track of a longer history of previous iterates. This approach relies on the fact that the original consensus update xt = Wxt−1 can be interpreted as a primal-dual gradient ascent method to solve problem (2) with a quadratic objective function [32]. This allows the implementation of accelerated gradient methods. To the best of our knowledge, this idea was first introduced in [14], and since then it has been investigated in many other papers. We refer to [13, 24], and references in there, for a review and comparison of multi-step accelerated methods for consensus. The simplest multi-step extension of gradient methods is Polyak’s “heavy ball,” which involves adding a “momentum” term to the standard update and yields a primal iterate of the form xt = Wxt−1 + γ(xt−1 − xt−2). Another popular multi-step method involves Nesterov’s acceleration, and yields xt = (1 + γ)Wxt−1 − γWxt−2. Aligned with the idea of adding a momentum term is the idea of adding a shift register term, which yields xt = (1 + γ)Wxt−1 − γxt−2. For our purposes, we note that these methods can be written as( xt xt−1 ) = K ( xt−1 xt−2 ) , (3) for a certain matrix K ∈ R2n×2n. As in the case of lifted Markov chains techniques, also multi-step methods are able to achieve accelerated rates by exploiting some form of global information: the choice of the parameter γ that yields subdiffusive rates depends on the eigenvalues of W . Remark 1. Beyond lifted Markov chains techniques and accelerated first order methods, many other algorithms have been proposed to solve the consensus problem. The literature is vast. As we focus on Min-Sum schemes, an exhaustive literature review on consensus is beyond the scope of our work. Of particular interest for our results is the distributed ADMM approach [3, 43, 38]. Recently in [12], for a class of unconstrained problems with quadratic objective functions, it has been shown that message-passing ADMM schemes can be interpreted as lifting of gradient descent techniques. This prompts for further investigation to connect Min-Sum, ADMM, and accelerated first order methods. In the next two sections we show that Min-Sum Splitting bears similarities with both types of accelerated methods described above. On the one hand, in Section 5 we show that the estimates xtv’s of Algorithm 1 applied to the network averaging problem can be interpreted as the result of a linear process supported on a lifted space, i.e., the space E of directed edges associated to the undirected edges of G. 
On the other hand, in Section 6 we show that the estimates xtv’s can be seen as the result of a linear multi-step process supported on the nodes of G, which can be written as in (3). Later on, in Section 7 and Section 8, we will see that the similarities just described go beyond the structure of the processes, and they extend to the acceleration mechanism itself. In particular, the choice of splitting parameters that yields subdiffusive convergence rates, matching the asymptotic rates of shift register methods, is also shown to depend on global information about G. 5 Min-Sum Splitting for consensus We apply Min-Sum Splitting to solve network averaging. We show that in this case the messagepassing protocol is a linear exchange of parameters associated to the directed edges in E . Given δ ∈ R and Γ ∈ RV×V symmetric, let ĥ(δ) ∈ RE be the vector defined as ĥ(δ)wv := bw + (1− 1/δ)bv , and let K̂(δ,Γ) ∈ RE×E be matrix defined as K̂(δ,Γ)wv,zu := δΓzw if u = w, z ∈ N (w) \ {v}, δ(Γvw − 1) if u = w, z = v, (δ − 1)Γzv if u = v, z ∈ N (v) \ {w}, (δ − 1)(Γwv − 1) if u = v, z = w, 0 otherwise. (4) Consider Algorithm 2 with initial conditions R̂0 = (R̂0vw)(v,w)∈E ∈ RE , r̂0 = (r̂0vw)(v,w)∈E ∈ RE . Algorithm 2: Min-Sum Splitting, consensus problem, quadratic case Input: R̂0, r̂0 ∈ RE ; δ ∈ R, Γ ∈ RV×V symmetric; K̂(δ,Γ) defined in (5); t ≥ 1. for s ∈ {1, . . . , t} do R̂s = (2− 1/δ)1 + K̂(δ,Γ)R̂s−1; r̂s = ĥ(δ) + K̂(δ,Γ)r̂s−1; Output: xtv := bv+δ ∑ w∈N(v) Γwv r̂ t wv 1+δ ∑ w∈N(v) ΓwvR̂ t wv , v ∈ V . Proposition 1. Let δ ∈ R and Γ ∈ RV×V symmetric be given. Consider Algorithm 1 applied to problem (2) with φv(z) := 12z 2−bvz and with quadratic initial messages: µ̂0vw(z) = 12 R̂ 0 vwz 2−r̂0vwz, for some R̂0vw > 0 and r̂ 0 vw ∈ R. Then, the messages will remain quadratic, i.e., µ̂svw(z) = 12 R̂ s vwz 2− r̂svwz for any s ≥ 1, and the parameters evolve as in Algorithm 2. If 1 + δ ∑ w∈N (v) ΓwvR̂ t wv > 0 for any v ∈ V and t ≥ 1, then the output of Algorithm 2 coincides with the output of Algorithm 1. 6 Auxiliary message-passing scheme We show that the output of Algorithm 2 can be tracked by a new message-passing scheme that corresponds to a multi-step linear exchange of parameters associated to the nodes of G. This auxiliary algorithm represents the main tool to establish convergence rates for the Min-Sum Splitting protocol, i.e., Theorem 1 below. The intuition behind the auxiliary process is that while Algorithm 1 (hence, Algorithm 2) involves an exchange of messages supported on the directed edges E , the computation of the estimates xtv’s only involve the belief functions µ t v’s, which are supported on the nodes of G. Due to the simple nature of the pairwise equality constraints in the consensus problem, in the present case a reparametrization allows to track the output of Min-Sum via an algorithm that directly updates the belief functions on the nodes of the graph, which yields Algorithm 3. Given δ ∈ R and Γ ∈ Rn×n symmetric, define the matrix K(δ,Γ) ∈ R2n×2n as K(δ,Γ) := ( (1− δ)I − (1− δ)diag(Γ1) + δΓ δI δI − δdiag(Γ1) + (1− δ)Γ (1− δ)I ) , (5) where I ∈ RV×V is the identity matrix and diag(Γ1) ∈ RV×V is diagonal with (diag(Γ1))vv = (Γ1)v = ∑ w∈N (v) Γvw. Consider Algorithm 3 with initial conditions R 0, r0, Q0, q0 ∈ RV . Algorithm 3: Auxiliary message-passing Input: R0, r0, Q0, q0 ∈ RV ; δ ∈ R, Γ ∈ RV×V symmetric; K(δ,Γ) defined in (5); t ≥ 1. for s ∈ {1, . . . , t} do( rs qs ) = K(δ,Γ) ( rs−1 qs−1 ) ; ( Rs Qs ) = K(δ,Γ) ( Rs−1 Qs−1 ) ; Output: xtv := rtv/Rtv, v ∈ V . Proposition 2. 
Let δ ∈ R and Γ ∈ RV×V symmetric be given. The output of Algorithm 2 with initial conditions R̂0, r̂0 ∈ RE is the output of Algorithm 3 with R0v := 1 + δ ∑ w∈N (v) ΓwvR̂ 0 wv, Q 0 v := 1− δ ∑ w∈N (v) ΓwvR̂ 0 wv , r 0 v := bv + δ ∑ w∈N (v) Γwv r̂ 0 wv , and q 0 v := bv − δ ∑ w∈N (v) Γvw r̂ 0 vw. Proposition 2 shows that upon proper initialization, the outputs of Algorithm 2 and Algorithm 3 are equivalent. Hence, Algorithm 3 represents a tool to investigate the convergence behavior of the Min-Sum Splitting algorithm. Analytically, the advantage of the formulation given in Algorithm 3 over the one given in Algorithm 2 is that the former involves two coupled systems of n equations whose convergence behavior can explicitly be linked to the spectral properties of the n× n matrix Γ, as we will see in Theorem 1 below. On the contrary, the linear system of 2m equations in Algorithm 2 does not seem to exhibit an immediate link to the spectral properties of Γ. In this respect, we note that the previous paper that investigated Min-Sum schemes for consensus, i.e., [27], characterized the convergence rate of the algorithm under consideration — albeit only in the case of d-regular graphs, and upon initializing the quadratic terms to the fix point — in terms of the spectral gap of a matrix that controls a linear system of 2m equations. However, the authors only list results on the behavior of this spectral gap in the case of cycle graphs, i.e., d = 2, and present a conjecture for 2d-tori. 7 Accelerated convergence rates for Min-Sum Splitting We investigate the convergence behavior of the Min-Sum Splitting algorithm to solve problem (2) with quadratic objective functions. Henceforth, without loss of generality, let b ∈ RV be given with 0 < bv < 1 for each v ∈ V , and let φv(z) := 12z 2 − bvz. Define b̄ := ∑ v∈V bv/n. Recall from [27] that the ordinary Min-Sum algorithm (i.e., Algorithm 2 with δ = 1 and Γ = A, where A is the adjacency matrix of the graph G) does not converge if the graph G has a cycle. We now show that a proper choice of the tuning parameters allows Min-Sum Splitting to converge to the problem solution in a subdiffusive way. The proof of this result, which is contained in the supplementary material, relies on the use of the auxiliary method defined in Algorithm 3 to track the evolution of the Min-Sum Splitting scheme. Here, recall that ‖x‖ denotes the `2 norm of a given vector x, ‖M‖ denotes the `2 matrix norm of the given matrix M , and ρ(M) its spectral radius. Theorem 1. Let W ∈ RV×V be a symmetric matrix with W1 = 1 and ρW := ρ(W − 11T /n) < 1. Let δ = 1 and Γ = γW , with γ = 2/(1 + √ 1− ρ2W ). Let xt be the output at time t of Algorithm 2 with initial conditions R̂0 = r̂0 = 0. Define K := ( γW I (1− γ)I 0 ) , K∞ := 1 (2− γ)n ( 11T 11T (1− γ)11T (1− γ)11T ) . (6) Then, for any v ∈ V we have limt→∞ xtv = b̄ and ‖xt − b̄1‖ ≤ 4 √ 2n 2−γ ‖(K −K ∞)t‖. The asymptotic rate of convergence is given by ρK := ρ(K −K∞) = limt→∞ ‖(K −K∞)t‖1/t = √ (1− √ 1−ρ2W )/(1+ √ 1−ρ2W ) < ρW < 1, which satisfies 12 √ 1/(1− ρW ) ≤ 1/(1− ρK) ≤ √ 1/(1− ρW ). Theorem 1 shows that the choice of splitting parameters δ = 1 and Γ = γW , where γ and W are defined as in the statement of the theorem, allows the Min-Sum Splitting scheme to achieve the asymptotic rate of convergence that is given by the second largest eigenvalue in magnitude of the matrix K defined in (6), i.e., the quantity ρK . The matrix K is the same matrix that describes shift-register methods for consensus [45, 4, 24]. 
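As an illustration of the rate in Theorem 1 (our own sketch, not the authors' code), one can compare the diffusive iteration x^t = W x^(t-1) with the second-order recursion x^t = γ W x^(t-1) + (1-γ) x^(t-2), whose dynamics are governed by the matrix K in (6), using γ = 2/(1 + sqrt(1 - ρ_W^2)). The cycle graph, tolerance, and initial conditions x^0 = x^1 = b below are arbitrary choices.

```python
import numpy as np

def consensus_times(W, b, eps=1e-6, max_iter=200000):
    """Iterations needed by the diffusive and the accelerated (Theorem 1) updates."""
    n, avg = len(b), b.mean()
    rho_W = np.max(np.abs(np.linalg.eigvalsh(W - np.ones((n, n)) / n)))
    gamma = 2.0 / (1.0 + np.sqrt(1.0 - rho_W**2))

    x, t_diff = b.copy(), 0                     # diffusive: x^t = W x^(t-1)
    while np.max(np.abs(x - avg)) > eps and t_diff < max_iter:
        x, t_diff = W @ x, t_diff + 1

    x_prev, x, t_acc = b.copy(), b.copy(), 0    # accelerated second-order recursion
    while np.max(np.abs(x - avg)) > eps and t_acc < max_iter:
        x, x_prev = gamma * (W @ x) + (1.0 - gamma) * x_prev, x
        t_acc += 1
    return t_diff, t_acc

n = 100
W = np.zeros((n, n))
for v in range(n):                              # Metropolis-Hastings weights on C_n
    W[v, v] = 0.5
    W[v, (v - 1) % n] = W[v, (v + 1) % n] = 0.25
b = np.random.default_rng(1).uniform(size=n)
# The accelerated count should be far smaller (roughly a square-root improvement,
# up to constants and logarithmic factors).
print(consensus_times(W, b))
```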
In fact, the proof of Theorem 1 relies on the spectral analysis previously established for shift-registers, which can be traced back to [15]. See also [13, 24]. Following [27], let us consider the absolute measure of error given by $\|x^t - \bar b\mathbf{1}\|/\sqrt{n}$ (recall that we assume $0 < b_v < 1$, so that $\|b\| \le \sqrt{n}$). From Theorem 1 it follows that, asymptotically, we have $\|x^t - \bar b\mathbf{1}\|/\sqrt{n} \lesssim 4\sqrt{2}\,\rho_K^t/(2-\gamma)$. If we define the asymptotic convergence time as the minimum time $t$ so that, asymptotically, $\|x^t - \bar b\mathbf{1}\|/\sqrt{n} \lesssim \varepsilon$, then the Min-Sum Splitting scheme investigated in Theorem 1 has an asymptotic convergence time that is $O\big(\tfrac{1}{1-\rho_K}\log\{[1/(1-\rho_K)]/\varepsilon\}\big)$. Given the last bound in Theorem 1, this result achieves (modulo logarithmic terms) a square-root improvement over the convergence time of diffusive methods, which scales like $\Theta\big(\tfrac{1}{1-\rho_W}\log 1/\varepsilon\big)$. For cycle graphs and, more generally, for higher-dimensional tori — where $1/(1-\rho_W)$ is $\Theta(D^2)$ so that $1/(1-\rho_K)$ is $\Theta(D)$ [33, 1] — the convergence time is $O(D\log(D/\varepsilon))$, where $D$ is the graph diameter. As prescribed by Theorem 1, the choice of $\gamma$ that makes the Min-Sum scheme achieve a subdiffusive rate depends on global properties of the graph $G$. Namely, $\gamma$ depends on the quantity $\rho_W$, the second largest eigenvalue in magnitude of the matrix $W$. This fact connects the acceleration mechanism induced by splitting in the Min-Sum scheme to the acceleration mechanism of lifted Markov chain techniques (see Figure 1) and multi-step first order methods, as described in Section 4. It remains to be investigated how choices of splitting parameters different from the ones investigated in Theorem 1 affect the convergence behavior of the Min-Sum Splitting algorithm.

8 Conclusions

The Min-Sum Splitting algorithm has been previously observed to yield convergence in settings where the ordinary Min-Sum protocol does not converge [35]. In this paper we proved that the introduction of splitting parameters is not only fundamental to guarantee the convergence of the Min-Sum scheme applied to the consensus problem, but that proper tuning of these parameters yields accelerated convergence rates. As prescribed by Theorem 1, the choice of splitting parameters that yields subdiffusive rates involves global information, via the spectral gap of a matrix associated to the original graph (see the choice of $\gamma$ in Theorem 1). The acceleration mechanism exploited by Min-Sum Splitting is analogous to the acceleration mechanism exploited by lifted Markov chain techniques — where the transition matrix of the lifted random walk is typically chosen to depend on the total number of nodes in the graph [8, 5] or on its diameter [37] (global pieces of information) — and to the acceleration mechanism exploited by multi-step gradient methods — where the momentum/shift-register term is chosen as a function of the eigenvalues of a matrix supported on the original graph [13] (again, global information). Prior to our results, this connection seems not to have been established in the literature. Our findings motivate further studies to generalize the acceleration due to splittings to other problem instances, beyond consensus.

Acknowledgements

This work was partially supported by the NSF under Grant EECS-1609484.
1. What is the focus of the paper regarding distributed consensus problems?
2. What is the novelty of the proposed method in comparison to prior works?
3. What is unclear in the description of the equivalent objective function?
Review
Review This paper applies an accelerated variant of the min-sum algorithm, called min-sum splitting, to the distributed consensus problem. The paper is very well written, with the contribution clearly placed in the context of the state of the art in the topic. To the best of my knowledge (although I am not an expert on the topic), the results are novel and constitute a qualitative advance. In particular, the paper presents a novel connection between min-sum algorithms and lifted Markov chain techniques. There is a detail which is not clear in the presentation. In page 4, when describing the equivalent objective function that is minimized by the min-sum algorithm to yield the min-sum splitting scheme, the authors write: "...splitting each term $\phi_{vw}$ into $\Gamma_{vw}$ terms, and each term $\phi_v$ into $\delta$ terms,..." However, it is not clear what this means, since $\delta$ and $\Gamma_{vw}$, as introduced on the previous page are real numbers.
NIPS
Title Accelerated consensus via Min-Sum Splitting Abstract We apply the Min-Sum message-passing protocol to solve the consensus problem in distributed optimization. We show that while the ordinary Min-Sum algorithm does not converge, a modified version of it known as Splitting yields convergence to the problem solution. We prove that a proper choice of the tuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated convergence rates, matching the rates obtained by shift-register methods. The acceleration scheme embodied by Min-Sum Splitting for the consensus problem bears similarities with lifted Markov chains techniques and with multi-step first order methods in convex optimization. 1 Introduction Min-Sum is a local message-passing algorithm designed to distributedly optimize an objective function that can be written as a sum of component functions, each of which depends on a subset of the decision variables. Due to its simplicity, Min-Sum has emerged as canonical protocol to address large scale problems in a variety of domains, including signal processing, statistics, and machine learning. For problems supported on tree graphs, the Min-Sum algorithm corresponds to dynamic programming and is guaranteed to converge to the problem solution. For arbitrary graphs, the ordinary Min-Sum algorithm may fail to converge, or it may converge to something different than the problem solution [28]. In the case of strictly convex objective functions, there are known sufficient conditions to guarantee the convergence and correctness of the algorithm. The most general condition requires the Hessian of the objective function to be scaled diagonally dominant [28, 25]. While the Min-Sum scheme can be applied to optimization problems with constraints, by incorporating the constraints into the objective function as hard barriers, the known sufficient conditions do not apply in this case. In [34], a generalization of the traditional Min-Sum scheme has been proposed, based on a reparametrization of the original objective function. This algorithm is called Splitting, as it can be derived by creating equivalent graph representations for the objective function by “splitting” the nodes of the original graph. In the case of unconstrained problems with quadratic objective functions, where Min-Sum is also known as Gaussian Belief Propagation, the algorithm with splitting has been shown to yield convergence in settings where the ordinary Min-Sum does not converge [35]. To date, a theoretical investigation of the rates of convergence of Min-Sum Splitting has not been established. In this paper we establish rates of convergence for the Min-Sum Splitting algorithm applied to solve the consensus problem, which can be formulated as an equality-constrained problem in optimization. The basic version of the consensus problem is the network averaging problem. In this setting, each node in a graph is assigned a real number, and the goal is to design a distributed protocol that allows the nodes to iteratively exchange information with their neighbors so to arrive at consensus on the average across the network. Early work include [42, 41]. The design of distributed algorithms to solve the averaging problem has received a lot of attention recently, as consensus represents a widely-used primitive to compute aggregate statistics in a variety of fields. 
Applications include, for instance, estimation problems in sensor networks, distributed tracking and localization, multi-agents coordination, and distributed inference [20, 21, 9, 19]. Consensus is typically combined with some 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. form of local optimization over a peer-to-peer network, as in the case of iterative subgradient methods [29, 40, 17, 10, 6, 16, 39]. In large-scale machine learning, consensus is used as a tool to distribute the minimization of a loss function over a large dataset into a network of processors that can exchange and aggregate information, and only have access to a subset of the data [31, 11, 26, 3]. Classical algorithms to solve the network averaging problem involve linear dynamical systems supported on the nodes of the graph. Even when the coefficients that control the dynamics are optimized, these methods are known to suffer from a “diffusive” rate of convergence, which corresponds to the rate of convergence to stationarity exhibited by the “diffusion” random walk naturally associated to a graph [44, 2]. This rate is optimal for graphs with good expansion properties, such as complete graphs or expanders. In this case the convergence time, i.e., the number of iterations required to reach a prescribed level of error accuracy ε > 0 in the `2 norm relative to the initial condition, scales independently of the dimension of the problem, as Θ(log 1/ε). For graphs with geometry this rate is suboptimal [7], and it does not yield a convergence time that matches the lower bound Ω(D log 1/ε), where D is the graph diameter [37, 36]. For example, in both cycle graphs and in grid-like topologies the number of iterations scale like Θ(D2 log 1/ε) (if n is the number of nodes, D ∼ n in a cycle and D ∼ √ n in a two-dimensional torus). Θ(D2 log 1/ε) is also the convergence time exhibited in random geometric graphs, which represent the relevant topologies for many applications in sensor networks [9]. In [7] it was established that for a class of graphs with geometry (polynomial growth or finite doubling dimension), the mixing time of any reversible Markov chain scales at least like D2, embodying the fact that symmetric walks on these graphs take D2 steps to travel distances of orderD. Min-Sum schemes to solve the consensus problem have been previously investigated in [27]. The authors show that the ordinary Min-Sum algorithm does not converge in graphs with cycles. They investigate a modified version of it that uses a soft barrier function to incorporate the equality constrains into the objective function. In the case of d-regular graphs, upon a proper choice of initial conditions, the authors show that the algorithm they propose reduces to a linear process supported on the directed edges of the graph, and they characterize the convergence time of the algorithm in terms of the Cesàro mixing time of a Markov chain defined on the set of directed edges of the original graph. In the case of cycle graphs (i.e., d = 2), they prove that the mixing time scales like O(D), which yields the convergence time O(D/ε log 1/ε). See Theorem 4 and Theorem 5 in [27]. In the case of (d/2)-dimensional tori (D ∼ n2/d), they conjecture that the mixing time is Θ(D2(d−1)/d), but do not present bounds for the convergence time. See Conjecture 1 in [27]. For other graph topologies, they leave the mixing time (and convergence time) achieved by their method as an open question. 
In this paper we show that the Min-Sum scheme based on splitting yields convergence to the consensus solution, and we analytically establish rates of convergence for any graph topology. First, we show that a certain parametrization of the Min-Sum protocol for consensus yields a linear message-passing update for any graph and for any choice of initial conditions. Second, we show that the introduction of the splitting parameters is not only fundamental to guarantee the convergence and correctness of the Min-Sum scheme in the consensus problem, but that proper tuning of these parameters yields accelerated (i.e., “subdiffusive”) asymptotic rates of convergence. We establish a square-root improvement for the asymptotic convergence time over diffusive methods, which allows Min-Sum Splitting to scale like O(D log(D/ε)) for cycles and tori. Our results show that Min-Sum schemes are competitive and get close to the optimal rate O(D log(1/ε)) recently established for some algorithms based on Nesterov’s acceleration [30, 36]. The main tool used for the analysis involves the construction of an auxiliary linear process supported on the nodes of the original graph to track the evolution of the Min-Sum Splitting algorithm, which is instead supported on the directed edges. This construction allows us to relate the convergence time of the Min-Sum scheme to the spectral gap of the matrix describing the dynamics of the auxiliary process, which is easier to analyze than the matrix describing the dynamics on the edges as in [27]. In the literature, overcoming the suboptimal convergence rate of classical algorithms for network averaging consensus has motivated the design of several accelerated methods. Two main lines of research have been developed, and seem to have evolved independently of each others: one involves lifted Markov chains techniques, see [37] for a review, the other involves accelerated first order methods in convex optimization, see [13] for a review. Another contribution of this paper is to show that Min-Sum Splitting bears similarities with both types of accelerated methods. On the one hand, Min-Sum can be seen as a process on a lifted space, which is the space of directed edges in the original graph. Here, splitting is seen to introduce a directionality in the message exchange of the ordinary Min-Sum protocol that is analogous to the directionality introduced in non-reversible random walks on lifted graphs to achieve faster convergence to stationarity. The advantage of the Min-Sum algorithm over lifted Markov chain methods is that no lifted graph needs to be constructed. On the other hand, the directionality induced on the edges by splitting translates into a memory term for the auxiliary algorithm running on the nodes. This memory term, which allows nodes to remember previous values and incorporate them into the next update, directly relates the Min-Sum Splitting algorithm to accelerated multi-step first order methods in convex optimization. In particular, we show that a proper choice of the splitting parameters recovers the same matrix that support the evolution of shift-register methods used in numerical analysis for linear solvers, and, as a consequence, we recover the same accelerated rate of convergence for consensus [45, 4, 24]. To summarize, the main contributions of this paper are: 1. First connection of Min-Sum schemes with lifted Markov chains techniques and multi-step methods in convex optimization. 2. 
First proof of how the directionality embedded in Belief Propagation protocols can be tuned and exploited to accelerate the convergence rate towards the problem solution. 3. First analysis of convergence rates for Min-Sum Splitting. New proof technique based on the introduction of an auxiliary process to track the evolution of the algorithm on the nodes. 4. Design of a Min-Sum protocol for the consensus problem that achieves better convergence rates than the ones established (and conjectured) for the Min-Sum method in [27]. Our results motivate further studies to generalize the acceleration due to splittings to other problems. The paper is organized as follows. In Section 2 we introduce the Min-Sum Splitting algorithm in its general form. In Section 3 we describe the consensus problem and review the classical diffusive algorithms. In Section 4 we review the main accelerated methods that have been proposed in the literature. In Section 5 we specialize the Min-Sum Splitting algorithm to the consensus problem, and show that a proper parametrization yields a linear exchange of messages supported on the directed edges of the graph. In Section 6 we derive the auxiliary message-passing algorithm that allows us to track the evolution of the Min-Sum Splitting algorithm via a linear process with memory supported on the nodes of the graph. In Section 7 we state Theorem 1, which shows that a proper choice of the tuning parameters recovers the rates of shift-registers. Proofs are given in the supplementary material. 2 The Min-Sum Splitting algorithm The Min-Sum algorithm is a distributed routine to optimize a cost function that is the sum of components supported on a given graph structure. Given a simple graph G = (V,E) with n := |V | vertices and m := |E| edges, let us assume that we are given a set of functions φv : R→ R ∪ {∞}, for each v ∈ V , and φvw = φwv : R × R → R ∪ {∞}, for each {v, w} ∈ E, and that we want to solve the following problem over the decision variables x = (xv)v∈V ∈ RV : minimize ∑ v∈V φv(xv) + ∑ {v,w}∈E φvw(xv, xw). (1) The Min-Sum algorithm describes an iterative exchange of messages—which are functions of the decision variables—associated to each directed edge in G. Let E := {(v, w) ∈ V ×V : {v, w} ∈ E} be the set of directed edges associated to the undirected edges in E (each edge in E corresponds to two edges in E). In this work we consider the synchronous implementation of the Min-Sum algorithm where at any given time step s, each directed edge (v, w) ∈ E supports two messages, ξ̂svw, µ̂ s vw : R→ R ∪ {∞}. Messages are computed iteratively. Given an initial choice of messages µ̂0 = (µ̂0vw)(v,w)∈E , the Min-Sum scheme that we investigate in this paper is given in Algorithm 1. Henceforth, for each v ∈ V , let N (v) := {w ∈ V : {v, w} ∈ E} denote the neighbors of node v. The formulation of the Min-Sum scheme given in Algorithm 1, which we refer to as Min-Sum Splitting, was introduced in [34]. This formulation admits as tuning parameters the real number δ ∈ R and the symmetric matrix Γ = (Γvw)v,w∈V ∈ RV×V . Without loss of generality, we assume that the sparsity of Γ respects the structure of the graph G, in the sense that if {v, w} 6∈ E then Γvw = 0 (note that Algorithm 1 only involves summations with respect to nearest neighbors in the graph). The choice of δ = 1 and Γ = A, where A is the adjacency matrix defined as Avw := 1 if {v, w} ∈ E and Avw := 0 otherwise, yields the ordinary Min-Sum algorithm. 
Algorithm 1: Min-Sum Splitting
Input: Messages $\hat\mu^0 = (\hat\mu^0_{vw})_{(v,w)\in\mathcal{E}}$; parameters $\delta \in \mathbb{R}$ and $\Gamma \in \mathbb{R}^{V\times V}$ symmetric; time $t \ge 1$.
for $s \in \{1, \ldots, t\}$ do
    $\hat\xi^s_{wv} = \phi_v/\delta - \hat\mu^{s-1}_{wv} + \sum_{z\in\mathcal{N}(v)} \Gamma_{zv}\,\hat\mu^{s-1}_{zv}$, $(w,v) \in \mathcal{E}$;
    $\hat\mu^s_{wv} = \min_{z\in\mathbb{R}}\{\phi_{vw}(\,\cdot\,, z)/\Gamma_{vw} + (\delta-1)\,\hat\xi^s_{wv} + \delta\,\hat\xi^s_{vw}(z)\}$, $(w,v) \in \mathcal{E}$;
$\mu^t_v = \phi_v + \delta \sum_{w\in\mathcal{N}(v)} \Gamma_{wv}\,\hat\mu^t_{wv}$, $v \in V$;
Output: $x^t_v = \arg\min_{z\in\mathbb{R}} \mu^t_v(z)$, $v \in V$.

For an arbitrary choice of strictly positive integer parameters, Algorithm 1 can be seen to correspond to the ordinary Min-Sum algorithm applied to a new formulation of the original problem, where an equivalent objective function is obtained from the original one in (1) by splitting each term $\phi_{vw}$ into $\Gamma_{vw} \in \mathbb{N}\setminus\{0\}$ terms, and each term $\phi_v$ into $\delta \in \mathbb{N}\setminus\{0\}$ terms. Namely,
$$
\text{minimize } \sum_{v\in V} \sum_{k=1}^{\delta} \phi^k_v(x_v) + \sum_{\{v,w\}\in E} \sum_{k=1}^{\Gamma_{vw}} \phi^k_{vw}(x_v, x_w),
$$
with $\phi^k_v := \phi_v/\delta$ and $\phi^k_{vw} := \phi_{vw}/\Gamma_{vw}$ (see footnote 1 below). Hence the reason for the name "splitting" algorithm. Despite this interpretation, Algorithm 1 is defined for any real choice of parameters $\delta$ and $\Gamma$. In this paper we investigate the convergence behavior of the Min-Sum Splitting algorithm for some choices of $\delta$ and $\Gamma$, in the case of the consensus problem that we define in the next section.

3 The consensus problem and standard diffusive algorithms

Given a simple graph $G = (V,E)$ with $n := |V|$ nodes, for each $v \in V$ let $\phi_v : \mathbb{R} \to \mathbb{R}\cup\{\infty\}$ be a given function. The consensus problem is defined as follows:
$$
\text{minimize } \sum_{v\in V} \phi_v(x_v) \quad \text{subject to } x_v = x_w,\ \{v,w\} \in E. \tag{2}
$$
We interpret $G$ as a communication graph where each node represents an agent and each edge represents a communication channel between neighboring agents. Each agent $v$ is given the function $\phi_v$, and agents collaborate by iteratively exchanging information with their neighbors in $G$ with the goal of eventually arriving at the solution of problem (2). The consensus problem amounts to designing distributed algorithms to solve problem (2) that respect the communication constraints encoded by $G$. A classical setting investigated in the literature is the least-squares case yielding the network averaging problem, where for a given $b \in \mathbb{R}^V$ we have $\phi_v(z) := \frac12 z^2 - b_v z$ (see footnote 2 below) and the solution of problem (2) is $\bar b := \frac1n \sum_{v\in V} b_v$. In this setup, each agent $v \in V$ is given a number $b_v$, and agents want to exchange information with their neighbors according to a protocol that allows each of them to eventually reach consensus on the average $\bar b$ across the entire network. Classical algorithms to solve this problem involve a linear exchange of information of the form $x^t = W x^{t-1}$ with $x^0 = b$, for a given matrix $W \in \mathbb{R}^{V\times V}$ that respects the topology of the graph $G$ (i.e., $W_{vw} \neq 0$ only if $\{v,w\} \in E$ or $v = w$), so that $W^t \to \mathbf{1}\mathbf{1}^T/n$ for $t \to \infty$, where $\mathbf{1}$ is the all-ones vector. This linear iteration allows for a distributed exchange of information among agents, as at any iteration each agent $v \in V$ only receives information from his/her neighbors $\mathcal{N}(v)$ via the update $x^t_v = W_{vv} x^{t-1}_v + \sum_{w\in\mathcal{N}(v)} W_{vw} x^{t-1}_w$. The original literature on this problem investigates the case where the matrix $W$ has non-negative coefficients and represents the transition matrix of a random walk on the nodes of the graph $G$, so that $W_{vw}$ is interpreted as the probability that a random walk at node $v$ visits node $w$ in the next time step.
A popular choice is given by the Metropolis-Hastings method [37], which involves the doubly stochastic matrix $W^{MH}$ defined as $W^{MH}_{vw} := 1/(2 d_{\max})$ if $\{v,w\} \in E$, $W^{MH}_{vv} := 1 - d_v/(2 d_{\max})$, and $W^{MH}_{vw} := 0$ otherwise, where $d_v := |\mathcal{N}(v)|$ is the degree of node $v$ and $d_{\max} := \max_{v\in V} d_v$ is the maximum degree of the graph $G$.

Footnote 1: As mentioned in [34], one can also consider a more general formulation of the splitting algorithm with $\delta \to (\delta_v)_{v\in V} \in \mathbb{R}^V$ (possibly also with time-varying parameters). The current choice of the algorithm is motivated by the fact that in the present case the output of the algorithm can be tracked by analyzing a linear system on the nodes of the graph, as we will show in Section 5.

Footnote 2: In the literature, the classical choice is $\phi_v(z) := \frac12 (z - b_v)^2$, which yields the same results as the quadratic function that we define in the main text, as constant terms in the objective function do not alter the optimal point of the problem but only the optimal value of the objective function.

In [44], necessary and sufficient conditions are given for a generic matrix $W$ to satisfy $W^t \to \mathbf{1}\mathbf{1}^T/n$, namely, $\mathbf{1}^T W = \mathbf{1}^T$, $W\mathbf{1} = \mathbf{1}$, and $\rho(W - \mathbf{1}\mathbf{1}^T/n) < 1$, where $\rho(M)$ denotes the spectral radius of a given matrix $M$. The authors show that the problem of choosing the optimal symmetric matrix $W$ that minimizes $\rho(W - \mathbf{1}\mathbf{1}^T/n) = \|W - \mathbf{1}\mathbf{1}^T/n\|$ — where $\|M\|$ denotes the spectral norm of a matrix $M$, which coincides with $\rho(M)$ if $M$ is symmetric — is a convex problem and can be cast as a semi-definite program. Typically, the optimal matrix involves negative coefficients, hence departing from the random walk interpretation. However, even the optimal choice of symmetric matrix is shown to yield a diffusive rate of convergence, which is already attained by the matrix $W^{MH}$ [7]. This rate corresponds to the speed of convergence to stationarity achieved by the diffusion random walk, defined as the Markov chain with transition matrix $\mathrm{diag}(d)^{-1}A$, where $\mathrm{diag}(d) \in \mathbb{R}^{V\times V}$ is the degree matrix, i.e., diagonal with $\mathrm{diag}(d)_{vv} := d_v$, and $A \in \mathbb{R}^{V\times V}$ is the adjacency matrix, i.e., symmetric with $A_{vw} := 1$ if $\{v,w\} \in E$ and $A_{vw} := 0$ otherwise. For instance, the condition $\|W - \mathbf{1}\mathbf{1}^T/n\|^t \le \varepsilon$, where $\|\cdot\|$ is the $\ell_2$ norm, yields a convergence time that scales like $t \sim \Theta(D^2 \log(1/\varepsilon))$ in cycle graphs and tori [33], where $D$ is the graph diameter. The authors in [7] established that for a class of graphs with geometry (polynomial growth or finite doubling dimension) the mixing time of any reversible Markov chain scales at least like $D^2$, and it is achieved by Metropolis-Hastings [37].

4 Accelerated algorithms

To overcome the diffusive behavior typical of classical consensus algorithms, two main types of approaches have been investigated in the literature, which seem to have been developed independently. The first approach involves the construction of a lifted graph $\hat G = (\hat V, \hat E)$ and of a linear system supported on its nodes, of the form $\hat x^t = \hat W \hat x^{t-1}$, where $\hat W \in \mathbb{R}^{\hat V\times\hat V}$ is the transition matrix of a non-reversible Markov chain on the nodes of $\hat G$. This approach has its origins in the work of [8] and [5], where it was observed for the first time that certain non-reversible Markov chains on properly constructed lifted graphs yield better mixing times than reversible chains on the original graphs. For some simple graph topologies, such as cycle graphs and two-dimensional grids, the construction of the optimal lifted graphs is well understood already from the works in [8, 5].
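For reference, the diffusive baseline that both families of accelerated methods aim to beat can be sketched as follows. This is an illustrative snippet, not code from the paper: it builds the Metropolis-Hastings matrix $W^{MH}$ of Section 3 and runs the linear update $x^t = W x^{t-1}$; the path graph and all variable names are assumptions made for the demo.

```python
# Illustrative sketch (assumptions: path graph, names ours): Metropolis-Hastings
# matrix W^MH and the diffusive consensus iteration x^t = W x^{t-1}.
import numpy as np

n = 12
neighbors = {v: [w for w in (v - 1, v + 1) if 0 <= w < n] for v in range(n)}  # path graph
d_max = max(len(nb) for nb in neighbors.values())

W = np.zeros((n, n))
for v, nb in neighbors.items():
    for w in nb:
        W[v, w] = 1.0 / (2 * d_max)              # W_vw = 1/(2 d_max) for {v, w} in E
    W[v, v] = 1.0 - len(nb) / (2 * d_max)        # W_vv = 1 - d_v/(2 d_max)

assert np.allclose(W.sum(axis=1), 1.0)           # doubly stochastic (W is symmetric)
rho_W = max(abs(np.linalg.eigvalsh(W - np.ones((n, n)) / n)))
print("rho_W =", rho_W)                          # close to 1: diffusive behavior

b = np.random.rand(n)
x = b.copy()
for _ in range(3000):                            # roughly Theta(D^2 log(1/eps)) steps
    x = W @ x
print(np.max(np.abs(x - b.mean())))              # small deviation from the average
```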
A general theory of lifting in the context of Gossip algorithms has been investigated in [18, 37]. However, this construction incurs additional overhead, which yields non-optimal computational complexity, even for cycle graphs and two-dimensional grids. Typically, lifted random walks on arbitrary graph topologies are constructed on a case-by-case basis, exploiting the specifics of the graph at hand. This is the case, for instance, for random geometric graphs [22, 23]. The key property that allows non-reversible lifted Markov chains to achieve subdiffusive rates is the introduction of a directionality in the process to break the diffusive nature of reversible chains. The strength of the directionality depends on global properties of the original graph, such as the number of nodes [8, 5] or the diameter [37]. See Figure 1.

The second approach involves designing linear updates that are supported on the original graph $G$ and keep track of a longer history of previous iterates. This approach relies on the fact that the original consensus update $x^t = W x^{t-1}$ can be interpreted as a primal-dual gradient ascent method to solve problem (2) with a quadratic objective function [32]. This allows the implementation of accelerated gradient methods. To the best of our knowledge, this idea was first introduced in [14], and since then it has been investigated in many other papers. We refer to [13, 24], and references therein, for a review and comparison of multi-step accelerated methods for consensus. The simplest multi-step extension of gradient methods is Polyak's "heavy ball," which involves adding a "momentum" term to the standard update and yields a primal iterate of the form $x^t = W x^{t-1} + \gamma(x^{t-1} - x^{t-2})$. Another popular multi-step method involves Nesterov's acceleration and yields $x^t = (1+\gamma)W x^{t-1} - \gamma W x^{t-2}$. Aligned with the idea of adding a momentum term is the idea of adding a shift-register term, which yields $x^t = (1+\gamma)W x^{t-1} - \gamma x^{t-2}$. For our purposes, we note that these methods can be written as
$$
\begin{pmatrix} x^t \\ x^{t-1} \end{pmatrix} = K \begin{pmatrix} x^{t-1} \\ x^{t-2} \end{pmatrix},
\tag{3}
$$
for a certain matrix $K \in \mathbb{R}^{2n\times 2n}$. As in the case of lifted Markov chain techniques, multi-step methods are also able to achieve accelerated rates by exploiting some form of global information: the choice of the parameter $\gamma$ that yields subdiffusive rates depends on the eigenvalues of $W$. A code sketch of the shift-register variant in the stacked form (3) is given after this section's closing paragraph.

Remark 1. Beyond lifted Markov chain techniques and accelerated first order methods, many other algorithms have been proposed to solve the consensus problem. The literature is vast. As we focus on Min-Sum schemes, an exhaustive literature review on consensus is beyond the scope of our work. Of particular interest for our results is the distributed ADMM approach [3, 43, 38]. Recently, in [12], for a class of unconstrained problems with quadratic objective functions, it has been shown that message-passing ADMM schemes can be interpreted as liftings of gradient descent techniques. This prompts further investigation to connect Min-Sum, ADMM, and accelerated first order methods.

In the next two sections we show that Min-Sum Splitting bears similarities with both types of accelerated methods described above. On the one hand, in Section 5 we show that the estimates $x_v^t$ of Algorithm 1 applied to the network averaging problem can be interpreted as the result of a linear process supported on a lifted space, i.e., the space $\mathcal{E}$ of directed edges associated to the undirected edges of $G$.
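To illustrate the multi-step family reviewed above, here is a sketch (not from the paper) of the shift-register update $x^t = (1+\gamma)W x^{t-1} - \gamma x^{t-2}$ written in the stacked form (3). The cycle graph, the Metropolis-Hastings $W$, and the particular tuning of $\gamma$ from $\rho_W$ are assumptions made for the demo; the tuning shown is one standard choice based on the spectrum of $W$, not necessarily the exact one used in the cited references.

```python
# Illustrative sketch (assumptions: cycle graph, Metropolis-Hastings W, gamma tuning
# ours): shift-register consensus x^t = (1 + gamma) W x^{t-1} - gamma x^{t-2}.
import numpy as np

n = 12
W = np.zeros((n, n))
for v in range(n):
    W[v, (v - 1) % n] = W[v, (v + 1) % n] = 0.25
    W[v, v] = 0.5

rho_W = max(abs(np.linalg.eigvalsh(W - np.ones((n, n)) / n)))
s = np.sqrt(1.0 - rho_W ** 2)
gamma = (1.0 - s) / (1.0 + s)                    # assumed shift-register tuning

b = np.random.rand(n)
K = np.block([[(1 + gamma) * W, -gamma * np.eye(n)],   # stacked matrix K of (3)
              [np.eye(n), np.zeros((n, n))]])
z = np.concatenate([b, b])                             # (x^1, x^0) = (b, b)
for _ in range(300):
    z = K @ z
x = z[:n]
print(np.max(np.abs(x - b.mean())))              # close to 0: consensus on the average
```

With this tuning the non-consensus modes contract at a rate of roughly $\sqrt{\gamma}$ per iteration, which is the square-root speed-up over the diffusive iteration that the text refers to.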
1. What is the main contribution of the paper regarding convergence rate and average consensus problem?
2. Are there any concerns about the improvement of the result compared to prior works?
3. How does the paper address the literature review and citation of relevant references?
4. What are the strengths and weaknesses of the proposed method in terms of lifted graph, Markov chain, and Nesterov's acceleration?
5. Is there any concern about the triviality of Proposition 3?
6. How does the paper handle the worst-case scenario and the connection between variants and Heavy ball/Nesterov/Polyak?
7. Are there any other important references that the authors have ignored?
8. Does the paper provide a clear contribution given the existing literature?
9. Is there any confusion in the paper regarding directed edges and simple graphs?
10. Any comments on typos or minor issues in the paper?
Review
Review This paper studies the convergence rate of a so-called min-sum splitting method on the average consensus problem. In general the paper reads fine, but the improvement of the result seems not impressive. Detailed comments are as follows.
(1) It writes that "This rate is optimal for graphs with good expansion properties, such as the complete graph. In this case the convergence time, i.e., the number of iterations required to reach a prescribed level of error accuracy in the… of the dimension of the problem, as…". For complete graphs, the linear rate is 0 because everyone converges to the average in 1 step. Also, complete graphs are too special to be representative. So for which general category of graphs does the complexity not depend on the dimension (number of nodes)? Which general category of graphs is considered good?
(2) In this paragraph (same as comment 1), the literature review should include "Linear Time Average Consensus on Fixed Graphs and Implications for Decentralized Optimization and Multi-Agent Control" by Olshevsky. Its convergence rate should be reported properly (more explanation will be given in comment 8). The reference mentioned here has reached a rather competitive or even better bound compared to the result of the submission.
(3) At the top of page 2, for consensus optimization, important references like "On the Linear Convergence of the ADMM in Decentralized Consensus Optimization" by Shi, Ling, Kun, Wu, and Yin, and "Optimal algorithms for smooth and strongly convex distributed optimization in networks" by Scaman, Bach, Bubeck, Lee, and Massoulié should be cited. Also, the authors should report the state-of-the-art algorithms for consensus optimization and their corresponding (linear) convergence rates.
(4) When discussing lifted graphs and Markov chains, this paper ignored a very related paper, "Markov Chain Lifting and Distributed ADMM" by Franca and Bento.
(5) The content of the last paragraph of page 5 is a long-known fact. It should refer to "Generalized consensus computation in networked systems with erasure links" by Rabbat, Nowak, and Bucklew. In the sequel, the connection between those variants and Heavy ball/Nesterov/Polyak is known to the field.
(6) There are many important references regarding consensus optimization that the authors have ignored. For example, "EXTRA: An exact first-order algorithm for decentralized consensus optimization" by Shi, Ling, Wu, and Yin, and "Fast distributed gradient methods" by Jakovetic, Xavier, and Moura.
(7) Proposition 3 seems to be trivial and is a supplementary contribution.
(8) The rate reached by this paper, D log(D/eps), does not seem to be a significant improvement on the rate D log(1/eps) that has been reached by "Linear Time Average Consensus on Fixed Graphs and Implications for Decentralized Optimization and Multi-Agent Control" (see comment 2). Especially in the worst-case scenario (which holds for all graphs), D ~ n, the bound is even worse than the one achieved in "Linear Time Average Consensus…".
(9) The paper "Linear Time Average Consensus…" improves the bound through Nesterov's acceleration. The reviewer suspects that the so-called "Auxiliary message-passing scheme" proposed by the authors is again Nesterov's acceleration applied to the min-sum algorithm. This is fine, but the analysis is done for consensus, which boils down to analyzing a linear system and is supposed to be not hard. The contribution of the paper becomes unclear given this situation.
(10) The tiny improvement may come from careful handling of the spectral gap of graphs. Eventually the worst-case bound is still O(n), because O(n) = O(D) for the set of all graphs with n nodes.
(11) Line 243 of page 6: the graph is simple, but the author is using directed edges. This is confusing.
(12) Typo at line 220 of page 6: Laplacian → Lagrangian.
After rebuttal: The reviewer is satisfied with the authors' response, but the evaluation score from this reviewer stays the same.
NIPS
Title Accelerated consensus via Min-Sum Splitting Abstract We apply the Min-Sum message-passing protocol to solve the consensus problem in distributed optimization. We show that while the ordinary Min-Sum algorithm does not converge, a modified version of it known as Splitting yields convergence to the problem solution. We prove that a proper choice of the tuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated convergence rates, matching the rates obtained by shift-register methods. The acceleration scheme embodied by Min-Sum Splitting for the consensus problem bears similarities with lifted Markov chains techniques and with multi-step first order methods in convex optimization. 1 Introduction Min-Sum is a local message-passing algorithm designed to distributedly optimize an objective function that can be written as a sum of component functions, each of which depends on a subset of the decision variables. Due to its simplicity, Min-Sum has emerged as canonical protocol to address large scale problems in a variety of domains, including signal processing, statistics, and machine learning. For problems supported on tree graphs, the Min-Sum algorithm corresponds to dynamic programming and is guaranteed to converge to the problem solution. For arbitrary graphs, the ordinary Min-Sum algorithm may fail to converge, or it may converge to something different than the problem solution [28]. In the case of strictly convex objective functions, there are known sufficient conditions to guarantee the convergence and correctness of the algorithm. The most general condition requires the Hessian of the objective function to be scaled diagonally dominant [28, 25]. While the Min-Sum scheme can be applied to optimization problems with constraints, by incorporating the constraints into the objective function as hard barriers, the known sufficient conditions do not apply in this case. In [34], a generalization of the traditional Min-Sum scheme has been proposed, based on a reparametrization of the original objective function. This algorithm is called Splitting, as it can be derived by creating equivalent graph representations for the objective function by “splitting” the nodes of the original graph. In the case of unconstrained problems with quadratic objective functions, where Min-Sum is also known as Gaussian Belief Propagation, the algorithm with splitting has been shown to yield convergence in settings where the ordinary Min-Sum does not converge [35]. To date, a theoretical investigation of the rates of convergence of Min-Sum Splitting has not been established. In this paper we establish rates of convergence for the Min-Sum Splitting algorithm applied to solve the consensus problem, which can be formulated as an equality-constrained problem in optimization. The basic version of the consensus problem is the network averaging problem. In this setting, each node in a graph is assigned a real number, and the goal is to design a distributed protocol that allows the nodes to iteratively exchange information with their neighbors so to arrive at consensus on the average across the network. Early work include [42, 41]. The design of distributed algorithms to solve the averaging problem has received a lot of attention recently, as consensus represents a widely-used primitive to compute aggregate statistics in a variety of fields. 
Applications include, for instance, estimation problems in sensor networks, distributed tracking and localization, multi-agents coordination, and distributed inference [20, 21, 9, 19]. Consensus is typically combined with some 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. form of local optimization over a peer-to-peer network, as in the case of iterative subgradient methods [29, 40, 17, 10, 6, 16, 39]. In large-scale machine learning, consensus is used as a tool to distribute the minimization of a loss function over a large dataset into a network of processors that can exchange and aggregate information, and only have access to a subset of the data [31, 11, 26, 3]. Classical algorithms to solve the network averaging problem involve linear dynamical systems supported on the nodes of the graph. Even when the coefficients that control the dynamics are optimized, these methods are known to suffer from a “diffusive” rate of convergence, which corresponds to the rate of convergence to stationarity exhibited by the “diffusion” random walk naturally associated to a graph [44, 2]. This rate is optimal for graphs with good expansion properties, such as complete graphs or expanders. In this case the convergence time, i.e., the number of iterations required to reach a prescribed level of error accuracy ε > 0 in the `2 norm relative to the initial condition, scales independently of the dimension of the problem, as Θ(log 1/ε). For graphs with geometry this rate is suboptimal [7], and it does not yield a convergence time that matches the lower bound Ω(D log 1/ε), where D is the graph diameter [37, 36]. For example, in both cycle graphs and in grid-like topologies the number of iterations scale like Θ(D2 log 1/ε) (if n is the number of nodes, D ∼ n in a cycle and D ∼ √ n in a two-dimensional torus). Θ(D2 log 1/ε) is also the convergence time exhibited in random geometric graphs, which represent the relevant topologies for many applications in sensor networks [9]. In [7] it was established that for a class of graphs with geometry (polynomial growth or finite doubling dimension), the mixing time of any reversible Markov chain scales at least like D2, embodying the fact that symmetric walks on these graphs take D2 steps to travel distances of orderD. Min-Sum schemes to solve the consensus problem have been previously investigated in [27]. The authors show that the ordinary Min-Sum algorithm does not converge in graphs with cycles. They investigate a modified version of it that uses a soft barrier function to incorporate the equality constrains into the objective function. In the case of d-regular graphs, upon a proper choice of initial conditions, the authors show that the algorithm they propose reduces to a linear process supported on the directed edges of the graph, and they characterize the convergence time of the algorithm in terms of the Cesàro mixing time of a Markov chain defined on the set of directed edges of the original graph. In the case of cycle graphs (i.e., d = 2), they prove that the mixing time scales like O(D), which yields the convergence time O(D/ε log 1/ε). See Theorem 4 and Theorem 5 in [27]. In the case of (d/2)-dimensional tori (D ∼ n2/d), they conjecture that the mixing time is Θ(D2(d−1)/d), but do not present bounds for the convergence time. See Conjecture 1 in [27]. For other graph topologies, they leave the mixing time (and convergence time) achieved by their method as an open question. 
In this paper we show that the Min-Sum scheme based on splitting yields convergence to the consensus solution, and we analytically establish rates of convergence for any graph topology. First, we show that a certain parametrization of the Min-Sum protocol for consensus yields a linear message-passing update for any graph and for any choice of initial conditions. Second, we show that the introduction of the splitting parameters is not only fundamental to guarantee the convergence and correctness of the Min-Sum scheme in the consensus problem, but that proper tuning of these parameters yields accelerated (i.e., “subdiffusive”) asymptotic rates of convergence. We establish a square-root improvement for the asymptotic convergence time over diffusive methods, which allows Min-Sum Splitting to scale like O(D log(D/ε)) for cycles and tori. Our results show that Min-Sum schemes are competitive and get close to the optimal rate O(D log(1/ε)) recently established for some algorithms based on Nesterov’s acceleration [30, 36]. The main tool used for the analysis involves the construction of an auxiliary linear process supported on the nodes of the original graph to track the evolution of the Min-Sum Splitting algorithm, which is instead supported on the directed edges. This construction allows us to relate the convergence time of the Min-Sum scheme to the spectral gap of the matrix describing the dynamics of the auxiliary process, which is easier to analyze than the matrix describing the dynamics on the edges as in [27]. In the literature, overcoming the suboptimal convergence rate of classical algorithms for network averaging consensus has motivated the design of several accelerated methods. Two main lines of research have been developed, and seem to have evolved independently of each others: one involves lifted Markov chains techniques, see [37] for a review, the other involves accelerated first order methods in convex optimization, see [13] for a review. Another contribution of this paper is to show that Min-Sum Splitting bears similarities with both types of accelerated methods. On the one hand, Min-Sum can be seen as a process on a lifted space, which is the space of directed edges in the original graph. Here, splitting is seen to introduce a directionality in the message exchange of the ordinary Min-Sum protocol that is analogous to the directionality introduced in non-reversible random walks on lifted graphs to achieve faster convergence to stationarity. The advantage of the Min-Sum algorithm over lifted Markov chain methods is that no lifted graph needs to be constructed. On the other hand, the directionality induced on the edges by splitting translates into a memory term for the auxiliary algorithm running on the nodes. This memory term, which allows nodes to remember previous values and incorporate them into the next update, directly relates the Min-Sum Splitting algorithm to accelerated multi-step first order methods in convex optimization. In particular, we show that a proper choice of the splitting parameters recovers the same matrix that support the evolution of shift-register methods used in numerical analysis for linear solvers, and, as a consequence, we recover the same accelerated rate of convergence for consensus [45, 4, 24]. To summarize, the main contributions of this paper are: 1. First connection of Min-Sum schemes with lifted Markov chains techniques and multi-step methods in convex optimization. 2. 
First proof of how the directionality embedded in Belief Propagation protocols can be tuned and exploited to accelerate the convergence rate towards the problem solution. 3. First analysis of convergence rates for Min-Sum Splitting. New proof technique based on the introduction of an auxiliary process to track the evolution of the algorithm on the nodes. 4. Design of a Min-Sum protocol for the consensus problem that achieves better convergence rates than the ones established (and conjectured) for the Min-Sum method in [27]. Our results motivate further studies to generalize the acceleration due to splittings to other problems. The paper is organized as follows. In Section 2 we introduce the Min-Sum Splitting algorithm in its general form. In Section 3 we describe the consensus problem and review the classical diffusive algorithms. In Section 4 we review the main accelerated methods that have been proposed in the literature. In Section 5 we specialize the Min-Sum Splitting algorithm to the consensus problem, and show that a proper parametrization yields a linear exchange of messages supported on the directed edges of the graph. In Section 6 we derive the auxiliary message-passing algorithm that allows us to track the evolution of the Min-Sum Splitting algorithm via a linear process with memory supported on the nodes of the graph. In Section 7 we state Theorem 1, which shows that a proper choice of the tuning parameters recovers the rates of shift-registers. Proofs are given in the supplementary material. 2 The Min-Sum Splitting algorithm The Min-Sum algorithm is a distributed routine to optimize a cost function that is the sum of components supported on a given graph structure. Given a simple graph G = (V,E) with n := |V | vertices and m := |E| edges, let us assume that we are given a set of functions φv : R→ R ∪ {∞}, for each v ∈ V , and φvw = φwv : R × R → R ∪ {∞}, for each {v, w} ∈ E, and that we want to solve the following problem over the decision variables x = (xv)v∈V ∈ RV : minimize ∑ v∈V φv(xv) + ∑ {v,w}∈E φvw(xv, xw). (1) The Min-Sum algorithm describes an iterative exchange of messages—which are functions of the decision variables—associated to each directed edge in G. Let E := {(v, w) ∈ V ×V : {v, w} ∈ E} be the set of directed edges associated to the undirected edges in E (each edge in E corresponds to two edges in E). In this work we consider the synchronous implementation of the Min-Sum algorithm where at any given time step s, each directed edge (v, w) ∈ E supports two messages, ξ̂svw, µ̂ s vw : R→ R ∪ {∞}. Messages are computed iteratively. Given an initial choice of messages µ̂0 = (µ̂0vw)(v,w)∈E , the Min-Sum scheme that we investigate in this paper is given in Algorithm 1. Henceforth, for each v ∈ V , let N (v) := {w ∈ V : {v, w} ∈ E} denote the neighbors of node v. The formulation of the Min-Sum scheme given in Algorithm 1, which we refer to as Min-Sum Splitting, was introduced in [34]. This formulation admits as tuning parameters the real number δ ∈ R and the symmetric matrix Γ = (Γvw)v,w∈V ∈ RV×V . Without loss of generality, we assume that the sparsity of Γ respects the structure of the graph G, in the sense that if {v, w} 6∈ E then Γvw = 0 (note that Algorithm 1 only involves summations with respect to nearest neighbors in the graph). The choice of δ = 1 and Γ = A, where A is the adjacency matrix defined as Avw := 1 if {v, w} ∈ E and Avw := 0 otherwise, yields the ordinary Min-Sum algorithm. 
For Algorithm 1: Min-Sum Splitting Input: Messages µ̂0 = (µ̂0vw)(v,w)∈E ; parameters δ ∈ R and Γ ∈ RV×V symmetric; time t ≥ 1. for s ∈ {1, . . . , t} do ξ̂swv = φv/δ − µ̂s−1wv + ∑ z∈N (v) Γzvµ̂ s−1 zv , (w, v) ∈ E ; µ̂swv = minz∈R{φvw( · , z)/Γvw + (δ − 1)ξ̂swv + δξ̂svw(z)}, (w, v) ∈ E ; µtv = φv + δ ∑ w∈N (v) Γwvµ̂ t wv, v ∈ V ; Output: xtv = arg minz∈R µtv(z), v ∈ V . an arbitrary choice of strictly positive integer parameters, Algorithm 1 can be seen to correspond to the ordinary Min-Sum algorithm applied to a new formulation of the original problem, where an equivalent objective function is obtained from the original one in (1) by splitting each term φvw into Γvw ∈ N \ {0} terms, and each term φv into δ ∈ N \ {0} terms. Namely, minimize∑ v∈V ∑δ k=1 φ k v(xv) + ∑ {v,w}∈E ∑Γvw k=1 φ k vw(xv, xw), with φ k v := φv/δ and φ k vw := φvw/Γvw. 1 Hence the reason for the name “splitting” algorithm. Despite this interpretation, Algorithm 1 is defined for any real choice of parameters δ and Γ. In this paper we investigate the convergence behavior of the Min-Sum Splitting algorithm for some choices of δ and Γ, in the case of the consensus problem that we define in the next section. 3 The consensus problem and standard diffusive algorithms Given a simple graph G = (V,E) with n := |V | nodes, for each v ∈ V let φv : R→ R ∪ {∞} be a given function. The consensus problem is defined as follows: minimize ∑ v∈V φv(xv) subject to xv = xw, {v, w} ∈ E. (2) We interpret G as a communication graph where each node represents an agent, and each edge represent a communication channel between neighbor agents. Each agent v is given the function φv , and agents collaborate by iteratively exchanging information with their neighbors in G with the goal to eventually arrive to the solution of problem (2). The consensus problem amounts to designing distributed algorithms to solve problem (2) that respect the communication constraints encoded by G. A classical setting investigated in the literature is the least-square case yielding the network averaging problem, where for a given b ∈ RV we have2 φv(z) := 12z 2 − bvz and the solution of problem (2) is b̄ := 1n ∑ v∈V bv. In this setup, each agent v ∈ V is given a number bv, and agents want to exchange information with their neighbors according to a protocol that allows each of them to eventually reach consensus on the average b̄ across the entire network. Classical algorithms to solve this problem involve a linear exchange of information of the form xt = Wxt−1 with x0 = b, for a given matrix W ∈ RV×V that respects the topology of the graph G (i.e., Wvw 6= 0 only if {v, w} ∈ E or v = w), so that W t → 11T /n for t → ∞, where 1 is the all ones vector. This linear iteration allows for a distributed exchange of information among agents, as at any iteration each agent v ∈ V only receives information from his/her neighbors N (v) via the update: xtv = Wvvx t−1 v + ∑ w∈N (v)Wvwx t−1 w . The original literature on this problem investigates the case where the matrix W has non-negative coefficients and represents the transition matrix of a random walk on the nodes of the graph G, so that Wvw is interpreted as the probability that a random walk at node v visits node w in the next time step. 
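As a concrete illustration of this classical diffusive update, the following is a minimal sketch (our own, not code from the paper) of the linear iteration x^t = W x^{t-1} for network averaging on a cycle graph. The weights are the doubly-stochastic Metropolis–Hastings weights W^MH discussed in the next paragraph; the graph size and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def cycle_adjacency(n):
    """Adjacency matrix of a cycle graph on n nodes."""
    adj = np.zeros((n, n))
    for v in range(n):
        adj[v, (v + 1) % n] = adj[v, (v - 1) % n] = 1.0
    return adj

def metropolis_hastings_weights(adj):
    """Doubly-stochastic consensus matrix: W_vw = 1/(2 d_max) on edges,
    W_vv = 1 - d_v/(2 d_max) on the diagonal (the W^MH of the next paragraph)."""
    deg = adj.sum(axis=1)
    d_max = deg.max()
    W = adj / (2.0 * d_max)
    np.fill_diagonal(W, 1.0 - deg / (2.0 * d_max))
    return W

n = 30
b = np.random.rand(n)                       # value b_v held by each agent
W = metropolis_hastings_weights(cycle_adjacency(n))

x = b.copy()
for _ in range(2000):                       # diffusive update x^t = W x^{t-1}, x^0 = b
    x = W @ x

print(np.allclose(x, b.mean(), atol=1e-6))  # every agent approaches the average b_bar
rho_W = np.abs(np.linalg.eigvals(W - np.ones((n, n)) / n)).max()
print(rho_W)                                # close to 1: the diffusive rate is slow
```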
A popular choice is given by the Metropolis-Hastings method [37], which involved the doubly-stochastic matrix WMH defined as WMHvw := 1/(2dmax) if {v, w} ∈ E, WMHvw := 1− dv/(2dmax) if w = v, and WMHvw := 0 otherwise, where dv := |N (v)| is the degree of node v, and dmax := maxv∈V dv is the maximum degree of the graph G. 1As mentioned in [34], one can also consider a more general formulation of the splitting algorithm with δ → (δv)v∈V ∈ R (possibly also with time-varying parameters). The current choice of the algorithm is motivated by the fact that in the present case the output of the algorithm can be tracked by analyzing a linear system on the nodes of the graph, as we will show in Section 5. 2In the literature, the classical choice is φv(z) := 12 ∑ v∈V (z − bv) 2, which yields the same results as the quadratic function that we define in the main text, as constant terms in the objective function do not alter the optimal point of the problem but only the optimal value of the objective function. In [44], necessary and sufficient conditions are given for a generic matrixW to satisfyW t → 11T /n, namely, 1TW = 1T , W1 = 1, and ρ(W − 11T /n) < 1, where ρ(M) denotes the spectral radius of a given matrix M . The authors show that the problem of choosing the optimal symmetric matrix W that minimizes ρ(W − 11T /n) = ‖W − 11T /n‖— where ‖M‖ denotes the spectral norm of a matrix M that coincides with ρ(M) if M is symmetric — is a convex problem and it can be cast as a semi-definite program. Typically, the optimal matrix involves negative coefficients, hence departing from the random walk interpretation. However, even the optimal choice of symmetric matrix is shown to yield a diffusive rate of convergence, which is already attained by the matrix WMH [7]. This rate corresponds to the speed of convergence to stationarity achieved by the diffusion random walk, defined as the Markov chain with transition matrix diag(d)−1A, where diag(d) ∈ RV×V is the degree matrix, i.e., diagonal with diag(d)vv := dv, and A ∈ RV×V is the adjacency matrix, i.e., symmetric with Avw := 1 if {v, w} ∈ E, and Avw := 0 otherwise. For instance, the condition ‖W − 11T /n‖t ≤ ε, where ‖ · ‖ is the `2 norm, yields a convergence time that scales like t ∼ Θ(D2 log(1/ε)) in cycle graphs and tori [33], where D is the graph diameter. The authors in [7] established that for a class of graphs with geometry (polynomial growth or finite doubling dimension) the mixing time of any reversible Markov chain scales at least like D2, and it is achieved by Metropolis-Hastings [37]. 4 Accelerated algorithms To overcome the diffusive behavior typical of classical consensus algorithms, two main types of approaches have been investigated in the literature, which seem to have been developed independently. The first approach involves the construction of a lifted graph Ĝ = (V̂ , Ê) and of a linear system supported on the nodes of it, of the form x̂t = Ŵ x̂t−1, where Ŵ ∈ RV̂×V̂ is the transition matrix of a non-reversible Markov chain on the nodes of Ĝ. This approach has its origins in the work of [8] and [5], where it was observed for the first time that certain non-reversible Markov chains on properly-constructed lifted graphs yield better mixing times than reversible chains on the original graphs. For some simple graph topologies, such as cycle graphs and two-dimensional grids, the construction of the optimal lifted graphs is well-understood already from the works in [8, 5]. 
A general theory of lifting in the context of Gossip algorithms has been investigated in [18, 37]. However, this construction incurs additional overhead, which yield non-optimal computational complexity, even for cycle graphs and two-dimensional grids. Typically, lifted random walks on arbitrary graph topologies are constructed on a one-by-one case, exploiting the specifics of the graph at hand. This is the case, for instance, for random geometric graphs [22, 23]. The key property that allows non-reversible lifted Markov chains to achieve subdiffusive rates is the introduction of a directionality in the process to break the diffusive nature of reversible chains. The strength of the directionality depends on global properties of the original graph, such as the number of nodes [8, 5] or the diameter [37]. See Figure 1. The second approach involves designing linear updates that are supported on the original graph G and keep track of a longer history of previous iterates. This approach relies on the fact that the original consensus update xt = Wxt−1 can be interpreted as a primal-dual gradient ascent method to solve problem (2) with a quadratic objective function [32]. This allows the implementation of accelerated gradient methods. To the best of our knowledge, this idea was first introduced in [14], and since then it has been investigated in many other papers. We refer to [13, 24], and references in there, for a review and comparison of multi-step accelerated methods for consensus. The simplest multi-step extension of gradient methods is Polyak’s “heavy ball,” which involves adding a “momentum” term to the standard update and yields a primal iterate of the form xt = Wxt−1 + γ(xt−1 − xt−2). Another popular multi-step method involves Nesterov’s acceleration, and yields xt = (1 + γ)Wxt−1 − γWxt−2. Aligned with the idea of adding a momentum term is the idea of adding a shift register term, which yields xt = (1 + γ)Wxt−1 − γxt−2. For our purposes, we note that these methods can be written as( xt xt−1 ) = K ( xt−1 xt−2 ) , (3) for a certain matrix K ∈ R2n×2n. As in the case of lifted Markov chains techniques, also multi-step methods are able to achieve accelerated rates by exploiting some form of global information: the choice of the parameter γ that yields subdiffusive rates depends on the eigenvalues of W . Remark 1. Beyond lifted Markov chains techniques and accelerated first order methods, many other algorithms have been proposed to solve the consensus problem. The literature is vast. As we focus on Min-Sum schemes, an exhaustive literature review on consensus is beyond the scope of our work. Of particular interest for our results is the distributed ADMM approach [3, 43, 38]. Recently in [12], for a class of unconstrained problems with quadratic objective functions, it has been shown that message-passing ADMM schemes can be interpreted as lifting of gradient descent techniques. This prompts for further investigation to connect Min-Sum, ADMM, and accelerated first order methods. In the next two sections we show that Min-Sum Splitting bears similarities with both types of accelerated methods described above. On the one hand, in Section 5 we show that the estimates xtv’s of Algorithm 1 applied to the network averaging problem can be interpreted as the result of a linear process supported on a lifted space, i.e., the space E of directed edges associated to the undirected edges of G. 
On the other hand, in Section 6 we show that the estimates xtv’s can be seen as the result of a linear multi-step process supported on the nodes of G, which can be written as in (3). Later on, in Section 7 and Section 8, we will see that the similarities just described go beyond the structure of the processes, and they extend to the acceleration mechanism itself. In particular, the choice of splitting parameters that yields subdiffusive convergence rates, matching the asymptotic rates of shift register methods, is also shown to depend on global information about G. 5 Min-Sum Splitting for consensus We apply Min-Sum Splitting to solve network averaging. We show that in this case the messagepassing protocol is a linear exchange of parameters associated to the directed edges in E . Given δ ∈ R and Γ ∈ RV×V symmetric, let ĥ(δ) ∈ RE be the vector defined as ĥ(δ)wv := bw + (1− 1/δ)bv , and let K̂(δ,Γ) ∈ RE×E be matrix defined as K̂(δ,Γ)wv,zu := δΓzw if u = w, z ∈ N (w) \ {v}, δ(Γvw − 1) if u = w, z = v, (δ − 1)Γzv if u = v, z ∈ N (v) \ {w}, (δ − 1)(Γwv − 1) if u = v, z = w, 0 otherwise. (4) Consider Algorithm 2 with initial conditions R̂0 = (R̂0vw)(v,w)∈E ∈ RE , r̂0 = (r̂0vw)(v,w)∈E ∈ RE . Algorithm 2: Min-Sum Splitting, consensus problem, quadratic case Input: R̂0, r̂0 ∈ RE ; δ ∈ R, Γ ∈ RV×V symmetric; K̂(δ,Γ) defined in (5); t ≥ 1. for s ∈ {1, . . . , t} do R̂s = (2− 1/δ)1 + K̂(δ,Γ)R̂s−1; r̂s = ĥ(δ) + K̂(δ,Γ)r̂s−1; Output: xtv := bv+δ ∑ w∈N(v) Γwv r̂ t wv 1+δ ∑ w∈N(v) ΓwvR̂ t wv , v ∈ V . Proposition 1. Let δ ∈ R and Γ ∈ RV×V symmetric be given. Consider Algorithm 1 applied to problem (2) with φv(z) := 12z 2−bvz and with quadratic initial messages: µ̂0vw(z) = 12 R̂ 0 vwz 2−r̂0vwz, for some R̂0vw > 0 and r̂ 0 vw ∈ R. Then, the messages will remain quadratic, i.e., µ̂svw(z) = 12 R̂ s vwz 2− r̂svwz for any s ≥ 1, and the parameters evolve as in Algorithm 2. If 1 + δ ∑ w∈N (v) ΓwvR̂ t wv > 0 for any v ∈ V and t ≥ 1, then the output of Algorithm 2 coincides with the output of Algorithm 1. 6 Auxiliary message-passing scheme We show that the output of Algorithm 2 can be tracked by a new message-passing scheme that corresponds to a multi-step linear exchange of parameters associated to the nodes of G. This auxiliary algorithm represents the main tool to establish convergence rates for the Min-Sum Splitting protocol, i.e., Theorem 1 below. The intuition behind the auxiliary process is that while Algorithm 1 (hence, Algorithm 2) involves an exchange of messages supported on the directed edges E , the computation of the estimates xtv’s only involve the belief functions µ t v’s, which are supported on the nodes of G. Due to the simple nature of the pairwise equality constraints in the consensus problem, in the present case a reparametrization allows to track the output of Min-Sum via an algorithm that directly updates the belief functions on the nodes of the graph, which yields Algorithm 3. Given δ ∈ R and Γ ∈ Rn×n symmetric, define the matrix K(δ,Γ) ∈ R2n×2n as K(δ,Γ) := ( (1− δ)I − (1− δ)diag(Γ1) + δΓ δI δI − δdiag(Γ1) + (1− δ)Γ (1− δ)I ) , (5) where I ∈ RV×V is the identity matrix and diag(Γ1) ∈ RV×V is diagonal with (diag(Γ1))vv = (Γ1)v = ∑ w∈N (v) Γvw. Consider Algorithm 3 with initial conditions R 0, r0, Q0, q0 ∈ RV . Algorithm 3: Auxiliary message-passing Input: R0, r0, Q0, q0 ∈ RV ; δ ∈ R, Γ ∈ RV×V symmetric; K(δ,Γ) defined in (5); t ≥ 1. for s ∈ {1, . . . , t} do( rs qs ) = K(δ,Γ) ( rs−1 qs−1 ) ; ( Rs Qs ) = K(δ,Γ) ( Rs−1 Qs−1 ) ; Output: xtv := rtv/Rtv, v ∈ V . Proposition 2. 
Let δ ∈ R and Γ ∈ R^{V×V} symmetric be given. The output of Algorithm 2 with initial conditions R̂^0, r̂^0 ∈ R^E is the output of Algorithm 3 with R^0_v := 1 + δ ∑_{w∈N(v)} Γ_{wv} R̂^0_{wv}, Q^0_v := 1 − δ ∑_{w∈N(v)} Γ_{wv} R̂^0_{wv}, r^0_v := b_v + δ ∑_{w∈N(v)} Γ_{wv} r̂^0_{wv}, and q^0_v := b_v − δ ∑_{w∈N(v)} Γ_{vw} r̂^0_{vw}. Proposition 2 shows that upon proper initialization, the outputs of Algorithm 2 and Algorithm 3 are equivalent. Hence, Algorithm 3 represents a tool to investigate the convergence behavior of the Min-Sum Splitting algorithm. Analytically, the advantage of the formulation given in Algorithm 3 over the one given in Algorithm 2 is that the former involves two coupled systems of n equations whose convergence behavior can explicitly be linked to the spectral properties of the n × n matrix Γ, as we will see in Theorem 1 below. On the contrary, the linear system of 2m equations in Algorithm 2 does not seem to exhibit an immediate link to the spectral properties of Γ. In this respect, we note that the previous paper that investigated Min-Sum schemes for consensus, i.e., [27], characterized the convergence rate of the algorithm under consideration — albeit only in the case of d-regular graphs, and upon initializing the quadratic terms to the fixed point — in terms of the spectral gap of a matrix that controls a linear system of 2m equations. However, the authors only list results on the behavior of this spectral gap in the case of cycle graphs, i.e., d = 2, and present a conjecture for 2d-tori. 7 Accelerated convergence rates for Min-Sum Splitting We investigate the convergence behavior of the Min-Sum Splitting algorithm to solve problem (2) with quadratic objective functions. Henceforth, without loss of generality, let b ∈ R^V be given with 0 < b_v < 1 for each v ∈ V, and let φ_v(z) := ½ z² − b_v z. Define b̄ := (1/n) ∑_{v∈V} b_v. Recall from [27] that the ordinary Min-Sum algorithm (i.e., Algorithm 2 with δ = 1 and Γ = A, where A is the adjacency matrix of the graph G) does not converge if the graph G has a cycle. We now show that a proper choice of the tuning parameters allows Min-Sum Splitting to converge to the problem solution in a subdiffusive way. The proof of this result, which is contained in the supplementary material, relies on the use of the auxiliary method defined in Algorithm 3 to track the evolution of the Min-Sum Splitting scheme. Here, recall that ‖x‖ denotes the ℓ2 norm of a given vector x, ‖M‖ denotes the ℓ2 matrix norm of the given matrix M, and ρ(M) its spectral radius. Theorem 1. Let W ∈ R^{V×V} be a symmetric matrix with W1 = 1 and ρ_W := ρ(W − 11^T/n) < 1. Let δ = 1 and Γ = γW, with γ = 2/(1 + √(1 − ρ_W²)). Let x^t be the output at time t of Algorithm 2 with initial conditions R̂^0 = r̂^0 = 0. Define K := [ γW, I ; (1−γ)I, 0 ], K^∞ := (1/((2−γ)n)) [ 11^T, 11^T ; (1−γ)11^T, (1−γ)11^T ]. (6) Then, for any v ∈ V we have lim_{t→∞} x^t_v = b̄ and ‖x^t − b̄1‖ ≤ (4√(2n)/(2−γ)) ‖(K − K^∞)^t‖. The asymptotic rate of convergence is given by ρ_K := ρ(K − K^∞) = lim_{t→∞} ‖(K − K^∞)^t‖^{1/t} = √((1 − √(1 − ρ_W²))/(1 + √(1 − ρ_W²))) < ρ_W < 1, which satisfies ½ √(1/(1 − ρ_W)) ≤ 1/(1 − ρ_K) ≤ √(1/(1 − ρ_W)). Theorem 1 shows that the choice of splitting parameters δ = 1 and Γ = γW, where γ and W are defined as in the statement of the theorem, allows the Min-Sum Splitting scheme to achieve the asymptotic rate of convergence that is given by the second largest eigenvalue in magnitude of the matrix K defined in (6), i.e., the quantity ρ_K. The matrix K is the same matrix that describes shift-register methods for consensus [45, 4, 24]. 
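As a quick numerical sanity check of Theorem 1 (our own sketch, not code from the paper), one can form K for the prescribed γ and compare the accelerated rate ρ_K with the diffusive rate ρ_W; the cycle graph with Metropolis–Hastings weights below is an illustrative choice.

```python
import numpy as np

def consensus_rates(W):
    """Compare the diffusive rate rho_W with the accelerated rate rho_K of Theorem 1."""
    n = W.shape[0]
    ones = np.ones((n, n)) / n                            # the projector 11^T / n
    rho_W = np.abs(np.linalg.eigvals(W - ones)).max()
    gamma = 2.0 / (1.0 + np.sqrt(1.0 - rho_W ** 2))       # tuning prescribed by Theorem 1
    I, Z = np.eye(n), np.zeros((n, n))
    K = np.block([[gamma * W, I], [(1.0 - gamma) * I, Z]])
    K_inf = np.block([[ones, ones],
                      [(1.0 - gamma) * ones, (1.0 - gamma) * ones]]) / (2.0 - gamma)
    rho_K = np.abs(np.linalg.eigvals(K - K_inf)).max()
    # Closed-form rate claimed by Theorem 1.
    predicted = np.sqrt((1 - np.sqrt(1 - rho_W ** 2)) / (1 + np.sqrt(1 - rho_W ** 2)))
    return rho_W, rho_K, predicted

# Example on a cycle of n nodes with Metropolis-Hastings weights (see Section 3).
n = 40
W = np.zeros((n, n))
for v in range(n):
    W[v, (v + 1) % n] = W[v, (v - 1) % n] = 0.25
    W[v, v] = 0.5
rho_W, rho_K, predicted = consensus_rates(W)
print(rho_W, rho_K, predicted)   # rho_K matches the formula and 1/(1-rho_K) is roughly sqrt(1/(1-rho_W))
```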
In fact, the proof of Theorem 1 relies on the spectral analysis previously established for shift-registers, which can be traced back to [15]. See also [13, 24]. Following [27], let us consider the absolute measure of error given by ‖xt − b̄1‖/ √ n (recall that we assume 0 < bv < 1 so that ‖b‖ ≤ √ n). From Theorem 1 it follows that, asymptotically, we have ‖xt − b̄1‖/ √ n . 4 √ 2ρtK/(2− γ). If we define the asymptotic convergence time as the minimum time t so that, asymptotically, ‖xt− b̄1‖/ √ n . ε, then the Min-Sum Splitting scheme investigated in Theorem 1 has an asymptotic convergence time that isO(1/(1−ρK) log{[1/(1−ρK)]/ε}). Given the last bound in Theorem 1, this result achieves (modulo logarithmic terms) a square-root improvement over the convergence time of diffusive methods, which scale like Θ(1/(1− ρW ) log 1/ε). For cycle graphs and, more generally, for higher-dimensional tori — where 1/(1 − ρW ) is Θ(D2) so that 1/(1−ρK) is Θ(D) [33, 1] — the convergence time isO(D logD/ε), whereD is the graph diameter. As prescribed by Theorem 1, the choice of γ that makes the Min-Sum scheme achieve a subdiffusive rate depends on global properties of the graph G. Namely, γ depends on the quantity ρW , the second largest eigenvalue in magnitude of the matrix W . This fact connects the acceleration mechanism induced by splitting in the Min-Sum scheme to the acceleration mechanism of lifted Markov chains techniques (see Figure 1) and multi-step first order methods, as described in Section 4. It remains to be investigated how choices of splitting parameters different than the ones investigated in Theorem 1 affect the convergence behavior of the Min-Sum Splitting algorithm. 8 Conclusions The Min-Sum Splitting algorithm has been previously observed to yield convergence in settings where the ordinary Min-Sum protocol does not converge [35]. In this paper we proved that the introduction of splitting parameters is not only fundamental to guarantee the convergence of the Min-Sum scheme applied to the consensus problem, but that proper tuning of these parameters yields accelerated convergence rates. As prescribed by Theorem 1, the choice of splitting parameters that yields subdiffusive rates involves global type of information, via the spectral gap of a matrix associated to the original graph (see the choice of γ in Theorem 1). The acceleration mechanism exploited by Min-Sum Splitting is analogous to the acceleration mechanism exploited by lifted Markov chain techniques — where the transition matrix of the lifted random walks is typically chosen to depend on the total number of nodes in the graph [8, 5] or on its diameter [37] (global pieces of information) — and to the acceleration mechanism exploited by multi-step gradient methods — where the momentum/shift-register term is chosen as a function of the eigenvalues of a matrix supported on the original graph [13] (again, a global information). Prior to our results, this connection seems to have not been established in the literature. Our findings motivate further studies to generalize the acceleration due to splittings to other problem instances, beyond consensus. Acknowledgements This work was partially supported by the NSF under Grant EECS-1609484.
1. What is the main contribution of the paper regarding the accelerated variant of the Min-Sum message-passing protocol? 2. How does the proposed method compare to previously established results in terms of convergence rate? 3. Can you explain the intuition behind using the auxiliary linear process in the Min-Sum Splitting method? 4. Are there any limitations to the Min-Sum Splitting method, and how do they impact its practicality? 5. How does the proposed method connect to other distributed methods for consensus optimization, and when should it be preferred over them? 6. Can the framework developed in the paper be extended to solve more general consensus problems that arise in machine learning? 7. Why is this approach important or relevant to the machine learning community?
Review
Review In this paper, the authors present an accelerated variant of the Min-Sum message-passing protocol for solving consensus problems in distributed optimization. The authors use the reparametrization techniques proposed in [Ruozzi and Tatikonda, 2013] and establish rates of convergence for the Min-Sum Splitting algorithm for solving consensus problems with quadratic objective functions. The main tool used for the analysis is the construction of an auxiliary linear process that tracks the evolution of the Min-Sum Splitting algorithm. The main contributions of the paper can be summarized as follows: (i) provide analysis for the Min-Sum splitting algorithm using a new proof technique based on the introduction of an auxiliary process, (ii) design a Min-Sum protocol for consensus problems that achieves better convergence than previously established results, and (iii) show the connection between the proposed method, and lifted Markov chains and multi-step methods in convex optimization. The motivation and contributions of the paper are clear. The paper is well written and easy to follow, however, it does contain several typos and grammatical mistakes (listed below). The proofs of Propositions 1 and 2, and Theorem 1 appear to be correct. Typos and Grammatical errors: - Line 34: “…with theirs neighbors…” -> “…with their neighbors…” - Line 174: “double-stochastic” -> “doubly-stochastic” - Line 183: “… can be casted as…” -> “… can be cast as…” - Line 192: “…class of graph with…” -> “…class of graphs with…” - Line 197: “…which seems to…” -> “…which seem to…” - Line 206: “…additional overheads…” -> “…additional overhead…” - Line 225: “…pugging…” -> “…plugging…” - Line 238: “…are seen to…” -> “…are able to…” - Line 240: “…both type of…” -> “…both types of…” - Line 248: “…also seen to…” -> “…also shown to…” - Line 279-280: “…to convergence to…” -> “…to converge to…” - Line 300: “…,which scales like…” -> “…,which scale like…” - Line 302: “…for the cycle,…” -> “…for cycle graphs,…” Other minor comments: - Lines 220 and 221: Do you mean “Lagrangian” and “Lagrange multipliers” instead of “Laplacian” and “Laplace multipliers”? - The authors present 3 algorithms, and the quantities involved are not always explained or described. For example, what is R_{vw} and r_{vw} in Algorithm 2? Also, in Algorithm 2, the quantities \hat{R}^0 and \hat{r}^0 do not appear to be initialized. Moreover, since the auxiliary linear process is key to the analysis and the central idea of the paper, the authors show clearly state which variables correspond to this in Algorithm 3. The paper also appears to be missing several references. More specifically: - Lines 41 and 43: (Sub)gradient methods for consensus optimization. There are several more references that could be included: -- Bertsekas and Tsitsiklis, Parallel and distributed computation: numerical methods, 1989 -- Sundhar Ram Srinivasan et. al., Incremental stochastic subgradient algorithms for convex optimization, 2009 -- Wei Shi, Extra: An exact first-order algorithm for decentralized consensus optimization, 2015 (and, of course, many more) - Line 170: “The original literature…” - Line 229: work by Polyak (Heavy-ball) - Line 232: work by Nesterov It would be interesting and useful if the authors could answer/comment and address in the paper the following: - Although the paper is a theoretical paper, the authors should comment on the practicality of the method, and when such a method should be used as opposed to other distributed methods for consensus optimization. 
- What are the limitations of the Min-Sum Splitting method? - What is the intuition behind using the auxiliary process in the Min-Sum Splitting method? - The results provided in this paper are for consensus problems with quadratic objective functions. Can this framework be extended to solve more general consensus problems that often arise in Machine Learning? - The authors should also clearly state why such an approach is of interest in the context of Machine Learning and for the Machine Learning community. In summary, this paper is a purely theoretical paper in which the authors establish rates of convergence using a new proof technique and show the connections between their method and well-established methods in the literature. Overall, the ideas presented in this paper are interesting, however, the practicality of the method and intuition behind the results are missing, as well as some justification for the importance of this result for the Machine Learning community.
NIPS
Title Single-Image Depth Perception in the Wild Abstract This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset “Depth in the Wild” consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild. Deep Network with Pixel-wise Prediction Metric Depth RGB-D Data Relative Depth Annotations 1 Introduction Depth from a single RGB image is a fundamental problem in vision. Recent years have seen rapid progress thanks to data-driven methods [1, 2, 3], in particular, deep neural networks trained on large RGB-D datasets [4, 5, 6, 7, 8, 9, 10]. But such advances have yet to broadly impact higher-level tasks. One reason is that many higher-level tasks must operate on images “in the wild”—images taken with no constraints on cameras, locations, scenes, and objects—but the RGB-D datasets used to train and evaluate image-to-depth systems are constrained in one way or another. Current RGB-D datasets were collected by depth sensors [4, 5], which are limited in range and resolution, and often fail on specular or transparent objects [11]. In addition, because there is no Flickr for RGB-D images, researchers have to manually capture the images. As a result, current RGB-D datasets are limited in the diversity of scenes. For example, NYU depth [4] consists mostly of indoor scenes with no human presence; KITTI [5] consists mostly of road scenes captured from a car; Make3D [3, 12] consists mostly of outdoor scenes of the Stanford campus (Figure. 2). While these datasets are pivotal in driving research, it is unclear whether systems trained on them can generalize to images in the wild. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Is it possible to collect ground-truth depth for images in the wild? Using depth sensors in unconstrained settings is not yet feasible. Crowdsourcing seems viable, but humans are not good at estimating metric depth, or 3D metric structure in general [13]. In fact, metric depth from a single image is fundamentally ambiguous: a tree behind a house can be slightly bigger but further away, or slightly smaller but closer—the absolute depth difference between the house and the tree cannot be uniquely determined. Furthermore, even in cases where humans can estimate metric depth, it is unclear how to elicit the values from them. But humans are better at judging relative depth [13]: “Is point A closer than point B?” is often a much easier question for humans. Recent work by Zoran et al. [14] shows that it is possible to learn to estimate metric depth using only annotations of relative depth. Although such metric depth estimates are only accurate up to monotonic transformations, they may well be sufficiently useful for high-level tasks, especially for occlusion reasoning. The seminal results by Zoran et al. point to two fronts for further progress: (1) collecting a large amount of relative depth annotations for images in the wild and (2) improving the algorithms that learn from annotations of relative depth. In this paper, we make contributions on both fronts. 
Our first contribution is a new dataset called “Depth in the Wild” (DIW). It consists of 495K diverse images, each annotated with randomly sampled points and their relative depth. We sample one pair of points per image to minimize the redundancy of annotation 1. To the best of our knowledge this is the first large-scale dataset consisting of images in the wild with relative depth annotations. We demonstrate that this dataset can be used as an evaluation benchmark as well as a training resource 2. Our second contribution is a new algorithm for learning to estimate metric depth using only annotations of relative depth. Our algorithm not only significantly outperforms that of Zoran et al. [14], but is also simpler. The algorithm of Zoran et al. [14] first learns a classifier to predict the ordinal relation between two points in an image. Given a new image, this classifier is repeatedly applied to predict the ordinal relations between a sparse set of point pairs (mostly between the centers of neighboring superpixels). The algorithm then reconstructs depth from the predicted ordinal relations by solving a constrained quadratic optimization that enforces additional smoothness constraints and reconciles potentially inconsistent ordinal relations. Finally, the algorithm estimates depth for all pixels assuming a constant depth within each superpixel. In contrast, our algorithm consists of a single deep network that directly predicts pixel-wise depth (Fig. 1). The network takes an entire image as input, consists of off-the-shelf components, and can be trained entirely with annotations of relative depth. The novelty of our approach lies in the combination of two ingredients: (1) a multi-scale deep network that produces pixel-wise prediction of metric depth and (2) a loss function using relative depth. Experiments show that our method produces pixel-wise depth that is more accurately ordered, outperforming not only the method by Zoran et al. [14] but also the state-of-the-art image-to-depth system by Eigen et al. [8] trained with ground-truth metric depth. Furthermore, combining our new algorithm, our new dataset, and existing RGB-D data significantly improves single-image depth estimation in the wild. 2 Related work RGB-D Datasets: Prior work on constructing RGB-D datasets has relied on either Kinect [15, 4, 16, 17] or LIDAR [5, 3]. Existing Kinect-based datasets are limited to indoor scenes; existing LIDAR-based datasets are biased towards scenes of man-made structures [5, 3]. In contrast, our dataset covers a much wider variety of scenes; it can be easily expanded with large-scale crowdsourcing and the virtually unlimited Internet images. Intrinsic Images in the Wild: Our work draws inspiration from Intrinsic Images in the Wild [18], a seminal work that crowdsources annotations of relative reflectance on unconstrained images. Our work differs in goals as well as in several design decisions. First, we sample random points instead of centers of superpixels, because unlike reflectance, it is unreasonable to assume a constant depth within a superpixel. Second, we sample only one pair of points per image instead of many to maximize the value of human annotations. Depth from a Single Image: Image-to-depth is a long-standing problem with a large body of literature [19, 20, 12, 1, 6, 7, 8, 9, 10, 19, 21, 22, 23, 24, 25, 26]. 1 A small percentage of images have duplicates and thus have multiple pairs. 2 Project website: http://www-personal.umich.edu/~wfchen/depth-in-the-wild. The recent convergence of deep 
neural networks and RGB-D datasets [4, 5] has led to major advances [27, 6, 28, 8, 10, 14]. But the networks in these previous works, with the exception of [14], were trained exclusively using ground-truth metric depth, whereas our approach uses relative depth. Our work is inspired by that of Zoran et al. [14], which proposes to use a deep network to repeatedly classify pairs of points sampled based on superpixel segmentation, and to reconstruct per-pixel metric depth by solving an additional optimization problem. Our approach is different: it consists of a single deep network trained end-to-end that directly predicts per-pixel metric depth; there is no intermediate classification of ordinal relations and as a result no optimization needed to resolve inconsistencies. Learning with Ordinal Relations: Several recent works [29, 30] have used the ordinal relations from the Intrinsic Images in the Wild dataset [18] to estimate surface refletance. Similar to Zoran et al. [14], Zhou et al. [29] first learn a deep network to classify the ordinal relations between pairs of points and then make them globally consistent through energy minimization. Narihira et al. [30] learn a “lightness potential” network that takes an image patch and predicts the metric reflectance of the center pixel. But this network is applied to only a sparse set of pixels. Although in principle this lightness potential network can be applied to every pixel to produce pixel-wise reflectance, doing so would be quite expensive. Making it fully convolutional (as the authors mentioned in [30]) only solves it partially: as long as the lightness potential network has downsampling layers, which is the case in [30], the final output will be downsampled accordingly. Additional resolution augmentation (such as the “shift and stitch” approach [31]) is thus needed. In contrast, our approach completely avoids such issues and directly outputs pixel-wise estimates. Beyond intrinsic images, ordinal relations have been used widely in computer vision and machine learning, including object recognition [32] and learning to rank [33, 34]. 3 Dataset construction We gather images from Flickr. We use random query keywords sampled from an English dictionary and exclude artificial images such as drawings and clip arts. To collect annotations of relative depth, we present a crowd worker an image and two highlighted points (Fig. 3), and ask “which point is closer, point 1, point 2, or hard to tell?” The worker presses a key to respond. How Many Pairs? How many pairs of points should we query per image? We sample just one per image because this maximizes the amount of information from human annotators. Consider the other extreme—querying all possible pairs of points in the same image. This is wasteful because pairs of points in close proximity are likely to have the same relative depth. In other words, querying one more pair from the same image may add less information than querying one more pair from a new image. Thus querying only one pair per image is more cost-effective. Which Pairs? Which two points should we query given an image? The simplest way would be to sample two random points from the 2D plane. But this results in a severe bias that can be easily exploited: if an algorithm simply classifies the lower point in the image to be closer in depth, it will agree with humans 85.8% of the time (Fig. 4). Although this bias is natural, it makes the dataset less useful as a benchmark. 
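To make this bias analysis concrete, here is a small sketch (ours, not from the paper) of how a location-only rule such as “the lower point is closer” can be scored against crowdsourced relative-depth labels; the (y1, y2, label) tuple format is a hypothetical illustration.

```python
from typing import List, Tuple

def lower_point_agreement(pairs: List[Tuple[float, float, int]]) -> float:
    """Agreement of the 'lower point in the image is closer' rule with human labels.

    Each tuple is (y1, y2, label): y-coordinates of the two query points (larger y
    means lower in the image) and label = +1 if point 1 was judged closer, -1 if
    point 2 was judged closer. Hypothetical format for illustration.
    """
    hits = 0
    for y1, y2, label in pairs:
        guess = +1 if y1 > y2 else -1      # guess that the lower point (larger y) is closer
        hits += int(guess == label)
    return hits / len(pairs)

# Toy example; on the real unconstrained pairs this rule reportedly agrees
# with the human annotations about 85.8% of the time.
toy = [(200.0, 50.0, +1), (10.0, 120.0, -1), (90.0, 30.0, -1)]
print(lower_point_agreement(toy))  # 2/3 on this toy list
```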
An alternative is to sample two points uniformly from a random horizontal line, which makes it impossible to use the y image coordinate as a cue. But we find yet another bias: if an algorithm simply classifies the point closer to the center of the image to be closer in depth, it will agree with humans 71.4% of the time. This leads to a third approach: uniformly sample two symmetric points with respect to the center from a random horizontal line (the middle column of Fig. 5). With the symmetry enforced, we are not able to find a simple yet effective rule based purely on image coordinates: the left point is almost equally likely (50.03%) to be closer than the right one. Our final dataset consists of a roughly 50-50 combination of unconstrained pairs and symmetric pairs, which strikes a balance between the need for representing natural scene statistics and the need for performance differentiation. Protocol and Results: We crowdsource the annotations using Amazon Mechanical Turk (AMT). To remove spammers, we insert into all tasks gold-standard images verified by ourselves, and reject workers whose accumulative accuracy on the gold-standard images is below 85%. We assign each query (an image and a point pair) to two workers, and add the query to our dataset if both workers can tell the relative depth and agree with each other; otherwise the query is discarded. Under this protocol, the chance of adding a wrong answer to our dataset is less than 1% as measured on the gold-standard images. We processed 1.24M images on AMT and obtained 0.5M valid answers (both workers can tell the relative depth and agree with each other). Among the valid answers, 261K are for unconstrained pairs and 240K are for symmetric pairs. For unconstrained pairs, It takes a median of 3.4 seconds for a worker to decide, and two workers agree on the relative depth 52% of the time; for symmetric pairs, the numbers are 3.8s and 32%. These numbers suggest that the symmetric pairs are indeed harder. Fig. 5 presents examples of different kinds of queries. 4 Learning with relative depth How do we learn to predict metric depth given only annotations of relative depth? Zoran et al. [14] first learn a classifier to predict ordinal relations between centers of superpixels, and then reconcile the relations to recover depth using energy minimization, and then interpolate within each superpixel to produce per-pixel depth. We take a simpler approach. The idea is that any image-to-depth algorithm would have to compute a function that maps an image to pixel-wise depth. Why not represent this function as a neural network and learn it from end to end? We just need two ingredients: (1) a network design that outputs the same resolution as the input, and (2) a way to train the network with annotations of relative depth. Network Design: Networks that output the same resolution as the input are aplenty, including the recent designs for depth estimation [8, 35] and those for semantic segmentation [36] and edge detection [37]. A common element is processing and passing information across multiple scales. In this work, we use a variant of the recently introduced “hourglass” network (Fig. 6), which has been used to achieve state-of-the-art results on human pose estimation [38]. It consists of a series of convolutions (using a variant of the inception [39] module) and downsampling, followed by a series of convolutions and upsampling, interleaved with skip connections that add back features from high resolutions. 
The symmetric shape of the network resembles an “hourglass”, hence the name. We refer the reader to [38] for comparing the design to related work. For our purpose, this particular choice is not essential, as the various designs mainly differ in how information from different scales is dispersed and aggregated, and it is possible that all of them can work equally well for our task. Loss Function: How do we train the network using only ordinal annotations? All we need is a loss function that encourages the predicted depth map to agree with the ground-truth ordinal relations. Specifically, consider a training image I and its K queries R = {(i_k, j_k, r_k)}, k = 1, . . . , K, where i_k is the location of the first point in the k-th query, j_k is the location of the second point in the k-th query, and r_k ∈ {+1, −1, 0} is the ground-truth depth relation between i_k and j_k: closer (+1), further (−1), and equal (0). Let z be the predicted depth map and z_{i_k}, z_{j_k} be the depths at points i_k and j_k. We define a loss function

L(I, R, z) = ∑_{k=1}^{K} ψ_k(I, i_k, j_k, r_k, z),   (1)

where ψ_k(I, i_k, j_k, r_k, z) is the loss for the k-th query:

ψ_k(I, i_k, j_k, r_k, z) = log(1 + exp(−z_{i_k} + z_{j_k})) if r_k = +1; log(1 + exp(z_{i_k} − z_{j_k})) if r_k = −1; (z_{i_k} − z_{j_k})² if r_k = 0.   (2)

This is essentially a ranking loss: it encourages a small difference between depths if the ground-truth relation is equality; otherwise it encourages a large difference. Novelty of Our Approach: Our novelty lies in the combination of a deep network that does pixel-wise prediction and a ranking loss placed on the pixel-wise prediction. A deep network that does pixel-wise prediction is not new, nor is a ranking loss. But to the best of our knowledge, such a combination has not been proposed before, and in particular not for estimating depth. 5 Experiments on NYU Depth We evaluate our method using NYU Depth [4], which consists of indoor scenes with ground-truth Kinect depth. We use the same setup as that of Zoran et al. [14]: point pairs are sampled from the training images (the subset of NYU Depth consisting of 795 images with semantic labels) using superpixel segmentation and their ground-truth ordinal relations are generated by comparing the ground-truth Kinect depth; the same procedure is applied to the test set to generate the point pairs for evaluation (around 3K pairs per image). We use the same training and test data as Zoran et al. [14]. Like the system by Zoran et al. [14], our network predicts one of the three ordinal relations on the test pairs: equal (=), closer (<), or farther (>). We report WKDR, the weighted disagreement rate between the predicted ordinal relations and ground-truth ordinal relations 3. We also report WKDR= (disagreement rate on pairs whose ground-truth relations are =) and WKDR≠ (disagreement rate on pairs whose ground-truth relations are < or >). Since two ground-truth depths are almost never exactly the same, there needs to be a relaxed definition of equality. Zoran et al. [14] define two points to have equal depths if the ratio between their ground-truth depths is within a pre-determined range. Our network predicts an equality relation if the depth difference is smaller than a threshold τ. The choice of this threshold will result in different values for the error metrics (WKDR, WKDR=, WKDR≠): if τ is too small, most pairs will be predicted to be unequal and the error metric on equality relations (WKDR=) will be large; if τ is too big, most pairs will be predicted to be equal and the error metric on inequality relations (WKDR≠) will be large. 
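To make the loss in Eq. (2) and this threshold-based prediction rule concrete, here is a minimal NumPy sketch (our own illustration; in practice the loss would be implemented in an automatic-differentiation framework so that it can be backpropagated through the network, and the query format below is hypothetical).

```python
import numpy as np

def relative_depth_loss(z, queries):
    """Ranking loss of Eq. (2). z is a predicted depth map (H x W); queries is a
    list of (i_k, j_k, r_k) with point locations as (row, col) tuples and
    r_k in {+1, -1, 0} for closer / further / equal."""
    total = 0.0
    for i, j, r in queries:
        zi, zj = z[i], z[j]                       # predicted values at the two points
        if r == +1:
            total += np.logaddexp(0.0, -zi + zj)  # log(1 + exp(-z_i + z_j))
        elif r == -1:
            total += np.logaddexp(0.0, zi - zj)   # log(1 + exp(z_i - z_j))
        else:
            total += (zi - zj) ** 2               # encourage equal depths
    return total

def predict_ordinal(z, i, j, tau):
    """Ordinal relation read off the prediction, with equality threshold tau.
    Follows the convention of Eq. (2): the point with the larger predicted value
    is the one labelled closer (+1)."""
    d = z[i] - z[j]
    if abs(d) < tau:
        return 0
    return +1 if d > 0 else -1

z = np.random.rand(240, 320)                      # toy depth map
queries = [((10, 20), (30, 40), +1), ((5, 5), (6, 6), 0)]
print(relative_depth_loss(z, queries))
print(predict_ordinal(z, (10, 20), (30, 40), tau=0.05))
```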
We choose the threshold τ that minimizes the maximum of the three error metrics on a validation set held out from the training set. Tab. 2 compares our network (ours) versus that of Zoran et al. [14]. Our network is trained with the same data 4 but outperforms [14] on all three metrics. Following [14], we also compare with the state-of-the-art image-to-depth system by Eigen et al. [8], which is trained on pixel-wise ground-truth metric depth from the full NYU Depth training set (220K images). To compare fairly, we give our network access to the full NYU Depth training set. In addition, we remove the limit of 800 point pairs per training image placed by Zoran et al. and use all available pairs. The results in Tab. 2 show that our network (ours_full) achieves superior performance in estimating depth ordering. Granted, this comparison is not entirely fair because [8] is not optimized for predicting ordinal relations. But this comparison is still significant in that it shows that we can train on only relative depth and rival the state-of-the-art system in estimating depth up to monotonic transformations. a Computed using our own implementation based on the definition given in [35]. 3 WKDR stands for “Weighted Kinect Disagreement Rate”; the weight is set to 1 as in [14]. 4 The code released by Zoran et al. [14] indicates that they train with a random subset of 800 pairs per image instead of all the pairs. We follow the same procedure and only use a random subset of 800 pairs per image. In Fig. 8 we show qualitative results on the same example images used by Zoran et al. [14]. We see that although imperfect, the recovered metric depth by our method is overall reasonable and qualitatively similar to that by the state-of-the-art system [8] trained on ground-truth metric depth. Metric Error Measures. Our network is trained with relative depth, so it is unsurprising that it does well in estimating depth up to ordering. But how good is the estimated depth in terms of metric error? We thus evaluate conventional error measures such as RMSE (the root mean squared error), which compares the absolute depth values to the ground truths. Because our network is trained only on relative depth and does not know the range of the ground-truth depth values, to make these error measures meaningful we normalize the depth predicted by our network such that the mean and standard deviation are the same as those of the mean depth map of the training set. Tab. 2 reports the results. We see that under these metric error measures our network still outperforms the method of Zoran et al. [14]. In addition, while our metric error is worse than the current state-of-the-art, it is comparable to some of the earlier methods (e.g. [1]) that have access to ground-truth metric depth. Superpixel Sampling versus Random Sampling. To compare with the method by Zoran et al. [14], we train our network using the same point pairs, which are pairs of centers of superpixels (Fig. 9). But is superpixel segmentation necessary? That is, can we simply train with randomly sampled points? To answer this question, we train our network with randomly sampled points. We constrain the distance between the two points to be between 13 and 19 pixels (out of a 320×240 image) such that the distance is similar to that between the centers of neighboring superpixels. The results are included in Tab. 2. We see that using 3.3k pairs per image (rand_3K) already achieves comparable performance to the method by Zoran et al. [14]. 
Using twice or four times as many pairs (rand_6K, rand_12K) further improves performance and significantly outperforms [14]. It is worth noting that in all these experiments the test pairs are still from superpixels, so training on random pairs incurs a mismatch between training and testing distributions. Yet we can still achieve comparable performance despite this mismatch. This shows that our method can indeed operate without superpixel segmentation. 6 Experiments on Depth in the Wild In this section we experiment on our new Depth in the Wild (DIW) dataset. We split the dataset into 421K training images and 74K test images 5. We report the WHDR (Weighted Human Disagreement Rate) 6 of 5 methods in Tab. 3: (1) the state-of-the-art system by Eigen et al. [8] trained on full NYU Depth; (2) our network trained on full NYU Depth (Ours_Full); (3) our network pre-trained on full NYU Depth and fine-tuned on DIW (Ours_NYU_DIW); (4) our network trained from scratch on DIW (Ours_DIW); (5) a baseline method that uses only the location of the query points: classify the lower point to be closer or guess randomly if the two points are at the same height (Query_Location_Only). We see that the best result is achieved by pre-training on NYU Depth and fine-tuning on DIW. Training only on NYU Depth (Ours_NYU and Eigen) does not work as well, which is expected because NYU Depth only has indoor scenes. Training from scratch on DIW achieves slightly better performance than those trained on only NYU Depth despite using much less supervision. 5 4.38% of images are duplicates downloaded using different query keywords and have more than one pair of points. We have removed test images that have duplicates in the training set. 6 All weights are 1. A pair of points can only have two possible ordinal relations (farther or closer) for DIW. Pre-training on NYU Depth and fine-tuning on DIW leverages all available data and achieves the best performance. As shown in Fig. 10, the quality of predicted depth is notably better with fine-tuning on DIW, especially for outdoor scenes. These results suggest that it is promising to combine existing RGB-D data and crowdsourced annotations to advance the state of the art in single-image depth estimation. 7 Conclusions We have studied single-image depth perception in the wild, recovering depth from a single image taken in unconstrained settings. We have introduced a new dataset consisting of images in the wild annotated with relative depth and proposed a new algorithm that learns to estimate metric depth supervised by relative depth. We have shown that our algorithm outperforms prior art and our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild. Acknowledgments This work is partially supported by the National Science Foundation under Grant No. 1617767.
1. What are the main contributions of the paper in the field of depth estimation? 2. What is the novel ranking loss function introduced in the paper, and how does it improve the accuracy of depth estimation? 3. How does the proposed approach compare to other recent works in terms of performance and efficiency? 4. What is the significance of the new large dataset of human pairwise relative depth judgements created by the authors? 5. Are there any limitations or areas for improvement in the proposed approach, such as the need for more samples in some experiments?
Review
Review This paper address the problem of depth estimation given a single 2D image as input. The paper has two contributions: (i) a new large dataset of human pairwise relative depth judgements, and (ii) a novel ranking loss function that encourages predicted depths to agree with the ground truth. The ranking function is built on top of a variant of the "hourglass" network of [38]. The proposed approach is compared against the recent approach of Zoran et al. [14] on the tasks of ordinal relation and metric depth prediction on the NYU dataset, and out-performs it on both tasks. The proposed approach out-performs Eigen and Fergus [8] on ordinal relation prediction, but under-performs on the harder task of metric depth prediction.I appreciate the novel dataset, and the combination of the hourglass network with the ranking loss. The latter insight appears to yield improvement over Zoran et al. [14], which estimates the ordinal relationships first, and then conditioned on the estimated relationships optimizes a constrained quadratic problem. The paper also investigates whether superpixel sampling is necessary (it was used in [14]), and shows improved performance over [14] without it. The paper writing is clear, references are good, and experiments are thorough. Minor comment: It seems that performance has not saturated yet with respect to the random sampling experiment in Table 2. Is it possible to try with even more samples?
NIPS
Title Single-Image Depth Perception in the Wild Abstract This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset “Depth in the Wild” consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild. Deep Network with Pixel-wise Prediction Metric Depth RGB-D Data Relative Depth Annotations 1 Introduction Depth from a single RGB image is a fundamental problem in vision. Recent years have seen rapid progress thanks to data-driven methods [1, 2, 3], in particular, deep neural networks trained on large RGB-D datasets [4, 5, 6, 7, 8, 9, 10]. But such advances have yet to broadly impact higher-level tasks. One reason is that many higher-level tasks must operate on images “in the wild”—images taken with no constraints on cameras, locations, scenes, and objects—but the RGB-D datasets used to train and evaluate image-to-depth systems are constrained in one way or another. Current RGB-D datasets were collected by depth sensors [4, 5], which are limited in range and resolution, and often fail on specular or transparent objects [11]. In addition, because there is no Flickr for RGB-D images, researchers have to manually capture the images. As a result, current RGB-D datasets are limited in the diversity of scenes. For example, NYU depth [4] consists mostly of indoor scenes with no human presence; KITTI [5] consists mostly of road scenes captured from a car; Make3D [3, 12] consists mostly of outdoor scenes of the Stanford campus (Figure. 2). While these datasets are pivotal in driving research, it is unclear whether systems trained on them can generalize to images in the wild. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Is it possible to collect ground-truth depth for images in the wild? Using depth sensors in unconstrained settings is not yet feasible. Crowdsourcing seems viable, but humans are not good at estimating metric depth, or 3D metric structure in general [13]. In fact, metric depth from a single image is fundamentally ambiguous: a tree behind a house can be slightly bigger but further away, or slightly smaller but closer—the absolute depth difference between the house and the tree cannot be uniquely determined. Furthermore, even in cases where humans can estimate metric depth, it is unclear how to elicit the values from them. But humans are better at judging relative depth [13]: “Is point A closer than point B?” is often a much easier question for humans. Recent work by Zoran et al. [14] shows that it is possible to learn to estimate metric depth using only annotations of relative depth. Although such metric depth estimates are only accurate up to monotonic transformations, they may well be sufficiently useful for high-level tasks, especially for occlusion reasoning. The seminal results by Zoran et al. point to two fronts for further progress: (1) collecting a large amount of relative depth annotations for images in the wild and (2) improving the algorithms that learn from annotations of relative depth. In this paper, we make contributions on both fronts. 
Our first contribution is a new dataset called “Depth in the Wild” (DIW). It consists of 495K diverse images, each annotated with randomly sampled points and their relative depth. We sample one pair of points per image to minimize the redundancy of annotation 1. To the best of our knowledge this is the first large-scale dataset consisting of images in the wild with relative depth annotations. We demonstrate that this dataset can be used as an evaluation benchmark as well as a training resource 2. Our second contribution is a new algorithm for learning to estimate metric depth using only annotations of relative depth. Our algorithm not only significantly outperforms that of Zoran et al. [14], but is also simpler. The algorithm of Zoran et al. [14] first learns a classifier to predict the ordinal relation between two points in an image. Given a new image, this classifier is repeatedly applied to predict the ordinal relations between a sparse set of point pairs (mostly between the centers of neighboring superpixels). The algorithm then reconstructs depth from the predicted ordinal relations by solving a constrained quadratic optimization that enforces additional smoothness constraints and reconciles potentially inconsistent ordinal relations. Finally, the algorithm estimates depth for all pixels assuming a constant depth within each superpixel. In contrast, our algorithm consists of a single deep network that directly predicts pixel-wise depth (Fig. 1). The network takes an entire image as input, consists of off-the-shelf components, and can be trained entirely with annotations of relative depth. The novelty of our approach lies in the combination of two ingredients: (1) a multi-scale deep network that produces pixel-wise prediction of metric depth and (2) a loss function using relative depth. Experiments show that our method produces pixel-wise depth that is more accurately ordered, outperforming not only the method by Zoran et al. [14] but also the state-of-the-art image-to-depth system by Eigen et al. [8] trained with ground-truth metric depth. Furthermore, combing our new algorithm, our new dataset, and existing RGB-D data significantly improves single-image depth estimation in the wild. 2 Related work RGB-D Datasets: Prior work on constructing RGB-D datasets has relied on either Kinect [15, 4, 16, 17] or LIDAR [5, 3]. Existing Kinect-based datasets are limited to indoor scenes; existing LIDARbased datasets are biased towards scenes of man-made structures [5, 3]. In contrast, our dataset covers a much wider variety of scenes; it can be easily expanded with large-scale crowdsourcing and the virually umlimited Internet images. Intrinsic Images in the Wild: Our work draws inspiration from Intrinsic Images in the Wild [18], a seminal work that crowdsources annotations of relative reflectance on unconstrained images. Our work differs in goals as well as in several design decisions. First, we sample random points instead of centers of superpixels, because unlike reflectance, it is unreasonable to assume a constant depth within a superpixel. Second, we sample only one pair of points per image instead of many to maximize the value of human annotations. Depth from a Single Image: Image-to-depth is a long-standing problem with a large body of literature [19, 20, 12, 1, 6, 7, 8, 9, 10, 19, 21, 22, 23, 24, 25, 26]. The recent convergence of deep 1A small percentage of images have duplicates and thus have multiple pairs. 2Project website: http://www-personal.umich.edu/~wfchen/depth-in-the-wild. 
neural networks and RGB-D datasets [4, 5] has led to major advances [27, 6, 28, 8, 10, 14]. But the networks in these previous works, with the exception of [14], were trained exclusively using ground-truth metric depth, whereas our approach uses relative depth. Our work is inspired by that of Zoran et al. [14], which proposes to use a deep network to repeatedly classify pairs of points sampled based on superpixel segmentation, and to reconstruct per-pixel metric depth by solving an additional optimization problem. Our approach is different: it consists of a single deep network trained end-to-end that directly predicts per-pixel metric depth; there is no intermediate classification of ordinal relations and as a result no optimization needed to resolve inconsistencies. Learning with Ordinal Relations: Several recent works [29, 30] have used the ordinal relations from the Intrinsic Images in the Wild dataset [18] to estimate surface reflectance. Similar to Zoran et al. [14], Zhou et al. [29] first learn a deep network to classify the ordinal relations between pairs of points and then make them globally consistent through energy minimization. Narihira et al. [30] learn a “lightness potential” network that takes an image patch and predicts the metric reflectance of the center pixel. But this network is applied to only a sparse set of pixels. Although in principle this lightness potential network can be applied to every pixel to produce pixel-wise reflectance, doing so would be quite expensive. Making it fully convolutional (as the authors mentioned in [30]) only solves the problem partially: as long as the lightness potential network has downsampling layers, which is the case in [30], the final output will be downsampled accordingly. Additional resolution augmentation (such as the “shift and stitch” approach [31]) is thus needed. In contrast, our approach completely avoids such issues and directly outputs pixel-wise estimates. Beyond intrinsic images, ordinal relations have been used widely in computer vision and machine learning, including object recognition [32] and learning to rank [33, 34]. 3 Dataset construction We gather images from Flickr. We use random query keywords sampled from an English dictionary and exclude artificial images such as drawings and clip art. To collect annotations of relative depth, we present a crowd worker with an image and two highlighted points (Fig. 3), and ask “which point is closer, point 1, point 2, or hard to tell?” The worker presses a key to respond. How Many Pairs? How many pairs of points should we query per image? We sample just one per image because this maximizes the amount of information from human annotators. Consider the other extreme—querying all possible pairs of points in the same image. This is wasteful because pairs of points in close proximity are likely to have the same relative depth. In other words, querying one more pair from the same image may add less information than querying one more pair from a new image. Thus querying only one pair per image is more cost-effective. Which Pairs? Which two points should we query given an image? The simplest way would be to sample two random points from the 2D plane. But this results in a severe bias that can be easily exploited: if an algorithm simply classifies the lower point in the image to be closer in depth, it will agree with humans 85.8% of the time (Fig. 4). Although this bias is natural, it makes the dataset less useful as a benchmark. 
An alternative is to sample two points uniformly from a random horizontal line, which makes it impossible to use the y image coordinate as a cue. But we find yet another bias: if an algorithm simply classifies the point closer to the center of the image to be closer in depth, it will agree with humans 71.4% of the time. This leads to a third approach: uniformly sample two symmetric points with respect to the center from a random horizontal line (the middle column of Fig. 5). With the symmetry enforced, we are not able to find a simple yet effective rule based purely on image coordinates: the left point is almost equally likely (50.03%) to be closer than the right one. Our final dataset consists of a roughly 50-50 combination of unconstrained pairs and symmetric pairs, which strikes a balance between the need for representing natural scene statistics and the need for performance differentiation. Protocol and Results: We crowdsource the annotations using Amazon Mechanical Turk (AMT). To remove spammers, we insert into all tasks gold-standard images verified by ourselves, and reject workers whose cumulative accuracy on the gold-standard images is below 85%. We assign each query (an image and a point pair) to two workers, and add the query to our dataset if both workers can tell the relative depth and agree with each other; otherwise the query is discarded. Under this protocol, the chance of adding a wrong answer to our dataset is less than 1% as measured on the gold-standard images. We processed 1.24M images on AMT and obtained 0.5M valid answers (both workers can tell the relative depth and agree with each other). Among the valid answers, 261K are for unconstrained pairs and 240K are for symmetric pairs. For unconstrained pairs, it takes a median of 3.4 seconds for a worker to decide, and two workers agree on the relative depth 52% of the time; for symmetric pairs, the numbers are 3.8s and 32%. These numbers suggest that the symmetric pairs are indeed harder. Fig. 5 presents examples of different kinds of queries. 4 Learning with relative depth How do we learn to predict metric depth given only annotations of relative depth? Zoran et al. [14] first learn a classifier to predict ordinal relations between centers of superpixels, then reconcile the relations to recover depth using energy minimization, and finally interpolate within each superpixel to produce per-pixel depth. We take a simpler approach. The idea is that any image-to-depth algorithm would have to compute a function that maps an image to pixel-wise depth. Why not represent this function as a neural network and learn it end to end? We just need two ingredients: (1) a network design that outputs the same resolution as the input, and (2) a way to train the network with annotations of relative depth. Network Design: Networks that output the same resolution as the input are plentiful, including the recent designs for depth estimation [8, 35] and those for semantic segmentation [36] and edge detection [37]. A common element is processing and passing information across multiple scales. In this work, we use a variant of the recently introduced “hourglass” network (Fig. 6), which has been used to achieve state-of-the-art results on human pose estimation [38]. It consists of a series of convolutions (using a variant of the inception [39] module) and downsampling, followed by a series of convolutions and upsampling, interleaved with skip connections that add back features from high resolutions. 
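The following minimal PyTorch sketch illustrates this encoder-decoder-with-skip-connections idea; the class name MiniHourglass, the channel counts, and the plain convolutions are illustrative stand-ins rather than the exact inception-style modules and architecture of [38, 39], and PyTorch itself is our choice of framework, not one stated in the paper.

```python
import torch
import torch.nn as nn

class MiniHourglass(nn.Module):
    """Toy hourglass-style network: downsample, upsample, and add back
    skip features at matching resolutions (a sketch, not the paper's
    exact architecture)."""
    def __init__(self, channels=64):
        super().__init__()
        self.stem  = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.down1 = nn.Sequential(nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.down2 = nn.Sequential(nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.mid   = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.up    = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.head  = nn.Conv2d(channels, 1, 3, padding=1)  # one depth value per pixel

    def forward(self, x):
        s0 = self.stem(x)      # skip features at full resolution
        s1 = self.down1(s0)    # skip features at 1/2 resolution
        s2 = self.down2(s1)    # bottleneck at 1/4 resolution
        y = self.mid(s2)
        y = self.up(y) + s1    # add back 1/2-resolution features
        y = self.up(y) + s0    # add back full-resolution features
        return self.head(y)    # depth map with the same H x W as the input

# Example: a 320x240 image maps to a 320x240 pixel-wise depth prediction.
net = MiniHourglass()
depth = net(torch.randn(1, 3, 240, 320))  # shape (1, 1, 240, 320)
```

The property that matters for the next step is simply that the output is pixel-wise and at the input resolution, so a loss can be attached to arbitrary pairs of pixel locations.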
The symmetric shape of the network resembles an “hourglass”, hence the name. We refer the reader to [38] for a comparison of this design to related work. For our purpose, this particular choice is not essential, as the various designs mainly differ in how information from different scales is dispersed and aggregated, and it is possible that all of them can work equally well for our task. Loss Function: How do we train the network using only ordinal annotations? All we need is a loss function that encourages the predicted depth map to agree with the ground-truth ordinal relations. Specifically, consider a training image I and its K queries R = {(i_k, j_k, r_k)}, k = 1, ..., K, where i_k is the location of the first point in the k-th query, j_k is the location of the second point in the k-th query, and r_k ∈ {+1, −1, 0} is the ground-truth depth relation between i_k and j_k: closer (+1), further (−1), and equal (0). Let z be the predicted depth map and z_{i_k}, z_{j_k} be the depths at points i_k and j_k. We define a loss function

L(I, R, z) = \sum_{k=1}^{K} \psi_k(I, i_k, j_k, r_k, z),   (1)

where \psi_k(I, i_k, j_k, r_k, z) is the loss for the k-th query:

\psi_k(I, i_k, j_k, r_k, z) =
\begin{cases}
\log\left(1 + \exp(-z_{i_k} + z_{j_k})\right), & r_k = +1, \\
\log\left(1 + \exp(z_{i_k} - z_{j_k})\right), & r_k = -1, \\
(z_{i_k} - z_{j_k})^2, & r_k = 0.
\end{cases}   (2)

This is essentially a ranking loss: it encourages a small difference between depths if the ground-truth relation is equality; otherwise it encourages a large difference. Novelty of Our Approach: Our novelty lies in the combination of a deep network that does pixel-wise prediction and a ranking loss placed on the pixel-wise prediction. A deep network that does pixel-wise prediction is not new, nor is a ranking loss. But to the best of our knowledge, such a combination has not been proposed before, and in particular not for estimating depth. 5 Experiments on NYU Depth We evaluate our method using NYU Depth [4], which consists of indoor scenes with ground-truth Kinect depth. We use the same setup as that of Zoran et al. [14]: point pairs are sampled from the training images (the subset of NYU Depth consisting of 795 images with semantic labels) using superpixel segmentation and their ground-truth ordinal relations are generated by comparing the ground-truth Kinect depth; the same procedure is applied to the test set to generate the point pairs for evaluation (around 3K pairs per image). We use the same training and test data as Zoran et al. [14]. Like the system by Zoran et al. [14], our network predicts one of the three ordinal relations on the test pairs: equal (=), closer (<), or farther (>). We report WKDR, the weighted disagreement rate between the predicted ordinal relations and the ground-truth ordinal relations (WKDR stands for “Weighted Kinect Disagreement Rate”; the weight is set to 1 as in [14]). We also report WKDR= (disagreement rate on pairs whose ground-truth relations are =) and WKDR≠ (disagreement rate on pairs whose ground-truth relations are < or >). Since two ground-truth depths are almost never exactly the same, there needs to be a relaxed definition of equality. Zoran et al. [14] define two points to have equal depths if the ratio between their ground-truth depths is within a pre-determined range. Our network predicts an equality relation if the depth difference is smaller than a threshold τ. The choice of this threshold will result in different values for the error metrics (WKDR, WKDR=, WKDR≠): if τ is too small, most pairs will be predicted to be unequal and the error metric on equality relations (WKDR=) will be large; if τ is too big, most pairs will be predicted to be equal and the error metric on inequality relations (WKDR≠) will be large. 
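To make Eq. (2) and the τ-thresholded ordinal prediction concrete, here is a minimal PyTorch sketch; the function names, batching convention, and the helper predict_relation are our own illustration rather than code from the paper. It uses softplus(x) = log(1 + exp(x)) for the two inequality branches of Eq. (2).

```python
import torch
import torch.nn.functional as F

def relative_depth_loss(z_i, z_j, r):
    """Ranking loss of Eq. (2), summed over a batch of K point pairs.
    z_i, z_j : predicted depths at the two query points, shape (K,)
    r        : ground-truth relations in {+1, -1, 0}, shape (K,)
    """
    diff = z_i - z_j
    loss_pos = F.softplus(-diff)   # r = +1: log(1 + exp(-z_i + z_j))
    loss_neg = F.softplus(diff)    # r = -1: log(1 + exp( z_i - z_j))
    loss_eq  = diff ** 2           # r =  0: (z_i - z_j)^2
    loss = torch.where(r == 1, loss_pos,
           torch.where(r == -1, loss_neg, loss_eq))
    return loss.sum()

def predict_relation(z_i, z_j, tau):
    """Test-time ordinal prediction: equal (0) if the predicted depth
    difference is below the threshold tau, otherwise +1 or -1 following
    the same sign convention as the loss above."""
    diff = z_i - z_j
    return torch.where(diff.abs() < tau,
                       torch.zeros_like(diff),
                       torch.sign(diff))
```

Since the loss constrains only differences of predicted depths, a threshold τ on the predicted difference is needed at test time to decide when two points should be reported as equal.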
We choose the threshold τ that minimizes the maximum of the three error metrics on a validation set held out from the training set. Tab. 2 compares our network (ours) versus that of Zoran et al. [14]. Our network is trained with the same data (the code released by Zoran et al. [14] indicates that they train with a random subset of 800 pairs per image instead of all the pairs; we follow the same procedure and only use a random subset of 800 pairs per image) but outperforms [14] on all three metrics. Following [14], we also compare with the state-of-the-art image-to-depth system by Eigen et al. [8], which is trained on pixel-wise ground-truth metric depth from the full NYU Depth training set (220K images). To compare fairly, we give our network access to the full NYU Depth training set. In addition, we remove the limit of 800 point pairs per training image placed by Zoran et al. and use all available pairs. The results in Tab. 2 show that our network (ours_full) achieves superior performance in estimating depth ordering. Granted, this comparison is not entirely fair because [8] is not optimized for predicting ordinal relations. But this comparison is still significant in that it shows that we can train on only relative depth and rival the state-of-the-art system in estimating depth up to monotonic transformations. (Table 2 footnote: computed using our own implementation based on the definition given in [35].) In Figure 8 we show qualitative results on the same example images used by Zoran et al. [14]. We see that although imperfect, the recovered metric depth by our method is overall reasonable and qualitatively similar to that of the state-of-the-art system [8] trained on ground-truth metric depth. Metric Error Measures. Our network is trained with relative depth, so it is unsurprising that it does well in estimating depth up to ordering. But how good is the estimated depth in terms of metric error? We thus evaluate conventional error measures such as RMSE (the root mean squared error), which compares the absolute depth values to the ground truths. Because our network is trained only on relative depth and does not know the range of the ground-truth depth values, to make these error measures meaningful we normalize the depth predicted by our network such that the mean and standard deviation are the same as those of the mean depth map of the training set. Tab. 2 reports the results. We see that under these metric error measures our network still outperforms the method of Zoran et al. [14]. In addition, while our metric error is worse than the current state-of-the-art, it is comparable to some of the earlier methods (e.g. [1]) that have access to ground-truth metric depth. Superpixel Sampling versus Random Sampling. To compare with the method by Zoran et al. [14], we train our network using the same point pairs, which are pairs of centers of superpixels (Fig. 9). But is superpixel segmentation necessary? That is, can we simply train with randomly sampled points? To answer this question, we train our network with randomly sampled points. We constrain the distance between the two points to be between 13 and 19 pixels (out of a 320×240 image) such that the distance is similar to that between the centers of neighboring superpixels. The results are included in Tab. 2. We see that using 3.3k pairs per image (rand_3K) already achieves comparable performance to the method by Zoran et al. [14]. 
Using twice or four times as many pairs (rand_6K, rand_12K) further improves performance and significantly outperforms [14]. It is worth noting that in all these experiments the test pairs are still from superpixels, so training on random pairs incurs a mismatch between training and testing distributions. Yet we can still achieve comparable performance despite this mismatch. This shows that our method can indeed operate without superpixel segmentation. 6 Experiments on Depth in the Wild In this section we experiment on our new Depth in the Wild (DIW) dataset. We split the dataset into 421K training images and 74K test images (4.38% of images are duplicates downloaded using different query keywords and have more than one pair of points; we have removed test images that have duplicates in the training set). We report the WHDR (Weighted Human Disagreement Rate; all weights are 1, and a pair of points can only have two possible ordinal relations, farther or closer, for DIW) of 5 methods in Tab. 3: (1) the state-of-the-art system by Eigen et al. [8] trained on full NYU Depth; (2) our network trained on full NYU Depth (Ours_Full); (3) our network pre-trained on full NYU Depth and fine-tuned on DIW (Ours_NYU_DIW); (4) our network trained from scratch on DIW (Ours_DIW); (5) a baseline method that uses only the location of the query points: classify the lower point to be closer or guess randomly if the two points are at the same height (Query_Location_Only). We see that the best result is achieved by pre-training on NYU Depth and fine-tuning on DIW. Training only on NYU Depth (Ours_Full and Eigen) does not work as well, which is expected because NYU Depth only has indoor scenes. Training from scratch on DIW achieves slightly better performance than the networks trained on only NYU Depth despite using much less supervision. Pre-training on NYU Depth and fine-tuning on DIW leverages all available data and achieves the best performance. As shown in Fig. 10, the quality of predicted depth is notably better with fine-tuning on DIW, especially for outdoor scenes. These results suggest that it is promising to combine existing RGB-D data and crowdsourced annotations to advance the state-of-the-art in single-image depth estimation. 7 Conclusions We have studied single-image depth perception in the wild, recovering depth from a single image taken in unconstrained settings. We have introduced a new dataset consisting of images in the wild annotated with relative depth and proposed a new algorithm that learns to estimate metric depth supervised by relative depth. We have shown that our algorithm outperforms prior art and that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild. Acknowledgments This work is partially supported by the National Science Foundation under Grant No. 1617767.
1. What is the focus of the paper regarding depth estimation? 2. What are the key contributions of the paper, particularly in terms of the dataset and loss function? 3. How does the reviewer assess the potential impact and novelty of the proposed approach? 4. Are there any concerns or suggestions regarding the presentation of the experimental evaluation? 5. Do you have any minor comments or suggestions for improving the paper's clarity?
Review
Review This paper tackles the problem of estimating depth for images "in the wild". Specifically, current depth datasets have certain biases, e.g., only indoor scenes [4] or urban street environments [5]. The paper introduces a new dataset with a much larger variety of scenes, where depth annotation comes as relative depth between pairs of random points. In addition, the authors introduce a method to use the provided annotation in order to make a pixel-wise depth prediction using a neural network architecture. Overall, I find the paper interesting and with high potential for future work. The main contributions are a) the dataset and b) the loss for using the provided annotations (see more details below). Also, the experimental evaluation is thorough and captures well the comparison with other methods and the relations with other depth datasets. The dataset itself is very useful, as relative relations between points have already been used in practice for different tasks [18, 14, 29, 30]. Moreover, a depth dataset with diverse scenes is a good step towards more general single-image depth estimation methods. Another positive of the paper is the introduction of the loss function. With this loss, it is possible to obtain direct pixel-wise estimates of depth using a CNN architecture. It would be interesting to see this loss applied also to other tasks such as intrinsic image decomposition. Only some minor comments: In the experiment section, it is confusing to present results from the same table in different parts of the text. For example, the rand_* results are discussed in a separate part from the other results of the same table, making the text difficult to follow. In several cases, the authors use questions for making a point (lines 24, 102, 109, 157, 217). Overdoing it makes the text too informal.
NIPS
1. What are the strengths and weaknesses of the paper's contribution to 3D reconstruction from a single image? 2. How does the reviewer assess the quality and generality of the new dataset created by the authors? 3. What are the limitations of the new dataset regarding its applicability to existing methods? 4. Why did the authors choose to label only a single pair of points in each image, and what are the potential advantages of labeling more points? 5. How might an active framework for point annotation improve the efficiency and effectiveness of the dataset construction process? 6. Is there any concern about the novelty of the proposed method for learning from ordinal point relations?
Review
Review In this paper, the authors collected a new dataset for training 3D reconstruction from a single image. Unlike previous datasets that recorded the true 3D structure of the scenes, this dataset only records the relative depth of a pair of points in each image. This is because, compared to annotating the absolute depth of the environment, it is easier for human beings to judge the relative depth of two points in the scene. In addition, the authors designed a new method that uses annotations of relative depth to learn a model for single-image 3D reconstruction. First, since the new dataset is one of the key contributions of this paper, I suggest the authors provide more statistics about the dataset. For example, among the 495K images in the dataset, how many images describe large scenes? How many images focus on certain small objects? Among all the scene images, how many describe indoor scenes, outdoor urban scenes, and outdoor wild scenes? Among all the object images, how many describe man-made objects and animals? Based on such statistics, we can evaluate the generality of the new dataset. Second, the new dataset can only be applied to methods that can learn from ordinal relations between two points in an image. Thus, most existing methods cannot use this dataset. Third, I do not understand why the authors only label a single pair of points in each image. I admit that labeling two pairs of points in two different images may provide more information than labeling two pairs of points in a single image, because the two pairs of points in a single image may share similar relative depth. However, there are also some advantages to annotating more pairs of points in a single image. 1) Labeling more points may greatly decrease the structural uncertainty of the image, thus providing more reliable training images. It is possible that a small number of reliable training images may contribute more to model learning than a large number of unreliable training images. 2) Labeling more points in a single image may make this dataset applicable to more methods, because using only one pair of annotations per image is a very strict constraint. Fourth, 50% of the point pairs for annotation were randomly selected, and the other 50% were symmetric about the image center. Such a design for point annotation is quite arbitrary. I suggest the authors design a loss that enables an active framework for point annotation, using the loss to identify the most informative point pairs in an image. This would be a more efficient way to construct the dataset. Fifth, the proposed method for learning from ordinal point relations has only minor novelty.
NIPS
Title Single-Image Depth Perception in the Wild Abstract This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset “Depth in the Wild” consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild. Deep Network with Pixel-wise Prediction Metric Depth RGB-D Data Relative Depth Annotations 1 Introduction Depth from a single RGB image is a fundamental problem in vision. Recent years have seen rapid progress thanks to data-driven methods [1, 2, 3], in particular, deep neural networks trained on large RGB-D datasets [4, 5, 6, 7, 8, 9, 10]. But such advances have yet to broadly impact higher-level tasks. One reason is that many higher-level tasks must operate on images “in the wild”—images taken with no constraints on cameras, locations, scenes, and objects—but the RGB-D datasets used to train and evaluate image-to-depth systems are constrained in one way or another. Current RGB-D datasets were collected by depth sensors [4, 5], which are limited in range and resolution, and often fail on specular or transparent objects [11]. In addition, because there is no Flickr for RGB-D images, researchers have to manually capture the images. As a result, current RGB-D datasets are limited in the diversity of scenes. For example, NYU depth [4] consists mostly of indoor scenes with no human presence; KITTI [5] consists mostly of road scenes captured from a car; Make3D [3, 12] consists mostly of outdoor scenes of the Stanford campus (Figure. 2). While these datasets are pivotal in driving research, it is unclear whether systems trained on them can generalize to images in the wild. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Is it possible to collect ground-truth depth for images in the wild? Using depth sensors in unconstrained settings is not yet feasible. Crowdsourcing seems viable, but humans are not good at estimating metric depth, or 3D metric structure in general [13]. In fact, metric depth from a single image is fundamentally ambiguous: a tree behind a house can be slightly bigger but further away, or slightly smaller but closer—the absolute depth difference between the house and the tree cannot be uniquely determined. Furthermore, even in cases where humans can estimate metric depth, it is unclear how to elicit the values from them. But humans are better at judging relative depth [13]: “Is point A closer than point B?” is often a much easier question for humans. Recent work by Zoran et al. [14] shows that it is possible to learn to estimate metric depth using only annotations of relative depth. Although such metric depth estimates are only accurate up to monotonic transformations, they may well be sufficiently useful for high-level tasks, especially for occlusion reasoning. The seminal results by Zoran et al. point to two fronts for further progress: (1) collecting a large amount of relative depth annotations for images in the wild and (2) improving the algorithms that learn from annotations of relative depth. In this paper, we make contributions on both fronts. 
Our first contribution is a new dataset called “Depth in the Wild” (DIW). It consists of 495K diverse images, each annotated with randomly sampled points and their relative depth. We sample one pair of points per image to minimize the redundancy of annotation 1. To the best of our knowledge this is the first large-scale dataset consisting of images in the wild with relative depth annotations. We demonstrate that this dataset can be used as an evaluation benchmark as well as a training resource 2. Our second contribution is a new algorithm for learning to estimate metric depth using only annotations of relative depth. Our algorithm not only significantly outperforms that of Zoran et al. [14], but is also simpler. The algorithm of Zoran et al. [14] first learns a classifier to predict the ordinal relation between two points in an image. Given a new image, this classifier is repeatedly applied to predict the ordinal relations between a sparse set of point pairs (mostly between the centers of neighboring superpixels). The algorithm then reconstructs depth from the predicted ordinal relations by solving a constrained quadratic optimization that enforces additional smoothness constraints and reconciles potentially inconsistent ordinal relations. Finally, the algorithm estimates depth for all pixels assuming a constant depth within each superpixel. In contrast, our algorithm consists of a single deep network that directly predicts pixel-wise depth (Fig. 1). The network takes an entire image as input, consists of off-the-shelf components, and can be trained entirely with annotations of relative depth. The novelty of our approach lies in the combination of two ingredients: (1) a multi-scale deep network that produces pixel-wise prediction of metric depth and (2) a loss function using relative depth. Experiments show that our method produces pixel-wise depth that is more accurately ordered, outperforming not only the method by Zoran et al. [14] but also the state-of-the-art image-to-depth system by Eigen et al. [8] trained with ground-truth metric depth. Furthermore, combing our new algorithm, our new dataset, and existing RGB-D data significantly improves single-image depth estimation in the wild. 2 Related work RGB-D Datasets: Prior work on constructing RGB-D datasets has relied on either Kinect [15, 4, 16, 17] or LIDAR [5, 3]. Existing Kinect-based datasets are limited to indoor scenes; existing LIDARbased datasets are biased towards scenes of man-made structures [5, 3]. In contrast, our dataset covers a much wider variety of scenes; it can be easily expanded with large-scale crowdsourcing and the virually umlimited Internet images. Intrinsic Images in the Wild: Our work draws inspiration from Intrinsic Images in the Wild [18], a seminal work that crowdsources annotations of relative reflectance on unconstrained images. Our work differs in goals as well as in several design decisions. First, we sample random points instead of centers of superpixels, because unlike reflectance, it is unreasonable to assume a constant depth within a superpixel. Second, we sample only one pair of points per image instead of many to maximize the value of human annotations. Depth from a Single Image: Image-to-depth is a long-standing problem with a large body of literature [19, 20, 12, 1, 6, 7, 8, 9, 10, 19, 21, 22, 23, 24, 25, 26]. The recent convergence of deep 1A small percentage of images have duplicates and thus have multiple pairs. 2Project website: http://www-personal.umich.edu/~wfchen/depth-in-the-wild. 
neural networks and RGB-D datasets [4, 5] has led to major advances [27, 6, 28, 8, 10, 14]. But the networks in these previous works, with the exception of [14], were trained exclusively using ground-truth metric depth, whereas our approach uses relative depth. Our work is inspired by that of Zoran et al. [14], which proposes to use a deep network to repeatedly classify pairs of points sampled based on superpixel segmentation, and to reconstruct per-pixel metric depth by solving an additional optimization problem. Our approach is different: it consists of a single deep network trained end-to-end that directly predicts per-pixel metric depth; there is no intermediate classification of ordinal relations and as a result no optimization needed to resolve inconsistencies. Learning with Ordinal Relations: Several recent works [29, 30] have used the ordinal relations from the Intrinsic Images in the Wild dataset [18] to estimate surface refletance. Similar to Zoran et al. [14], Zhou et al. [29] first learn a deep network to classify the ordinal relations between pairs of points and then make them globally consistent through energy minimization. Narihira et al. [30] learn a “lightness potential” network that takes an image patch and predicts the metric reflectance of the center pixel. But this network is applied to only a sparse set of pixels. Although in principle this lightness potential network can be applied to every pixel to produce pixel-wise reflectance, doing so would be quite expensive. Making it fully convolutional (as the authors mentioned in [30]) only solves it partially: as long as the lightness potential network has downsampling layers, which is the case in [30], the final output will be downsampled accordingly. Additional resolution augmentation (such as the “shift and stitch” approach [31]) is thus needed. In contrast, our approach completely avoids such issues and directly outputs pixel-wise estimates. Beyond intrinsic images, ordinal relations have been used widely in computer vision and machine learning, including object recognition [32] and learning to rank [33, 34]. 3 Dataset construction We gather images from Flickr. We use random query keywords sampled from an English dictionary and exclude artificial images such as drawings and clip arts. To collect annotations of relative depth, we present a crowd worker an image and two highlighted points (Fig. 3), and ask “which point is closer, point 1, point 2, or hard to tell?” The worker presses a key to respond. How Many Pairs? How many pairs of points should we query per image? We sample just one per image because this maximizes the amount of information from human annotators. Consider the other extreme—querying all possible pairs of points in the same image. This is wasteful because pairs of points in close proximity are likely to have the same relative depth. In other words, querying one more pair from the same image may add less information than querying one more pair from a new image. Thus querying only one pair per image is more cost-effective. Which Pairs? Which two points should we query given an image? The simplest way would be to sample two random points from the 2D plane. But this results in a severe bias that can be easily exploited: if an algorithm simply classifies the lower point in the image to be closer in depth, it will agree with humans 85.8% of the time (Fig. 4). Although this bias is natural, it makes the dataset less useful as a benchmark. 
An alternative is to sample two points uniformly from a random horizontal line, which makes it impossible to use the y image coordinate as a cue. But we find yet another bias: if an algorithm simply classifies the point closer to the center of the image to be closer in depth, it will agree with humans 71.4% of the time. This leads to a third approach: uniformly sample two symmetric points with respect to the center from a random horizontal line (the middle column of Fig. 5). With the symmetry enforced, we are not able to find a simple yet effective rule based purely on image coordinates: the left point is almost equally likely (50.03%) to be closer than the right one. Our final dataset consists of a roughly 50-50 combination of unconstrained pairs and symmetric pairs, which strikes a balance between the need for representing natural scene statistics and the need for performance differentiation (a small code sketch of this sampling scheme is given below). Protocol and Results: We crowdsource the annotations using Amazon Mechanical Turk (AMT). To remove spammers, we insert into all tasks gold-standard images verified by ourselves, and reject workers whose cumulative accuracy on the gold-standard images is below 85%. We assign each query (an image and a point pair) to two workers, and add the query to our dataset if both workers can tell the relative depth and agree with each other; otherwise the query is discarded. Under this protocol, the chance of adding a wrong answer to our dataset is less than 1% as measured on the gold-standard images. We processed 1.24M images on AMT and obtained 0.5M valid answers (both workers can tell the relative depth and agree with each other). Among the valid answers, 261K are for unconstrained pairs and 240K are for symmetric pairs. For unconstrained pairs, it takes a median of 3.4 seconds for a worker to decide, and two workers agree on the relative depth 52% of the time; for symmetric pairs, the numbers are 3.8s and 32%. These numbers suggest that the symmetric pairs are indeed harder. Fig. 5 presents examples of different kinds of queries. 4 Learning with relative depth How do we learn to predict metric depth given only annotations of relative depth? Zoran et al. [14] first learn a classifier to predict ordinal relations between centers of superpixels, then reconcile the relations to recover depth using energy minimization, and finally interpolate within each superpixel to produce per-pixel depth. We take a simpler approach. The idea is that any image-to-depth algorithm would have to compute a function that maps an image to pixel-wise depth. Why not represent this function as a neural network and learn it from end to end? We just need two ingredients: (1) a network design that outputs the same resolution as the input, and (2) a way to train the network with annotations of relative depth. Network Design: Networks that output the same resolution as the input are plentiful, including the recent designs for depth estimation [8, 35] and those for semantic segmentation [36] and edge detection [37]. A common element is processing and passing information across multiple scales. In this work, we use a variant of the recently introduced “hourglass” network (Fig. 6), which has been used to achieve state-of-the-art results on human pose estimation [38]. It consists of a series of convolutions (using a variant of the inception [39] module) and downsampling, followed by a series of convolutions and upsampling, interleaved with skip connections that add back features from high resolutions. 
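As a concrete illustration of the pair-sampling protocol above, here is a minimal Python sketch of our own (not code from the paper; the function and variable names are hypothetical) that draws either an unconstrained pair or a symmetric pair:

import random

def sample_pair(width, height, symmetric):
    # Returns two (x, y) points for an image of the given size.
    if symmetric:
        # Symmetric pair: two points mirrored about the vertical centre line,
        # both lying on the same random horizontal line.
        y = random.randrange(height)
        offset = random.randrange(1, width // 2)
        return (width // 2 - offset, y), (width // 2 + offset, y)
    # Unconstrained pair: two independent random points in the 2D plane.
    p1 = (random.randrange(width), random.randrange(height))
    p2 = (random.randrange(width), random.randrange(height))
    return p1, p2

In the final dataset, roughly half of the queries would be drawn with symmetric=True and half with symmetric=False, matching the 50-50 mix described above. Returning to the network design: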
The symmetric shape of the network resembles an “hourglass”, hence the name. We refer the reader to [38] for a comparison of the design to related work. For our purpose, this particular choice is not essential, as the various designs mainly differ in how information from different scales is dispersed and aggregated, and it is possible that all of them can work equally well for our task. Loss Function: How do we train the network using only ordinal annotations? All we need is a loss function that encourages the predicted depth map to agree with the ground-truth ordinal relations. Specifically, consider a training image $I$ and its $K$ queries $R = \{(i_k, j_k, r_k)\}$, $k = 1, \ldots, K$, where $i_k$ is the location of the first point in the $k$-th query, $j_k$ is the location of the second point in the $k$-th query, and $r_k \in \{+1, -1, 0\}$ is the ground-truth depth relation between $i_k$ and $j_k$: closer ($+1$), further ($-1$), and equal ($0$). Let $z$ be the predicted depth map and $z_{i_k}$, $z_{j_k}$ be the depths at points $i_k$ and $j_k$. We define a loss function
$$L(I, R, z) = \sum_{k=1}^{K} \psi_k(I, i_k, j_k, r_k, z), \qquad (1)$$
where $\psi_k(I, i_k, j_k, r_k, z)$ is the loss for the $k$-th query,
$$\psi_k(I, i_k, j_k, r_k, z) = \begin{cases} \log\left(1 + \exp(-z_{i_k} + z_{j_k})\right), & r_k = +1 \\ \log\left(1 + \exp(z_{i_k} - z_{j_k})\right), & r_k = -1 \\ (z_{i_k} - z_{j_k})^2, & r_k = 0. \end{cases} \qquad (2)$$
This is essentially a ranking loss: it encourages a small difference between depths if the ground-truth relation is equality; otherwise it encourages a large difference. Novelty of Our Approach: Our novelty lies in the combination of a deep network that does pixel-wise prediction and a ranking loss placed on the pixel-wise prediction. A deep network that does pixel-wise prediction is not new, nor is a ranking loss. But to the best of our knowledge, such a combination has not been proposed before, and in particular not for estimating depth. 5 Experiments on NYU Depth We evaluate our method using NYU Depth [4], which consists of indoor scenes with ground-truth Kinect depth. We use the same setup as that of Zoran et al. [14]: point pairs are sampled from the training images (the subset of NYU Depth consisting of 795 images with semantic labels) using superpixel segmentation and their ground-truth ordinal relations are generated by comparing the ground-truth Kinect depth; the same procedure is applied to the test set to generate the point pairs for evaluation (around 3K pairs per image). We use the same training and test data as Zoran et al. [14]. Like the system by Zoran et al. [14], our network predicts one of the three ordinal relations on the test pairs: equal (=), closer (<), or farther (>). We report WKDR, the weighted disagreement rate between the predicted ordinal relations and ground-truth ordinal relations (see Footnote 3). We also report WKDR= (disagreement rate on pairs whose ground-truth relations are =) and WKDR≠ (disagreement rate on pairs whose ground-truth relations are < or >). Since two ground-truth depths are almost never exactly the same, there needs to be a relaxed definition of equality. Zoran et al. [14] define two points to have equal depths if the ratio between their ground-truth depths is within a pre-determined range. Our network predicts an equality relation if the depth difference is smaller than a threshold τ. The choice of this threshold will result in different values for the error metrics (WKDR, WKDR=, WKDR≠): if τ is too small, most pairs will be predicted to be unequal and the error metric on equality relations (WKDR=) will be large; if τ is too big, most pairs will be predicted to be equal and the error metric on inequality relations (WKDR≠) will be large. 
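To make Eq. (1)-(2) concrete, the following is a minimal PyTorch-style sketch of the ranking loss (our own illustration under the sign convention of Eq. (2), not the authors' released code; the function name and argument layout are hypothetical):

import torch
import torch.nn.functional as F

def relative_depth_loss(z, pts_i, pts_j, r):
    # z: (H, W) predicted depth map.
    # pts_i, pts_j: (K, 2) long tensors of (row, col) point locations i_k and j_k.
    # r: (K,) tensor with values +1 (closer), -1 (further), 0 (equal), as in Eq. (2).
    z_i = z[pts_i[:, 0], pts_i[:, 1]]
    z_j = z[pts_j[:, 0], pts_j[:, 1]]
    d = z_i - z_j
    # For r = +1 or -1 the per-query loss is log(1 + exp(-r * d));
    # softplus(-r * d) is the numerically stable form of that expression.
    ranking = F.softplus(-r.float() * d)
    equality = d ** 2
    return torch.where(r == 0, equality, ranking).sum()

The loss is summed over the K queries of a training image and backpropagated through the pixel-wise prediction, so no intermediate classification of ordinal relations or post-hoc optimization is needed. Returning to the choice of the equality threshold τ: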
We choose the threshold τ that minimizes the maximum of the three error metrics on a validation set held out from the training set. Tab. 2 compares our network (ours) versus that of Zoran et al. [14]. Our network is trained with the same data (see Footnote 4) but outperforms [14] on all three metrics. Following [14], we also compare with the state-of-the-art image-to-depth system by Eigen et al. [8], which is trained on pixel-wise ground-truth metric depth from the full NYU Depth training set (220K images). To compare fairly, we give our network access to the full NYU Depth training set. In addition, we remove the limit of 800 point pairs per training image placed by Zoran et al. and use all available pairs. The results in Tab. 2 show that our network (ours_full) achieves superior performance in estimating depth ordering. Granted, this comparison is not entirely fair because [8] is not optimized for predicting ordinal relations. But this comparison is still significant in that it shows that we can train on only relative depth and rival the state-of-the-art system in estimating depth up to monotonic transformations. Table footnote a: Computed using our own implementation based on the definition given in [35]. Footnote 3: WKDR stands for “Weighted Kinect Disagreement Rate”; the weight is set to 1 as in [14]. Footnote 4: The code released by Zoran et al. [14] indicates that they train with a random subset of 800 pairs per image instead of all the pairs. We follow the same procedure and only use a random subset of 800 pairs per image. In Fig. 8 we show qualitative results on the same example images used by Zoran et al. [14]. We see that although imperfect, the recovered metric depth by our method is overall reasonable and qualitatively similar to that by the state-of-the-art system [8] trained on ground-truth metric depth. Metric Error Measures. Our network is trained with relative depth, so it is unsurprising that it does well in estimating depth up to ordering. But how good is the estimated depth in terms of metric error? We thus evaluate conventional error measures such as RMSE (the root mean squared error), which compares the absolute depth values to the ground truths. Because our network is trained only on relative depth and does not know the range of the ground-truth depth values, to make these error measures meaningful we normalize the depth predicted by our network such that the mean and standard deviation are the same as those of the mean depth map of the training set. Tab. 2 reports the results. We see that under these metric error measures our network still outperforms the method of Zoran et al. [14]. In addition, while our metric error is worse than the current state-of-the-art, it is comparable to some of the earlier methods (e.g. [1]) that have access to ground-truth metric depth. Superpixel Sampling versus Random Sampling. To compare with the method by Zoran et al. [14], we train our network using the same point pairs, which are pairs of centers of superpixels (Fig. 9). But is superpixel segmentation necessary? That is, can we simply train with randomly sampled points? To answer this question, we train our network with randomly sampled points. We constrain the distance between the two points to be between 13 and 19 pixels (out of a 320×240 image) such that the distance is similar to that between the centers of neighboring superpixels. The results are included in Tab. 2. We see that using 3.3k pairs per image (rand_3K) already achieves comparable performance to the method by Zoran et al. [14]. 
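As an aside, the threshold selection described at the start of this section can be sketched as follows (a NumPy illustration of our own, assuming the convention of Eq. (2) that a larger predicted difference z_i - z_j means the first point is closer; all names are hypothetical):

import numpy as np

def wkdr_metrics(diff, r_true, tau):
    # diff: predicted differences z_i - z_j on held-out validation pairs.
    # r_true: ground-truth relations in {+1, -1, 0}. With all weights set to 1,
    # WKDR reduces to a plain disagreement rate.
    r_hat = np.where(np.abs(diff) < tau, 0, np.sign(diff)).astype(int)
    eq = (r_true == 0)
    wkdr = np.mean(r_hat != r_true)
    wkdr_eq = np.mean(r_hat[eq] != r_true[eq])       # errors on "=" pairs
    wkdr_neq = np.mean(r_hat[~eq] != r_true[~eq])    # errors on "<" / ">" pairs
    return wkdr, wkdr_eq, wkdr_neq

def pick_tau(diff, r_true, candidates):
    # Pick the tau that minimizes the maximum of the three error metrics.
    return min(candidates, key=lambda t: max(wkdr_metrics(diff, r_true, t)))

Returning to the random-sampling experiments: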
Using twice or four times as many pairs (rand_6K, rand_12K) further improves performance and significantly outperforms [14]. It is worth noting that in all these experiments the test pairs are still from superpixels, so training on random pairs incurs a mismatch between training and testing distributions. Yet we can still achieve comparable performance despite this mismatch. This shows that our method can indeed operate without superpixel segmentation. 6 Experiments on Depth in the Wild In this section we experiment on our new Depth in the Wild (DIW) dataset. We split the dataset into 421K training images and 74K test images (see Footnote 5). We report the WHDR (Weighted Human Disagreement Rate, see Footnote 6) of five methods in Tab. 3: (1) the state-of-the-art system by Eigen et al. [8] trained on full NYU Depth; (2) our network trained on full NYU Depth (Ours_Full); (3) our network pre-trained on full NYU Depth and fine-tuned on DIW (Ours_NYU_DIW); (4) our network trained from scratch on DIW (Ours_DIW); (5) a baseline method that uses only the location of the query points: classify the lower point to be closer or guess randomly if the two points are at the same height (Query_Location_Only). We see that the best result is achieved by pre-training on NYU Depth and fine-tuning on DIW. Training only on NYU Depth (Ours_NYU and Eigen) does not work as well, which is expected because NYU Depth only has indoor scenes. Training from scratch on DIW achieves slightly better performance than training on only NYU Depth, despite using much less supervision. Pre-training on NYU Depth and fine-tuning on DIW leverages all available data and achieves the best performance. As shown in Fig. 10, the quality of predicted depth is notably better with fine-tuning on DIW, especially for outdoor scenes. These results suggest that it is promising to combine existing RGB-D data and crowdsourced annotations to advance the state of the art in single-image depth estimation. Footnote 5: 4.38% of images are duplicates downloaded using different query keywords and have more than one pair of points. We have removed test images that have duplicates in the training set. Footnote 6: All weights are 1. A pair of points can only have two possible ordinal relations (farther or closer) for DIW. 7 Conclusions We have studied single-image depth perception in the wild, recovering depth from a single image taken in unconstrained settings. We have introduced a new dataset consisting of images in the wild annotated with relative depth and proposed a new algorithm that learns to estimate metric depth supervised by relative depth. We have shown that our algorithm outperforms prior art and that, combined with existing RGB-D data and our new relative depth annotations, it significantly improves single-image depth perception in the wild. Acknowledgments This work is partially supported by the National Science Foundation under Grant No. 1617767.
1. What is the focus of the paper regarding depth estimation? 2. What are the strengths of the proposed method, particularly its novelty? 3. What are the concerns or weaknesses of the paper, especially regarding the number of point pairs required for training? 4. How does the reviewer assess the quality and presentation of the work? 5. Are there any suggestions or requests for additional information or comparisons?
Review
Review This paper proposed a novel network structure to estimate a depth map from a single image input. The main contribution of this work is to propose a network that can directly output a dense depth map while only using annotations of relative depth in the training stage. Previous work either requires a full depth map at the training stage, or can only predict the relative depth between pixels, so that another post-processing step is required to create the dense depth map. The authors also proposed a new dataset, "Depth in the Wild", which consists of more challenging testing images compared with previous RGB-D datasets. Experiments show that the proposed algorithm outperforms the state-of-the-art single-image depth estimation algorithm on the NYU dataset and the new "Depth in the Wild" dataset. Overall, this paper has high quality. The proposed network based on the relative depth map is novel. Experiment results, both quantitative and qualitative, show superiority over the previous methods. The presentation of the work is very clear and the results should be easy to reproduce. My main concern is the number of point pairs required to train the network. The authors claimed that only one pair per image might be enough to train the network. I am not convinced that just from one relative depth label per image the network can predict a decent depth map that is smooth inside objects and has sharp boundaries between objects. Specifically, on the "Depth in the Wild" (DIW) dataset, the authors only show the qualitative result of Ours_NYU_DIW, which is pre-trained on the NYU dataset using all pairs of training points in all training images (L195-L197). Also, Table 3 shows that when training directly on DIW (which only has sparse depth labels), the Weighted Human Disagreement Rate (WHDR) is roughly 9% higher than for the network pre-trained on the fully annotated NYU dataset. This raises the question of whether a single label per image is enough to learn a good dense depth map, or whether it mostly helps to adapt a network pre-trained on a fully labeled dataset (like NYU) to a new dataset with only sparse labels (DIW). If that is the case, the authors should make this point clearer in the paper. It would also be great if the authors could show the quantitative result of Ours_DIW and compare it with Ours_Full and Ours_NYU_DIW. And I gave this work poster-level mainly due to this concern.
1. What is the main contribution of the paper regarding scene depth estimation? 2. What are the strengths of the proposed approach, particularly in terms of leveraging human crowd sourcing and scalability? 3. Do you have any concerns or questions regarding the loss function and its ability to enforce smoothness in estimated depth? 4. How does the density of human annotations for point pairs in scenes impact the results? 5. Are there any other minor points or clarifications needed in the paper?
Review
Review This paper proposes training a deep neural network for estimating scene depth (up to monotonic transformations) for single images in the wild. They achieve this by building an extensive dataset of relative depths for points in images. Human crowdsourcing was used to obtain relations between pairs of points such that we would know which of the two points is closer. A given image would be annotated with many such pairs and ultimately a large dataset of such images is created. A ranking-based loss function that considers both the annotated relative depths and the estimated depth is proposed. This loss function is used to train the neural network to output pixel-wise depth estimates. The authors show that their framework allows for using a much larger amount of data from the wild than is possible with existing frameworks, which rely on specialized depth sensors for building training datasets. I like the potential to scale up the approach to larger "in the wild" training datasets. I think this work has the potential to be picked up by many researchers in academia and industry as scaling it up is straightforward. My only issue with the work is that it is unclear how the loss function would enforce the degree of smoothness in the estimated depth between neighboring pixels. How does the density of human annotations for point pairs in scenes affect the results? I suspect the denser the annotations, the better. This would explain why Table 3 shows pretraining on the NYU dataset (which has ground-truth Kinect depth) gave the best results. Otherwise, I think this is a solid paper that (to my knowledge) is the first to make use of such a large (495K images) in-the-wild dataset for depth estimation. Another minor point is that in earlier parts of the paper, I had the impression each training image only consisted of a single pair of annotations. But on reading about the loss function, it seems that multiple annotations are used in each image. It is only that each point pair annotation was done by a single person.
NIPS
Title Single-Image Depth Perception in the Wild Abstract This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset “Depth in the Wild” consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild. Deep Network with Pixel-wise Prediction Metric Depth RGB-D Data Relative Depth Annotations 1 Introduction Depth from a single RGB image is a fundamental problem in vision. Recent years have seen rapid progress thanks to data-driven methods [1, 2, 3], in particular, deep neural networks trained on large RGB-D datasets [4, 5, 6, 7, 8, 9, 10]. But such advances have yet to broadly impact higher-level tasks. One reason is that many higher-level tasks must operate on images “in the wild”—images taken with no constraints on cameras, locations, scenes, and objects—but the RGB-D datasets used to train and evaluate image-to-depth systems are constrained in one way or another. Current RGB-D datasets were collected by depth sensors [4, 5], which are limited in range and resolution, and often fail on specular or transparent objects [11]. In addition, because there is no Flickr for RGB-D images, researchers have to manually capture the images. As a result, current RGB-D datasets are limited in the diversity of scenes. For example, NYU depth [4] consists mostly of indoor scenes with no human presence; KITTI [5] consists mostly of road scenes captured from a car; Make3D [3, 12] consists mostly of outdoor scenes of the Stanford campus (Figure. 2). While these datasets are pivotal in driving research, it is unclear whether systems trained on them can generalize to images in the wild. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Is it possible to collect ground-truth depth for images in the wild? Using depth sensors in unconstrained settings is not yet feasible. Crowdsourcing seems viable, but humans are not good at estimating metric depth, or 3D metric structure in general [13]. In fact, metric depth from a single image is fundamentally ambiguous: a tree behind a house can be slightly bigger but further away, or slightly smaller but closer—the absolute depth difference between the house and the tree cannot be uniquely determined. Furthermore, even in cases where humans can estimate metric depth, it is unclear how to elicit the values from them. But humans are better at judging relative depth [13]: “Is point A closer than point B?” is often a much easier question for humans. Recent work by Zoran et al. [14] shows that it is possible to learn to estimate metric depth using only annotations of relative depth. Although such metric depth estimates are only accurate up to monotonic transformations, they may well be sufficiently useful for high-level tasks, especially for occlusion reasoning. The seminal results by Zoran et al. point to two fronts for further progress: (1) collecting a large amount of relative depth annotations for images in the wild and (2) improving the algorithms that learn from annotations of relative depth. In this paper, we make contributions on both fronts. 
Our first contribution is a new dataset called “Depth in the Wild” (DIW). It consists of 495K diverse images, each annotated with randomly sampled points and their relative depth. We sample one pair of points per image to minimize the redundancy of annotation 1. To the best of our knowledge this is the first large-scale dataset consisting of images in the wild with relative depth annotations. We demonstrate that this dataset can be used as an evaluation benchmark as well as a training resource 2. Our second contribution is a new algorithm for learning to estimate metric depth using only annotations of relative depth. Our algorithm not only significantly outperforms that of Zoran et al. [14], but is also simpler. The algorithm of Zoran et al. [14] first learns a classifier to predict the ordinal relation between two points in an image. Given a new image, this classifier is repeatedly applied to predict the ordinal relations between a sparse set of point pairs (mostly between the centers of neighboring superpixels). The algorithm then reconstructs depth from the predicted ordinal relations by solving a constrained quadratic optimization that enforces additional smoothness constraints and reconciles potentially inconsistent ordinal relations. Finally, the algorithm estimates depth for all pixels assuming a constant depth within each superpixel. In contrast, our algorithm consists of a single deep network that directly predicts pixel-wise depth (Fig. 1). The network takes an entire image as input, consists of off-the-shelf components, and can be trained entirely with annotations of relative depth. The novelty of our approach lies in the combination of two ingredients: (1) a multi-scale deep network that produces pixel-wise prediction of metric depth and (2) a loss function using relative depth. Experiments show that our method produces pixel-wise depth that is more accurately ordered, outperforming not only the method by Zoran et al. [14] but also the state-of-the-art image-to-depth system by Eigen et al. [8] trained with ground-truth metric depth. Furthermore, combing our new algorithm, our new dataset, and existing RGB-D data significantly improves single-image depth estimation in the wild. 2 Related work RGB-D Datasets: Prior work on constructing RGB-D datasets has relied on either Kinect [15, 4, 16, 17] or LIDAR [5, 3]. Existing Kinect-based datasets are limited to indoor scenes; existing LIDARbased datasets are biased towards scenes of man-made structures [5, 3]. In contrast, our dataset covers a much wider variety of scenes; it can be easily expanded with large-scale crowdsourcing and the virually umlimited Internet images. Intrinsic Images in the Wild: Our work draws inspiration from Intrinsic Images in the Wild [18], a seminal work that crowdsources annotations of relative reflectance on unconstrained images. Our work differs in goals as well as in several design decisions. First, we sample random points instead of centers of superpixels, because unlike reflectance, it is unreasonable to assume a constant depth within a superpixel. Second, we sample only one pair of points per image instead of many to maximize the value of human annotations. Depth from a Single Image: Image-to-depth is a long-standing problem with a large body of literature [19, 20, 12, 1, 6, 7, 8, 9, 10, 19, 21, 22, 23, 24, 25, 26]. The recent convergence of deep 1A small percentage of images have duplicates and thus have multiple pairs. 2Project website: http://www-personal.umich.edu/~wfchen/depth-in-the-wild. 
neural networks and RGB-D datasets [4, 5] has led to major advances [27, 6, 28, 8, 10, 14]. But the networks in these previous works, with the exception of [14], were trained exclusively using ground-truth metric depth, whereas our approach uses relative depth. Our work is inspired by that of Zoran et al. [14], which proposes to use a deep network to repeatedly classify pairs of points sampled based on superpixel segmentation, and to reconstruct per-pixel metric depth by solving an additional optimization problem. Our approach is different: it consists of a single deep network trained end-to-end that directly predicts per-pixel metric depth; there is no intermediate classification of ordinal relations and as a result no optimization needed to resolve inconsistencies. Learning with Ordinal Relations: Several recent works [29, 30] have used the ordinal relations from the Intrinsic Images in the Wild dataset [18] to estimate surface refletance. Similar to Zoran et al. [14], Zhou et al. [29] first learn a deep network to classify the ordinal relations between pairs of points and then make them globally consistent through energy minimization. Narihira et al. [30] learn a “lightness potential” network that takes an image patch and predicts the metric reflectance of the center pixel. But this network is applied to only a sparse set of pixels. Although in principle this lightness potential network can be applied to every pixel to produce pixel-wise reflectance, doing so would be quite expensive. Making it fully convolutional (as the authors mentioned in [30]) only solves it partially: as long as the lightness potential network has downsampling layers, which is the case in [30], the final output will be downsampled accordingly. Additional resolution augmentation (such as the “shift and stitch” approach [31]) is thus needed. In contrast, our approach completely avoids such issues and directly outputs pixel-wise estimates. Beyond intrinsic images, ordinal relations have been used widely in computer vision and machine learning, including object recognition [32] and learning to rank [33, 34]. 3 Dataset construction We gather images from Flickr. We use random query keywords sampled from an English dictionary and exclude artificial images such as drawings and clip arts. To collect annotations of relative depth, we present a crowd worker an image and two highlighted points (Fig. 3), and ask “which point is closer, point 1, point 2, or hard to tell?” The worker presses a key to respond. How Many Pairs? How many pairs of points should we query per image? We sample just one per image because this maximizes the amount of information from human annotators. Consider the other extreme—querying all possible pairs of points in the same image. This is wasteful because pairs of points in close proximity are likely to have the same relative depth. In other words, querying one more pair from the same image may add less information than querying one more pair from a new image. Thus querying only one pair per image is more cost-effective. Which Pairs? Which two points should we query given an image? The simplest way would be to sample two random points from the 2D plane. But this results in a severe bias that can be easily exploited: if an algorithm simply classifies the lower point in the image to be closer in depth, it will agree with humans 85.8% of the time (Fig. 4). Although this bias is natural, it makes the dataset less useful as a benchmark. 
An alternative is to sample two points uniformly from a random horizontal line, which makes it impossible to use the y image coordinate as a cue. But we find yet another bias: if an algorithm simply classifies the point closer to the center of the image to be closer in depth, it will agree with humans 71.4% of the time. This leads to a third approach: uniformly sample two symmetric points with respect to the center from a random horizontal line (the middle column of Fig. 5). With the symmetry enforced, we are not able to find a simple yet effective rule based purely on image coordinates: the left point is almost equally likely (50.03%) to be closer than the right one. Our final dataset consists of a roughly 50-50 combination of unconstrained pairs and symmetric pairs, which strikes a balance between the need for representing natural scene statistics and the need for performance differentiation. Protocol and Results: We crowdsource the annotations using Amazon Mechanical Turk (AMT). To remove spammers, we insert into all tasks gold-standard images verified by ourselves, and reject workers whose accumulative accuracy on the gold-standard images is below 85%. We assign each query (an image and a point pair) to two workers, and add the query to our dataset if both workers can tell the relative depth and agree with each other; otherwise the query is discarded. Under this protocol, the chance of adding a wrong answer to our dataset is less than 1% as measured on the gold-standard images. We processed 1.24M images on AMT and obtained 0.5M valid answers (both workers can tell the relative depth and agree with each other). Among the valid answers, 261K are for unconstrained pairs and 240K are for symmetric pairs. For unconstrained pairs, It takes a median of 3.4 seconds for a worker to decide, and two workers agree on the relative depth 52% of the time; for symmetric pairs, the numbers are 3.8s and 32%. These numbers suggest that the symmetric pairs are indeed harder. Fig. 5 presents examples of different kinds of queries. 4 Learning with relative depth How do we learn to predict metric depth given only annotations of relative depth? Zoran et al. [14] first learn a classifier to predict ordinal relations between centers of superpixels, and then reconcile the relations to recover depth using energy minimization, and then interpolate within each superpixel to produce per-pixel depth. We take a simpler approach. The idea is that any image-to-depth algorithm would have to compute a function that maps an image to pixel-wise depth. Why not represent this function as a neural network and learn it from end to end? We just need two ingredients: (1) a network design that outputs the same resolution as the input, and (2) a way to train the network with annotations of relative depth. Network Design: Networks that output the same resolution as the input are aplenty, including the recent designs for depth estimation [8, 35] and those for semantic segmentation [36] and edge detection [37]. A common element is processing and passing information across multiple scales. In this work, we use a variant of the recently introduced “hourglass” network (Fig. 6), which has been used to achieve state-of-the-art results on human pose estimation [38]. It consists of a series of convolutions (using a variant of the inception [39] module) and downsampling, followed by a series of convolutions and upsampling, interleaved with skip connections that add back features from high resolutions. 
The symmetric shape of the network resembles an “hourglass”, hence the name. We refer the reader to [38] for a comparison of the design to related work. For our purpose, this particular choice is not essential, as the various designs mainly differ in how information from different scales is dispersed and aggregated, and it is possible that all of them can work equally well for our task. Loss Function: How do we train the network using only ordinal annotations? All we need is a loss function that encourages the predicted depth map to agree with the ground-truth ordinal relations. Specifically, consider a training image I and its K queries R = {(i_k, j_k, r_k)}, k = 1, ..., K, where i_k is the location of the first point in the k-th query, j_k is the location of the second point in the k-th query, and r_k ∈ {+1, −1, 0} is the ground-truth depth relation between i_k and j_k: closer (+1), further (−1), and equal (0). Let z be the predicted depth map and z_{i_k}, z_{j_k} be the depths at points i_k and j_k. We define a loss function

L(I, R, z) = \sum_{k=1}^{K} \psi_k(I, i_k, j_k, r_k, z),   (1)

where \psi_k(I, i_k, j_k, r_k, z) is the loss for the k-th query:

\psi_k(I, i_k, j_k, r_k, z) =
\begin{cases}
\log\left(1 + \exp(-z_{i_k} + z_{j_k})\right), & r_k = +1, \\
\log\left(1 + \exp(z_{i_k} - z_{j_k})\right), & r_k = -1, \\
(z_{i_k} - z_{j_k})^2, & r_k = 0.
\end{cases}   (2)

This is essentially a ranking loss: it encourages a small difference between depths if the ground-truth relation is equality; otherwise it encourages a large difference. Novelty of Our Approach: Our novelty lies in the combination of a deep network that does pixel-wise prediction and a ranking loss placed on the pixel-wise prediction. A deep network that does pixel-wise prediction is not new, nor is a ranking loss. But to the best of our knowledge, such a combination has not been proposed before, and in particular not for estimating depth. 5 Experiments on NYU Depth We evaluate our method using NYU Depth [4], which consists of indoor scenes with ground-truth Kinect depth. We use the same setup as that of Zoran et al. [14]: point pairs are sampled from the training images (the subset of NYU Depth consisting of 795 images with semantic labels) using superpixel segmentation, and their ground-truth ordinal relations are generated by comparing the ground-truth Kinect depth; the same procedure is applied to the test set to generate the point pairs for evaluation (around 3K pairs per image). We use the same training and test data as Zoran et al. [14]. Like the system by Zoran et al. [14], our network predicts one of the three ordinal relations on the test pairs: equal (=), closer (<), or farther (>). We report WKDR, the weighted disagreement rate between the predicted ordinal relations and the ground-truth ordinal relations (WKDR stands for “Weighted Kinect Disagreement Rate”; the weight is set to 1 as in [14]). We also report WKDR= (disagreement rate on pairs whose ground-truth relations are =) and WKDR≠ (disagreement rate on pairs whose ground-truth relations are < or >). Since two ground-truth depths are almost never exactly the same, there needs to be a relaxed definition of equality. Zoran et al. [14] define two points to have equal depths if the ratio between their ground-truth depths is within a pre-determined range. Our network predicts an equality relation if the depth difference is smaller than a threshold τ. The choice of this threshold will result in different values for the error metrics (WKDR, WKDR=, WKDR≠): if τ is too small, most pairs will be predicted to be unequal and the error metric on equality relations (WKDR=) will be large; if τ is too big, most pairs will be predicted to be equal and the error metric on inequality relations (WKDR≠) will be large.
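Both the per-query loss of Eq. 2 and the threshold-based test-time prediction just described can be sketched in a few lines of PyTorch. This is our own sketch with hypothetical names, not the authors' released code; note that, as written, Eq. 2 with r = +1 drives z_i above z_j:

```python
import torch
import torch.nn.functional as F

def relative_depth_loss(z_i, z_j, r):
    """Ranking loss of Eq. 2 (our sketch). z_i, z_j: predicted depths at the two
    query points (1-D tensors); r: ground-truth relations in {+1, -1, 0}."""
    diff = z_i - z_j
    closer  = F.softplus(-diff)   # log(1 + exp(-(z_i - z_j))) for r = +1
    farther = F.softplus(diff)    # log(1 + exp( (z_i - z_j))) for r = -1
    equal   = diff ** 2           # squared difference          for r =  0
    loss = torch.where(r == 1, closer, torch.where(r == -1, farther, equal))
    return loss.sum()             # Eq. 1 sums over the K queries

def predict_relation(z_i, z_j, tau):
    """Test-time rule: predict '=' (0) if |z_i - z_j| < tau; otherwise predict the
    inequality whose sign is consistent with the Eq. 2 convention above."""
    diff = z_i - z_j
    rel = torch.zeros_like(diff)
    rel[diff >= tau] = 1.0
    rel[diff <= -tau] = -1.0
    return rel
```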
We choose the threshold τ that minimizes the maximum of the three error metrics on a validation set held out from the training set. Tab. 2 compares our network (ours) versus that of Zoran et al. [14]. Our network is trained with the same data (the code released by Zoran et al. [14] indicates that they train with a random subset of 800 pairs per image instead of all the pairs; we follow the same procedure and use a random subset of 800 pairs per image) but outperforms [14] on all three metrics. Following [14], we also compare with the state-of-the-art image-to-depth system by Eigen et al. [8], which is trained on pixel-wise ground-truth metric depth from the full NYU Depth training set (220K images). To compare fairly, we give our network access to the full NYU Depth training set. In addition, we remove the limit of 800 point pairs per training image placed by Zoran et al. and use all available pairs. The results in Tab. 2 show that our network (ours_full) achieves superior performance in estimating depth ordering. Granted, this comparison is not entirely fair because [8] is not optimized for predicting ordinal relations. But this comparison is still significant in that it shows that we can train on only relative depth and rival the state-of-the-art system in estimating depth up to monotonic transformations. (Table note: computed using our own implementation based on the definition given in [35].) In Fig. 8 we show qualitative results on the same example images used by Zoran et al. [14]. We see that although imperfect, the metric depth recovered by our method is overall reasonable and qualitatively similar to that of the state-of-the-art system [8] trained on ground-truth metric depth. Metric Error Measures. Our network is trained with relative depth, so it is unsurprising that it does well in estimating depth up to ordering. But how good is the estimated depth in terms of metric error? We thus evaluate conventional error measures such as RMSE (the root mean squared error), which compares the absolute depth values to the ground truths. Because our network is trained only on relative depth and does not know the range of the ground-truth depth values, to make these error measures meaningful we normalize the depth predicted by our network such that the mean and standard deviation are the same as those of the mean depth map of the training set. Tab. 2 reports the results. We see that under these metric error measures our network still outperforms the method of Zoran et al. [14]. In addition, while our metric error is worse than the current state of the art, it is comparable to some of the earlier methods (e.g. [1]) that have access to ground-truth metric depth. Superpixel Sampling versus Random Sampling. To compare with the method by Zoran et al. [14], we train our network using the same point pairs, which are pairs of centers of superpixels (Fig. 9). But is superpixel segmentation necessary? That is, can we simply train with randomly sampled points? To answer this question, we train our network with randomly sampled points. We constrain the distance between the two points to be between 13 and 19 pixels (out of a 320×240 image) such that the distance is similar to that between the centers of neighboring superpixels. The results are included in Tab. 2. We see that using 3.3K pairs per image (rand_3K) already achieves comparable performance to the method by Zoran et al. [14].
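The random-pair variant can be sketched as simple rejection sampling of a second point at the desired distance (an illustrative sketch under our own naming; the exact sampling code used for the rand_3K/6K/12K experiments is not spelled out in the paper):

```python
import math
import random

def sample_random_pair(width=320, height=240, min_dist=13, max_dist=19):
    """Sample one point pair min_dist..max_dist pixels apart, mimicking the spacing
    of neighboring superpixel centers. Hypothetical helper, for illustration only."""
    while True:
        x1, y1 = random.uniform(0, width - 1), random.uniform(0, height - 1)
        theta = random.uniform(0.0, 2.0 * math.pi)
        dist = random.uniform(min_dist, max_dist)
        x2, y2 = x1 + dist * math.cos(theta), y1 + dist * math.sin(theta)
        if 0 <= x2 <= width - 1 and 0 <= y2 <= height - 1:
            return (x1, y1), (x2, y2)
```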
Using twice or four times as many pairs (rand_6K, rand_12K) further improves performance and significantly outperforms [14]. It is worth noting that in all these experiments the test pairs are still from superpixels, so training on random pairs incurs a mismatch between training and testing distributions. Yet we can still achieve comparable performance despite this mismatch. This shows that our method can indeed operate without superpixel segmentation. 6 Experiments on Depth in the Wild In this section we experiment on our new Depth in the Wild (DIW) dataset. We split the dataset into 421K training images and 74K test images (4.38% of images are duplicates downloaded using different query keywords and have more than one pair of points; we have removed test images that have duplicates in the training set). We report the WHDR (Weighted Human Disagreement Rate; all weights are 1, and a pair of points can only have two possible ordinal relations, farther or closer, for DIW) of 5 methods in Tab. 3: (1) the state-of-the-art system by Eigen et al. [8] trained on full NYU Depth; (2) our network trained on full NYU Depth (Ours_Full); (3) our network pre-trained on full NYU Depth and fine-tuned on DIW (Ours_NYU_DIW); (4) our network trained from scratch on DIW (Ours_DIW); (5) a baseline method that uses only the location of the query points: classify the lower point to be closer or guess randomly if the two points are at the same height (Query_Location_Only). We see that the best result is achieved by pre-training on NYU Depth and fine-tuning on DIW. Training only on NYU Depth (Ours_NYU and Eigen) does not work as well, which is expected because NYU Depth only has indoor scenes. Training from scratch on DIW achieves slightly better performance than training only on NYU Depth, despite using much less supervision. Pre-training on NYU Depth and fine-tuning on DIW leverages all available data and achieves the best performance. As shown in Fig. 10, the quality of predicted depth is notably better with fine-tuning on DIW, especially for outdoor scenes. These results suggest that it is promising to combine existing RGB-D data and crowdsourced annotations to advance the state of the art in single-image depth estimation. 7 Conclusions We have studied single-image depth perception in the wild, recovering depth from a single image taken in unconstrained settings. We have introduced a new dataset consisting of images in the wild annotated with relative depth and proposed a new algorithm that learns to estimate metric depth supervised by relative depth. We have shown that our algorithm outperforms prior art and that, combined with existing RGB-D data and our new relative depth annotations, it significantly improves single-image depth perception in the wild. Acknowledgments This work is partially supported by the National Science Foundation under Grant No. 1617767.
1. What is the main contribution of the paper regarding depth map prediction?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity and accuracy of the paper's content?
4. What are some suggestions for improving the experimental evaluation of the paper?
5. Are there any concerns or questions regarding the paper's claims and conclusions?
Review
Review This paper introduces an approach to leveraging ordinal depth relationships to predict a full depth map from a single image. To this end, a deep network is trained with a loss specifically designed to encode depth ordering. The network, however, still outputs a full depth map, and thus avoids the two-stage procedure that would consist of first predicting depth ordering and then optimizing a depth map to satisfy these relationships. The paper also introduces a new dataset of images in the wild with ground-truth ordinal depth relationships. In general, I quite like the paper, as I think that having an end-to-end framework to predict depth from weak annotations is valuable.
The paper is sometimes a bit misleading:
- From the introduction, I was under the impression that the proposed method would outperform [8] on NYUv2 when trained from a single pair per image coming from the new dataset. This would have been truly remarkable, but is not the case.
- The paper suggests that the model can predict metric depth by just being trained using ordinal relationships. This is not entirely true, since, as mentioned in the experiments, the predicted depth maps need to be rescaled to match the training data statistics. In other words, some additional information (although quite weak) is still required to predict metric depth. I would suggest that the authors rephrase their statements to clarify this throughout the paper.
Experiments:
- Since the authors used an architecture that has not been employed before for depth estimation (the hourglass network), I think it would be interesting to also evaluate this architecture in the fully-supervised case (with full depth maps). While the comparison with [8] is interesting, it is unclear how much of the benefit comes from the use of a different network, or truly from using the ordinal relationships.
- Using between 800 pairs and all of them on NYUv2 is not very realistic, although I acknowledge that it corresponds to what [14] did. In practice, one can only expect people to label a much smaller number of pairs. It would be interesting to study the robustness of the method when decreasing the number of pairs. As a matter of fact, I think that this experiment would be more valuable than the one done by removing the superpixels.
- It would also be interesting to see how well the network trained from DIW only performs on NYUv2 for metric depth prediction.
- On lines 184-186, the authors mention that [14] makes use of a ratio-based rule to determine if two points have the same depth, but, here, the difference is employed. Why not use the same rule as in [14]?
NIPS
Title Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering Abstract Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence. Emerging 3D-structured neural scene representations are a promising approach to 3D scene understanding. In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural network. Rendering a ray from an LFN requires only a single network evaluation, as opposed to hundreds of evaluations per ray for ray-marching- or volumetric-based renderers in 3D-structured neural scene representations. In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view consistent light field reconstruction from as few as a single image observation. This results in dramatic reductions in time and memory complexity, and enables real-time rendering. The cost of storing a 360-degree light field via an LFN is two orders of magnitude lower than conventional methods such as the Lumigraph. Utilizing the analytical differentiability of neural implicit representations and a novel parameterization of light space, we further demonstrate the extraction of sparse depth maps from LFNs. 1 Introduction A fundamental problem across computer graphics, computer vision, and artificial intelligence is to infer a representation of a scene’s 3D shape and appearance given impoverished observations such as 2D images of the scene. Recent contributions have advanced the state of the art for this problem significantly. First, neural implicit representations have enabled efficient representation of local 3D scene properties by mapping a 3D coordinate to local properties of the 3D scene at that coordinate [1–6]. Second, differentiable neural renderers allow for the inference of these representations given only 2D image observations [3, 4]. Finally, leveraging meta-learning approaches such as hypernetworks or gradient-based meta-learning has enabled the learning of distributions of 3D scenes, and therefore reconstruction given only a single image observation [3]. This has enabled a number of applications, such as novel view synthesis [7, 3, 6], 3D reconstruction [5, 3], semantic segmentation [8, 9], and SLAM [10]. However, 3D-structured neural scene representations come with a major limitation: their rendering is prohibitively expensive, on the order of tens of seconds for a single 256 × 256 image for state-of-the-art approaches. In particular, parameterizing the scene in 3D space necessitates the discovery of surfaces along camera rays during rendering. This can be solved either by encoding geometry as a level set of an occupancy or signed distance function, or via volumetric rendering, which solves an alpha-compositing problem along each ray. Either approach, however, requires tens or even hundreds of evaluations of the 3D neural scene representation in order to render a single camera ray. We propose a novel neural scene representation, dubbed Light Field Networks or LFNs.
Instead of encoding a scene in 3D space, Light Field Networks encode a scene by directly mapping an oriented camera ray in the four-dimensional space of light rays to the radiance observed by that ray. This obviates the need to query opacity and RGB at 3D locations along a ray or to ray-march towards the level set of a signed distance function, speeding up rendering by three orders of magnitude compared to volumetric methods. In addition to directly encoding appearance, we demonstrate that LFNs encode information about scene geometry in their derivatives. Utilizing the unique flexibility of neural field representations, we introduce the use of Plücker coordinates to parameterize 360-degree light fields, which allow for storage of a-priori unbounded scenes and admit a simple expression for the depth as an analytical function of an LFN. Using this relationship, we demonstrate the computation of geometry in the form of sparse depth maps. While 3D-structured neural scene representations are multi-view consistent by design, parameterizing a scene in light space does not come with this guarantee: the additional degree of freedom enables rays that view the same 3D point to change appearance across viewpoints. For the setting of simple scenes, we demonstrate that this challenge can be overcome by learning a prior over 4D light fields in a meta-learning framework. We benchmark with current state-of-the-art approaches for single-shot novel view synthesis, and demonstrate that LFNs compare favorably with globally conditioned 3D-structured representations, while accelerating rendering and reducing memory consumption by orders of magnitude. In summary, we make the following contributions:
1. We propose Light Field Networks (LFNs), a novel neural scene representation that directly parameterizes the light field of a 3D scene via a neural network, enabling real-time rendering and a vast reduction in memory utilization.
2. We demonstrate that we may leverage 6-dimensional Plücker coordinates as a parameterization of light fields, despite their apparent overparameterization of the 4D space of rays, thereby enabling continuous, 360-degree light fields.
3. By embedding LFNs in a meta-learning framework, we demonstrate light field reconstruction and novel view synthesis of simple scenes from sparse 2D image supervision only.
4. We demonstrate that inferred LFNs encode both appearance and geometry of the underlying 3D scenes by extracting sparse depth maps from the derivatives of LFNs, leveraging their analytical differentiability.
Scope. The proposed method is currently constrained to the reconstruction of simple scenes, such as single objects and simple room-scale scenes, in line with recent work on learning generative models in this regime [3, 11]. 2 Related Work Neural Scene Representations and Neural Rendering. A large body of work addresses the question of inferring feature representations of 3D scenes useful to downstream tasks across graphics, vision, and machine learning. Models without 3D structure suffer from poor data efficiency [12, 13]. Voxel grids [14–20] offer 3D structure, but scale poorly with spatial resolution. Inspired by neural implicit representations of 3D geometry [1, 2], recent work has proposed to encode properties of 3D scenes as neural fields (also implicit- or coordinate-based representations, see [21] for an overview), neural networks that map 3D coordinates to local properties of the 3D scene at these coordinates.
Using differentiable rendering, these models can be learned from image observations only [3, 4, 22, 11]. Reconstruction from sparse observations can be achieved by learning priors over the space of neural fields [3, 5, 11, 23–25] or by conditioning of the neural field on local features [6, 26, 27]. Differentiable rendering of such 3D-structured neural scene representations is exceptionally computationally intensive, requiring hundreds of evaluations of the neural representation per ray, with tens of thousands to millions of rays per image. Some recent work seeks to accelerate test-time rendering, but either does not admit generalization [28–30], or does not alleviate the cost of rendering at training/inference time [31–33]. With Light Field Networks, we propose to leverage 360-degree light fields as neural scene representations. We introduce a novel neural field parameterization of 360-degree light fields, infer light fields via meta-learning from as few as a single 2D image observation, and demonstrate that LFNs encode both scene geometry and appearance. Light fields and their reconstruction. Light fields have a rich history as a scene representation in both computer vision and computer graphics. Adelson et al. [34] introduced the 5D plenoptic function as a unified representation of information in the early visual system [35]. Levoy et al. [36] and, concurrently, Gortler et al. [37] introduced light fields in computer graphics as a 4D sampled scene representation for fast image-based rendering. Light fields have since enjoyed popularity as a representation for novel view synthesis [38] and computational photography, e.g. [39]. Light fields enable direct rendering of novel views by simply extracting a 2D slice of the 4D light field. However, they tend to incur significant storage cost, and since they rely on two-plane parameterizations, they make it hard to achieve a full 360-degree representation without concatenating multiple light fields. A significant amount of prior work addresses reconstruction of fronto-parallel light fields via handcrafted priors, such as sparsity in the Fourier or shearlet domains [40–42]. With the advent of deep learning, approaches to light field reconstruction that leverage convolutional neural networks to in-paint or extrapolate light fields from sparse views have been proposed [43, 7, 44], but similarly only support fronto-parallel novel view synthesis. We are instead interested in light fields as a representation of 3D appearance and geometry that enables efficient inference of and reasoning about the properties of the full underlying scene. 3 Background: 3D-structured Neural Scene Representations Recent progress in neural scene representation and rendering has been driven by two key innovations. The first are neural fields, often also referred to as neural implicit- or coordinate-based scene representations Φ3D [3, 4], which model a scene as a continuous function, parameterized as an MLP which maps a 3D coordinate to a representation v of whatever is at that 3D coordinate: Φ3D : R3 → Rn, x ↦ Φ3D(x) = v. (1) The second is a differentiable renderer m, which, given a ray r in R3 and the representation Φ3D, computes the value of the color c of the scene when viewed along r: m(r, Φ3D) = c(r) ∈ R3. (2) Existing rendering methods broadly fall into two categories: sphere-tracing-based renderers [3, 45, 5, 46] and volumetric renderers [19, 4]. These methods require on the order of tens or hundreds of evaluations of the values of Φ3D along a ray r to compute c(r).
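To make this cost concrete, a schematic volumetric renderer for a single ray can be written as follows. This is a simplified sketch under our own interface assumptions (phi_3d is assumed to return a density and a color per point), not the renderer of any specific method; note the n_samples network queries required for one ray:

```python
import torch

def volumetric_render_ray(phi_3d, origin, direction, near=0.5, far=5.0, n_samples=128):
    """Schematic alpha-compositing along one ray. origin, direction: (3,) tensors."""
    t = torch.linspace(near, far, n_samples)                  # sample depths along the ray
    points = origin[None, :] + t[:, None] * direction[None, :]
    sigma, rgb = phi_3d(points)                               # n_samples network evaluations
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)                   # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(dim=0)                # composited color for this ray
```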
This leads to extraordinarily large memory and time complexity of rendering. As training requires error backpropagation through the renderer, this impacts both training and test time. 4 The Light Field Network Scene Representation We propose to represent a scene as a 360-degree neural light field, a function parameterized by an MLP Φφ with parameters φ that directly maps the 4D space L of oriented rays to their observed radiance: Φφ : L → R3, r ↦ Φφ(r) = c(r). (3) A light field completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. Light fields have the unique property that rendering is achieved by a single evaluation of Φ per light ray, i.e., no ray-casting is required. Moreover, while the light field only encodes appearance explicitly, its derivatives encode geometry information about the underlying 3D scene [47, 34, 35]. This makes possible many methods for extracting 3D geometry from light fields [48–51], and we demonstrate efficient recovery of sparse depth maps from LFNs below. 4.1 Implicit representations for 360-degree light fields To fully represent a 3D scene requires a parameterization of all light rays in space. Conventional light field methods are constrained to leverage minimal parameterizations of the 4D space of rays, due to the high memory requirements of discretely sampled high-dimensional spaces. In contrast, our use of neural field representations allows us to freely choose a continuous parameterization that is mathematically convenient. In particular, we propose to leverage the 6D Plücker parameterization of the space of light rays L for LFNs. The Plücker coordinates (see [52] for an excellent overview) of a ray r through a point p in a normalized direction d are r = (d, m) ∈ R6 with m = p × d, for d ∈ S2, p ∈ R3, (4) where × denotes the cross product. While Plücker coordinates are a-priori 6-tuples of real numbers, the coordinates of any ray lie on a curved 4-dimensional subspace L. Plücker coordinates uniformly represent all oriented rays in space without singular directions or special cases. Intuitively, a general ray r together with the origin defines a plane, and m is a normal vector to the plane with its magnitude capturing the distance from the ray to the origin; if m = 0 then the ray passes through the origin and is defined by its direction d. This is in contrast to conventional light field parameterizations: fronto-parallel two-plane or cylindrical parameterizations cannot represent the full 360-degree light field of a scene [36, 53]. Cubical two-plane arrangements [37, 38] are not continuous, complicating the parameterization via a neural implicit representation. In contrast to the two-sphere parameterization [54], Plücker coordinates do not require that scenes are bounded in size and do not require spherical trigonometry. The parameterization via a neural field enables compact storage of a 4D light field that can be sampled at arbitrary resolutions, while non-neural representations are resolution-limited. Neural fields further allow the analytical computation of derivatives. This enables the efficient computation of sparse depth maps, whereas prior representations of light fields require finite-difference approximations of the gradient [48–50]. Rendering LFNs. To render an image given an LFN, one computes the Plücker coordinates ru,v of the camera rays at each u, v pixel coordinate in the image according to Equation 4.
Specifically, given the extrinsic E = [R|t] ∈ SE(3) and intrinsic K ∈ R3×3 camera matrices [55] of a camera, one may retrieve the Plücker coordinates of the ray ru,v at pixel coordinate u, v as:

\mathbf{r}_{u,v} = \left(\mathbf{d}_{u,v},\; \mathbf{t} \times \mathbf{d}_{u,v}\right) / \lVert \mathbf{d}_{u,v} \rVert, \quad \text{where} \quad \mathbf{d}_{u,v} = \mathbf{R}\,\mathbf{K}^{-1} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} + \mathbf{t}, \qquad (5)

where we use the world-to-camera convention for the extrinsic camera parameters. Rendering then amounts to a single evaluation of the LFN Φ for each ray, c_{u,v} = Φ(r_{u,v}). For notational convenience, we introduce a rendering function

\Theta^{\Phi}_{\mathbf{E},\mathbf{K}} : \mathbb{R}^{\ell} \to \mathbb{R}^{H \times W \times 3} \qquad (6)

which renders an LFN Φφ with parameters φ ∈ R^ℓ when viewed from a camera with extrinsic and intrinsic parameters (E, K) into an image. 4.2 The geometry of Light Field Networks We will now analyze the properties of LFNs representing Lambertian 3D scenes, and illustrate how the geometry of the underlying 3D scene is encoded. We will first derive an expression that establishes a relationship between LFNs and the classic two-plane parameterization of the light field. Subsequently, we will derive an expression for the depth of a ray in terms of the local color gradient of the light field, therefore allowing us to efficiently extract sparse depth maps from the light field at any camera pose via analytical differentiation of the neural implicit representation. Please see Figure 2 for an overview. Locally linear slices of the light field. We derive here a local parametrization that will allow us to work with an LFN as if it were a conventional 2-plane light field. Given a ray r in Plücker coordinates, we pick two points x, x′ ∈ R3 along this ray. We then find a normalized direction d ∈ S2 not parallel to the ray direction; a canonical choice is a direction orthogonal to the ray direction. We may now parameterize two parallel lines a(s) = x + sd and b(t) = x′ + td that give rise to a local two-plane basis of the light field with ray coordinates s and t. r intersects these lines at the two-plane coordinates (s, t) = (0, 0). This choice of local basis now assigns the two-plane coordinates (s, t) to the ray r from a(s) to b(t). In Figure 2, we illustrate this process on a simple 2D scene. Epipolar Plane Images and their geometry. The Plücker coordinates (see Eq. 4) enable us to extract a 2D slice from an LFN by varying (s, t) and sampling Φ on the Plücker coordinates of the rays parametrized by pairs of points on the lines a(s) and b(t):

c(s,t) = \Phi\left(\mathbf{r}(s,t)\right), \quad \text{where} \quad \mathbf{r}(s,t) = \overrightarrow{a(s)\,b(t)} = \left( \frac{b(t) - a(s)}{\lVert b(t) - a(s) \rVert},\; \frac{a(s) \times b(t)}{\lVert b(t) - a(s) \rVert} \right). \qquad (7)

The image of this 2D slice c(s, t) is well-known in the light field literature as an Epipolar Plane Image (EPI) [47]. EPIs carry rich information about the geometry of the underlying 3D scene. For example, consider a point p on the surface of an object in the scene; please see Figure 2 for a diagram. A point p ∈ R2 has a 1-dimensional family of rays going through the point, which correspond to a (green) line Lp in the EPI. In a Lambertian scene, all rays that meet at this point and that are not occluded by other objects must observe the same color. Therefore, the light field is constant along this line. As one travels along Lp, rotating through the family of rays through p, one eventually reaches a (magenta) tangent ray τ to the object. At a tangent ray, the value of the EPI ceases to be constant, and the light field changes its color to whatever is disoccluded by the object at this tangent ray.
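As an illustration, an EPI slice c(s, t) can be sampled from a trained LFN directly via Eq. 7. The following sketch assumes an lfn callable that maps 6D Plücker coordinates to RGB; the names, sampling ranges, and resolution are our own assumptions, not the authors' code:

```python
import torch

def epi_slice(lfn, x, x_prime, d, n=128):
    """Sample an n-by-n Epipolar Plane Image c(s, t) following Eq. 7.
    x, x_prime: (3,) points on the two parallel lines a(s) = x + s*d, b(t) = x_prime + t*d;
    d: shared normalized line direction; lfn: callable from (N, 6) Pluecker coords to RGB."""
    s = torch.linspace(-1.0, 1.0, n)
    t = torch.linspace(-1.0, 1.0, n)
    a = x[None, :] + s[:, None] * d[None, :]                 # (n, 3) points on a(s)
    b = x_prime[None, :] + t[:, None] * d[None, :]           # (n, 3) points on b(t)
    a_grid = a[:, None, :].expand(n, n, 3)
    b_grid = b[None, :, :].expand(n, n, 3)
    diff = b_grid - a_grid
    norm = diff.norm(dim=-1, keepdim=True)
    direction = diff / norm                                  # first Pluecker component
    moment = torch.cross(a_grid, b_grid, dim=-1) / norm      # second component, per Eq. 7
    rays = torch.cat([direction, moment], dim=-1)            # (n, n, 6) Pluecker coordinates
    return lfn(rays.reshape(-1, 6)).reshape(n, n, 3)         # one network evaluation per ray
```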
Because objects of different depth undergo differing amounts of parallax in EPIs, the slope of the segment of Lp along which c is constant determines the 3D coordinates of p. Finally, by observing that we may extract EPIs from any perspective, it is clear that an LFN encodes the full 3D geometry of the underlying scene. Intuitively, this may also be seen by considering that one could render out all possible perspectives of the underlying scene, and solve a classic multi-view stereo problem to retrieve the shape. Extracting depth maps from LFNs. A correctly inferred light field necessarily contains accurate 3D geometry information, although the geometry is encoded in a nontrivial way. To extract 3D geometry from an LFN, we utilize the property of the 2-plane parameterization that the light field is constant on segments Lp, the slopes of which determine p. In the supplemental material, we derive Proposition 1: For a Lambertian scene, the distance d along r = \overrightarrow{a(s)\,b(t)} from a(s) to the point p on the object is

d(\mathbf{r}) = D \, \frac{\partial_t c(s,t)}{\partial_s c(s,t) + \partial_t c(s,t)}, \qquad (8)

where a(s) and b(t) are as above, c(s, t) is defined by (7), and D is the distance between the lines a(s) and b(t). Thus p = a(s) + d(\mathbf{r}) \, \frac{b(t) - a(s)}{\lVert b(t) - a(s) \rVert}, and \partial_x denotes the partial derivative with respect to the variable x. This result yields meaningful depth estimates wherever the derivatives of the light field are nonzero along the ray. In practice, we sample several rays in a small (s, t) neighborhood of the ray r and declare depth estimates as invalid if the gradients have high variance; please see the code for implementation details. This occurs when r hits the object at a point where the surface color is changing, or when r is a tangent ray. We note that there is a wealth of prior art that could be used to extend this approach to extract dense depth maps [48–51]. 4.3 Meta-learning with conditional Light Field Networks We consider a dataset D consisting of N 3D scenes

S_i = \{(\mathbf{I}_j, \mathbf{E}_j, \mathbf{K}_j)\}_{j=1}^{K} \subset \mathbb{R}^{H \times W \times 3} \times SE(3) \times \mathbb{R}^{3 \times 3}, \quad i = 1, \dots, N, \qquad (9)

with K images Ij of each scene taken with cameras with extrinsic parameters Ej and intrinsic parameters Kj [55]. Each scene is completely described by the parameters φi ∈ R^ℓ of its corresponding light field MLP Φi = Φφi. Meta-learning and multi-view consistency. In the case of 3D-structured neural scene representations, ray-marching or volumetric rendering naturally ensure multi-view consistency of the reconstructed 3D scene representation. In contrast, a general 4D function Φ : L → R3 is not multi-view consistent, as most such functions are not the light fields of any 3D scene. We propose to overcome this challenge by learning a prior over the space of light fields. As we will demonstrate, this prior can also be used to reconstruct an LFN from a single 2D image observation. In this paradigm, differentiable ray-casting is a method to force the light field of a scene to be multi-view consistent, while we instead impose multi-view consistency by learning a prior over light fields. Meta-learning framework. We propose to represent each 3D scene Si by its own latent vector zi ∈ Rk. Generalizing to new scenes amounts to learning a prior over the space of light fields that is concentrated on the manifold of multi-view consistent light fields of natural scenes. To represent this latent manifold, we utilize a hypernetwork [56, 3].
The hypernetwork is a function, represented as an MLP

\Psi : \mathbb{R}^{k} \to \mathbb{R}^{\ell}, \quad \Psi_{\psi}(z_i) = \phi_i, \qquad (10)

with parameters ψ, which maps the latent code zi of the i-th scene to the parameters of the corresponding LFN. Several reasonable approaches exist to obtain latent codes zi. One may leverage a convolutional or transformer-based image encoder, directly inferring the latent from an image [11, 5], or utilize gradient-based meta-learning [23]. Here, we follow an auto-decoder framework [1, 3] to find the latent codes zi, but note that LFNs are in no way constrained to this approach. We do not claim that this particular meta-learning method will outperform other forms of conditioning, such as gradient-based meta-learning [57, 23] or FILM conditioning [58], but perform a comparison to a conditioning-by-concatenation approach in the appendix. We assume that the latent vectors have a Gaussian prior with zero mean and a diagonal covariance matrix. At training time, we jointly optimize the latent parameters zi together with the hypernetwork parameters ψ using the objective

\arg\min_{\{z_i\},\, \psi} \; \sum_i \sum_j \left\lVert \Theta^{\Phi}_{\mathbf{E}_j, \mathbf{K}_j}\!\left(\Psi_{\psi}(z_i)\right) - \mathbf{I}_j \right\rVert_2^2 + \lambda_{\text{lat}} \lVert z_i \rVert_2^2. \qquad (11)

Here Θ^Φ is the rendering function (Equation 6), the first term is an ℓ2 loss penalizing the light fields that disagree with the observed images, and the second term enforces the prior over the latent variables. We solve Equation 11 using gradient descent. At test time, we freeze the parameters of the hypernetwork and reconstruct the light field for a new scene S given a single observation of the scene {(I, E, K)} by optimizing, using gradient descent, the latent variable zS of the scene, such that the reconstructed light field ΦΨψ(zS) best matches the given observation of the scene:

z_S = \arg\min_{z} \; \left\lVert \Theta^{\Phi}_{\mathbf{E}, \mathbf{K}}\!\left(\Psi_{\psi}(z)\right) - \mathbf{I} \right\rVert_2^2 + \lambda_{\text{lat}} \lVert z \rVert_2^2. \qquad (12)

Global vs. local conditioning. The proposed meta-learning framework globally conditions an LFN on a single latent variable z. Recent work instead leverages local conditioning, where a neural field is conditioned on local features extracted from a context image [26, 6, 27]. In particular, the recently proposed pixelNeRF [6] has achieved impressive results on few-shot novel view synthesis. As we will see, the current formulation of LFNs does not outperform pixelNeRF. We note, however, that local conditioning methods solve a different problem. Rather than learning a prior over classes of objects, local conditioning methods learn priors over patches, answering the question “What does this image patch look like from a different perspective?”. As a result, this approach does not learn a latent space of neural scene representations. Rather, scene context is required to be available at test time to reason about the underlying 3D scene, and the representation is not compact: the size of the conditioning grows with the number of context observations. In contrast, globally conditioned methods [3, 11, 1, 2] first infer a global representation that is invariant to the number of context views and subsequently discard the observations. However, local conditioning enables better generalization due to the shift-equivariance of convolutional neural networks. An equivalent to local conditioning in light fields is non-obvious, and an exciting direction for future work. 5 Experiments We demonstrate the efficacy of LFNs by reconstructing 360-degree light fields of a variety of simple 3D scenes. In all experiments, we parameterize LFNs via a 6-layer ReLU MLP, and the hypernetwork as a 3-layer ReLU MLP, both with layer normalization.
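As a toy illustration of this globally conditioned setup, the following sketch maps a scene latent to the weights of a very small light field MLP and evaluates it on Plücker rays. The sizes and names are placeholders of our own choosing, far smaller than the 6-layer LFN and 3-layer hypernetwork described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLFN(nn.Module):
    """Toy hypernetwork-conditioned LFN: a small MLP maps a per-scene latent z to the
    parameters of an even smaller light field MLP (6D Pluecker input -> RGB)."""
    def __init__(self, latent_dim=128, hidden=32):
        super().__init__()
        self.hidden = hidden
        # parameter count of a 2-layer LFN: 6 -> hidden -> 3
        self.n_params = (6 * hidden + hidden) + (hidden * 3 + 3)
        self.hyper = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, self.n_params),
        )

    def forward(self, z, rays):
        """z: (latent_dim,) scene latent; rays: (N, 6) Pluecker coordinates."""
        p = self.hyper(z)
        h = self.hidden
        w1 = p[: 6 * h].view(h, 6)
        b1 = p[6 * h : 6 * h + h]
        rest = p[6 * h + h :]
        w2 = rest[: h * 3].view(3, h)
        b2 = rest[h * 3 :]
        feat = F.relu(F.linear(rays, w1, b1))  # single evaluation per ray ...
        return F.linear(feat, w2, b2)          # ... directly yields each ray's RGB
```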
We solve all optimization problems using the ADAM solver with a step size of 10⁻⁴. Please find more results, as well as precise hyperparameter, implementation, and dataset details, in the supplemental document and video. Reconstructing appearance and geometry of single-object and room-scale light fields. We demonstrate that LFNs can parameterize 360-degree light fields of both single-object ShapeNet [59] objects and simple, room-scale environments. We train LFNs on the ShapeNet “cars” dataset with 50 observations per object from [3], as well as on simple room-scale environments as proposed in [13]. Subsequently, we evaluate the ability of LFNs to generate novel views of the underlying 3D scenes. Please see Figure 3 for qualitative results. LFNs succeed in parameterizing the 360-degree light field, enabling novel view synthesis at real-time frame-rates (see supplemental video). We further demonstrate that LFNs encode scene geometry by presenting Epipolar Plane Images and leveraging the relationship derived in Equation 8 to infer sparse depth maps. We highlight that neither rendering nor depth map extraction requires ray-casting, needing only a single evaluation of the network, or of the network and its gradient, respectively. Multi-class single-view reconstruction. Following [5, 6], we benchmark LFNs with recent global conditioning methods on the task of single-view reconstruction and novel view synthesis of the 13 largest ShapeNet categories. We follow the same evaluation protocol as [60] and train a single model across all categories. See Figure 4 for qualitative and Table 1 for quantitative baseline comparisons. We significantly outperform both Differentiable Volumetric Rendering (DVR) [5] and Scene Representation Networks (SRNs) [3] on all but two classes by an average of 1dB, while requiring more than an order of magnitude fewer network evaluations per ray. Qualitatively, we find that the reconstructions from LFNs are often crisper than those of either Scene Representation Networks or DVR. Note that DVR requires additional ground-truth foreground-background segmentation masks. Class-specific single-view reconstruction. We benchmark LFNs on single-shot reconstruction on the ShapeNet “cars” and “chairs” classes as proposed in SRNs [3]. See Figure 5 for qualitative and quantitative results. We report performance better than SRNs in PSNR and on par in terms of SSIM on the “cars” class, and worse in PSNR but better in terms of SSIM on the “chairs” class, while requiring an order of magnitude fewer network evaluations and rendering in real-time. We attribute the drop in performance compared to multi-class reconstruction to the smaller dataset size, causing multi-view inconsistency. Global vs. local conditioning and comparison to pixelNeRF [6]. We compare global conditioning, where a single latent is inferred to describe the whole scene [3], with local conditioning, where latents are inferred per-pixel in a 2D image and leveraged to locally condition a neural implicit representation [26, 27, 6]. We benchmark with the recently proposed pixelNeRF [6]. As noted above (see Section 4.3), local conditioning does not infer a compact neural scene representation of the scene. Nevertheless, we provide the comparison here for completeness. See Figure 6 for qualitative and quantitative results. On average, LFNs perform 1dB worse than pixelNeRF in the single-class case, and 2dB worse in the multi-class setting. Real-time rendering and storage cost.
See Table 2 for a quantitative comparison of the rendering complexity of LFNs with that of volumetric and ray-marching-based neural renderers [3, 45, 19, 4, 6]. All clock times were collected for rendering 256 × 256 images on an NVIDIA RTX 6000 GPU. We further compare the cost of storing a single LFN with the cost of storing a conventional light field. With approximately 400k parameters, a single LFN requires around 1.6 MB of storage, compared to 146 MB required for storing a 360-degree light field at a resolution of 256×256×17×17 in the six-plane Lumigraph configuration. Multi-view consistency as a function of training set size. We investigate how multi-view consistency scales with the amount of data that the prior is trained on. Please find this analysis in the supplementary material. Overfitting of single 3D scenes. We investigate overfitting a single 3D scene with a Light Field Network with positional encodings / sinusoidal activations [24, 61]. Please find this analysis in the supplementary material. Evaluation of Reconstructed Geometry. We investigate the quality of the geometry that can be computed from an LFN via Eq. 8. For every sample in the class-specific single-shot reconstruction experiment, we extract its per-view sparse depth map. We then backproject depth maps from four views into 3D to reconstruct a point cloud, and benchmark mean L1 error on valid depth estimates with Scene Representation Networks [3]. Fig. 7 displays qualitative and quantitative results. Qualitatively, point clouds succeed in capturing fine detail such as the armrests of chairs. Quantitatively, LFNs outperform SRNs on both cars and chairs. We note that LFNs have a slight advantage in this comparison, as we can only benchmark on the sparse depth values, for which LFNs have high confidence. This includes occlusion boundaries, which are areas where the sphere-tracing-based SRNs incur high error, as they are forced to take smaller and smaller steps and may not reach the surface. We highlight that we do not claim that the proposed method is competitive with methods designed specifically for geometry reconstruction; we report this only to demonstrate that the proposed method is capable of extracting valid depth estimates from an LFN. Limitations. First, like every existing light field approach, LFNs store only one color per oriented ray, which makes rendering views from cameras placed in between occluding objects challenging, even if the information may still be stored in the light field. Second, though we outperform globally-conditioned methods, we currently do not outperform the locally conditioned pixelNeRF. Finally, as opposed to 3D-structured representations, LFNs do not enforce strict multi-view consistency, and may be inconsistent in the case of small datasets. 6 Discussion and Conclusion We have proposed Light Field Networks, a novel neural scene representation that directly parameterizes the full 360-degree, 4D light field of a 3D scene via a neural network. This enables both real-time neural rendering with a single evaluation of the neural scene representation per ray, as well as sparse depth map extraction without ray-casting. Light Field Networks outperform globally conditioned baselines in single-shot novel view synthesis, while being three orders of magnitude faster and less memory-intensive than current volumetric rendering approaches.
Exciting avenues for future work include combining LFNs with local conditioning, which would enable stronger out-of-distribution generalization, studying the learning of non-Lambertian scenes, and enabling camera placement in obstructed 3D space. With this work, we make important contributions to the emerging fields of neural rendering and neural scene representations, with exciting applications across computer vision, computer graphics, and robotics. Societal Impacts. Potential improvements extending our work on few-observation novel view synthesis could enable abuse by decreasing the cost of non-consensual impersonations. We refer the reader to a recent review of neural rendering [22] for an in-depth discussion of this topic. Acknowledgements and Disclosure of Funding This work is supported by the NSF under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/), ONR under 1015 G TA243/N00014-16-1-2007 (Understanding Scenes and Events through Joint Parsing, Cognitive Reasoning and Lifelong Learning), Mitsubishi under 026455-00001 (Building World Models from some data through analysis by synthesis), DARPA under CW3031624 (Transfer, Augmentation and Automatic Learning with Less Labels), as well as the Singapore DSTA under DST00OECI20300823 (New Representations for Vision). We thank Andrea Tagliasacchi, Tomasz Malisiewicz, Prafull Sharma, Ludwig Schubert, Kevin Smith, Bernhard Egger, Christian Richardt, Manuel Rey Area, and Jürgen and Susanne Sitzmann for interesting discussions and feedback, and Alex Yu for kindly sharing the outputs of pixelNeRF and baselines with us.
1. What is the focus and contribution of the paper on neural scene representation?
2. What are the strengths of the proposed approach, particularly in terms of encoding and rendering scenes?
3. What are the weaknesses or limitations of the method, such as experimental evaluation, ablations, stronger evaluations, and minor comments?
4. How does the reviewer assess the clarity, quality, significance, and novelty of the paper's content?
5. Are there any suggestions or requests for improvements or modifications in the paper?
Summary Of The Paper Review
Summary Of The Paper
The paper presents a new neural scene representation based on the idea of light fields. Rather than predicting properties (e.g. occupancies, colors) for points in space, the paper proposes to predict such entities for all rays in a scene using simple MLP networks. For the network input the authors propose to use Plucker coordinates to canonically parametrize the viewing rays independently of a point offset. To encourage multi-view consistency of the network predictions, the authors propose a meta-learning approach that allows decoupling the rendering from the latent code optimization. In contrast to volumetric neural scene representations that require expensive sampling with multiple network predictions per ray for rendering a novel view, the proposed method only requires a single network evaluation per ray while achieving state-of-the-art rendering results.
Review
Paper Strengths:
#Originality The paper presents several novel ideas for new ways of encoding neural scene representations and demonstrates their viability. Neural scene representations are an important and vibrant research direction and this paper contributes nicely by introducing a new model which tackles common scalability issues although it also introduces new limitations.
#Quality The paper is well structured and written. Illustrative figures support the explanations in the text well. The paper also presents competitive state-of-the-art results.
#Clarity The paper is clearly written and the mathematical model description is sound (apart from a few small mistakes - see comments below).
#Significance I believe the paper contains several valuable ideas and thoughts that should be shared with the community and hence merit publication.
Paper weaknesses / questions:
The experimental evaluation could be stronger in my opinion.
1.1. Ablations: One can certainly assume that the method will not work without the proposed meta-learning network architecture, but it is not explicitly stated or empirically shown that/why a simpler network architecture would not work. Moreover, it is unclear how changes of the latent code size or the network size would affect the output quality, as well as overfitting vs. generalization properties.
1.2. Stronger evaluations: One of the major paper claims is that LFNs are able to encode both the geometry and the appearance of a scene (L66). The evaluation of these two entities is not very strong.
1.2.1 For the appearance, all scenes contain only simple piece-wise constant textures with barely any challenging high-frequency details. Since the network sometimes already struggles to recover these simple scenes sharply, one can assume that more complex textures cannot be well recovered. Thus, the paper does not really show that LFNs are able to encode real-world textures. It would still be nice to see some results on real images. Further, it would be interesting to know/discuss whether the appearance modeling quality could be improved with an increased network capacity / larger latent codes.
1.2.2 For the geometry, there is no quantitative evaluation at all. Although the depth values are only sparsely recovered, it is still possible to compute error values for corresponding masked depth maps. Otherwise the quality of computed depth maps (Eq. (8)) is difficult to assess. Besides showing depth maps, one could also (sparsely) evaluate the quality of surface normals to better assess the geometric reconstruction quality.
Minor comments:
Several parameters have not been specified in the paper: what is the latent vector size k (Eq. (9))? what is the LFN parameter size \ell (Eq. (9))? what is the value of \lambda_lat in Eqs. (10), (11)? Although some parameters are described in the supp. mat., it does not need much space to state their values directly in the paper to make it more self-contained and give the reader a feeling for the network and optimization parameters.
\lambda_lat is only called \lambda in the supp. mat.
Fig. 2 is very helpful. Although the figure illustrates several complex concepts which are detailed in the text below, one could improve the caption to make the figure more self-contained and briefly explain the symbols, as many of them are not explained in the caption.
L194: it is better to use a different symbol than \ell to denote a ray since the same symbol is used to denote the number of LFN parameters in Eq. (6) / L154.
L202.5: This should probably also be a numbered equation. Even if you do not reference it, others might want to. There is a slight error in there: The right hand side of the “element of” operator describes the space of a tuple, while the left hand side is not a tuple, but a set of tuples.
L219: “which sends the” -> “which maps the” ?
Eq. (10): z_j -> z_i ? (z_j does not seem to make sense here)
Table 1: best numbers are incorrectly highlighted in 2nd and 3rd last columns
#Post-Rebuttal: I agree with the other reviewers that the authors did an excellent job in responding to the reviewer concerns. Thanks a lot for this great effort! Overall, I am very happy with the responses not only to my questions, but also to those of the other reviewers. Therefore, I will keep my positive rating to accept the paper as I believe it has a lot of valuable insights that should be shared with the community and which merit publication. Thanks for the great work!
NIPS
Title Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering Abstract Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence. Emerging 3D-structured neural scene representations are a promising approach to 3D scene understanding. In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural network. Rendering a ray from an LFN requires only a single network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric based renderers in 3D-structured neural scene representations. In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view consistent light field reconstruction from as little as a single image observation. This results in dramatic reductions in time and memory complexity, and enables real-time rendering. The cost of storing a 360-degree light field via an LFN is two orders of magnitude lower than conventional methods such as the Lumigraph. Utilizing the analytical differentiability of neural implicit representations and a novel parameterization of light space, we further demonstrate the extraction of sparse depth maps from LFNs. 1 Introduction A fundamental problem across computer graphics, computer vision, and artificial intelligence is to infer a representation of a scene’s 3D shape and appearance given impoverished observations such as 2D images of the scene. Recent contributions have advanced the state of the art for this problem significantly. First, neural implicit representations have enabled efficient representation of local 3D scene properties by mapping a 3D coordinate to local properties of the 3D scene at that coordinate [1– 6]. Second, differentiable neural renderers allow for the inference of these representations given only 2D image observations [3, 4]. Finally, leveraging meta-learning approaches such as hypernetworks or gradient-based meta-learning has enabled the learning of distributions of 3D scenes, and therefore reconstruction given only a single image observation [3]. This has enabled a number of applications, such as novel view synthesis [7, 3, 6], 3D reconstruction [5, 3] semantic segmentation [8, 9], and SLAM [10]. However, 3D-structured neural scene representations come with a major limitation: Their rendering is prohibitively expensive, on the order of tens of seconds for a single 256 × 256 image for state-of-the-art approaches. In particular, parameterizing the scene in 3D space necessitates ∗These authors contributed equally to this work. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the discovery of surfaces along camera rays during rendering. This can be solved either by encoding geometry as a level set of an occupancy or signed distance function, or via volumetric rendering, which solves an alpha-compositing problem along each ray. Either approach, however, requires tens or even hundreds of evaluations of the 3D neural scene representation in order to render a single camera ray. We propose a novel neural scene representation, dubbed Light Field Networks or LFNs. 
Instead of encoding a scene in 3D space, Light Field Networks encode a scene by directly mapping an oriented camera ray in the four dimensional space of light rays to the radiance observed by that ray. This obviates the need to query opacity and RGB at 3D locations along a ray or to ray-march towards the level set of a signed distance function, speeding up rendering by three orders of magnitude compared to volumetric methods. In addition to directly encoding appearance, we demonstrate that LFNs encode information about scene geometry in their derivatives. Utilizing the unique flexibility of neural field representations, we introduce the use of Plücker coordinates to parameterize 360-degree light fields, which allow for storage of a-priori unbounded scenes and admit a simple expression for the depth as an analytical function of an LFN. Using this relationship, we demonstrate the computation of geometry in the form of sparse depth maps. While 3D-structured neural scene representations are multi-view consistent by design, parameterizing a scene in light space does not come with this guarantee: the additional degree of freedom enables rays that view the same 3D point to change appearance across viewpoints. For the setting of simple scenes, we demonstrate that this challenge can be overcome by learning a prior over 4D light fields in a meta-learning framework. We benchmark with current state-of-the-art approaches for single-shot novel view synthesis, and demonstrate that LFNs compare favorably with globally conditioned 3D-structured representations, while accelerating rendering and reducing memory consumption by orders of magnitude. In summary, we make the following contributions: 1. We propose Light Field Networks (LFNs), a novel neural scene representation that directly parameterizes the light field of a 3D scene via a neural network, enabling real-time rendering and vast reduction in memory utilization. 2. We demonstrate that we may leverage 6-dimensional Plücker coordinates as a parameterization of light fields, despite their apparent overparameterization of the 4D space of rays, thereby enabling continuous, 360-degree light fields. 3. By embedding LFNs in a meta-learning framework, we demonstrate light field reconstruction and novel view synthesis of simple scenes from sparse 2D image supervision only. 4. We demonstrate that inferred LFNs encode both appearance and geometry of the underlying 3D scenes by extracting sparse depth maps from the derivatives of LFNs, leveraging their analytical differentiability. Scope. The proposed method is currently constrained to the reconstruction of simple scenes, such as single objects and simple room-scale scenes, in line with recent work on learning generative models in this regime [3, 11]. 2 Related Work Neural Scene Representations and Neural Rendering. A large body of work addresses the question of inferring feature representations of 3D scenes useful to downstream tasks across graphics, vision, and machine learning. Models without 3D structure suffer from poor data efficiency [12, 13]. Voxel grids [14–20] offer 3D structure, but scale poorly with spatial resolution. Inspired by neural implicit representations of 3D geometry [1, 2], recent work has proposed to encode properties of 3D scenes as neural fields (also implicit- or coordinate-based representations, see [21] for an overview), neural networks that map 3D coordinates to local properties of the 3D scene at these coordinates. 
Using differentiable rendering, these models can be learned from image observations only [3, 4, 22, 11]. Reconstruction from sparse observations can be achieved by learning priors over the space of neural fields [3, 5, 11, 23–25] or by conditioning of the neural field on local features [6, 26, 27]. Differentiable rendering of such 3D-structured neural scene representations is exceptionally computationally intensive, requiring hundreds of evaluations of the neural representation per ray, with tens of thousands to millions of rays per image. Some recent work seeks to accelerate test-time rendering, but either does not admit generalization [28–30], or does not alleviate the cost of rendering at training/inference time [31–33]. With Light Field Networks, we propose to leverage 360- degree light fields as neural scene representations. We introduce a novel neural field parameterization of 360-degree light fields, infer light fields via meta-learning from as few as a single 2D image observation, and demonstrate that LFNs encode both scene geometry and appearance. Light fields and their reconstruction. Light fields have a rich history as a scene representation in both computer vision and computer graphics. Adelson et al. [34] introduced the 5D plenoptic function as a unified representation of information in the early visual system [35]. Levoy et al. [36] and, concurrently, Gortler et al. [37] introduced light fields in computer graphics as a 4D sampled scene representation for fast image-based rendering. Light fields have since enjoyed popularity as a representation for novel view synthesis [38] and computational photography, e.g. [39]. Light fields enable direct rendering of novel views by simply extracting a 2D slice of the 4D light field. However, they tend to incur significant storage cost, and since they rely on two-plane parameterizations, they make it hard to achieve a full 360-degree representation without concatenating multiple light fields. A significant amount of prior work addresses reconstruction of fronto-parallel light fields via handcrafted priors, such as sparsity in the Fourier or shearlet domains [40–42]. With the advent of deep learning, approaches to light field reconstruction that leverage convolutional neural networks to in-paint or extrapolate light fields from sparse views have been proposed [43, 7, 44], but similarly only support fronto-parallel novel view synthesis. We are instead interested in light fields as a representation of 3D appearance and geometry that enables efficient inference of and reasoning about the properties of the full underlying scene. 3 Background: 3D-structured Neural Scene Representations Recent progress in neural scene representation and rendering has been driven by two key innovations. The first are neural fields, often also referred to as neural implicit- or coordinate-based scene representations Φ3D [3, 4], which model a scene as a continuous function, parameterized as an MLP which maps a 3D coordinate to a representation v of whatever is at that 3D coordinate: Φ3D : R3 → Rn, x 7→ Φ3D(x) = v. (1) The second is a differentiable renderer m, which, given a ray r in R3, and the representation Φ3D, computes the value of the color c of the scene when viewed along r: m(r,Φ3D) = c(r) ∈ R3. (2) Existing rendering methods broadly fall into two categories: sphere-tracing-based renderers [3, 45, 5, 46] and volumetric renderers [19, 4]. These methods require on the order of tens or hundreds of evaluations of the values of Φ3D along a ray r to compute c(r). 
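For concreteness, the following is a minimal sketch of the per-ray alpha-compositing quadrature used by volumetric renderers of this kind. It is our illustration, not the implementation of any cited method; the stand-in representation `field_3d`, the sample count, and the near/far bounds are assumptions. The point is that the colour of a single ray costs `num_samples` evaluations of the underlying 3D representation.

```python
# Minimal sketch of volumetric rendering along one ray (illustrative only).
# `field_3d` stands in for a 3D-structured representation mapping points in R^3
# to per-point colour and density (rgb, sigma).
import jax.numpy as jnp

def render_ray_volumetric(field_3d, origin, direction, near=0.5, far=5.0, num_samples=128):
    """Alpha-composite `num_samples` evaluations of `field_3d` along a single ray."""
    t = jnp.linspace(near, far, num_samples)                      # sample depths along the ray
    points = origin[None, :] + t[:, None] * direction[None, :]    # (N, 3) sample locations
    rgb, sigma = field_3d(points)                                  # N evaluations of the representation
    delta = jnp.concatenate([t[1:] - t[:-1], jnp.array([1e10])])   # spacing between samples
    alpha = 1.0 - jnp.exp(-sigma * delta)                          # per-segment opacity
    trans = jnp.cumprod(1.0 - alpha + 1e-10)                       # accumulated transmittance
    trans = jnp.concatenate([jnp.ones(1), trans[:-1]])
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)                    # composited ray colour

# Toy usage with a constant dummy field (hypothetical, for illustration only):
dummy_field = lambda x: (jnp.ones((x.shape[0], 3)), jnp.ones(x.shape[0]))
c = render_ray_volumetric(dummy_field, jnp.zeros(3), jnp.array([0.0, 0.0, 1.0]))
```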
Evaluating the representation tens to hundreds of times per ray leads to extraordinarily large memory and time complexity of rendering. As training requires error backpropagation through the renderer, this impacts both training and test time.

4 The Light Field Network Scene Representation

We propose to represent a scene as a 360-degree neural light field, a function parameterized by an MLP $\Phi_\phi$ with parameters $\phi$ that directly maps the 4D space $\mathcal{L}$ of oriented rays to their observed radiance:
$$\Phi_\phi : \mathcal{L} \to \mathbb{R}^3, \quad \mathbf{r} \mapsto \Phi_\phi(\mathbf{r}) = \mathbf{c}(\mathbf{r}). \qquad (3)$$
A light field completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. Light fields have the unique property that rendering is achieved by a single evaluation of $\Phi$ per light ray, i.e., no ray-casting is required. Moreover, while the light field only encodes appearance explicitly, its derivatives encode geometry information about the underlying 3D scene [47, 34, 35]. This makes many methods to extract 3D geometry from light fields possible [48–51], and we demonstrate efficient recovery of sparse depth maps from LFNs below.

4.1 Implicit representations for 360-degree light fields

To fully represent a 3D scene requires a parameterization of all light rays in space. Conventional light field methods are constrained to leverage minimal parameterizations of the 4D space of rays, due to the high memory requirements of discretely sampled high-dimensional spaces. In contrast, our use of neural field representations allows us to freely choose a continuous parameterization that is mathematically convenient. In particular, we propose to leverage the 6D Plücker parameterization of the space of light rays $\mathcal{L}$ for LFNs. The Plücker coordinates (see [52] for an excellent overview) of a ray $\mathbf{r}$ through a point $\mathbf{p}$ in a normalized direction $\mathbf{d}$ are
$$\mathbf{r} = (\mathbf{d}, \mathbf{m}) \in \mathbb{R}^6 \quad \text{where} \quad \mathbf{m} = \mathbf{p} \times \mathbf{d}, \quad \text{for } \mathbf{d} \in S^2, \mathbf{p} \in \mathbb{R}^3, \qquad (4)$$
where $\times$ denotes the cross product. While Plücker coordinates are a priori 6-tuples of real numbers, the coordinates of any ray lie on a curved 4-dimensional subspace $\mathcal{L}$. Plücker coordinates uniformly represent all oriented rays in space without singular directions or special cases. Intuitively, a general ray $\mathbf{r}$ together with the origin defines a plane, and $\mathbf{m}$ is a normal vector to the plane with its magnitude capturing the distance from the ray to the origin; if $\mathbf{m} = 0$ then the ray passes through the origin and is defined by its direction $\mathbf{d}$. This is in contrast to conventional light field parameterizations: fronto-parallel two-plane or cylindrical parameterizations cannot represent the full 360-degree light field of a scene [36, 53]. Cubical two-plane arrangements [37, 38] are not continuous, complicating the parameterization via a neural implicit representation. In contrast to the two-sphere parameterization [54], Plücker coordinates do not require that scenes are bounded in size and do not require spherical trigonometry. The parameterization via a neural field enables compact storage of a 4D light field that can be sampled at arbitrary resolutions, while non-neural representations are resolution-limited. Neural fields further allow the analytical computation of derivatives. This enables the efficient computation of sparse depth maps, where prior representations of light fields require finite-difference approximations of the gradient [48–50].
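To make the Plücker parameterization of Equation 4 concrete, here is a minimal sketch (our illustration; the function name is hypothetical) of constructing the coordinates of a ray and checking the bilinear constraint $\langle \mathbf{d}, \mathbf{m} \rangle = 0$ that confines valid coordinates to the curved 4-dimensional subspace of $\mathbb{R}^6$.

```python
import jax.numpy as jnp

def plucker_ray(p, d):
    """Plücker coordinates r = (d, m) of the ray through point p with direction d (Eq. 4)."""
    d = d / jnp.linalg.norm(d)    # normalize the direction
    m = jnp.cross(p, d)           # moment vector; ||m|| is the ray's distance to the origin
    return jnp.concatenate([d, m])

# Any valid Plücker 6-tuple satisfies <d, m> = 0, since m = p x d is orthogonal to d.
# Together with ||d|| = 1, this is why the coordinates sweep out only a 4D subspace of R^6.
r = plucker_ray(jnp.array([1.0, 0.0, 2.0]), jnp.array([0.0, 0.0, 1.0]))
assert jnp.abs(jnp.dot(r[:3], r[3:])) < 1e-6
```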
Rendering LFNs. To render an image given an LFN, one computes the Plücker coordinates $\mathbf{r}_{u,v}$ of the camera rays at each $(u, v)$ pixel coordinate in the image according to Equation 4. Specifically, given the extrinsic $\mathbf{E} = [\mathbf{R} \,|\, \mathbf{t}] \in SE(3)$ and intrinsic $\mathbf{K} \in \mathbb{R}^{3 \times 3}$ camera matrices [55] of a camera, one may retrieve the Plücker coordinates of the ray $\mathbf{r}_{u,v}$ at pixel coordinate $(u, v)$ as
$$\mathbf{r}_{u,v} = (\mathbf{d}_{u,v},\ \mathbf{t} \times \mathbf{d}_{u,v}) / \lVert \mathbf{d}_{u,v} \rVert, \quad \text{where} \quad \mathbf{d}_{u,v} = \mathbf{R}\mathbf{K}^{-1}(u, v, 1)^\top + \mathbf{t}, \qquad (5)$$
where we use the world-to-camera convention for the extrinsic camera parameters. Rendering then amounts to a single evaluation of the LFN $\Phi$ for each ray, $\mathbf{c}_{u,v} = \Phi(\mathbf{r}_{u,v})$. For notational convenience, we introduce a rendering function
$$\Theta^{\Phi}_{\mathbf{E},\mathbf{K}} : \mathbb{R}^{\ell} \to \mathbb{R}^{H \times W \times 3} \qquad (6)$$
which renders an LFN $\Phi_\phi$ with parameters $\phi \in \mathbb{R}^{\ell}$ when viewed from a camera with extrinsic and intrinsic parameters $(\mathbf{E}, \mathbf{K})$ into an image.

4.2 The geometry of Light Field Networks

We will now analyze the properties of LFNs representing Lambertian 3D scenes, and illustrate how the geometry of the underlying 3D scene is encoded. We will first derive an expression that establishes a relationship between LFNs and the classic two-plane parameterization of the light field. Subsequently, we will derive an expression for the depth of a ray in terms of the local color gradient of the light field, therefore allowing us to efficiently extract sparse depth maps from the light field at any camera pose via analytical differentiation of the neural implicit representation. Please see Figure 2 for an overview.

Locally linear slices of the light field. We derive here a local parameterization that will allow us to work with an LFN as if it were a conventional two-plane light field. Given a ray $\mathbf{r}$ in Plücker coordinates, we pick two points $\mathbf{x}, \mathbf{x}' \in \mathbb{R}^3$ along this ray. We then find a normalized direction $\mathbf{d} \in S^2$ not parallel to the ray direction; a canonical choice is a direction orthogonal to the ray direction. We may now parameterize two parallel lines $a(s) = \mathbf{x} + s\mathbf{d}$ and $b(t) = \mathbf{x}' + t\mathbf{d}$ that give rise to a local two-plane basis of the light field with ray coordinates $s$ and $t$. $\mathbf{r}$ intersects these lines at the two-plane coordinates $(s, t) = (0, 0)$. This choice of local basis now assigns the two-plane coordinates $(s, t)$ to the ray from $a(s)$ to $b(t)$. In Figure 2, we illustrate this process on a simple 2D scene.

Epipolar Plane Images and their geometry. The Plücker coordinates (see Eq. 4) enable us to extract a 2D slice from an LFN by varying $(s, t)$ and sampling $\Phi$ on the Plücker coordinates of the rays parameterized by pairs of points on the lines $a(s)$ and $b(t)$:
$$c(s, t) = \Phi(\mathbf{r}(s, t)), \quad \text{where} \quad \mathbf{r}(s, t) = \overrightarrow{a(s)b(t)} = \left( \frac{b(t) - a(s)}{\lVert b(t) - a(s) \rVert},\ \frac{a(s) \times b(t)}{\lVert b(t) - a(s) \rVert} \right). \qquad (7)$$
The image of this 2D slice $c(s, t)$ is well-known in the light field literature as an Epipolar Plane Image (EPI) [47]. EPIs carry rich information about the geometry of the underlying 3D scene. For example, consider a point $\mathbf{p}$ on the surface of an object in the scene; please see Figure 2 for a diagram. A point $\mathbf{p} \in \mathbb{R}^2$ has a 1-dimensional family of rays going through the point, which corresponds to a (green) line $L_\mathbf{p}$ in the EPI. In a Lambertian scene, all rays that meet in this point and that are not occluded by other objects must observe the same color. Therefore, the light field is constant along this line. As one travels along $L_\mathbf{p}$, rotating through the family of rays through $\mathbf{p}$, one eventually reaches a (magenta) tangent ray $\tau$ to the object. At a tangent ray, the value of the EPI ceases to be constant, and the light field changes its color to whatever is disoccluded by the object at this tangent ray. Because objects at different depths undergo differing amounts of parallax, the slope of the segment of $L_\mathbf{p}$ along which $c$ is constant determines the 3D coordinates of $\mathbf{p}$. Finally, by observing that we may extract EPIs from any perspective, it is clear that an LFN encodes the full 3D geometry of the underlying scene. Intuitively, this may also be seen by considering that one could render out all possible perspectives of the underlying scene, and solve a classic multi-view stereo problem to retrieve the shape.

Extracting depth maps from LFNs. A correctly inferred light field necessarily contains accurate 3D geometry information, although the geometry is encoded in a nontrivial way. To extract 3D geometry from an LFN, we utilize the property of the two-plane parameterization that the light field is constant on segments $L_\mathbf{p}$, the slopes of which determine $\mathbf{p}$. In the supplemental material, we derive

Proposition 1. For a Lambertian scene, the distance $d$ along $\mathbf{r} = \overrightarrow{a(s)b(t)}$ from $a(s)$ to the point $\mathbf{p}$ on the object is
$$d(\mathbf{r}) = D\, \frac{\partial_t c(s, t)}{\partial_s c(s, t) + \partial_t c(s, t)}, \qquad (8)$$
where $a(s)$ and $b(t)$ are as above, $c(s, t)$ is defined by (7), and $D$ is the distance between the lines $a(s)$ and $b(t)$. Thus $\mathbf{p} = a(s) + d(\mathbf{r})\, \frac{b(t) - a(s)}{\lVert b(t) - a(s) \rVert}$, and $\partial_x$ denotes the partial derivative with respect to variable $x$.

This result yields meaningful depth estimates wherever the derivatives of the light field are nonzero along the ray. In practice, we sample several rays in a small $(s, t)$ neighborhood of the ray $\mathbf{r}$ and declare depth estimates as invalid if the gradients have high variance; please see the code for implementation details. This occurs when $\mathbf{r}$ hits the object at a point where the surface color is changing, or when $\mathbf{r}$ is a tangent ray. We note that there is a wealth of prior art that could be used to extend this approach to extract dense depth maps [48–51].
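The following sketch puts Equations 5, 6, and 8 together: per-pixel Plücker rays, single-evaluation rendering, and sparse depth from the analytic light-field gradients. It is an illustration under stated assumptions, not the released implementation: `lfn` is a stand-in for a trained network mapping $\mathbb{R}^6 \to \mathbb{R}^3$, the sketch uses a camera-to-world convention (rotation `R_c2w` and camera centre `origin`) rather than the world-to-camera form of Equation 5, and the simple per-channel averaging at the end replaces the paper's rejection of estimates whose gradients have high variance.

```python
import jax
import jax.numpy as jnp

def plucker_from_points(a, b):
    """Plücker coordinates of the oriented ray from point a towards point b (cf. Eq. 7)."""
    d = b - a
    n = jnp.linalg.norm(d)
    return jnp.concatenate([d / n, jnp.cross(a, b) / n])

def camera_rays(R_c2w, origin, K, H, W):
    """Per-pixel Plücker rays for a pinhole camera (cf. Eq. 5, camera-to-world convention)."""
    u, v = jnp.meshgrid(jnp.arange(W) + 0.5, jnp.arange(H) + 0.5)
    pix = jnp.stack([u, v, jnp.ones_like(u)], axis=-1).reshape(-1, 3)
    dirs = pix @ jnp.linalg.inv(K).T @ R_c2w.T                    # world-space ray directions
    dirs = dirs / jnp.linalg.norm(dirs, axis=-1, keepdims=True)
    moments = jnp.cross(jnp.broadcast_to(origin, dirs.shape), dirs)
    return jnp.concatenate([dirs, moments], axis=-1)              # (H*W, 6)

def render(lfn, rays):
    """Single-evaluation rendering: one forward pass of the LFN per ray (cf. Eq. 6)."""
    return jax.vmap(lfn)(rays)                                    # (H*W, 3)

def sparse_depth(lfn, x, ray_dir, aux_dir, D=0.1):
    """Depth along the ray through x with direction ray_dir via Proposition 1 / Eq. 8.
    aux_dir must be a unit vector orthogonal to ray_dir so that D is the line spacing."""
    def c(s, t):
        a = x + s * aux_dir                    # point on line a(s)
        b = x + D * ray_dir + t * aux_dir      # point on the parallel line b(t)
        return lfn(plucker_from_points(a, b))
    dc_ds, dc_dt = jax.jacfwd(c, argnums=(0, 1))(0.0, 0.0)        # analytic light-field gradients
    depth = D * dc_dt / (dc_ds + dc_dt + 1e-8)                     # Eq. 8, per colour channel
    return jnp.mean(depth)   # crude aggregate; the paper instead discards high-variance estimates
```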
4.3 Meta-learning with conditional Light Field Networks

We consider a dataset $D$ consisting of $N$ 3D scenes
$$S_i = \{(\mathbf{I}_j, \mathbf{E}_j, \mathbf{K}_j)\}_{j=1}^{K} \in \mathbb{R}^{H \times W \times 3} \times SE(3) \times \mathbb{R}^{3 \times 3}, \quad i = 1 \ldots N, \qquad (9)$$
with $K$ images $\mathbf{I}_j$ of each scene taken with cameras with extrinsic parameters $\mathbf{E}_j$ and intrinsic parameters $\mathbf{K}_j$ [55]. Each scene is completely described by the parameters $\phi_i \in \mathbb{R}^{\ell}$ of its corresponding light field MLP $\Phi_i = \Phi_{\phi_i}$.

Meta-learning and multi-view consistency. In the case of 3D-structured neural scene representations, ray-marching or volumetric rendering naturally ensures multi-view consistency of the reconstructed 3D scene representation. In contrast, a general 4D function $\Phi : \mathcal{L} \to \mathbb{R}^3$ is not multi-view consistent, as most such functions are not the light fields of any 3D scene. We propose to overcome this challenge by learning a prior over the space of light fields. As we will demonstrate, this prior can also be used to reconstruct an LFN from a single 2D image observation. In this paradigm, differentiable ray-casting is a method to force the light field of a scene to be multi-view consistent, while we instead impose multi-view consistency by learning a prior over light fields.

Meta-learning framework. We propose to represent each 3D scene $S_i$ by its own latent vector $\mathbf{z}_i \in \mathbb{R}^k$. Generalizing to new scenes amounts to learning a prior over the space of light fields that is concentrated on the manifold of multi-view consistent light fields of natural scenes. To represent this latent manifold, we utilize a hypernetwork [56, 3].
The hypernetwork is a function, represented as an MLP,
$$\Psi : \mathbb{R}^k \to \mathbb{R}^{\ell}, \quad \Psi_\psi(\mathbf{z}_i) = \phi_i, \qquad (10)$$
with parameters $\psi$, which sends the latent code $\mathbf{z}_i$ of the $i$-th scene to the parameters of the corresponding LFN. Several reasonable approaches exist to obtain latent codes $\mathbf{z}_i$. One may leverage a convolutional- or transformer-based image encoder, directly inferring the latent from an image [11, 5], or utilize gradient-based meta-learning [23]. Here, we follow an auto-decoder framework [1, 3] to find the latent codes $\mathbf{z}_i$, but note that LFNs are in no way constrained to this approach. We do not claim that this particular meta-learning method will outperform other forms of conditioning, such as gradient-based meta-learning [57, 23] or FiLM conditioning [58], but perform a comparison to a conditioning-by-concatenation approach in the appendix. We assume that the latent vectors have a Gaussian prior with zero mean and a diagonal covariance matrix. At training time, we jointly optimize the latent parameters $\mathbf{z}_i$ together with the hypernetwork parameters $\psi$ using the objective
$$\arg\min_{\{\mathbf{z}_i\}, \psi} \sum_i \sum_j \lVert \Theta^{\Phi}_{\mathbf{E}_j, \mathbf{K}_j}(\Psi_\psi(\mathbf{z}_i)) - \mathbf{I}_j \rVert_2^2 + \lambda_{\text{lat}} \lVert \mathbf{z}_i \rVert_2^2. \qquad (11)$$
Here, $\Theta^{\Phi}$ is the rendering function (Equation 6), the first term is an $\ell_2$ loss penalizing light fields that disagree with the observed images, and the second term enforces the prior over the latent variables. We solve Equation 11 using gradient descent. At test time, we freeze the parameters of the hypernetwork and reconstruct the light field for a new scene $S$ given a single observation of the scene $\{(\mathbf{I}, \mathbf{E}, \mathbf{K})\}$ by optimizing, using gradient descent, the latent variable $\mathbf{z}_S$ of the scene, such that the reconstructed light field $\Phi_{\Psi_\psi(\mathbf{z}_S)}$ best matches the given observation of the scene:
$$\mathbf{z}_S = \arg\min_{\mathbf{z}} \lVert \Theta^{\Phi}_{\mathbf{E}, \mathbf{K}}(\Psi_\psi(\mathbf{z})) - \mathbf{I} \rVert_2^2 + \lambda_{\text{lat}} \lVert \mathbf{z} \rVert_2^2. \qquad (12)$$
(A minimal code sketch of this training and inference procedure is given at the end of this subsection.)

Global vs. local conditioning. The proposed meta-learning framework globally conditions an LFN on a single latent variable $\mathbf{z}$. Recent work instead leverages local conditioning, where a neural field is conditioned on local features extracted from a context image [26, 6, 27]. In particular, the recently proposed pixelNeRF [6] has achieved impressive results on few-shot novel view synthesis. As we will see, the current formulation of LFNs does not outperform pixelNeRF. We note, however, that local conditioning methods solve a different problem. Rather than learning a prior over classes of objects, local conditioning methods learn priors over patches, answering the question “What does this image patch look like from a different perspective?”. As a result, this approach does not learn a latent space of neural scene representations. Rather, scene context is required to be available at test time to reason about the underlying 3D scene, and the representation is not compact: the size of the conditioning grows with the number of context observations. In contrast, globally conditioned methods [3, 11, 1, 2] first infer a global representation that is invariant to the number of context views and subsequently discard the observations. However, local conditioning enables better generalization due to the shift-equivariance of convolutional neural networks. An equivalent to local conditioning in light fields is non-obvious, and an exciting direction for future work.
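The following is a deliberately miniature, self-contained sketch of the auto-decoder framework of Equations 10-12. It is our illustration, not the authors' architecture or optimizer: the two-layer LFN, hidden sizes, plain SGD updates, learning rates, and random placeholder rays/colours are all assumptions (the paper uses a 6-layer LFN and a 3-layer hypernetwork with layer normalization, trained with ADAM).

```python
import jax
import jax.numpy as jnp

RAY_DIM, HIDDEN, LATENT_DIM = 6, 64, 128
LFN_PARAM_DIM = RAY_DIM * HIDDEN + HIDDEN + HIDDEN * 3 + 3   # flat size of the tiny LFN below

def lfn_apply(flat_phi, ray):
    """A tiny 2-layer LFN whose weights are produced by the hypernetwork."""
    i = 0
    W1 = flat_phi[i:i + RAY_DIM * HIDDEN].reshape(RAY_DIM, HIDDEN); i += RAY_DIM * HIDDEN
    b1 = flat_phi[i:i + HIDDEN]; i += HIDDEN
    W2 = flat_phi[i:i + HIDDEN * 3].reshape(HIDDEN, 3); i += HIDDEN * 3
    b2 = flat_phi[i:i + 3]
    return jax.nn.relu(ray @ W1 + b1) @ W2 + b2

def hyper_apply(psi, z):
    """Hypernetwork Psi_psi: latent z -> flat LFN parameters phi (Eq. 10)."""
    h = jax.nn.relu(z @ psi["W1"] + psi["b1"])
    return h @ psi["W2"] + psi["b2"]

def scene_loss(psi, z, rays, colors, lam=1e-2):
    """One scene's term of the training objective (Eq. 11): reconstruction + latent prior."""
    phi = hyper_apply(psi, z)
    pred = jax.vmap(lambda r: lfn_apply(phi, r))(rays)
    return jnp.mean((pred - colors) ** 2) + lam * jnp.sum(z ** 2)

def sgd(params, grads, lr):
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Training step (illustrative): jointly update the hypernetwork and the scene latent.
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
psi = {"W1": 1e-2 * jax.random.normal(k1, (LATENT_DIM, 256)), "b1": jnp.zeros(256),
       "W2": 1e-2 * jax.random.normal(k2, (256, LFN_PARAM_DIM)), "b2": jnp.zeros(LFN_PARAM_DIM)}
z_i = jnp.zeros(LATENT_DIM)
rays = jax.random.normal(k3, (512, RAY_DIM))   # placeholder Plücker rays for one scene
colors = jnp.zeros((512, 3))                   # placeholder observed colours
g_psi, g_z = jax.grad(scene_loss, argnums=(0, 1))(psi, z_i, rays, colors)
psi, z_i = sgd(psi, g_psi, 1e-4), z_i - 1e-4 * g_z

# Test time (Eq. 12): freeze psi and optimise only the latent of the new scene.
z_new = jnp.zeros(LATENT_DIM)
for _ in range(100):
    z_new = z_new - 1e-2 * jax.grad(scene_loss, argnums=1)(psi, z_new, rays, colors)
```

Because only the latent is optimized at test time, the frozen hypernetwork acts as the learned prior that keeps the reconstructed light field on the manifold of multi-view consistent scenes.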
5 Experiments

We demonstrate the efficacy of LFNs by reconstructing 360-degree light fields of a variety of simple 3D scenes. In all experiments, we parameterize LFNs via a 6-layer ReLU MLP, and the hypernetwork as a 3-layer ReLU MLP, both with layer normalization. We solve all optimization problems using the ADAM solver with a step size of $10^{-4}$. Please find more results, as well as precise hyperparameter, implementation, and dataset details, in the supplemental document and video.

Reconstructing appearance and geometry of single-object and room-scale light fields. We demonstrate that LFNs can parameterize 360-degree light fields of both single-object ShapeNet [59] objects and simple, room-scale environments. We train LFNs on the ShapeNet “cars” dataset with 50 observations per object from [3], as well as on simple room-scale environments as proposed in [13]. Subsequently, we evaluate the ability of LFNs to generate novel views of the underlying 3D scenes. Please see Figure 3 for qualitative results. LFNs succeed in parameterizing the 360-degree light field, enabling novel view synthesis at real-time frame rates (see supplemental video). We further demonstrate that LFNs encode scene geometry by presenting Epipolar Plane Images and leveraging the relationship derived in Equation 8 to infer sparse depth maps. We highlight that neither rendering nor depth map extraction requires ray-casting; they require only a single evaluation of the network, or of the network and its gradient, respectively.

Multi-class single-view reconstruction. Following [5, 6], we benchmark LFNs against recent global conditioning methods on the task of single-view reconstruction and novel view synthesis of the 13 largest ShapeNet categories. We follow the same evaluation protocol as [60] and train a single model across all categories. See Figure 4 for qualitative and Table 1 for quantitative baseline comparisons. We significantly outperform both Differentiable Volumetric Rendering (DVR) [5] and Scene Representation Networks (SRNs) [3] on all but two classes by an average of 1 dB, while requiring more than an order of magnitude fewer network evaluations per ray. Qualitatively, we find that the reconstructions from LFNs are often crisper than those of either Scene Representation Networks or DVR. Note that DVR requires additional ground-truth foreground-background segmentation masks.

Class-specific single-view reconstruction. We benchmark LFNs on single-shot reconstruction on the ShapeNet “cars” and “chairs” classes as proposed in SRNs [3]. See Figure 5 for qualitative and quantitative results. We report performance better than SRNs in PSNR and on par in terms of SSIM on the “cars” class, and worse in PSNR but better in terms of SSIM on the “chairs” class, while requiring an order of magnitude fewer network evaluations and rendering in real time. We attribute the drop in performance compared to multi-class reconstruction to the smaller dataset size, causing multi-view inconsistency.

Global vs. local conditioning and comparison to pixelNeRF [6]. We compare global conditioning, where a single latent is inferred to describe the whole scene [3], with local conditioning, where latents are inferred per pixel in a 2D image and leveraged to locally condition a neural implicit representation [26, 27, 6]. We benchmark against the recently proposed pixelNeRF [6]. As noted above (see Section 4.3), local conditioning does not infer a compact neural scene representation of the scene. Nevertheless, we provide the comparison here for completeness. See Figure 6 for qualitative and quantitative results. On average, LFNs perform 1 dB worse than pixelNeRF in the single-class case, and 2 dB worse in the multi-class setting.
Real-time rendering and storage cost. See Table 2 for a quantitative comparison of the rendering complexity of LFNs with that of volumetric and ray-marching based neural renderers [3, 45, 19, 4, 6]. All clock times were collected for rendering 256 × 256 images on an NVIDIA RTX 6000 GPU. We further compare the cost of storing a single LFN with the cost of storing a conventional light field. With approximately 400k parameters, a single LFN requires around 1.6 MB of storage, compared to 146 MB required for storing a 360-degree light field at a resolution of 256 × 256 × 17 × 17 in the six-plane Lumigraph configuration.

Multi-view consistency as a function of training set size. We investigate how multi-view consistency scales with the amount of data that the prior is trained on. Please find this analysis in the supplementary material.

Overfitting of single 3D scenes. We investigate overfitting a single 3D scene with a Light Field Network with positional encodings / sinusoidal activations [24, 61]. Please find this analysis in the supplementary material.

Evaluation of reconstructed geometry. We investigate the quality of the geometry that can be computed from an LFN via Eq. 8. For every sample in the class-specific single-shot reconstruction experiment, we extract its per-view sparse depth map. We then backproject depth maps from four views into 3D to reconstruct a point cloud, and benchmark mean L1 error on valid depth estimates against Scene Representation Networks [3]. Fig. 7 displays qualitative and quantitative results. Qualitatively, the point clouds succeed in capturing fine detail such as the armrests of chairs. Quantitatively, LFNs outperform SRNs on both cars and chairs. We note that LFNs have a slight advantage in this comparison, as we can only benchmark on the sparse depth values for which LFNs have high confidence. This includes occlusion boundaries, which are areas where sphere-tracing-based SRNs incur high error, as the sphere tracer is forced to take smaller and smaller steps and may not reach the surface. We highlight that we do not claim that the proposed method is competitive with methods designed specifically for geometry reconstruction; we report these results only to demonstrate that valid depth estimates can be extracted from an LFN.

Limitations. First, like every existing light field approach, LFNs store only one color per oriented ray, which makes rendering views from cameras placed in between occluding objects challenging, even though the information may still be stored in the light field. Second, though we outperform globally conditioned methods, we currently do not outperform the locally conditioned pixelNeRF. Finally, as opposed to 3D-structured representations, LFNs do not enforce strict multi-view consistency, and may be inconsistent in the case of small datasets.

6 Discussion and Conclusion

We have proposed Light Field Networks, a novel neural scene representation that directly parameterizes the full 360-degree, 4D light field of a 3D scene via a neural implicit representation. This enables both real-time neural rendering with a single evaluation of the neural scene representation per ray, as well as sparse depth map extraction without ray-casting. Light Field Networks outperform globally conditioned baselines in single-shot novel view synthesis, while being three orders of magnitude faster and less memory-intensive than current volumetric rendering approaches.
Exciting avenues for future work include combining LFNs with local conditioning, which would enable stronger out-of-distribution generalization, studying the learning of non-Lambertian scenes, and enabling camera placement in obstructed 3D space. With this work, we make important contributions to the emerging fields of neural rendering and neural scene representations, with exciting applications across computer vision, computer graphics, and robotics. Societal Impacts. Potential improvements extending our work on few-observation novel view synthesis could enable abuse by decreasing the cost of non-consensual impersonations. We refer the reader to a recent review of neural rendering [22] for an in-depth discussion of this topic. Acknowledgements and Disclosure of Funding This work is supported by the NSF under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/), ONR under 1015 G TA243/N00014-16-1-2007 (Understanding Scenes and Events through Joint Parsing, Cognitive Reasoning and Lifelong Learning), Mitsubishi under 026455-00001 (Building World Models from some data through analysis by synthesis), DARPA under CW3031624 (Transfer, Augmentation and Automatic Learning with Less Labels), as well as the Singapore DSTA under DST00OECI20300823 (New Representations for Vision). We thank Andrea Tagliasacchi, Tomasz Malisiewicz, Prafull Sharma, Ludwig Schubert, Kevin Smith, Bernhard Egger, Christian Richardt, Manuel Rey Area, and Jürgen and Susanne Sitzmann for interesting discussions and feedback, and Alex Yu for kindly sharing the outputs of pixelNeRF and baselines with us.
1. What is the focus and contribution of the paper on neural scene representation?
2. What are the strengths of the proposed Light Field Networks (LFNs) regarding efficiency and real-time rendering?
3. What are the limitations of the current approach, particularly regarding scene complexity?
4. How does the evaluation of the proposed method compare to other state-of-the-art approaches in terms of timings and qualitative results?
5. What are some interesting aspects for further evaluation, such as camera configurations and global vs. local conditioning?
6. Is the paper well-structured and easy to follow, with informative figures, tables, and captions?
7. Are there any typos or inconsistencies in the notation used in the paper?
8. Do the authors provide sufficient motivation for their approach and discuss its potential future developments?
9. Does the paper seem reproducible based on the information provided?
Summary Of The Paper Review
Summary Of The Paper
The authors propose a novel neural scene representation called Light Field Networks (LFNs), where geometry and appearance of the considered scene are represented in a 360-degree, 4D light field that is parameterized via a neural implicit representation. In contrast to other ray-marching-based or volumetric-rendering-based techniques that rely on hundreds of evaluations per ray in similar tasks, the proposed LFN requires only a single evaluation per ray, thereby significantly improving efficiency and enabling real-time rendering at low memory requirements. A key aspect in achieving this is the parameterization of the space of light rays based on Plücker coordinates. In addition, the authors embed LFNs in a meta-learning framework to allow novel view synthesis from solely sparse 2D image supervision. The overall approach seems novel and interesting. While the complexity benefits have been demonstrated by the authors, the current approach seems to be limited to simple scenes.

Review
Originality: The approach seems novel and reasonable. In particular, the complexity benefits have also been demonstrated by the authors.

Evaluation: In the paragraph on 'real-time rendering and storage cost' the statement might be misleading. Only references 3 and 6 have been included in Table 2, which might confuse less experienced readers. In this comparison, the authors provide an evaluation of the computational complexity w.r.t. SRNs and pixelNeRF, where the proposed approach shows clear benefits. However, insights on the timings for different image resolutions have not been provided. The authors provide quantitative and qualitative results for single-shot multi-class reconstruction and class-specific single-shot reconstruction. However, the comparison only includes DVR and SRNs, which do not seem to be state-of-the-art approaches any more. A qualitative comparison to further scene-overfitting approaches (i.e. NeRF-like approaches) beyond Figure 6 would be interesting as well. For the evaluation of global vs. local conditioning, the authors provide only a comparison to the local conditioning of pixelNeRF, which achieves better quality. Another interesting aspect for the evaluation would be a discussion of which camera configurations can be handled, i.e. how many views are required and how the views may be distributed for a reliable scene representation. Limitations have been discussed.

Exposition: The paper is well-structured and easy to follow. Figures/tables and captions are informative. The approach is well-motivated. There are a few typos that can be resolved in proof-reading. In Equation 4 and the nearby text, the ray seems to be denoted inconsistently by r and l. In Section 5, as I understood, the reference in the paragraph 'class-specific single-view reconstruction' should read Figure 5 (instead of Figure 4). There is an unfinished sentence at the end of the caption of Figure 5 on page 8 and at the end of the caption of Figure 6 on page 9.

Reproducibility: The paper seems reproducible from the facts in the paper. In addition, the authors mention that they will release code upon acceptance.

Post-Rebuttal: I thank the authors for providing comprehensive feedback to the reviewers' comments and agree with the other reviewers that this significantly improves the paper. I like the presented contributions and their potential for future developments, and I look forward to the inclusion of the insights provided in the rebuttal into the paper and supplemental material.
This also makes me increase my rating towards accept.
NIPS
Title Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering Abstract Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence. Emerging 3D-structured neural scene representations are a promising approach to 3D scene understanding. In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural network. Rendering a ray from an LFN requires only a single network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric based renderers in 3D-structured neural scene representations. In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view consistent light field reconstruction from as little as a single image observation. This results in dramatic reductions in time and memory complexity, and enables real-time rendering. The cost of storing a 360-degree light field via an LFN is two orders of magnitude lower than conventional methods such as the Lumigraph. Utilizing the analytical differentiability of neural implicit representations and a novel parameterization of light space, we further demonstrate the extraction of sparse depth maps from LFNs. 1 Introduction A fundamental problem across computer graphics, computer vision, and artificial intelligence is to infer a representation of a scene’s 3D shape and appearance given impoverished observations such as 2D images of the scene. Recent contributions have advanced the state of the art for this problem significantly. First, neural implicit representations have enabled efficient representation of local 3D scene properties by mapping a 3D coordinate to local properties of the 3D scene at that coordinate [1– 6]. Second, differentiable neural renderers allow for the inference of these representations given only 2D image observations [3, 4]. Finally, leveraging meta-learning approaches such as hypernetworks or gradient-based meta-learning has enabled the learning of distributions of 3D scenes, and therefore reconstruction given only a single image observation [3]. This has enabled a number of applications, such as novel view synthesis [7, 3, 6], 3D reconstruction [5, 3] semantic segmentation [8, 9], and SLAM [10]. However, 3D-structured neural scene representations come with a major limitation: Their rendering is prohibitively expensive, on the order of tens of seconds for a single 256 × 256 image for state-of-the-art approaches. In particular, parameterizing the scene in 3D space necessitates ∗These authors contributed equally to this work. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the discovery of surfaces along camera rays during rendering. This can be solved either by encoding geometry as a level set of an occupancy or signed distance function, or via volumetric rendering, which solves an alpha-compositing problem along each ray. Either approach, however, requires tens or even hundreds of evaluations of the 3D neural scene representation in order to render a single camera ray. We propose a novel neural scene representation, dubbed Light Field Networks or LFNs. 
Instead of encoding a scene in 3D space, Light Field Networks encode a scene by directly mapping an oriented camera ray in the four dimensional space of light rays to the radiance observed by that ray. This obviates the need to query opacity and RGB at 3D locations along a ray or to ray-march towards the level set of a signed distance function, speeding up rendering by three orders of magnitude compared to volumetric methods. In addition to directly encoding appearance, we demonstrate that LFNs encode information about scene geometry in their derivatives. Utilizing the unique flexibility of neural field representations, we introduce the use of Plücker coordinates to parameterize 360-degree light fields, which allow for storage of a-priori unbounded scenes and admit a simple expression for the depth as an analytical function of an LFN. Using this relationship, we demonstrate the computation of geometry in the form of sparse depth maps. While 3D-structured neural scene representations are multi-view consistent by design, parameterizing a scene in light space does not come with this guarantee: the additional degree of freedom enables rays that view the same 3D point to change appearance across viewpoints. For the setting of simple scenes, we demonstrate that this challenge can be overcome by learning a prior over 4D light fields in a meta-learning framework. We benchmark with current state-of-the-art approaches for single-shot novel view synthesis, and demonstrate that LFNs compare favorably with globally conditioned 3D-structured representations, while accelerating rendering and reducing memory consumption by orders of magnitude. In summary, we make the following contributions: 1. We propose Light Field Networks (LFNs), a novel neural scene representation that directly parameterizes the light field of a 3D scene via a neural network, enabling real-time rendering and vast reduction in memory utilization. 2. We demonstrate that we may leverage 6-dimensional Plücker coordinates as a parameterization of light fields, despite their apparent overparameterization of the 4D space of rays, thereby enabling continuous, 360-degree light fields. 3. By embedding LFNs in a meta-learning framework, we demonstrate light field reconstruction and novel view synthesis of simple scenes from sparse 2D image supervision only. 4. We demonstrate that inferred LFNs encode both appearance and geometry of the underlying 3D scenes by extracting sparse depth maps from the derivatives of LFNs, leveraging their analytical differentiability. Scope. The proposed method is currently constrained to the reconstruction of simple scenes, such as single objects and simple room-scale scenes, in line with recent work on learning generative models in this regime [3, 11]. 2 Related Work Neural Scene Representations and Neural Rendering. A large body of work addresses the question of inferring feature representations of 3D scenes useful to downstream tasks across graphics, vision, and machine learning. Models without 3D structure suffer from poor data efficiency [12, 13]. Voxel grids [14–20] offer 3D structure, but scale poorly with spatial resolution. Inspired by neural implicit representations of 3D geometry [1, 2], recent work has proposed to encode properties of 3D scenes as neural fields (also implicit- or coordinate-based representations, see [21] for an overview), neural networks that map 3D coordinates to local properties of the 3D scene at these coordinates. 
Using differentiable rendering, these models can be learned from image observations only [3, 4, 22, 11]. Reconstruction from sparse observations can be achieved by learning priors over the space of neural fields [3, 5, 11, 23–25] or by conditioning of the neural field on local features [6, 26, 27]. Differentiable rendering of such 3D-structured neural scene representations is exceptionally computationally intensive, requiring hundreds of evaluations of the neural representation per ray, with tens of thousands to millions of rays per image. Some recent work seeks to accelerate test-time rendering, but either does not admit generalization [28–30], or does not alleviate the cost of rendering at training/inference time [31–33]. With Light Field Networks, we propose to leverage 360- degree light fields as neural scene representations. We introduce a novel neural field parameterization of 360-degree light fields, infer light fields via meta-learning from as few as a single 2D image observation, and demonstrate that LFNs encode both scene geometry and appearance. Light fields and their reconstruction. Light fields have a rich history as a scene representation in both computer vision and computer graphics. Adelson et al. [34] introduced the 5D plenoptic function as a unified representation of information in the early visual system [35]. Levoy et al. [36] and, concurrently, Gortler et al. [37] introduced light fields in computer graphics as a 4D sampled scene representation for fast image-based rendering. Light fields have since enjoyed popularity as a representation for novel view synthesis [38] and computational photography, e.g. [39]. Light fields enable direct rendering of novel views by simply extracting a 2D slice of the 4D light field. However, they tend to incur significant storage cost, and since they rely on two-plane parameterizations, they make it hard to achieve a full 360-degree representation without concatenating multiple light fields. A significant amount of prior work addresses reconstruction of fronto-parallel light fields via handcrafted priors, such as sparsity in the Fourier or shearlet domains [40–42]. With the advent of deep learning, approaches to light field reconstruction that leverage convolutional neural networks to in-paint or extrapolate light fields from sparse views have been proposed [43, 7, 44], but similarly only support fronto-parallel novel view synthesis. We are instead interested in light fields as a representation of 3D appearance and geometry that enables efficient inference of and reasoning about the properties of the full underlying scene. 3 Background: 3D-structured Neural Scene Representations Recent progress in neural scene representation and rendering has been driven by two key innovations. The first are neural fields, often also referred to as neural implicit- or coordinate-based scene representations Φ3D [3, 4], which model a scene as a continuous function, parameterized as an MLP which maps a 3D coordinate to a representation v of whatever is at that 3D coordinate: Φ3D : R3 → Rn, x 7→ Φ3D(x) = v. (1) The second is a differentiable renderer m, which, given a ray r in R3, and the representation Φ3D, computes the value of the color c of the scene when viewed along r: m(r,Φ3D) = c(r) ∈ R3. (2) Existing rendering methods broadly fall into two categories: sphere-tracing-based renderers [3, 45, 5, 46] and volumetric renderers [19, 4]. These methods require on the order of tens or hundreds of evaluations of the values of Φ3D along a ray r to compute c(r). 
This leads to extraordinarily large memory and time complexity of rendering. As training requires error backpropagation through the renderer, this impacts both training and test time. 4 The Light Field Network Scene Representation We propose to represent a scene as a 360-degree neural light field, a function parameterized by an MLP Φφ with parameters φ that directly maps the 4D space L of oriented rays to their observed radiance: Φφ : L → R3, r 7→ Φφ(r) = c(r). (3) A light field completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. Light fields have the unique property that rendering is achieved by a single evaluation of Φ per light ray, i.e., no ray-casting is required. Moreover, while the light field only encodes appearance explicitly, its derivatives encode geometry information about the underlying 3D scene [47, 34, 35]. This makes many methods to extract 3D geometry from light fields possible [48–51], and we demonstrate efficient recovery of sparse depth maps from LFNs below. 4.1 Implicit representations for 360 degree light fields To fully represent a 3D scene requires a parameterization of all light rays in space. Conventional light field methods are constrained to leverage minimal parameterizations of the 4D space of rays, due to the high memory requirements of discretely sampled high-dimensional spaces. In contrast, our use of neural field representations allows us to freely choose a continuous parameterization that is mathematically convenient. In particular, we propose to leverage the 6D Plücker parameterization of the space of light rays L for LFNs. The Plücker coordinates (see [52] for an excellent overview) of a ray r through a point p in a normalized direction d are r = (d,m) ∈ R6 where m = p× d, for d ∈ S2,p ∈ R3. (4) where × denotes the cross product. While Plücker coordinates are a-priori 6-tuples of real numbers, the coordinates of any ray lie on a curved 4-dimensional subspace L. Plücker coordinates uniformly represent all oriented rays in space without singular directions or special cases. Intuitively, a general ray r together with the origin define a plane, and m is a normal vector to the plane with its magnitude capturing the distance from the ray to the origin; if m = 0 then the ray passes through the origin and is defined by its direction d. This is in contrast to conventional light field parameterizations: Fronto-parallel two-plane or cylindrical parameterizations cannot represent the full 360-degree light field of a scene [36, 53]. Cubical two-plane arrangements [37, 38] are not continuous, complicating the parameterization via a neural implicit representation. In contrast to the two-sphere parameterization [54], Plücker coordinates do not require that scenes are bounded in size and do not require spherical trigonometry. The parameterization via a neural field enables compact storage of a 4D light field that can be sampled at arbitrary resolutions, while non-neural representations are resolution-limited. Neural fields further allow the analytical computation of derivatives. This enables the efficient computation of sparse depth maps, where prior representations of light fields require finite-differences approximations of the gradient [48–50]. Rendering LFNs. To render an image given an LFN, one computes the Plücker coordinates ru,v of the camera rays at each u, v pixel coordinate in the image according to Equation 4. 
Specifically, given the extrinsic E = [ R|t ] ∈ SE(3) and intrinsic K ∈ R3×3 camera matrices [55] of a camera, one may retrieve the Plücker coordinates of the ray ru,v at pixel coordinate u, v as: ru,v = (du,v, t× du,v)/‖du,v‖, where du,v = RK−1 ( u v 1 ) + t, (5) where we use the world-to-camera convention for the extrinsic camera parameters. Rendering then amounts to a single evaluation of the LFN Φ for each ray, cu,v = Φ(ru,v). For notational convenience, we introduce a rendering function ΘΦE,K : R` → RH×W×3 (6) which renders an LFN Φφ with parameters φ ∈ R` when viewed from a camera with extrinsic and intrinsic parameters (E,K) into an image. 4.2 The geometry of Light Field Networks We will now analyze the properties of LFNs representing Lambertian 3D scenes, and illustrate how the geometry of the underlying 3D scene is encoded. We will first derive an expression that establishes a relationship between LFNs and the classic two-plane parameterization of the light field. Subsequently, we will derive an expression for the depth of a ray in terms of the local color gradient of the light field, therefore allowing us to efficiently extract sparse depth maps from the light field at any camera pose via analytical differentiation of the neural implicit representation. Please see Figure 2 for an overview. Locally linear slices of the light field. We derive here a local parametrization that will allow us to work with an LFN as if it were a conventional 2-plane light field. Given a ray r in Plücker coordinates, we pick two points x,x′ ∈ R3 along this ray. We then find a normalized direction d ∈ S2 not parallel to the ray direction - a canonical choice is a direction orthogonal to the ray direction. We may now parameterize two parallel lines a(s) = x + sd and b(t) = x′ + td that give rise to a local two-plane basis of the light field with ray coordinates s and t. r intersects these lines at the two-plane coordinates (s, t) = (0, 0). This choice of local basis now assigns the two-plane coordinates (s, t) to the ray r from a(s) to b(t). In Figure 2, we illustrate this process on a simple 2D scene. Epipolar Plane Images and their geometry. The Plücker coordinates (see Eq. 4) enable us to extract a 2D slice from an LFN field by varying (s, t) and sampling Φ on the Plücker coordinates of the rays parametrized pairs of points on the lines a(s) and b(t): c(s, t) = Φ (r(s, t)) ,where r(s, t) = −−−−−→ a(s)b(t) = ( b(t)− a(s) ‖b(t)− a(s)‖ , a(s)× b(t) ‖b(t)− a(s)‖ ) . (7) The image of this 2D slice c(s, t) is well-known in the light field literature as an Epipolar Plane Image (EPI) [47]. EPIs carry rich information about the geometry of the underlying 3D scene. For example, consider a point p on the surface of an object in the scene; please see Figure 2 for a diagram. A point p ∈ R2 has a 1-dimensional family of rays going through the point, which correspond to a (green) line Lp in the EPI. In a Lambertian scene, all rays that meet in this point and that are not occluded by other objects must observe the same color. Therefore, the light field is constant along this line. As one travels along Lp, rotating through the family of rays through p, one eventually reaches a (magenta) tangent ray τ to the object. At a tangent ray, the value of the EPI ceases to be constant, and the light field changes its color to whatever is disoccluded by the object at this tangent ray. 
Because objects of different depth undergo differing amounts of parallax, EPIsRGB Gradients Depthsthe slope of the segment of Lp along which cis constant determines the 3D coordinates of p. Finally, by observing that we may extract EPIs from any perspective, it is clear that an LFN encodes the full 3D geometry of the underlying scene. Intuitively, this may also be seen by con- sidering that one could render out all possible perspectives of the underlying scene, and solve a classic multi-view stereo problem to retrieve the shape. Extracting depth maps from LFNs. A correctly inferred light field necessarily contains accurate 3D geometry information, although the geometry is encoded in a nontrivial way. To extract 3D geometry from an LFN, we utilize the property of the 2-plane parameterization that the light field is constant on segments Lp, the slopes of which determine p. In the supplemental material, we derive Proposition 1. For a Lambertian scene, the distance d along r = −−−−−→ a(s)b(t) from a(s) to the point p on the object is d(r) = D ∂tc(s, t) ∂sc(s, t) + ∂tc(s, t) . (8) where a(s) and b(t) are as above, c(s, t) is defined by (7), D is the distance between the lines a(t) and b(t). Thus p = a(s) + d(r) b(t)−a(s)‖b(t)−a(s)‖ , and ∂x denotes the partial derivative by variable x. This result yields meaningful depth estimates wherever the derivatives of the light fields are nonzero along the ray. In practice, we sample several rays in a small (s, t) neighborhood of the ray r and declare depth estimates as invalid if the gradients have high variance-please see the code for implementation details. This occurs when r hits the object at a point where the surface color is changing, or when r is a tangent ray. We note that there is a wealth of prior art that could be used to extend this approach to extract dense depth maps [48–51]. 4.3 Meta-learning with conditional Light Field Networks We consider a dataset D consisting of N 3D scenes Si = {(Ij ,Ej ,Kj)}Kj=1 ∈ RH×W×3 × SE(3)× R3×3, i = 1 . . . N (9) with K images Ij of each scene taken with cameras with extrinsic parameters Ej and intrinsic parameters Kj [55]. Each scene is completely described by the parameters φi ∈ R` of its corresponding light field MLP Φi = Φφi . Meta-learning and multi-view consistency. In the case of 3D-structured neural scene representations, ray-marching or volumetric rendering naturally ensure multi-view consistency of the reconstructed 3D scene representation. In contrast, a general 4D function Φ : L → R3 is not multi-view consistent, as most such functions are not the light fields of any 3D scene. We propose to overcome this challenge by learning a prior over the space of light fields. As we will demonstrate, this prior can also be used to reconstruct an LFN from a single 2D image observation. In this paradigm, differentiable ray-casting is a method to force the light field of a scene to be multi-view consistent, while we instead impose multi-view consistency by learning a prior over light fields. Meta-learning framework. We propose to represent each 3D scene Si by its own latent vector zi ∈ Rk. Generalizing to new scenes amounts to learning a prior over the space of light fields that is concentrated on the manifold of multi-view consistent light fields of natural scenes. To represent this latent manifold, we utilize a hypernetwork [56, 3]. 
The hypernetwork is a function, represented as an MLP Ψ : Rk → R`,Ψψ(zi) = φi (10) with parameters ψ which sends the latent code zi of the i-th scene to the parameters of the corresponding LFN. Several reasonable approaches exist to obtain latent codes zi. One may leverage a convolutionalor transformer-based image encoder, directly inferring the latent from an image [11, 5], or utilize gradient-based meta-learning [23]. Here, we follow an auto-decoder framework [1, 3] to find the latent codes zi, but note that LFNs are in no way constrained to this approach. We do not claim that this particular meta-learning method will out-perform other forms of conditioning, such as gradient-based meta-learning [57, 23] or FILM conditioning [58], but perform a comparison to a conditioning-by-concatenation approach in the appendix. We assume that the latent vectors have a Gaussian prior with zero mean and a diagonal covariance matrix. At training time, we jointly optimize the latent parameters zi together with the hypernetwork parameters ψ using the objective arg min {zi},ψ ∑ i ∑ j ‖ΘΦEj ,Kj (Ψψ(zi))− Ij‖ 2 2 + λlat‖zi‖22. (11) Here the ΘΦ is the rendering function (Equation 6), the first term is an `2 loss penalizing the light fields that disagree with the observed images, and the second term enforces the prior over the latent variables. We solve Equation 11 using gradient descent. At test time, we freeze the parameters of the hypernetwork and reconstruct the light field for a new scene S given a single observation of the scene {(I,E,K)} by optimizing, using gradient descent, the latent variable zS of the scene, such that the reconstructed light field ΦΨψ(zS) best matches the given observation of the scene: zS = arg min z ‖ΘΦE,K (Ψψ(z))− I)‖22 + λlat‖z‖22. (12) Global vs. local conditioning The proposed meta-learning framework globally conditions an LFN on a single latent variable z. Recent work instead leverages local conditioning, where a neural field is conditioned on local features extracted from a context image [26, 6, 27]. In particular, the recently proposed pixelNeRF [6] has achieved impressive results on few-shot novel view synthesis. As we will see, the current formulation of LFNs does not outperform pixelNeRF. We note, however, that local conditioning methods solve a different problem. Rather than learning a prior over classes of objects, local conditioning methods learn priors over patches, answering the question “How does this image patch look like from a different perspective?”. As a result, this approach does not learn a latent space of neural scene representations. Rather, scene context is required to be available at test time to reason about the underlying 3D scene, and the representation is not compact: the size of the conditioning grows with the number of context observations. In contrast, globally conditioned methods [3, 11, 1, 2] first infer a global representation that is invariant to the number of context views and subsequently discard the observations. However, local conditioning enables better generalization due to the shift-equivariance of convolutional neural networks. An equivalent to local conditioning in light fields is non-obvious, and an exciting direction for future work. 5 Experiments We demonstrate the efficacy of LFNs by reconstructing 360-degree light fields of a variety of simple 3D scenes. In all experiments, we parameterize LFNs via a 6-layer ReLU MLP, and the hypernetwork as a 3-layer ReLU MLP, both with layer normalization. 
We solve all optimization problems using the ADAM solver with a step size of 10−4. Please find more results, as well as precise hyperparameter, implementation, and dataset details, in the supplemental document and video. Reconstructing appearance and geometry of single-object and room-scale light fields. We demonstrate that LFN can parameterize 360-degree light fields of both single-object ShapeNet [59] objects and simple, room-scale environments. We train LFNs on the ShapeNet “cars” dataset with 50 observations per object from [3], as well as on simple room-scale environments as proposed in [13]. Subsequently, we evaluate the ability of LFNs to generate novel views of the underlying 3D scenes. Please see Figure 3 for qualitative results. LFNs succeed in parameterizing the 360-degree light field, enabling novel view synthesis at real-time frame-rates (see supplemental video). We further demonstrate that LFNs encode scene geometry by presenting Epipolar Plane Images and leveraging the relationship derived in Equation 8 to infer sparse depth maps. We highlight that both rendering and depth map extraction do not require ray-casting, with only a single evaluation of the network or the network and its gradient respectively. Multi-class single-view reconstruction. Following [5, 6], we benchmark LFNs with recent global conditioning methods on the task of single-view reconstruction and novel view synthesis of the 13 largest ShapeNet categories. We follow the same evaluation protocol as [60] and train a single model across all categories. See Figure 4 for qualitative and Table 1 for quantitative baseline comparisons. We significantly outperform both Differentiable Volumetric Rendering (DVR) [5] and Scene Representation Networks (SRNs) [3] on all but two classes by an average of 1dB, while requiring more than an order of magnitude fewer network evaluations per ray. Qualitatively, we find that the reconstructions from LFNs are often crisper than those of either Scene Representation Networks or DVR. Note that DVR requires additional ground-truth foreground-background segmentation masks. Class-specific single-view reconstruction. We benchmark LFNs on single-shot reconstruction on the Shapenet “cars” and “chairs” classes as proposed in SRNs [3]. See Figure 5 for qualitative and quantitative results. We report performance better than SRNs in PSRN and on par in terms of SSIM on the “cars” class, and worse in PSNR but better in terms of SSIM on the “chairs” class, while requiring an order of magnitude fewer network evaluations and rendering in real-time. We attribute the drop in performance compared to multi-class reconstruction to the smaller dataset size, causing multi-view inconsistency. Global vs. local conditioning and comparison to pixelNeRF [6]. We investigate the role of global conditioning, where a single latent is inferred to describe the whole scene [3], to local conditioning, where latents are inferred per-pixel in a 2D image and leveraged to locally condition a neural implicit representation [26, 27, 6]. We benchmark with the recently proposed pixelNeRF [6]. As noted above (see Section 4.3), local conditioning does not infer a compact neural scene representation of the scene. Nevertheless, we provide the comparison here for completeness. See Figure 6 for qualitative and quantitative results. On average, LFNs perform 1dB worse than pixelNeRF in the single-class case, and 2dB worse in the multi-class setting. Real-time rendering and storage cost. 
See Table 2 for a quantitative comparison of the rendering complexity of LFNs compared with that of volumetric and ray-marching-based neural renderers [3, 45, 19, 4, 6]. All clock times were collected for rendering 256 × 256 images on an NVIDIA RTX 6000 GPU. We further compare the cost of storing a single LFN with the cost of storing a conventional light field. With approximately 400k parameters, a single LFN requires around 1.6 MB of storage, compared to 146 MB required for storing a 360-degree light field at a resolution of 256×256×17×17 in the six-plane Lumigraph configuration. Multi-view consistency as a function of training set size. We investigate how multi-view consistency scales with the amount of data that the prior is trained on. Please find this analysis in the supplementary material. Overfitting of single 3D scenes. We investigate overfitting a single 3D scene with a Light Field Network with positional encodings / sinusoidal activations [24, 61]. Please find this analysis in the supplementary material. Evaluation of Reconstructed Geometry. We investigate the quality of the geometry that can be computed from an LFN via Eq. 8. For every sample in the class-specific single-shot reconstruction experiment, we extract its per-view sparse depth map. We then backproject depth maps from four views into 3D to reconstruct a point cloud, and benchmark mean L1 error on valid depth estimates against Scene Representation Networks [3]. Fig. 7 displays qualitative and quantitative results. Qualitatively, point clouds succeed in capturing fine detail such as the armrests of chairs. Quantitatively, LFNs outperform SRNs on both cars and chairs. We note that LFNs have a slight advantage in this comparison, as we can only benchmark on the sparse depth values, for which LFNs have high confidence. This includes occlusion boundaries, which are areas where the sphere-tracing-based SRNs incur high error, as the tracer is forced to take smaller and smaller steps and may not reach the surface. We highlight that we do not claim that the proposed method is competitive with methods designed specifically for geometry reconstruction; we report this only to demonstrate that valid depth estimates can be extracted from an LFN. Limitations. First, like every existing light field approach, LFNs store only one color per oriented ray, which makes rendering views from cameras placed in between occluding objects challenging, even if the information may still be stored in the light field. Second, though we outperform globally conditioned methods, we currently do not outperform the locally conditioned pixelNeRF. Finally, as opposed to 3D-structured representations, LFNs do not enforce strict multi-view consistency, and may be inconsistent in the case of small datasets. 6 Discussion and Conclusion We have proposed Light Field Networks, a novel neural scene representation that directly parameterizes the full 360-degree, 4D light field of a 3D scene via a neural implicit representation. This enables both real-time neural rendering with a single evaluation of the neural scene representation per ray, as well as sparse depth map extraction without ray-casting. Light Field Networks outperform globally conditioned baselines in single-shot novel view synthesis, while being three orders of magnitude faster and less memory-intensive than current volumetric rendering approaches.
Exciting avenues for future work include combining LFNs with local conditioning, which would enable stronger out-of-distribution generalization, studying the learning of non-Lambertian scenes, and enabling camera placement in obstructed 3D space. With this work, we make important contributions to the emerging fields of neural rendering and neural scene representations, with exciting applications across computer vision, computer graphics, and robotics. Societal Impacts. Potential improvements extending our work on few-observation novel view synthesis could enable abuse by decreasing the cost of non-consensual impersonations. We refer the reader to a recent review of neural rendering [22] for an in-depth discussion of this topic. Acknowledgements and Disclosure of Funding This work is supported by the NSF under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/), ONR under 1015 G TA243/N00014-16-1-2007 (Understanding Scenes and Events through Joint Parsing, Cognitive Reasoning and Lifelong Learning), Mitsubishi under 026455-00001 (Building World Models from some data through analysis by synthesis), DARPA under CW3031624 (Transfer, Augmentation and Automatic Learning with Less Labels), as well as the Singapore DSTA under DST00OECI20300823 (New Representations for Vision). We thank Andrea Tagliasacchi, Tomasz Malisiewicz, Prafull Sharma, Ludwig Schubert, Kevin Smith, Bernhard Egger, Christian Richardt, Manuel Rey Area, and Jürgen and Susanne Sitzmann for interesting discussions and feedback, and Alex Yu for kindly sharing the outputs of pixelNeRF and baselines with us.
1. What is the focus and contribution of the paper on neural scene representation? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its ability to handle multiview consistency and explicit 3D geometry representation? 3. How does the suggested scene representation compare to previous works, such as PixelNeRF, in terms of global and local conditioning? 4. What are the limitations of the method when applied to real data settings, and how can these limitations be addressed? 5. Can the authors provide more information about the sparse depth map extraction and its limitation to Lambertian surfaces? 6. How does the method perform on single-scene multiview reconstruction, and what are the differences between the suggested representation and SDF in terms of geometry representation?
Summary Of The Paper Review
Summary Of The Paper The paper suggests a new representation of the 4D light field using a neural scene representation that predicts the light field from the 6D Plucker coordinates. The new scene representation enables pixel radiance estimation using a single network query, whereas previous works needed to follow ray-marching or volume rendering procedures that require multiple queries along the ray. The presented scene representation is then utilized to learn multiple shapes or scenes using hypernetworks, in order to learn a prior of multiview consistency. Novel view synthesis results are presented for the learned scenes from the dataset, as well as generalizations from only single-view supervision. Moreover, the authors demonstrate how sparse depth maps can be extracted from the learned light field in the case of Lambertian scenes. Review These are the main strengths and novelty I find in the paper: The paper is well written and has a good flow. Except for Figure 3, which I find to be a bit confusing and crowded, all the figures as well as the supplied video tell the story nicely and help in understanding the method's concepts. The idea of representing the light field using a unique coordinate system for rays is novel and interesting, but most importantly, it allows fast rendering, which is currently the significant bottleneck of existing rendering techniques. This parametrization is well motivated by the authors for 360-degree scenes, where it is natural to assume that the radiance is constant along the ray. The authors exploited the properties of epipolar plane images of Lambertian surfaces and developed an equation to extract sparse depth maps from the learned light field using a single evaluation of the network and its gradient. I believe it presents a novel approach that can be leveraged in future works with neural scene representations. Nonetheless, the depth map results are impressive given that the light field is learned without an inherent representation of the geometry. Following are the weaknesses I find in the presented method and the concerns which I would like to be addressed by the authors: Although real-time rendering is a highly desirable property for the task of view synthesis, so is inherent multiview consistency, which the authors agree their representation does not possess. The lack of explicit 3D geometry representation makes the suggested representation not multiview consistent by design, unlike previous works, which I find to be the major limitation of the method. As the authors mentioned, their representation encodes both geometry and appearance together. Hence I suspect it can produce a mismatch between the two and compensate for geometry properties using the learned appearance. I want to emphasize that this comment differs from 1, although both limitations come from the fact that the suggested representation does not model explicit geometry properties. The authors suggest that a future direction for their work would be working on real data. With the above (1, 2) said, it is not clear how those limitations can be addressed in real data settings, where the need for multiview consistency is more crucial and more than one color per ray needs to be stored. Moreover, I'm interested to know how the representation performs on single-scene multiview reconstruction. The comparison between global and local conditioning in the context of shape space learning is important, and the authors describe the strengths and limitations of each option properly.
However, in the context of the suggested scene representation (which is the paper's main contribution), I find the comparison to PixelNeRF irrelevant. A more proper baseline would be PixelNeRF with global conditioning (meaning an auto-decoder, as in the paper's method, with the rendering function of NeRF). The source of PixelNeRF's advantage over LFN is not clear: is it due to the scene representation or due to the conditioning learning method? I believe there is a need to separate those two factors to strengthen the paper's contribution. Another point of difference is that PixelNeRF utilizes positional encoding, which enables learning higher frequencies in the light field, and I wonder why the authors did not use that. In several results, it seems that thin areas are modeled incorrectly (smoothed or mixed), for example the bottom chair in Figure 5, or the table and bench legs in the single-shot results (10:45-55 in the supplied video). I find possible reasons for that: the LFN fails to learn the high discontinuity (frequency) of the light field where a small change in the ray switches from the table leg to the background; the multiview consistency is not modeled correctly in this area; the low-resolution data; compensation of the learned appearance over geometry; wrong generalization to test views. I would appreciate it if the authors could address this concern, and I suggest they show a depth evaluation for the more complicated geometry areas. The addition of extracting the sparse depth maps is another good contribution of the method; however, the authors need to address that it is limited to Lambertian surfaces and that a generalization to specular scenes is unclear. After reading the other reviews and the authors' response, I decided to update my rating to accept. However, there are a few points I suggest the authors clarify in the revised paper: In section 1.2 of the authors' response, I believe that the comparison to SDF can be misleading. Compared to LFNs, SDFs are not limited to Lambertian surfaces for representing scene geometry, and the geometry representation is not partial (as the sparse depth maps extracted from LFNs are). In section 1.4 of the authors' response, running methods like SRN and DVR with a fixed compute budget will probably yield worse results. However, they will still be multi-view consistent, meaning that the learned scene from a novel view will correspond to the learned scene geometry. Also, I believe incorporating all the new results presented in the rebuttal (also the new Fern results in 2.5) would serve this paper well. Overall, I'm convinced by the novelty and contribution of this new representation, and I'm intrigued by the future work it opens up.
NIPS
Title Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering Abstract Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence. Emerging 3D-structured neural scene representations are a promising approach to 3D scene understanding. In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural network. Rendering a ray from an LFN requires only a single network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric-based renderers in 3D-structured neural scene representations. In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view consistent light field reconstruction from as few as a single image observation. This results in dramatic reductions in time and memory complexity, and enables real-time rendering. The cost of storing a 360-degree light field via an LFN is two orders of magnitude lower than conventional methods such as the Lumigraph. Utilizing the analytical differentiability of neural implicit representations and a novel parameterization of light space, we further demonstrate the extraction of sparse depth maps from LFNs. 1 Introduction A fundamental problem across computer graphics, computer vision, and artificial intelligence is to infer a representation of a scene’s 3D shape and appearance given impoverished observations such as 2D images of the scene. Recent contributions have advanced the state of the art for this problem significantly. First, neural implicit representations have enabled efficient representation of local 3D scene properties by mapping a 3D coordinate to local properties of the 3D scene at that coordinate [1–6]. Second, differentiable neural renderers allow for the inference of these representations given only 2D image observations [3, 4]. Finally, leveraging meta-learning approaches such as hypernetworks or gradient-based meta-learning has enabled the learning of distributions of 3D scenes, and therefore reconstruction given only a single image observation [3]. This has enabled a number of applications, such as novel view synthesis [7, 3, 6], 3D reconstruction [5, 3], semantic segmentation [8, 9], and SLAM [10]. However, 3D-structured neural scene representations come with a major limitation: their rendering is prohibitively expensive, on the order of tens of seconds for a single 256 × 256 image for state-of-the-art approaches. In particular, parameterizing the scene in 3D space necessitates
Instead of encoding a scene in 3D space, Light Field Networks encode a scene by directly mapping an oriented camera ray in the four dimensional space of light rays to the radiance observed by that ray. This obviates the need to query opacity and RGB at 3D locations along a ray or to ray-march towards the level set of a signed distance function, speeding up rendering by three orders of magnitude compared to volumetric methods. In addition to directly encoding appearance, we demonstrate that LFNs encode information about scene geometry in their derivatives. Utilizing the unique flexibility of neural field representations, we introduce the use of Plücker coordinates to parameterize 360-degree light fields, which allow for storage of a-priori unbounded scenes and admit a simple expression for the depth as an analytical function of an LFN. Using this relationship, we demonstrate the computation of geometry in the form of sparse depth maps. While 3D-structured neural scene representations are multi-view consistent by design, parameterizing a scene in light space does not come with this guarantee: the additional degree of freedom enables rays that view the same 3D point to change appearance across viewpoints. For the setting of simple scenes, we demonstrate that this challenge can be overcome by learning a prior over 4D light fields in a meta-learning framework. We benchmark with current state-of-the-art approaches for single-shot novel view synthesis, and demonstrate that LFNs compare favorably with globally conditioned 3D-structured representations, while accelerating rendering and reducing memory consumption by orders of magnitude. In summary, we make the following contributions: 1. We propose Light Field Networks (LFNs), a novel neural scene representation that directly parameterizes the light field of a 3D scene via a neural network, enabling real-time rendering and vast reduction in memory utilization. 2. We demonstrate that we may leverage 6-dimensional Plücker coordinates as a parameterization of light fields, despite their apparent overparameterization of the 4D space of rays, thereby enabling continuous, 360-degree light fields. 3. By embedding LFNs in a meta-learning framework, we demonstrate light field reconstruction and novel view synthesis of simple scenes from sparse 2D image supervision only. 4. We demonstrate that inferred LFNs encode both appearance and geometry of the underlying 3D scenes by extracting sparse depth maps from the derivatives of LFNs, leveraging their analytical differentiability. Scope. The proposed method is currently constrained to the reconstruction of simple scenes, such as single objects and simple room-scale scenes, in line with recent work on learning generative models in this regime [3, 11]. 2 Related Work Neural Scene Representations and Neural Rendering. A large body of work addresses the question of inferring feature representations of 3D scenes useful to downstream tasks across graphics, vision, and machine learning. Models without 3D structure suffer from poor data efficiency [12, 13]. Voxel grids [14–20] offer 3D structure, but scale poorly with spatial resolution. Inspired by neural implicit representations of 3D geometry [1, 2], recent work has proposed to encode properties of 3D scenes as neural fields (also implicit- or coordinate-based representations, see [21] for an overview), neural networks that map 3D coordinates to local properties of the 3D scene at these coordinates. 
Using differentiable rendering, these models can be learned from image observations only [3, 4, 22, 11]. Reconstruction from sparse observations can be achieved by learning priors over the space of neural fields [3, 5, 11, 23–25] or by conditioning of the neural field on local features [6, 26, 27]. Differentiable rendering of such 3D-structured neural scene representations is exceptionally computationally intensive, requiring hundreds of evaluations of the neural representation per ray, with tens of thousands to millions of rays per image. Some recent work seeks to accelerate test-time rendering, but either does not admit generalization [28–30], or does not alleviate the cost of rendering at training/inference time [31–33]. With Light Field Networks, we propose to leverage 360-degree light fields as neural scene representations. We introduce a novel neural field parameterization of 360-degree light fields, infer light fields via meta-learning from as few as a single 2D image observation, and demonstrate that LFNs encode both scene geometry and appearance. Light fields and their reconstruction. Light fields have a rich history as a scene representation in both computer vision and computer graphics. Adelson et al. [34] introduced the 5D plenoptic function as a unified representation of information in the early visual system [35]. Levoy et al. [36] and, concurrently, Gortler et al. [37] introduced light fields in computer graphics as a 4D sampled scene representation for fast image-based rendering. Light fields have since enjoyed popularity as a representation for novel view synthesis [38] and computational photography, e.g. [39]. Light fields enable direct rendering of novel views by simply extracting a 2D slice of the 4D light field. However, they tend to incur significant storage cost, and since they rely on two-plane parameterizations, they make it hard to achieve a full 360-degree representation without concatenating multiple light fields. A significant amount of prior work addresses reconstruction of fronto-parallel light fields via handcrafted priors, such as sparsity in the Fourier or shearlet domains [40–42]. With the advent of deep learning, approaches to light field reconstruction that leverage convolutional neural networks to in-paint or extrapolate light fields from sparse views have been proposed [43, 7, 44], but similarly only support fronto-parallel novel view synthesis. We are instead interested in light fields as a representation of 3D appearance and geometry that enables efficient inference of and reasoning about the properties of the full underlying scene. 3 Background: 3D-structured Neural Scene Representations Recent progress in neural scene representation and rendering has been driven by two key innovations. The first are neural fields, often also referred to as neural implicit- or coordinate-based scene representations \Phi_{3D} [3, 4], which model a scene as a continuous function, parameterized as an MLP which maps a 3D coordinate to a representation v of whatever is at that 3D coordinate:
\Phi_{3D} : \mathbb{R}^3 \to \mathbb{R}^n, \quad \mathbf{x} \mapsto \Phi_{3D}(\mathbf{x}) = \mathbf{v}. \qquad (1)
The second is a differentiable renderer m, which, given a ray \mathbf{r} in \mathbb{R}^3 and the representation \Phi_{3D}, computes the value of the color \mathbf{c} of the scene when viewed along \mathbf{r}:
m(\mathbf{r}, \Phi_{3D}) = \mathbf{c}(\mathbf{r}) \in \mathbb{R}^3. \qquad (2)
Existing rendering methods broadly fall into two categories: sphere-tracing-based renderers [3, 45, 5, 46] and volumetric renderers [19, 4]. These methods require on the order of tens or hundreds of evaluations of the values of \Phi_{3D} along a ray \mathbf{r} to compute \mathbf{c}(\mathbf{r}).
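To make this per-ray cost concrete, the following is a rough sketch of the emission-absorption quadrature used by volumetric renderers. The field interface, uniform sampling, and sample count are hypothetical placeholders rather than any specific published renderer; the point is simply that the scene representation must be queried at every one of the sample points on every ray.

```python
import torch

def render_ray_volumetric(field, ray_o, ray_d, near, far, n_samples=128):
    """Alpha-composite one ray by querying `field` at n_samples points along it.

    `field(points)` is assumed to return (rgb, sigma) for a batch of 3D points;
    ray_o and ray_d are (3,) tensors for the ray origin and direction.
    """
    t = torch.linspace(near, far, n_samples)                 # depths of the samples
    pts = ray_o[None, :] + t[:, None] * ray_d[None, :]       # (n_samples, 3) query points
    rgb, sigma = field(pts)                                   # n_samples network evaluations
    delta = (far - near) / n_samples                          # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)                   # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                         # accumulated transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)                # composited pixel color
```

A light field network collapses this entire inner loop into a single evaluation on the ray's Plücker coordinates, which is the source of the speed-ups discussed in the following section.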
This leads to extraordinarily large memory and time complexity of rendering. As training requires error backpropagation through the renderer, this impacts both training and test time. 4 The Light Field Network Scene Representation We propose to represent a scene as a 360-degree neural light field, a function parameterized by an MLP \Phi_\phi with parameters \phi that directly maps the 4D space \mathcal{L} of oriented rays to their observed radiance:
\Phi_\phi : \mathcal{L} \to \mathbb{R}^3, \quad \mathbf{r} \mapsto \Phi_\phi(\mathbf{r}) = \mathbf{c}(\mathbf{r}). \qquad (3)
A light field completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. Light fields have the unique property that rendering is achieved by a single evaluation of \Phi per light ray, i.e., no ray-casting is required. Moreover, while the light field only encodes appearance explicitly, its derivatives encode geometry information about the underlying 3D scene [47, 34, 35]. This makes many methods to extract 3D geometry from light fields possible [48–51], and we demonstrate efficient recovery of sparse depth maps from LFNs below. 4.1 Implicit representations for 360 degree light fields To fully represent a 3D scene requires a parameterization of all light rays in space. Conventional light field methods are constrained to leverage minimal parameterizations of the 4D space of rays, due to the high memory requirements of discretely sampled high-dimensional spaces. In contrast, our use of neural field representations allows us to freely choose a continuous parameterization that is mathematically convenient. In particular, we propose to leverage the 6D Plücker parameterization of the space of light rays \mathcal{L} for LFNs. The Plücker coordinates (see [52] for an excellent overview) of a ray \mathbf{r} through a point \mathbf{p} in a normalized direction \mathbf{d} are
\mathbf{r} = (\mathbf{d}, \mathbf{m}) \in \mathbb{R}^6, \quad \text{where } \mathbf{m} = \mathbf{p} \times \mathbf{d}, \quad \mathbf{d} \in S^2, \ \mathbf{p} \in \mathbb{R}^3, \qquad (4)
where \times denotes the cross product. While Plücker coordinates are a-priori 6-tuples of real numbers, the coordinates of any ray lie on a curved 4-dimensional subspace \mathcal{L}. Plücker coordinates uniformly represent all oriented rays in space without singular directions or special cases. Intuitively, a general ray \mathbf{r} together with the origin define a plane, and \mathbf{m} is a normal vector to the plane with its magnitude capturing the distance from the ray to the origin; if \mathbf{m} = 0 then the ray passes through the origin and is defined by its direction \mathbf{d}. This is in contrast to conventional light field parameterizations: fronto-parallel two-plane or cylindrical parameterizations cannot represent the full 360-degree light field of a scene [36, 53]. Cubical two-plane arrangements [37, 38] are not continuous, complicating the parameterization via a neural implicit representation. In contrast to the two-sphere parameterization [54], Plücker coordinates do not require that scenes are bounded in size and do not require spherical trigonometry. The parameterization via a neural field enables compact storage of a 4D light field that can be sampled at arbitrary resolutions, while non-neural representations are resolution-limited. Neural fields further allow the analytical computation of derivatives. This enables the efficient computation of sparse depth maps, where prior representations of light fields require finite-differences approximations of the gradient [48–50]. Rendering LFNs. To render an image given an LFN, one computes the Plücker coordinates r_{u,v} of the camera rays at each u, v pixel coordinate in the image according to Equation 4.
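As a small illustration of Equation 4 (a hedged sketch, with shapes and variable names chosen for clarity rather than taken from any released code), the Plücker coordinates of a ray can be computed from any point lying on it, and the resulting 6-vector is the same regardless of which point is chosen:

```python
import torch

def pluecker(p, d):
    """Pluecker coordinates r = (d, m) of the ray through point p with direction d (Eq. 4)."""
    d = d / d.norm(dim=-1, keepdim=True)   # normalize so that d lies on the unit sphere
    m = torch.cross(p, d, dim=-1)          # moment vector m = p x d
    return torch.cat([d, m], dim=-1)       # (..., 6)

# Any point along the ray yields identical coordinates, since (p + s*d) x d = p x d.
p = torch.tensor([0.3, -1.2, 2.0])
d = torch.tensor([0.0, 0.0, 1.0])
r1 = pluecker(p, d)
r2 = pluecker(p + 5.0 * d, d)              # a different point on the same ray
assert torch.allclose(r1, r2)
```

This invariance is exactly the geometric intuition above: all points on the ray span the same plane with the origin, so they share the same moment vector m.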
Specifically, given the extrinsic E = [R | t] ∈ SE(3) and intrinsic K ∈ R^{3×3} camera matrices [55] of a camera, one may retrieve the Plücker coordinates of the ray r_{u,v} at pixel coordinate u, v as:
\mathbf{r}_{u,v} = (\mathbf{d}_{u,v}, \ \mathbf{t} \times \mathbf{d}_{u,v}) / \|\mathbf{d}_{u,v}\|, \quad \text{where } \mathbf{d}_{u,v} = \mathbf{R}\mathbf{K}^{-1} (u, v, 1)^\top + \mathbf{t}, \qquad (5)
where we use the world-to-camera convention for the extrinsic camera parameters. Rendering then amounts to a single evaluation of the LFN \Phi for each ray, c_{u,v} = \Phi(\mathbf{r}_{u,v}). For notational convenience, we introduce a rendering function
\Theta^\Phi_{\mathbf{E},\mathbf{K}} : \mathbb{R}^\ell \to \mathbb{R}^{H \times W \times 3} \qquad (6)
which renders an LFN \Phi_\phi with parameters \phi \in \mathbb{R}^\ell when viewed from a camera with extrinsic and intrinsic parameters (E, K) into an image. 4.2 The geometry of Light Field Networks We will now analyze the properties of LFNs representing Lambertian 3D scenes, and illustrate how the geometry of the underlying 3D scene is encoded. We will first derive an expression that establishes a relationship between LFNs and the classic two-plane parameterization of the light field. Subsequently, we will derive an expression for the depth of a ray in terms of the local color gradient of the light field, therefore allowing us to efficiently extract sparse depth maps from the light field at any camera pose via analytical differentiation of the neural implicit representation. Please see Figure 2 for an overview. Locally linear slices of the light field. We derive here a local parametrization that will allow us to work with an LFN as if it were a conventional 2-plane light field. Given a ray r in Plücker coordinates, we pick two points x, x′ ∈ R3 along this ray. We then find a normalized direction d ∈ S2 not parallel to the ray direction; a canonical choice is a direction orthogonal to the ray direction. We may now parameterize two parallel lines a(s) = x + sd and b(t) = x′ + td that give rise to a local two-plane basis of the light field with ray coordinates s and t. r intersects these lines at the two-plane coordinates (s, t) = (0, 0). This choice of local basis now assigns the two-plane coordinates (s, t) to the ray r from a(s) to b(t). In Figure 2, we illustrate this process on a simple 2D scene. Epipolar Plane Images and their geometry. The Plücker coordinates (see Eq. 4) enable us to extract a 2D slice from an LFN by varying (s, t) and sampling \Phi on the Plücker coordinates of the rays parametrized by pairs of points on the lines a(s) and b(t):
c(s, t) = \Phi(\mathbf{r}(s, t)), \quad \text{where } \mathbf{r}(s, t) = \overrightarrow{a(s) b(t)} = \left( \frac{b(t) - a(s)}{\|b(t) - a(s)\|}, \ \frac{a(s) \times b(t)}{\|b(t) - a(s)\|} \right). \qquad (7)
The image of this 2D slice c(s, t) is well-known in the light field literature as an Epipolar Plane Image (EPI) [47]. EPIs carry rich information about the geometry of the underlying 3D scene. For example, consider a point p on the surface of an object in the scene; please see Figure 2 for a diagram. A point p ∈ R2 has a 1-dimensional family of rays going through the point, which correspond to a (green) line Lp in the EPI. In a Lambertian scene, all rays that meet in this point and that are not occluded by other objects must observe the same color. Therefore, the light field is constant along this line. As one travels along Lp, rotating through the family of rays through p, one eventually reaches a (magenta) tangent ray τ to the object. At a tangent ray, the value of the EPI ceases to be constant, and the light field changes its color to whatever is disoccluded by the object at this tangent ray.
Because objects of different depth undergo differing amounts of parallax, the slope of the segment of Lp along which c is constant determines the 3D coordinates of p. Finally, by observing that we may extract EPIs from any perspective, it is clear that an LFN encodes the full 3D geometry of the underlying scene. Intuitively, this may also be seen by considering that one could render out all possible perspectives of the underlying scene, and solve a classic multi-view stereo problem to retrieve the shape. Extracting depth maps from LFNs. A correctly inferred light field necessarily contains accurate 3D geometry information, although the geometry is encoded in a nontrivial way. To extract 3D geometry from an LFN, we utilize the property of the 2-plane parameterization that the light field is constant on segments Lp, the slopes of which determine p. In the supplemental material, we derive Proposition 1. For a Lambertian scene, the distance d along r = \overrightarrow{a(s) b(t)} from a(s) to the point p on the object is
d(\mathbf{r}) = D \, \frac{\partial_t c(s, t)}{\partial_s c(s, t) + \partial_t c(s, t)}, \qquad (8)
where a(s) and b(t) are as above, c(s, t) is defined by (7), and D is the distance between the lines a(s) and b(t). Thus p = a(s) + d(\mathbf{r}) \frac{b(t) - a(s)}{\|b(t) - a(s)\|}, and \partial_x denotes the partial derivative with respect to the variable x. This result yields meaningful depth estimates wherever the derivatives of the light field are nonzero along the ray. In practice, we sample several rays in a small (s, t) neighborhood of the ray r and declare depth estimates as invalid if the gradients have high variance; please see the code for implementation details. This occurs when r hits the object at a point where the surface color is changing, or when r is a tangent ray. We note that there is a wealth of prior art that could be used to extend this approach to extract dense depth maps [48–51]. 4.3 Meta-learning with conditional Light Field Networks We consider a dataset D consisting of N 3D scenes
S_i = \{(\mathcal{I}_j, \mathbf{E}_j, \mathbf{K}_j)\}_{j=1}^{K} \in \mathbb{R}^{H \times W \times 3} \times SE(3) \times \mathbb{R}^{3 \times 3}, \quad i = 1 \dots N, \qquad (9)
with K images I_j of each scene taken with cameras with extrinsic parameters E_j and intrinsic parameters K_j [55]. Each scene is completely described by the parameters φ_i ∈ R^ℓ of its corresponding light field MLP Φ_i = Φ_{φ_i}. Meta-learning and multi-view consistency. In the case of 3D-structured neural scene representations, ray-marching or volumetric rendering naturally ensures multi-view consistency of the reconstructed 3D scene representation. In contrast, a general 4D function Φ : L → R3 is not multi-view consistent, as most such functions are not the light fields of any 3D scene. We propose to overcome this challenge by learning a prior over the space of light fields. As we will demonstrate, this prior can also be used to reconstruct an LFN from a single 2D image observation. In this paradigm, differentiable ray-casting is a method to force the light field of a scene to be multi-view consistent, while we instead impose multi-view consistency by learning a prior over light fields. Meta-learning framework. We propose to represent each 3D scene S_i by its own latent vector z_i ∈ R^k. Generalizing to new scenes amounts to learning a prior over the space of light fields that is concentrated on the manifold of multi-view consistent light fields of natural scenes. To represent this latent manifold, we utilize a hypernetwork [56, 3].
The hypernetwork is a function, represented as an MLP
\Psi : \mathbb{R}^k \to \mathbb{R}^\ell, \quad \Psi_\psi(z_i) = \phi_i \qquad (10)
with parameters \psi, which sends the latent code z_i of the i-th scene to the parameters of the corresponding LFN. Several reasonable approaches exist to obtain latent codes z_i. One may leverage a convolutional- or transformer-based image encoder, directly inferring the latent from an image [11, 5], or utilize gradient-based meta-learning [23]. Here, we follow an auto-decoder framework [1, 3] to find the latent codes z_i, but note that LFNs are in no way constrained to this approach. We do not claim that this particular meta-learning method will outperform other forms of conditioning, such as gradient-based meta-learning [57, 23] or FiLM conditioning [58], but perform a comparison to a conditioning-by-concatenation approach in the appendix. We assume that the latent vectors have a Gaussian prior with zero mean and a diagonal covariance matrix. At training time, we jointly optimize the latent parameters z_i together with the hypernetwork parameters \psi using the objective
\arg\min_{\{z_i\},\psi} \sum_i \sum_j \left\| \Theta^\Phi_{\mathbf{E}_j,\mathbf{K}_j}\left(\Psi_\psi(z_i)\right) - \mathcal{I}_j \right\|_2^2 + \lambda_{\mathrm{lat}} \left\| z_i \right\|_2^2. \qquad (11)
Here, \Theta^\Phi is the rendering function (Equation 6), the first term is an \ell_2 loss penalizing light fields that disagree with the observed images, and the second term enforces the prior over the latent variables. We solve Equation 11 using gradient descent. At test time, we freeze the parameters of the hypernetwork and reconstruct the light field for a new scene S given a single observation of the scene \{(\mathcal{I}, \mathbf{E}, \mathbf{K})\} by optimizing, using gradient descent, the latent variable z_S of the scene, such that the reconstructed light field \Phi_{\Psi_\psi(z_S)} best matches the given observation of the scene:
z_S = \arg\min_{z} \left\| \Theta^\Phi_{\mathbf{E},\mathbf{K}}\left(\Psi_\psi(z)\right) - \mathcal{I} \right\|_2^2 + \lambda_{\mathrm{lat}} \left\| z \right\|_2^2. \qquad (12)
Global vs. local conditioning. The proposed meta-learning framework globally conditions an LFN on a single latent variable z. Recent work instead leverages local conditioning, where a neural field is conditioned on local features extracted from a context image [26, 6, 27]. In particular, the recently proposed pixelNeRF [6] has achieved impressive results on few-shot novel view synthesis. As we will see, the current formulation of LFNs does not outperform pixelNeRF. We note, however, that local conditioning methods solve a different problem. Rather than learning a prior over classes of objects, local conditioning methods learn priors over patches, answering the question “What does this image patch look like from a different perspective?”. As a result, this approach does not learn a latent space of neural scene representations. Rather, scene context is required to be available at test time to reason about the underlying 3D scene, and the representation is not compact: the size of the conditioning grows with the number of context observations. In contrast, globally conditioned methods [3, 11, 1, 2] first infer a global representation that is invariant to the number of context views and subsequently discard the observations. However, local conditioning enables better generalization due to the shift-equivariance of convolutional neural networks. An equivalent to local conditioning in light fields is non-obvious, and an exciting direction for future work. 5 Experiments We demonstrate the efficacy of LFNs by reconstructing 360-degree light fields of a variety of simple 3D scenes. In all experiments, we parameterize LFNs via a 6-layer ReLU MLP, and the hypernetwork as a 3-layer ReLU MLP, both with layer normalization.
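To make the auto-decoder optimization of Equations 11 and 12 concrete, here is a minimal sketch of the two phases. The `render` callable stands in for the rendering function Θ, and the loop structure, learning rates, and regularization weight are illustrative placeholders rather than the authors' exact training pipeline.

```python
import torch

def train_autodecoder(hypernet, render, scenes, latent_dim=256, lam=1.0, steps=100_000):
    """Jointly optimize per-scene latents {z_i} and hypernetwork parameters psi (Eq. 11).

    `scenes[i]` is a list of (image, E, K) observations; `render(hypernet, z, E, K)`
    is a stand-in for Theta, returning the image rendered by the LFN with parameters
    Psi_psi(z) from camera (E, K).
    """
    latents = torch.nn.Parameter(torch.zeros(len(scenes), latent_dim))  # zero-mean prior
    opt = torch.optim.Adam([latents, *hypernet.parameters()], lr=1e-4)
    for _ in range(steps):
        i = torch.randint(len(scenes), (1,)).item()                          # pick a scene
        img, E, K = scenes[i][torch.randint(len(scenes[i]), (1,)).item()]    # pick a view
        loss = (render(hypernet, latents[i], E, K) - img).pow(2).mean()      # reconstruction
        loss = loss + lam * latents[i].pow(2).sum()                          # latent regularizer
        opt.zero_grad(); loss.backward(); opt.step()
    return latents

def infer_latent(hypernet, render, img, E, K, latent_dim=256, lam=1.0, steps=1000):
    """Test time: freeze the hypernetwork and fit a new latent z_S to one observation (Eq. 12)."""
    for p in hypernet.parameters():
        p.requires_grad_(False)
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        loss = (render(hypernet, z, E, K) - img).pow(2).mean() + lam * z.pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return z.detach()
```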
We solve all optimization problems using the ADAM solver with a step size of 10^-4. Please find more results, as well as precise hyperparameter, implementation, and dataset details, in the supplemental document and video. Reconstructing appearance and geometry of single-object and room-scale light fields. We demonstrate that LFNs can parameterize 360-degree light fields of both single-object ShapeNet [59] objects and simple, room-scale environments. We train LFNs on the ShapeNet “cars” dataset with 50 observations per object from [3], as well as on simple room-scale environments as proposed in [13]. Subsequently, we evaluate the ability of LFNs to generate novel views of the underlying 3D scenes. Please see Figure 3 for qualitative results. LFNs succeed in parameterizing the 360-degree light field, enabling novel view synthesis at real-time frame rates (see supplemental video). We further demonstrate that LFNs encode scene geometry by presenting Epipolar Plane Images and leveraging the relationship derived in Equation 8 to infer sparse depth maps. We highlight that neither rendering nor depth map extraction requires ray-casting: they involve only a single evaluation of the network, or of the network and its gradient, respectively. Multi-class single-view reconstruction. Following [5, 6], we benchmark LFNs with recent global conditioning methods on the task of single-view reconstruction and novel view synthesis of the 13 largest ShapeNet categories. We follow the same evaluation protocol as [60] and train a single model across all categories. See Figure 4 for qualitative and Table 1 for quantitative baseline comparisons. We significantly outperform both Differentiable Volumetric Rendering (DVR) [5] and Scene Representation Networks (SRNs) [3] on all but two classes by an average of 1dB, while requiring more than an order of magnitude fewer network evaluations per ray. Qualitatively, we find that the reconstructions from LFNs are often crisper than those of either Scene Representation Networks or DVR. Note that DVR requires additional ground-truth foreground-background segmentation masks. Class-specific single-view reconstruction. We benchmark LFNs on single-shot reconstruction on the ShapeNet “cars” and “chairs” classes as proposed in SRNs [3]. See Figure 5 for qualitative and quantitative results. We report performance better than SRNs in PSNR and on par in terms of SSIM on the “cars” class, and worse in PSNR but better in terms of SSIM on the “chairs” class, while requiring an order of magnitude fewer network evaluations and rendering in real-time. We attribute the drop in performance compared to multi-class reconstruction to the smaller dataset size, causing multi-view inconsistency. Global vs. local conditioning and comparison to pixelNeRF [6]. We compare global conditioning, where a single latent is inferred to describe the whole scene [3], to local conditioning, where latents are inferred per-pixel in a 2D image and leveraged to locally condition a neural implicit representation [26, 27, 6]. We benchmark with the recently proposed pixelNeRF [6]. As noted above (see Section 4.3), local conditioning does not infer a compact neural scene representation of the scene. Nevertheless, we provide the comparison here for completeness. See Figure 6 for qualitative and quantitative results. On average, LFNs perform 1dB worse than pixelNeRF in the single-class case, and 2dB worse in the multi-class setting. Real-time rendering and storage cost.
See Table 2 for a quantitative comparison of the rendering complexity of LFNs compared with that of volumetric and ray-marching-based neural renderers [3, 45, 19, 4, 6]. All clock times were collected for rendering 256 × 256 images on an NVIDIA RTX 6000 GPU. We further compare the cost of storing a single LFN with the cost of storing a conventional light field. With approximately 400k parameters, a single LFN requires around 1.6 MB of storage, compared to 146 MB required for storing a 360-degree light field at a resolution of 256×256×17×17 in the six-plane Lumigraph configuration. Multi-view consistency as a function of training set size. We investigate how multi-view consistency scales with the amount of data that the prior is trained on. Please find this analysis in the supplementary material. Overfitting of single 3D scenes. We investigate overfitting a single 3D scene with a Light Field Network with positional encodings / sinusoidal activations [24, 61]. Please find this analysis in the supplementary material. Evaluation of Reconstructed Geometry. We investigate the quality of the geometry that can be computed from an LFN via Eq. 8. For every sample in the class-specific single-shot reconstruction experiment, we extract its per-view sparse depth map. We then backproject depth maps from four views into 3D to reconstruct a point cloud, and benchmark mean L1 error on valid depth estimates against Scene Representation Networks [3]. Fig. 7 displays qualitative and quantitative results. Qualitatively, point clouds succeed in capturing fine detail such as the armrests of chairs. Quantitatively, LFNs outperform SRNs on both cars and chairs. We note that LFNs have a slight advantage in this comparison, as we can only benchmark on the sparse depth values, for which LFNs have high confidence. This includes occlusion boundaries, which are areas where the sphere-tracing-based SRNs incur high error, as the tracer is forced to take smaller and smaller steps and may not reach the surface. We highlight that we do not claim that the proposed method is competitive with methods designed specifically for geometry reconstruction; we report this only to demonstrate that valid depth estimates can be extracted from an LFN. Limitations. First, like every existing light field approach, LFNs store only one color per oriented ray, which makes rendering views from cameras placed in between occluding objects challenging, even if the information may still be stored in the light field. Second, though we outperform globally conditioned methods, we currently do not outperform the locally conditioned pixelNeRF. Finally, as opposed to 3D-structured representations, LFNs do not enforce strict multi-view consistency, and may be inconsistent in the case of small datasets. 6 Discussion and Conclusion We have proposed Light Field Networks, a novel neural scene representation that directly parameterizes the full 360-degree, 4D light field of a 3D scene via a neural implicit representation. This enables both real-time neural rendering with a single evaluation of the neural scene representation per ray, as well as sparse depth map extraction without ray-casting. Light Field Networks outperform globally conditioned baselines in single-shot novel view synthesis, while being three orders of magnitude faster and less memory-intensive than current volumetric rendering approaches.
Exciting avenues for future work include combining LFNs with local conditioning, which would enable stronger out-of-distribution generalization, studying the learning of non-Lambertian scenes, and enabling camera placement in obstructed 3D space. With this work, we make important contributions to the emerging fields of neural rendering and neural scene representations, with exciting applications across computer vision, computer graphics, and robotics. Societal Impacts. Potential improvements extending our work on few-observation novel view synthesis could enable abuse by decreasing the cost of non-consensual impersonations. We refer the reader to a recent review of neural rendering [22] for an in-depth discussion of this topic. Acknowledgements and Disclosure of Funding This work is supported by the NSF under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/), ONR under 1015 G TA243/N00014-16-1-2007 (Understanding Scenes and Events through Joint Parsing, Cognitive Reasoning and Lifelong Learning), Mitsubishi under 026455-00001 (Building World Models from some data through analysis by synthesis), DARPA under CW3031624 (Transfer, Augmentation and Automatic Learning with Less Labels), as well as the Singapore DSTA under DST00OECI20300823 (New Representations for Vision). We thank Andrea Tagliasacchi, Tomasz Malisiewicz, Prafull Sharma, Ludwig Schubert, Kevin Smith, Bernhard Egger, Christian Richardt, Manuel Rey Area, and Jürgen and Susanne Sitzmann for interesting discussions and feedback, and Alex Yu for kindly sharing the outputs of pixelNeRF and baselines with us.
1. What is the focus of the paper on neural light fields, and how does it differ from prior works? 2. What are the strengths and weaknesses of the proposed method, particularly in terms of rendering speed and multi-view consistency? 3. How does the reviewer assess the novelty and potential impact of the paper's contributions? 4. What are some suggested experiments or modifications that could improve the paper's results or provide better insights into the method's capabilities? 5. Are there any minor clarity issues or typos in the paper that the reviewer noticed?
Summary Of The Paper Review
Summary Of The Paper Light Field Networks encode the light field of a 3D scene, which comes with certain restrictions on where an observer can be placed. LFNs are coordinate-based MLPs that use the Plucker parameterization of directed rays in 3D space to represent rays. Thus they can output the final color of a pixel with a single network evaluation, massively speeding up rendering relative to NeRF but sacrificing "hard" multi-view consistency, which now needs to be learned. LFNs here are trained via a hypernetwork and scene conditioning happens via an auto-decoded latent code for the hypernetwork. While light fields only represent appearance directly, it is possible to use gradients to extract depth along appearance edges. Results are presented on ShapeNet and very simple synthetic rooms. Review LFNs are novel and an interesting take on neural radiance fields that trades multi-view-consistency-by-design for orders of magnitude higher rendering speed. Results in the supplemental video are random and not cherry-picked. Qualitative results seem overall similar to DVR to me. But since LFNs are much faster, they offer an advantage over DVR in terms of results. The paper is well written. I would have liked a geometrical intuition as to why Plucker coordinates are independent of the specific point on the ray. The origin and the ray (i.e. all points p on the ray) define one single plane and p x d is the normal vector of that plane. Rays through the origin are a corner case in that intuition, but they are directly identified by their direction. Interestingly, when comparing single-class vs. multi-class, chairs have better results overall on single-class while cars are better in multi-class. Looking at more classes in single-class experiments would give some indication on whether the claim in line 280 (smaller dataset is the issue) is correct, which would mean that chairs is an outlier class. How many objects per class are there roughly in the single-class and multi-class settings? Hundreds? It would also be interesting to see how multi-view consistency (e.g. measured naively via PSNR/SSIM of novel views on a random but fixed set of e.g. 10 shapes per class) improves with an increasing training set size (e.g. 1, 10, 100, 1k, all per class), both in the single-class and multi-class setting. That would give a better idea on how much the hypernetwork learns multi-view consistency as an abstract property, which is the main motivation for using a hypernetwork (lines 10-12). In general, more experiments than just straightforward reconstruction results would add insights. E.g. an ablation of the hypernetwork setting vs. a simpler latent-code conditioning setting with a single, larger LFN. Or a quantitative evaluation of the sparse depth maps, even if only at the sparse points where depth is extractable. Especially the latter should be added since all of page 5 and Sec. 1 of the supplement discuss depth extraction. A comparison to DVR and SRNs, if possible, would offer a baseline. Or backproject multiple depth renderings of a single object and show the merged point cloud in the video, for example. Some experiment that gives a better idea of how well geometry extraction works. Since light fields mostly support inside-out scenes similar to the room scenes and outside-in scenes like the ShapeNet objects, and since both settings are spherical, a simple two-sphere parametrization (e.g. 
a view-conditioned NeRF that is only evaluated once on a single 3D sphere) would be enough for the results presented in the paper. Better scenes that highlight the strengths of the particular parametrization would have been nice, namely unbounded scenes, e.g. some very long corridor (lines 141-142). Overall: LFNs take a valuable route towards better neural graphics representations that is complementary to the slow and hard-constrained setting of NeRF. I hope that future work will be able to re-introduce the multi-view consistency by design instead of learning it as a soft constraint. The main issue I have with the paper in its current form is the experimental investigation. I believe that depth evaluation is necessary, that an experiment investigating multi-view consistency vs. training set size is at least strongly recommendable, and that a hypernetwork vs. simple conditioning experiment and evaluating scenes benefiting from the Plucker parametrization would strengthen the paper noticeably but aren't required for acceptance. Major clarity: What are class-specific and multi-class in the table in Fig. 6 referring to? Means across all classes? Or what class was evaluated for the class-specific setting? Related to that, means across all classes (weighted equally or by class size, should be mentioned) could be added to Table 1. line 299: "more difficult" sounds too inaccurate to me. It is only even possible in a few carefully designed cases (e.g. line 68 in the supplement) and otherwise simply impossible (e.g. placing an arbitrarily rotating observer inside the convex hull of an outside-in scene like chairs from ShapeNet). For single-class single-shot car reconstruction, the video states that LFNs offer slightly more detail, but I am unable to confirm that. To my eye, LFNs and SRNs are qualitatively on par and quantitatively LFNs are only slightly better. That is okay considering the speed difference. How long does auto-decoding in the single-view setting take per scene at test time? Lines 86 and 97 in the supplement state "until convergence". How long does that take in practice? Minor clarity: "Moreoever" in line 123 I don't understand why line 212 is emphasized. Wouldn't line 213 be more appropriate? Eq. 10: there's an unnecessary closing parenthesis (same in Eq. 11) and it should be z_i in the second term. It would also be easier to parse if the arg min is over {z_i} instead of z_i. What does "single-shot" in Table 1 refer to? Auto-decoding with a single view? Table 1 boat SSIM is incorrectly bolded. Fig. 5 Cars SSIM: LFN and SRNs are on par w.r.t. the two reported digits. That's wrongly bolded and incorrectly stated in line 277. The authors have provided a very thorough rebuttal. They addressed my concerns very well. I read the other reviewers' concerns and I am satisfied by the authors' response to those. Given the improvements from the rebuttal, I have increased my score from a 5 to an 8. This work opens up a number of interesting directions; I especially liked the remark at the end of Sec. 1.4 of the rebuttal: "We argue that LFNs offer an intriguing orthogonal approach to neural rendering that may in future work inform or even be combined with 3D-structured neural representations." The only concern that remains for me with the work is Sec. 2.5 in the rebuttal regarding real-world scenes, which I did not find as convincing as the other points.
That LFNs struggle in the overfitting regime, which general real-world scenes fall into, is an unfortunate downside that future work hopefully addresses.
NIPS
Title Contrastive Language-Image Pre-Training with Knowledge Graphs Abstract Recent years have witnessed the fast development of large-scale pre-training frameworks that can extract multi-modal representations in a unified form and achieve promising performances when transferred to downstream tasks. Nevertheless, existing approaches mainly focus on pre-training with simple image-text pairs, while neglecting the semantic connections between concepts from different modalities. In this paper, we propose a knowledge-based pre-training framework, dubbed Knowledge-CLIP, which injects semantic information into the widely used CLIP model [38]. Through introducing knowledge-based objectives in the pre-training process and utilizing different types of knowledge graphs as training data, our model can semantically align the representations in vision and language with higher quality, and enhance the reasoning ability across scenarios and modalities. Extensive experiments on various vision-language downstream tasks demonstrate the effectiveness of Knowledge-CLIP compared with the original CLIP and competitive baselines. 1 Introduction Large-scale vision-language pre-training has attracted wide research interest in recent years [9, 26, 38, 72]. Different from training independent models for each specific task, pre-trained models draw an analogy to the human biological intelligence system, trying to perceive the world from various data modalities and handle comprehensive tasks. Specifically, this line of work aims to provide a unified inference paradigm that simultaneously learns representations for multi-modal data and can easily transfer to a variety of downstream tasks. Benefiting from the accessibility of massive image-text pairs from the web, vision-language pre-training can leverage a broader source of supervision, and effectively improves the model’s generalization power. Early attempts at vision-language pre-training mainly focus on detecting objects in the images and aligning the corresponding word tokens with object regions [9, 28, 50]. Though effective, the entanglement with the concept of objects and the additional resources required for pre-trained object detectors impose restrictions on real-world applications. One of the pioneering works, CLIP [38], extends the scale of the pre-training dataset to 400 million image-text pairs, and learns representations by directly matching raw text with the corresponding image. Through a contrastive-based training scheme, CLIP learns visual concepts under a large vocabulary, which significantly improves model performance on various downstream tasks. Taking inspiration from CLIP, subsequent studies further extend the work from several perspectives, including data modality [72], downstream tasks [57], and training data efficiency [19, 44]. Although showing promising results, the current pre-training frameworks also suffer from limitations. Specifically, the data pairs for pre-training are organized in the simplest manner, where only the labels matched and unmatched are used to represent the relation between a given image and text pair. This usually leads to a degenerate scenario, where the model tends to rely on the co-occurrence of inputs instead of their semantic meanings. We give a toy example in Fig. 1 by evaluating the zero-shot transfer performance of CLIP on the ImageNet dataset [10] with the templates ’a photo of a {}’ and ’not a photo of a {}’.
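Such a template probe is straightforward to reproduce. The sketch below assumes the open-source CLIP package, a placeholder image path, and a small stand-in class list; it is an illustrative approximation of the evaluation behind Fig. 1, not the authors' exact protocol.

```python
import torch
import clip                      # OpenAI's CLIP reference implementation
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["golden retriever", "tabby cat", "sports car"]  # stand-in for 1,000 ImageNet classes
templates = ["a photo of a {}", "not a photo of a {}"]

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder path

with torch.no_grad():
    img_feat = model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    for template in templates:
        text = clip.tokenize([template.format(c) for c in class_names]).to(device)
        txt_feat = model.encode_text(text)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)  # class distribution per template
        print(template, probs.squeeze(0).tolist())
# If the two printed distributions are nearly identical, the model is ignoring the negation.
```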
It is shown that the distributions of CLIP outputs under the two templates are quite similar, suggesting that the current model fails to understand the semantic meaning of word tokens. As a result, the transferability of the model is restricted, and it tends to show worse performance on tasks that require reasoning ability, e.g., visual question answering. To address the limitation of pre-trained models on semantic perceiving, we resort to the technique of knowledge graphs, which has been widely studied in the field of natural language processing [7, 58]. A knowledge graph (KG) is a large-scale semantic network that comprises entities as nodes and semantic relations as edges. Through organizing data in a graph structure, knowledge graphs provide rich information describing the relations between entities and enable reasoning across the whole graph. These advantages over regularly structured data are favorable for various tasks including question answering [18, 70], relation prediction [29, 43], and knowledge reasoning [6, 59]. In recent years, knowledge graphs have also been investigated in the field of computer vision, e.g., scene graphs [65] and the integration of both language and image [2]. This bridges the gap between different modalities in the knowledge graph, which inspires us to explore a new knowledge-based pre-training framework, and inject semantic information into simple image-text pairs. In this paper, we propose a novel vision-language pre-training approach, dubbed Knowledge-CLIP, by constructing a knowledge-augmented pre-training framework based on the widely used CLIP models. As illustrated in Fig. 2, we follow the structure of CLIP, and use two Transformer-based models as the image and text encoders, respectively. These two encoders take entities and relations in the knowledge graph as input and extract raw features for both entities and relations. Notably, entities can be in the form of image/text, while the relations are always described by language tokens. Then, a multi-modal Transformer encoder is adopted to fuse the entity features conditioned on their relations. In this way, the pre-trained model is pushed to concentrate on understanding semantic relations between visual and word concepts, thereby establishing strong semantic connections between vision and language modalities. To additionally improve the training efficiency and avoid the massive computation cost of the pre-training procedure, we adopt a simple continuous learning strategy by training our model based on the pre-trained weights of CLIP. This makes it possible to efficiently improve the performance of CLIP with limited training resources. We train our model on three knowledge graph datasets, namely Visual-Genome [24] (scene graph), ConceptNet [46] (language-based graph), and VisualSem [2] (multi-modal graph), and also adopt part of the datasets from CLIP to mitigate the forgetting problem. With the knowledge-enhanced pre-training, Knowledge-CLIP achieves consistent improvements over the original CLIP models on various vision and language downstream tasks. 2 Related works Large-scale pre-training. Benefiting from the development of Transformers in both vision [35, 63, 36] and language [54] tasks, large-scale pre-training frameworks have received wide attention in recent years and have shown promising results in the fields of computer vision and natural language processing.
GPT [39] is one of the pioneering works for language pre-training, which optimizes the probability of the output based on previous words in the sequence. BERT [11] adopts the masked language modeling technique and predicts the masked tokens conditioned on the unmasked ones. Similarly, the computer vision community has also witnessed the development of pre-training models thanks to the emergence of large-scale image datasets. IGPT [5] proposes a generative pre-training technique and shows promising results on the classification task. MAE [17] adopts a similar pre-training scheme as BERT and predicts the masked regions of an image from the unmasked ones. Multi-modal pre-training differs from the aforementioned frameworks and requires alignment between various data modalities. Using enormous image-text pairs collected from the Internet, vision-language models show significant improvements on various downstream tasks. Among these approaches, various pre-training schemes are adopted, including contrastive learning [1, 27, 31], masked language modeling [47, 51], and masked region modeling [9]. The problem of semantic misunderstanding has also been investigated by previous works. EICLIP [33] considers the problem of cross-modal retrieval in the field of e-commerce. Sharing a similar insight with our work, the authors notice the model bias towards some specific word tokens in CLIP, and introduce causal inference to align the text encoder with e-commerce domain knowledge. K3M [73] focuses on the modality-missing and modality-noise problems and introduces a knowledge modality into e-commerce tasks. DeVLBert [69] studies the spurious correlations between different modalities and adjusts the conditional probability of image tokens and word tokens. KaleidoBERT [74] focuses on image-text coherence by introducing several novel self-supervised tasks. Compared to previous approaches, we are the first to incorporate multi-modal knowledge graphs into the pre-training process, and effectively enhance the model's perception of semantic relations between visual and language concepts. Knowledge Graph. Knowledge graphs were first introduced in the field of natural language processing, and knowledge graph embedding approaches have been successful at capturing the semantics of symbols (entities and relations), achieving impressive results on a wide range of real-world applications including text understanding [13, 66], recommendation systems [16, 56] and natural language question answering [18, 70]. On the other hand, scene graphs represent a type of graph-structured data in computer vision, where the visual concepts in the image are connected with semantic relations. Scene graphs emphasize fine-grained semantic features for images and are widely adopted in various downstream tasks, including scene graph generation [65] and scene graph parsing [68]. Besides scene graphs, knowledge graphs are also adopted in other computer vision tasks, including image classification [22], panoptic segmentation [62], and image captioning [71]. On this basis, multi-modal knowledge graphs have attracted wide attention in recent years. Considering the natural alignment between different data modalities, multi-modal knowledge graphs have been widely adopted in various graph-based tasks including link prediction [3, 30] and entity classification [61], while also showing great potential for out-of-graph applications like visual question answering [20, 41] and recommendation systems [49, 52]. 
3 Contrastive Language-Image Pre-training (CLIP) We first provide a brief review of the model architecture and training settings of CLIP. CLIP uses two separate models as the image encoder and the text encoder respectively. For text inputs, a 12-layer Transformer with width 512 and 8 attention heads is adopted. Raw texts are first converted with the byte pair encoding [40] technique under a vocabulary size of 49,152. The text sequence length is capped at 76, and a positional encoding is added before the sequence is sent into the text encoder. On the other hand, CLIP has different versions of the image encoder with ResNet-based and Vision Transformer-based architectures. As subsequent studies have demonstrated the better performance of Vision Transformer models, we only consider Transformer-based image encoders in this paper. Similar to the text input, images are first converted to patches, and a positional encoding is added. At the last stage of both encoders, a global pooling function is adopted to compress the feature map into a single feature, which serves as the representation of the whole image/text sequence. The cosine similarity of the image and text features is computed as the similarity of the data pair. For training supervision, a contrastive loss is adopted to maximize the similarity of matched pairs while minimizing the similarity of unmatched pairs. Given a batch of $N$ data pairs $\{I_i, T_i\}_{i=1}^N$, where $I_i$ and $T_i$ represent the $i$-th image and text respectively, the loss function can be parameterized as:
$$\mathcal{L} = -\frac{1}{2}\sum_{i=1}^{N}\left( \log \frac{\exp\left(\cos(f_I(I_i), f_T(T_i))/\tau\right)}{\sum_{j=1}^{N}\exp\left(\cos(f_I(I_i), f_T(T_j))/\tau\right)} + \log \frac{\exp\left(\cos(f_I(I_i), f_T(T_i))/\tau\right)}{\sum_{j=1}^{N}\exp\left(\cos(f_I(I_j), f_T(T_i))/\tau\right)} \right), \quad (1)$$
where $f_I$ and $f_T$ correspond to the image and text encoders respectively, $\cos(\cdot)$ denotes the cosine similarity between the inputs, and $\tau$ is a learnable temperature initialized at 0.07. This simple training framework brings several concerns that need to be addressed. First, the pre-training framework fails to model the semantic information of the inputs due to the simplicity of the data structure. This results in inferior performance on tasks that require reasoning ability, e.g., visual question answering and visual commonsense reasoning. Second, the image and text features reside in separate spaces, which makes it difficult to model the interactions between different modalities. Third, the massive time and resource consumption of the training procedure imposes restrictions on performing a full pre-training schedule from scratch. 4 Knowledge-CLIP As summarized above, several concerns hinder the transferability of CLIP and potential improvements in model performance. In this paper, we propose a novel pre-training framework based on knowledge graphs that addresses the limitations of the original CLIP model from several perspectives: (1) we introduce knowledge graphs into the training dataset, where the graph-structured data and semantic relations between concepts enable the model to extract semantic features and establish semantic connections across inputs; (2) a multi-modal encoder is added on top of the current image and text encoders to fuse the features from different modalities and model the joint distribution of the inputs; (3) a continuous learning strategy based on the pre-trained CLIP model is adopted, which avoids the massive computation cost of the pre-training procedure and enhances the generalization power of the model efficiently. We introduce our framework in detail in the following sections, and show the overview in Fig. 2. 
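For reference, the symmetric objective in Eq. (1) above can be written compactly in PyTorch. The sketch below assumes the encoder features are already computed; it uses the standard cross-entropy formulation over a batch similarity matrix, which matches Eq. (1) up to the constant 1/N batch-averaging factor.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, log_tau):
    """Symmetric contrastive loss of Eq. (1).

    image_feats, text_feats: (N, d) outputs of the image/text encoders.
    log_tau: learnable scalar tensor (e.g. nn.Parameter); tau = exp(log_tau).
    """
    # Cosine similarity reduces to a dot product of L2-normalised features.
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / log_tau.exp()   # (N, N)

    targets = torch.arange(logits.size(0), device=logits.device)
    # Image-to-text and text-to-image cross-entropies correspond to the two
    # log terms in Eq. (1); averaging them reproduces the 1/2 factor.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```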
4.1 Data Preparation Different from the raw image-text pairs adopted in the original CLIP, our model takes knowledge graphs as input. A knowledge graph can be defined as a directed graph $\mathcal{G} = \{\xi, \mathcal{R}, \mathcal{T}_R\}$, where $\xi$ and $\mathcal{R}$ correspond to the sets of entities and relations, and $\mathcal{T}_R$ represents the set of relation triplets. A triplet $(h, r, t) \in \mathcal{T}_R$ denotes that entity $h \in \xi$ has relation $r \in \mathcal{R}$ with entity $t \in \xi$. As illustrated in Fig. 3, we pre-train our model on three types of knowledge graphs, including a multi-modal knowledge graph, scene graphs, and a language-based knowledge graph. Among these, relations are always described by language tokens, while the entities come from different modalities in different forms. For the multi-modal knowledge graph, the entities contain both illustrative images and language descriptions. Representing the same entity under various modalities and connecting entities with relations helps to build semantic connections between vision and language concepts. In practice, language and vision descriptions are randomly chosen for each entity. In this way, the triplet set $\mathcal{T}_R$ contains different forms including (Img, Rel, Img), (Img, Rel, Text), and (Text, Rel, Text), providing rich information across modalities while also enhancing perception within modalities. Different from the multi-modal knowledge graph, a scene graph extracts visual concepts (mainly objects) for each image and connects them with predefined semantic relations describing relative locations, actions, etc. Therefore, the entities in a scene graph correspond to certain regions in an image, with the triplet form of (Img, Rel, Img). We practically use the selected regions as the input and discard the irrelevant parts. As two entities in the same triplet denote different regions of the same image, this forces the model to extract more fine-grained features. Lastly, a language-based knowledge graph connects words and phrases of natural language with labeled edges. It is built on the language modality only, with the triplet form of (Text, Rel, Text), and helps to build semantic alignment within word tokens. 4.2 Model Architecture The model architecture and the training framework are illustrated in Fig. 2(A). Specifically, we first process the inputs into token sequences with modality-specific tokenizers. The BPE tokenizer [40] is adopted for language inputs, while image inputs are sliced into non-overlapping patches and converted into a sequence of patches following ViT [12]. For convenient processing, we set the lengths of the image sequence and text sequence to $l_I$ and $l_T$ respectively for all inputs. To preserve the relative position information of the input, learnable positional encodings are added to the corresponding sequences before they are sent to the model. A separate image encoder $f_I(\cdot)$ and text encoder $f_T(\cdot)$ are then adopted to extract features from the raw inputs. For a given triplet $(h, r, t)$, the entities $h$ and $t$ are sent to the encoders according to their modalities (image or text). The relation $r$, which is represented by language tokens, is sent to the text encoder, as for text entities. Compared to the model structure of CLIP, we introduce a modification to better fit our framework. Specifically, vanilla CLIP models use a pooling function at the last layer of the two encoders to compress the feature map into a global representation. 
Namely, for an input $u \in \mathbb{R}^{L \times d_i}$, where $L$ and $d_i$ denote the sequence length and feature dimension, the output of the encoder can be formulated as:
$$x_u = f(u) \in \mathbb{R}^{L \times d_o}, \qquad \bar{x}_u = \mathrm{Pool}(x_u) \in \mathbb{R}^{d_o}, \quad (2)$$
where $f$ represents the feature extraction module, $\mathrm{Pool}(\cdot)$ denotes the pooling function, and $d_o$ is the output dimension. Though efficient, this also leads to inevitable information loss in local regions, especially for image inputs. Therefore, we remove the pooling functions for image and text entities to preserve the local information, and use $x_u \in \mathbb{R}^{L \times d_o}$ as the extracted feature. The relation, on the other hand, normally has a limited sequence length, e.g., one or two word tokens, so its information density is lower than that of the entities. Therefore, we retain the pooling function for the relation input and use $\bar{x}_u \in \mathbb{R}^{d_o}$ as the extracted feature. In this way, we obtain the features $(x_h, \bar{x}_r, x_t)$, which correspond to the elements of the input triplet $(h, r, t)$. To model the joint distribution of the different elements in the triplet, we consider a multi-modal encoder $\mathrm{TransEncoder}(\cdot)$ to fuse the features from different sources. Specifically, we first concatenate all the features in the triplet into a single sequence and place a head token <head> at the beginning of the sequence. To emphasize the status of the tokens in the sequence, we add learnable encodings for each element $h, r, t$ of the triplet:
$$X(h, r, t) = [\text{<head>},\; x_h + \mathrm{PE}_h,\; \bar{x}_r + \mathrm{PE}_r,\; x_t + \mathrm{PE}_t]. \quad (3)$$
After processing by the multi-modal encoder, the feature of the head token <head> serves as the representation of the whole sequence:
$$Y(h, r, t) = \mathrm{TransEncoder}(X(h, r, t))[0, :]. \quad (4)$$
Also, the representation of the relation is extracted from the corresponding token:
$$R(h, r, t) = \mathrm{TransEncoder}(X(h, r, t))[1 + \mathrm{len}(x_h), :]. \quad (5)$$
4.3 Training Targets Considering the unique data structure of knowledge graphs, we mainly adopt two types of training targets in our framework, a triplet-based loss and a graph-based loss, as illustrated in Fig. 2(B). Besides, a knowledge distillation loss is also considered due to the continuous learning strategy adopted in our framework. Triplet-based loss considers a batch of triplets as the input and supervises the training of our model by estimating the joint distribution of the elements in the triplets. Inspired by the mask prediction technique that models the distribution of masked tokens conditioned on the unmasked regions, we similarly mask elements in the triplets and predict their distribution with the help of the multi-modal encoder. Specifically, for incomplete triplets where certain elements are missing from the input, the concatenated sequence can be derived as in Eq. 3 by masking the corresponding feature. For example, the concatenated sequence for an input (h, r, -) can be represented as:
$$X(h, r, \text{-}) = [\text{<head>},\; x_h + \mathrm{PE}_h,\; \bar{x}_r + \mathrm{PE}_r,\; 0]. \quad (6)$$
On this basis, given a set of inputs $D = \{(h_i, r_i, t_i)\}_{i=1}^N$, we first model the distribution when one of the entities, i.e., $t_i$, is masked, and derive the Entity-Entity (E2E) loss by minimizing the negative log-likelihood:
$$-\mathbb{E}_{(h,r)\sim D}\,\log\big(P(x_t \mid x_h, \bar{x}_r)\big). \quad (7)$$
We practically approximate the distribution $P(x_t \mid x_h, \bar{x}_r)$ with the cosine similarity between the fused representations $Y(\text{-},\text{-},t)$ and $Y(h,r,\text{-})$, and define the loss function as:
$$\mathcal{L}_{E2E} = -\sum_{i=1}^{N} \log \frac{\exp\big(\cos(Y(\text{-},\text{-},t_i), Y(h_i,r_i,\text{-}))/\tau\big)}{\sum_{j}\exp\big(\cos(Y(\text{-},\text{-},t_i), Y(h_j,r_j,\text{-}))/\tau\big)}. \quad (8)$$
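For intuition, the fusion step of Eqs. (3)-(5) can be sketched as a small PyTorch module. This is an illustrative sketch only: the layer count, width, and head count are placeholders, and masked elements of a triplet would simply be replaced by a zero placeholder as in Eq. (6).

```python
import torch
import torch.nn as nn

class TripletFusion(nn.Module):
    """Sketch of Eqs. (3)-(5): fuse (x_h, pooled x_r, x_t) with a Transformer
    encoder and read out the <head> and relation tokens."""

    def __init__(self, d_model=1024, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head_token = nn.Parameter(torch.zeros(1, 1, d_model))
        # Learnable element-type encodings PE_h, PE_r, PE_t.
        self.pe_h = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pe_r = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pe_t = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, x_h, x_r, x_t):
        # x_h: (B, L_h, d); x_r: (B, d) pooled relation feature; x_t: (B, L_t, d)
        B = x_h.size(0)
        seq = torch.cat([
            self.head_token.expand(B, -1, -1),
            x_h + self.pe_h,
            x_r.unsqueeze(1) + self.pe_r,
            x_t + self.pe_t,
        ], dim=1)                                    # Eq. (3)
        out = self.encoder(seq)
        y = out[:, 0, :]                             # Eq. (4): <head> token
        r = out[:, 1 + x_h.size(1), :]               # Eq. (5): relation token
        return y, r
```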
We also model the distribution when the relation in the triplet is masked, and similarly derive the Entity-Relation (E2R) loss:
$$-\mathbb{E}_{(h,t)\sim D}\,\log\big(P(\bar{x}_r \mid x_h, x_t)\big). \quad (9)$$
Different from the E2E loss, the relations in the triplets are defined over a limited set of relation groups. Therefore, we instead extract the representation of the relation through an auxiliary two-layer MLP network, and model the objective as a classification problem over a predefined set of relation labels $R$. In this way, the loss function can be defined as:
$$\mathcal{L}_{E2R} = -\sum_{i=1}^{N} \sum_{r\in R} \mathbb{1}(r=r_i)\,\log\big(y(\bar{x}_{r_i})\big), \quad \text{where}\quad y(\bar{x}_{r_i}) = \mathrm{MLP}\big(R(h_i,\text{-},t_i)\big) \quad (10)$$
is extracted by an MLP applied to the output of the multi-modal encoder defined in Eq. (5). Graph-based loss. We also take advantage of the graph structure of the knowledge graph datasets, and adopt a graph neural network to extract deeper structural information among entities. We propagate information through the connected edges of the graph, and update entity representations with the aggregated features. Specifically, for a graph neural network with $L$ layers, the update function for the $l$-th layer can be formulated as:
$$G^{(l)}(t) = \mathbb{E}_{\{h_i, r_i, t\}\in\mathcal{T}_R}\, g^{(l-1)}\big(R(h_i,\text{-},t)\big)\, G^{(l-1)}(h_i), \qquad G^{(0)}(t) = Y(\text{-},\text{-},t), \quad (11)$$
where
$$g^{(l)}\big(R(h_i,\text{-},t)\big) = W^{(l)} R(h_i,\text{-},t) \quad (12)$$
computes the aggregation weights from the relation representation $R(h_i,\text{-},t)$ with a learnable matrix $W^{(l)}$. Finally, we define the Graph-Entity (G2E) loss by computing the cosine similarity of entity features before and after the propagation procedure in the graph:
$$\mathcal{L}_{G2E} = -\frac{1}{N_\xi}\sum_{t_i\in\xi} \log \frac{\exp\big(\cos(Y(\text{-},\text{-},t_i), G^{(L)}(t_i))/\tau\big)}{\sum_{t_j}\exp\big(\cos(Y(\text{-},\text{-},t_i), G^{(L)}(t_j))/\tau\big)}. \quad (13)$$
Continuous Learning. Large-scale pre-training usually requires massive computation resources, which makes training from scratch highly inefficient. Therefore, to inject the semantic information in an efficient manner, we train our model starting from the pre-trained weights of the original CLIP. This powerful initialization promotes the convergence of our model and greatly enhances the training efficiency. However, naively extending the training process with new data leads to a severe forgetting problem that hampers the performance of the original model. To address this limitation, we adopt simple solutions to maintain CLIP's performance while improving its ability to extract semantic features from knowledge graphs. (1) Besides the knowledge graph datasets, we also train our model on several widely adopted image-text datasets that share a similar data distribution with the training data of CLIP. To better fit our pre-training framework, we convert the original image-text pairs into triplets, with the specifically designed relations 'image of' and 'caption of'. (2) We also use the original CLIP model as the teacher, and use an auxiliary loss $\mathcal{L}_{KD}$ that measures the KL divergence between the outputs of CLIP and our model. Overall, the final pre-training objective of Knowledge-CLIP is formulated as:
$$\mathcal{L} = \mathcal{L}_{E2E} + \mathcal{L}_{E2R} + \mathcal{L}_{G2E} + \mathcal{L}_{KD}. \quad (14)$$
5 Experiments 5.1 Implementation Details Experimental Setup. In all experiments, we use the same model structure as CLIP [38]. A 12-layer Transformer with width 512 is adopted for the text encoder, and ViT-L/14 is adopted for the image encoder. For the text and image encoders, we use the pre-trained weights of the original CLIP as the initialization. For the multi-modal encoder, we consider a 4-layer Transformer with width 1024. The drop-path rate is set to 0.1 during training. 
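The distillation term $\mathcal{L}_{KD}$ in Eq. (14) is not spelled out in the text; one plausible instantiation, given as a hedged sketch rather than the paper's actual implementation, is a KL divergence between the batch-level image-text similarity distributions of the frozen CLIP teacher and the student.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=1.0):
    """Possible form of L_KD: KL divergence between the teacher's and the
    student's image-text similarity distributions over a batch. The choice of
    distributions and the temperature T are assumptions, not paper details."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)

# Total objective of Eq. (14), assuming the individual losses are computed above:
# loss = l_e2e + l_e2r + l_g2e + kd_loss(student_logits, teacher_logits)
```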
As the added multi-modal encoder is trained from random initialization, we decrease the learning rate for the pre-trained weights from CLIP to achieve more balanced optimization steps. We train Knowledge-CLIP with an initial learning rate of 1e-5 for the image and text encoders, and 1e-3 for the multi-modal encoder. A cosine learning rate schedule with linear warmup is used. Weight decay and gradient clipping are also adopted. See more details in the supplemental material. Pre-train Dataset. Three knowledge graph datasets are adopted in the pre-training process. VisualSem [2] is a high-quality multi-modal knowledge graph dataset for vision and language concepts, including entities with multilingual glosses, multiple illustrative images, and visually relevant relations, covering a total of 90k nodes, 1.3M glosses and 938k images. Thirteen semantic relations are used to connect different entities in the graph, and the entities in VisualSem are linked to Wikipedia articles, WordNet [34], and high-quality images from ImageNet [10]. Visual Genome [24] is a knowledge-based scene graph dataset that connects structured image concepts with semantic relations. Visual Genome serves as the benchmark for various vision tasks, e.g., visual grounding and scene graph generation. ConceptNet [46] is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources, including expert-created resources and crowd-sourcing, and is built on the language modality only. Besides the three knowledge graph datasets, we also train our model on two widely adopted image-text datasets that share a similar data distribution with the training data of CLIP. We practically add COCO Caption [8] and CC3M [42] to the training set, while large-scale datasets like CC12M [4] or YFCC [21] are not considered, to maintain training efficiency. Downstream Task. To validate the effectiveness of our framework, we conduct experiments on various downstream tasks, including multi-modal tasks like text and image retrieval and visual question answering, and uni-modal tasks like image classification and natural language understanding. 5.2 Multi-modal Tasks Visual question answering / Visual Entailment. We validate the effectiveness of Knowledge-CLIP on vision-language tasks including VQA [15] and SNLI-VE [64]. We show the comparison results in Tab. 2. Compared to competitive baselines including VILLA [14] and ALBEF [26], Knowledge-CLIP with ViT-L/14 shows better performance under all settings, while the smaller model also achieves competitive results. Compared to the original CLIP model, our pre-trained model practically improves its transferability on downstream tasks, especially on datasets like VQA that require reasoning ability. Image and text retrieval. We also conduct experiments on the Flickr30k [37] and COCO Caption [8] datasets to show the performance of our model on image-text retrieval tasks. Given input sets X and Y of images and texts, we use Knowledge-CLIP to extract features for each input, and model the joint probability with the cosine similarity between image and text pairs. We summarize the comparison results of Knowledge-CLIP and competitive baselines in Tab. 1. Our model consistently achieves better results than the original CLIP on both datasets, while remaining comparable with competitive baselines like OSCAR. 5.3 Uni-modal Tasks Image Classification. 
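The per-module learning rates and the cosine schedule with warmup described above can be set up as follows; this is a hedged sketch in which the module attribute names, warmup/total step counts, and weight-decay value are placeholders (only the 1e-5/1e-3 learning rates come from the text).

```python
import math
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def build_optimizer(model, warmup_steps=2000, total_steps=100000):
    # Lower learning rate for the CLIP-initialised encoders, higher for the
    # randomly initialised multi-modal encoder.
    param_groups = [
        {"params": model.image_encoder.parameters(), "lr": 1e-5},
        {"params": model.text_encoder.parameters(), "lr": 1e-5},
        {"params": model.fusion_encoder.parameters(), "lr": 1e-3},
    ]
    optimizer = AdamW(param_groups, weight_decay=0.05)

    def lr_lambda(step):
        if step < warmup_steps:                       # linear warmup
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

    scheduler = LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```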
To further demonstrate the generalization power of Knowledge-CLIP, we compare the performance of pre-trained models on the ImageNet classification task [10]. We summarize the comparison results in Tab. 3, and show that Knowledge-CLIP can also handle vision tasks well. We argue that the improvements over baselines may be attributed to the scene graphs in our pre-training dataset, which emphasize the visual concepts in the images. Language Understanding. We validate the generalization performance of Knowledge-CLIP on language understanding tasks using the widely adopted GLUE benchmark [55]. Specifically, we conduct experiments on 7 tasks in GLUE and summarize the comparison results in Tab. 4. Our model achieves comparable performance with competitive baseline models. Also, for tasks like QQP and MNLI that require sentence-pair matching, Knowledge-CLIP shows higher performance, owing to the language triplets in the pre-training dataset. 5.4 Ablation Studies To validate the effectiveness of the components of our work, we carefully design several settings, including (1) CLIP + continuous learning: we train vanilla CLIP (pretrained weights as initialization) on the knowledge datasets adopted in our work; (2) Knowledge-CLIP-(t1, t2, t3): we remove each training objective in turn to analyze the contribution of each loss. For all experiments, we adopt a smaller model (ViT-B/32) as the image encoder of CLIP in the ablation study. Also, it is worth noting that the KD loss plays a vital role in the continuous learning scheme; removing it leads to a significant performance drop due to the model forgetting problem. Therefore, we use the KD loss in all the ablation settings for a fair comparison. We show the comparison results on two representative tasks in Tab. 5, including the image/text retrieval task on Flickr30K and the visual question answering task on VQA. Several observations can be made from the ablation: (1) All three training objectives (E2E, E2R, G2E) contribute to improving the model performance. Training the model without any one of the objectives leads to inferior performance on downstream tasks. We argue that the E2E, E2R, and G2E losses promote the model from different perspectives by focusing on the semantic understanding of concepts, the complicated relations between entities, and structural information, respectively. Therefore, all three objectives are necessary for the framework and contribute to the improvement. (2) By comparing the first and second rows, we can see that simply training the CLIP model with extra time and data fails to improve the generalization performance. This also demonstrates that the improvements mainly come from the injected knowledge information rather than the continuous learning scheme. We also conduct an ablation study on the KD loss adopted for continuous learning and summarize the results in Tab. 6. The model achieves lower results after removing the KD loss, indicating its vital role in the continuous learning scheme. We argue that the reason for this phenomenon is that the model suffers from the forgetting problem, which is widely observed in the fields of lifelong and continual learning. 5.5 Analysis on particular semantics We also conduct experiments on carefully selected data that may better reflect how a vision-language model understands particular types of input. Specifically, we select questions in the VQA dataset that contain (1) negations; (2) color attributes; (3) position attributes; (4) sizes. 
We summarize the comparison results of CLIP and our model on these sub-datasets in Tab. 7. As we can observe, our model achieves consistent improvements over CLIP on these specially designed subsets and shows significantly better results. On questions with negation, our model achieves 2.1% higher accuracy. On color and position attributes, our model shows even larger improvements. We believe these comparisons on different ’semantic domains’ demonstrate the effectiveness of injecting knowledge information into the current vision-language pre-training framework, which practically enhances the model's semantic understanding. 6 Conclusion In this paper, we propose a novel vision-language pre-training framework that incorporates knowledge information to model the semantic connections between vision and language entities. We introduce three types of graph-structured datasets into the training process, and adopt a multi-modal encoder to model the joint distribution of entities and their semantic relations. Extensive experiments on various downstream tasks including multi-modal, uni-modal, and graph-based tasks validate the transfer and generalization ability of our model. Our approach is currently limited to injecting knowledge information into CLIP models. However, our training objectives and new knowledge graph datasets are technically compatible with other large-scale pre-training frameworks. We will explore the possibility of further applications in the future. 7 Acknowledgement This work is supported in part by the National Key R&D Program of China under Grant 2020AAA0105200, the National Natural Science Foundation of China under Grant 62022048, the Guoqiang Institute of Tsinghua University and the Beijing Academy of Artificial Intelligence. We also appreciate the generous donation of computing resources by High-Flyer AI.
1. What is the focus and contribution of the paper regarding the extension of the CLIP model? 2. What are the strengths of the proposed framework, particularly in terms of intuition and presentation? 3. What are the weaknesses of the paper, especially regarding the lack of proper ablation studies? 4. Do you have any concerns about the motivation presented in Figure 1? 5. Is there any confusion regarding Equation 13? 6. How do the authors address the limitations and potential negative societal impact of their work?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper extends the Contrastive Language-Image Pre-training (CLIP) model with a knowledge-based pre-training framework. The framework introduces knowledge-based objectives in the pre-training process and utilizes diverse types of knowledge graphs as training data. Contributions: A framework to bring knowledge graphs into vision-language pretraining Continued pretraining to save computation resources Improved performance over CLIP on vision-language multi-modal tasks as well as uni-modal tasks. Strengths And Weaknesses Strengths: The framework is intuitive and clearly presented. The improvement of the CLIP model is significant and promising. Weakness: The lack of proper ablation study. For the vision-language or uni-modal tasks in the evaluation, not much reasoning is required to perform the task. So, it is important to answer the question: whether the improvement in Knowledge-CLIP comes from the knowledge-based objective or just from more training data. An ablated baseline would be CLIP continually pretrained on the same datasets on which Knowledge-CLIP was pretrained. Another ablation study should be on the three objectives to measure their contributions to the final result. The motivation in figure 1 is not accurate. The example only shows that CLIP fails to capture negation, not all semantics. From the same figure, we can see that CLIP performs well in differentiating cars from planes. Authors need to be more specific on their claims. Questions Equation 13 is confusing to me. Does it have a simple but meaningless solution where G^L(t) = Y(-,-,t)? Limitations The authors adequately addressed the limitations and potential negative societal impact of their work.
NIPS
Title Contrastive Language-Image Pre-Training with Knowledge Graphs Abstract Recent years have witnessed the fast development of large-scale pre-training frameworks that can extract multi-modal representations in a unified form and achieve promising performances when transferred to downstream tasks. Nevertheless, existing approaches mainly focus on pre-training with simple image-text pairs, while neglecting the semantic connections between concepts from different modalities. In this paper, we propose a knowledge-based pre-training framework, dubbed Knowledge-CLIP, which injects semantic information into the widely used CLIP model [38]. Through introducing knowledge-based objectives in the pre-training process and utilizing different types of knowledge graphs as training data, our model can semantically align the representations in vision and language with higher quality, and enhance the reasoning ability across scenarios and modalities. Extensive experiments on various vision-language downstream tasks demonstrate the effectiveness of Knowledge-CLIP compared with the original CLIP and competitive baselines. 1 Introduction Large-scale vision-language pre-training has attracted wide research interests in recent years [9, 26, 38, 72]. Different from training independent models for each specific task, pre-trained models take the analogy of human biological intelligence system, trying to perceive the world from various data modalities and handle comprehensive tasks. Specifically, it aims to provide a unified inference paradigm that simultaneously learns representations for multi-modal data and can easily transfer to a variety of downstream tasks. Benefiting from the accessibility of massive image-text pairs from the web, the vision-language pre-training can leverage a broader source of supervision, and effectively improves the model’s generalization power. Early attempts on vision-language pre-training mainly focus on detecting objects in the images and aligning the corresponding word tokens with object regions [9, 28, 50]. Though effective, the entanglement with the concept of objects, and the additional resources for pre-trained object detectors impose restrictions on real-world applications. One of the pioneer works, CLIP [38], extends the scale of the pre-training dataset to 400 million image-text pairs, and learns representations by directly matching raw text with the corresponding image. Through a contrastive-based training scheme, CLIP learns visual concepts under a large vocabulary which significantly improves the model performances on various downstream tasks. Taking inspiration from CLIP, the following researches further extend the work from several perspectives, including data modality [72], downstream tasks [57], and training data efficiency [19, 44]. ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Although showing promising results, the current pre-training frameworks also suffer from limitations. Specifically, the data pairs for pre-training are organized in the simplest manner, where only the descriptions of matched and unmatched are used to represent the relation between a given image and text pair. This usually leads to a degenerated scenario, where the model tends to rely on the co-occurrence of inputs instead of their semantic meanings. We give a toy example in Fig. 1 by evaluating the zero-shot transfer performance of CLIP on the ImageNet dataset [10] with the templates ’a photo of a {}’ and ’not a photo of a {}’. 
It is shown that the distributions of CLIP outputs under two templates are quite similar, suggesting that the current model fails to understand the semantic meaning of word tokens. As a result, the transferability of the model is restricted, and tends to show worse performances on tasks that require reasoning ability, e.g., visual question answering. To address the limitation of pre-trained models on semantic perceiving, we resort to the technique of knowledge graph, which has been widely studied in the field of natural language processing [7, 58]. Knowledge graph (KG) is a large-scale semantic network that comprises entities as nodes and semantic relations as edges. Through organizing data in a graph structure, knowledge graphs provide rich information on describing the relations between entities and enable a reasoning process through the whole graph. These advantages over regular-structured data are favorable on various tasks including question-answering [18, 70], relation prediction [29, 43] and knowledge reasoning [6, 59]. In recent years, knowledge graph has also been investigated in the field of computer vision, e.g., scene graph [65], and the integration of both language and image [2]. This bridges the gap between different modalities in the knowledge graph, which inspires us to explore a new knowledge-based pre-training framework, and inject semantic information into simple image-text pairs. In this paper, we propose a novel vision-language pre-training approach, dubbed Knowledge-CLIP, by constructing a knowledge augmented pre-training framework based on the widely used CLIP models. As illustrated in Fig. 2, we follow the structure of CLIP, and use two Transformer-based models as image and text encoders respectively. These two encoders take entities and relations in the knowledge graph as input and extract raw features for both entities and relations. Notably, entities can be in the form of image/text, while the relations are constantly described by language tokens. Then, a multi-modal Transformer encoder is adopted to fuse the entity features conditioned on their relations. In this way, the pre-trained model is pushed to concentrate on understanding semantic relations between visual and word concepts, thereby establishing strong semantic connections between vision and language modalities. To additionally improve the training efficiency and avoid the massive computation cost in the pretraining procedure, we adopt a simple continuous learning strategy by training our model based on the pre-trained weights of CLIP. This provides a possibility of efficiently promoting the model performance of CLIP with low training resources. We train our model on three knowledge graph datasets, namely Visual-Genome [24] (scene graph), ConceptNet [46] (language-based graph), and VisualSem [2] (multi-modal graph), and also adopt part of datasets from CLIP to avoid the model forgetting problem. With the knowledge-enhanced pre-training, Knowledge-CLIP achieves consistent improvements over the original CLIP models on various vision and language downstream tasks. 2 Related works Large-scale pre-training. Benefited from the development of Transformer in both vision [35, 63, 36] and language [54] tasks, large-scale pre-training framework has received wide concerns in recent years and shown promising results in the field of computer vision and natural language processing. 
GPT [39] is one of the pioneer works for language pre-training which optimizes the probability of output based on previous words in the sequence. BERT [11] adopts the masked language modeling technique and predicts the masked tokens conditioned on the unmasked ones. Similarly, computer vision society also witnesses the development of pre-training models thanks to the emergence of large-scale image datasets. IGPT [5] proposes a generative pre-training technique and shows promising results on classification task. MAE [17] adopts a similar pre-training scheme as BERT and predicts the masked regions of an image with unmasked ones. Multi-modal pre-training bears differences from the aforementioned frameworks and requires the alignment between various data modalities. Using enormous image-text pairs collected from Internet, vision-language models show significant improvements on various downstream tasks. Among these approaches, various pre-training scheme is adopted, including contrastive learning [1, 27, 31], masked language modeling [47, 51], and masked region modeling [9]. The problem of semantic misunderstanding has also been investigated by previous works. EICLIP [33] considers the problem of cross-modal retrieval in the field of E-commerce. Sharing similar insight with our work, the authors notice the model bias towards some specific word tokens in CLIP, and introduce causal inference to align the text encoder with e-commerce domain knowledge. K3M [73] focuses on the modality-missing and modality-noise problem and introduces knowledge modality into E-commerce tasks. DeVLBert [69] studies the spurious correlations between different modalities and adjusts the conditional probability of image tokens and word tokens. KaleidoBERT [74] focuses on image-text coherence by introducing several novel self-supervised tasks. Compared to previous approaches, we are the first to incorporate multi-modal knowledge graphs into the pre-training process, and effectively enhance the model perception on semantic relations between visual and language concepts. Knowledge Graph. Knowledge graph is first introduced in the field of natural language processing, and the knowledge graph embedding approaches have been successful on capturing the semantics of symbols (entities and relations) and achieving impressive results on a wide range of real-world applications including text understanding [13, 66], recommendation system [16, 56] and natural language question answering [18, 70]. On the other hand, scene graphs represent a type of graphstructured data in computer vision, where the visual concepts in the image are connected with semantic relations. Scene graphs emphasize the fine-grained semantic features for images and are widely adopted in various downstream tasks, including scene graph generation [65], and Scene Graph Parsing [68]. Besides scene graph, knowledge graph is also adopted in other computer vision tasks, including image classification [22], panoptic segmentation [62], and image captioning [71]. On this basis, multi-modal knowledge graph earns wide concerns in recent years. Considering the natural alignment between different data modalities, multi-modal knowledge graphs have been widely adopted in various graph-based tasks including link prediction [3, 30], entity classification [61], while also showing great potential on out of graph applications like visual question answering [20, 41] and recommendation systems [49, 52]. 
3 Contrastive Language-Image Pre-training (CLIP) We first provide a brief review of model architectures and training settings in CLIP. CLIP uses two separate models for image encoder and text encoder respectively. For text inputs, a 12-layer Transformer is adopted with 512 width and 8 attention heads. Raw texts are first converted using byte pair encoding [40] technique under a vocabulary size of 49,152. The text sequence length is capped at 76 and added by a positional encoding before being sent into the text encoder. On the other hand, CLIP has different versions of image encoder with ResNet-based and Vision Transformer-based architectures. As the following researches have demonstrated the better performances of Vision Transformer models, we only consider Transformer-based image encoders in this paper. Similar to the text input, images are first converted to patches, and added by a positional encoding. At the last stage of both encoders, a global pooling function is adopted to compress the feature map into a single feature, which serves as the representation of the whole image/text sequence. The cosine distance of the image and text features is computed as the similarity of the data pair. For training supervision, a contrastive loss is adopted to maximize the similarity of matched pairs while minimizing the similarity of unmatched pairs. Given a batch of N data pairs {Ii,Ti}Ni=1, where Ii and T represents the ith image and text respectively, the loss function can be parameterized as: L = −1 2 N∑ i=1 ( log exp(cos(fI(Ii), fT(Ti))/τ)∑N j=1 exp(cos(fI(Ii), fT(Tj))/τ) + log exp(cos(fI(Ii), fT(Ti))/τ)∑N j=1 exp(cos(fI(Ij), fT(Ti))/τ) ) , (1) where fI and fT correspond to image and text encoders respectively, cos(·) denotes the cosine similarity between the inputs, and τ is a learnable temperature initialized at 0.07. This simple training framework actually brings several concerns that need to be addressed. First, the pre-training framework fails to model the semantic information of inputs due to the simplicity of the data structure. This results in inferior performances on tasks that require reasoning ability, e.g., visual question answering and visual commonsense reasoning. Second, the image and text features reside in separate spaces, which makes it difficult to model the interactions between different modalities. Third, the massive time and resource consumption in the training procedure set restrictions on performing a full pre-training schedule from scratch. 4 Knowledge-CLIP As we have summarized above, there are several concerns that hinder the transferability of CLIP and potential improvements on model performances. In this paper, we propose a novel pre-training framework based on knowledge graphs, that addresses the limitation of the original CLIP model from several perspectives: (1) we introduce knowledge graphs into the training dataset where the graph-structured data and semantic relations between concepts enable the model to extract semantic features and establish semantic connection across inputs; (2) A multi-modal encoder is added on top of the current image and text encoders to fuse the features from different modalities, and model the joint distribution between inputs; (3) A continuous learning strategy based on the pre-trained model of CLIP is adopted which avoids the massive computation cost in the pre-training procedure, and enhance the generalization power of the model efficiently. We introduce our framework in detail in the following sections, and show the overview in Fig. 
2. 4.1 Data Preparation Different from raw image-text pairs adopted in the original CLIP, our model takes knowledge graphs as input. A knowledge graph can be defined as a directed graph G = {ξ,R, TR}, where ξ, R correspond to sets of entities and relations, and TR represent the set of relation triplets. A triplet (h, r, t) ∈ TR denotes that entity h ∈ ξ has relation r ∈ R with entity t ∈ ξ. As illustrated in Fig. 3, we pre-train our model on three types of knowledge graphs, including multi-modal knowledge graph, scene graph, and language-based knowledge graph. Among these, relations are constantly described in language tokens, where the entities are from different modalities in different forms. For multi-modal knowledge graph, the entities contain both illustrative images and language descriptions. Through representing the same entity under various modalities and connecting entities with relations, it helps to build semantic connections between vision and language concepts. In practice, language and vision descriptions are randomly chosen for each entity. In this way, the triplet set TR contains different forms including (Img, Rel, Img), (Img, Rel, Text), and (Text, Rel, Text), providing rich information across modalities while also enhancing perceptions within modalities. Different from multi-modal knowledge graph, scene graph extracts visual concepts (mainly objects) for each image, and connects them with predefined semantic relations describing relative locations, actions, etc. Therefore, the entities in the scene graph correspond to a certain region in an image, with the triplet form of (Img, Rel, Img). We practically use the selected regions as the input and discard the irrelevant parts. As two entities in the same triplet denote different regions in the same image, it forces the model to extract more fine-grained features. Lastly, language-based knowledge graph connects words and phrases of natural language with labeled edges. It is built on only language modality with the triplet form of (Text, Rel, Text), while helping to build semantic alignment within word tokens. 4.2 Model Architecture The model architecture and the training framework are illustrated in Fig. 2(A). Specifically, we first process the inputs into token sequences with modality-specific tokenizers. The BPE tokenzier [40] is adopted for language inputs, while image inputs are sliced into non-overlapped patches and converted into a sequence of patches following ViT [12]. For convenient processing, we set the length of the image sequence and text sequence as lI and lT respectively for all inputs. To preserve the relative position information in the input, learnable positional encodings are added to the corresponding sequences before being sent to the model. Two separate image encoder fI(·) and text encoder fT(·) are then adopted to extract features from raw inputs. For a given triplet (h, r, t), the entities h and t are sent to the encoders with respect to their modalities (image or text). The relation r, which is represented by language tokens, is sent to text encoder similar to text entity. Compared to the model structure in CLIP, we introduce a modification to better adapt our framework. Specifically, vanilla CLIP models use a pooling function at the last layer of two encoders to compress the feature map into a global representation. 
Namely, for an input u ∈ RL×di , where L and di denote the sequence length and feature dimension, the output of the encoder can be formulated as: xu = f(u) ∈ RL×do , x̄u = Pool(xu) ∈ Rdo , (2) where f represents the feature extraction module, Pool(·) denotes the pooling function, and do is the output dimension. Though efficient, it also leads to inevitable information loss in the local region, especially for the image inputs. Therefore, we remove the pooling functions for image and text entities to preserve the local information, and use xu ∈ RL×do as the extracted feature. The relation, on the other hand, is normally under a limited sequence length, e.g., one or two word tokens, where the information density is smaller than entities. Therefore, we retain the pooling function for relation input and use x̄u ∈ Rdo as the extracted features. In this way, we have extracted the features defined as (xh, x̄r, xt), which correspond to the elements in the input triplet (h, r, t). To model the joint distribution of different elements in the triplet, we consider a multi-modal encoder TransEncoder(·) to fuse the features from different sources. Specifically, we first concatenate all the features in the triplet into a single sequence and use a head token <head> at the beginning of the sequence. To emphasize the status of the tokens in the sequence, we consider additional learnable encodings for each element h, r, t in the triplet: X(h, r, t) = [<head>, xh+PEh, x̄r+PEr, xt+PEt]. (3) After processing by the multi-modal encoder, the feature of the head token <head> finally serves as the representation of the whole sequence: Y (h, r, t) = TransEncoder(X(h, r, t))[0, :]. (4) Also, representation for relation is extracted from the corresponding token: R(h, r, t) = TransEncoder(X(h, r, t))[1 + len(xh), :]. (5) 4.3 Training Targets Considering the unique data structure of knowledge graphs, we mainly adopt two types of training targets in our framework, including triplet-based loss and graph-based loss as illustrated in Fig. 2(B). Besides, a knowledge distillation loss is also considered due to the continuous learning strategy adopted in our framework. Triplet-based loss considers a batch of triplets as the input and supervises the training of our model by estimating the joint distribution of elements in the triplets. Inspired by the mask prediction technique that models the distribution of masked tokens conditioned on the unmasked regions, we similarly mask the elements in the triplets and predict the distribution with the help of a multi-modal encoder. Specifically, for incomplete triplets where certain elements are missing in the input, the concatenated sequence can be similarly derived as in Eq. 3 by masking the corresponding feature. For example, the concatenated sequence for an input (h, r, -) can be represented as: X(h, r, -) = [<head>, xh+PEh, x̄r+PEr, 0]. (6) On this basis, given a set of input D = {(hi, ri, ti)}Ni=1, we first model the distribution when one of the entities, i.e., ti, is masked, and derive the Entity-Entity (E2E) Loss by minimizing the negative log-likelihood: −E(h,r)∼Dlog(P (xt|xh, x̄r)). (7) We practically approximate the distribution P (xt|xh, x̄r) as the cosine similarity of P (xt) and P (xh, x̄r), and defined the loss function as: LE2E = − N∑ i=1 log( exp(cos(Y (-, -, ti), Y (hi, ri, -))/τ)∑ j exp(cos(Y (-, -, ti), Y (hj , rj , -))/τ) ). 
(8) We also model the distribution when the relation in the triplet is masked, and similarly derive the Entity-Relation (E2R) Loss: −E(h,t)∼Dlog(P (x̄r|xh, xt)). (9) Different from E2E loss, the relations in the triplets are defined in a limited set of relation groups. Therefore, we instead extract the representation of relation through an auxiliary two-layer MLP network, and model the objective as a classification problem from a predefined set of relation labels R. In this way, the loss function can be defined as: LE2R = − N∑ i=1 ∑ r∈R 1(r=ri)log(y(x̄ri)), where y(x̄ri) = MLP(R(hi, -, ti)), (10) is extracted from an MLP model followed by the output of multi-modal encoder defined in Eq. (5). Graph-based loss. We also take advantage of the graph structure in knowledge graph datasets, and adopt a graph neural network to extract deeper structural information among entities. We propagate information through connected edges in the graph, and update entity representations with aggregated feature. Specifically, for a graph neural network with L layers, the update function for the lth layer can be formulated as: G(l)(t) = E{hi,ri,t}∈TR g (l−1)(R(hi, -, t))G(l−1)(hi), G0(t) = Y (-, -, t), (11) where g(l)(R(hi, -, t)) = W (l)R(hi, -, t), (12) calculates the aggregation weights by relation representation R(hi, -, t) with a learnable matrix W (l). Finally, we define the Graph-Entity(G2E) Loss by computing the cosine similarity of entity features before and after the propagation procedure in the graph: LG2E = − 1 Nξ ∑ ti∈ξ log( exp(cos(Y (-, -, ti), G(L)(ti))/τ)∑ tj exp(cos(Y (-, -, ti), G(L)(tj))/τ) ). (13) Continuous Learning. Large-scale pre-training usually requires massive computation resources which makes it highly inefficient when training from scratch. Therefore, to inject the semantic information in an efficient manner, we consider training our model based on the pre-trained weights from the original CLIP. This powerful initialization promotes the convergence of our model and greatly enhances the training efficiency. However, naively extending the training process with new data leads to severe forgetting problem that hampers the performance of the original models. To address this limitation, we adopt simple solutions to maintain CLIP performances while improving its ability to extract semantic features from knowledge graphs. (1) Besides the knowledge graph datasets, we also train our model on several widely adopted image-text datasets that share a similar data distribution with the training data in CLIP. To better fit our pre-training framework, we convert the original image-text pair into the form of triplets, with specifically designed relations ’image of’ and ’caption of’. (2) We also use the original CLIP model as the teacher, and use an auxiliary loss LKD to measure the KL distance between the output of CLIP and our model. Overall, the final pre-training objective of Knowledge-CLIP is formulated as: L = LE2E + LE2R + LG2E + LKD. (14) 5 Experiments 5.1 Implementation Details Experimental Setup. In all the experiments, we use the same model structure as CLIP [38]. A 12-layer Transformer model with 512 width is adopted for text encoder, and ViT-L/14 is adopted for image encoder. For text and image encoder, we use the pre-trained weights in the original CLIP as the initialization. For the multi-modal encoder, we consider a 4 layer Transformer model with 1024 width. The rate for drop path is set as 0.1 during training. 
As the added multi-modal encoder is trained from random initialization, we decrease the learning rate for the pre-trained weights from CLIP to achieve a more balanced step in the optimization. We train Knowledge-CLIP with an initial learning rate of 1e-5 for image and text encoders, and 1e-3 for the multi-modal encoder. Cosine learning rate with linear warmup is used in the training schedule. Weight decay and gradient clip are also adopted. See more details in the supplemental material. Pre-train Dataset. Three knowledge graph datasets are adopted in the pre-training process. VisualSem [2] is a high-quality multi-modal knowledge graph dataset for vision and language concepts, including entities with multilingual glosses, multiple illustrative images, and visually relevant relations, covering a total number of 90k nodes, 1.3M glosses and 938k images. 13 semantic relations are used to connect different entities in the graph, while the entities in VisualSem are linked to Wikipedia articles, WordNet [34], and high-quality images from ImageNet [10]. Visual Genome [24] is a knowledge-based scene graph dataset that connects structured image concepts with semantic relations. Visual Genome serves as the benchmark for various vision tasks, e.g., visual grounding, and scene graph generation. ConceptNet [46] is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources including expert-created resources and crowd-sourcing built on only language modality. Besides the three knowledge graph datasets, we also train our model on two widely adopted imagetext datasets that share the similar data distribution with the training data in CLIP. We practically add COCO Caption [8] and CC3M [42] to the training set, while large-scale datasets like CC12M [4] or YFCC [21] are not considered to maintain training efficiency. Downstream Task. To validate the effectiveness of our framework, we conduct experiments on various downstream tasks, including multi-modal tasks like text and image retrieval, visual question answering, and uni-modal tasks like image classification and natural language understanding. 5.2 Multi-modal Tasks Visual question answering / Visual Entailment. We also validate the effectiveness of Knowledge-CLIP on other vision-language tasks, including VQA [15] and SNLI-VE [64]. We show the comparison results in Tab. 2. Compared to competitive baselines including VILLA [14] and ALBEF [26], Knowledge-CLIP with ViT-L/14 shows better performances under all settings, while the smaller model also achieves competitive re- sults. Compared to the original CLIP model, our pre-trained model practically improves its transferability on downstream tasks, especially on the datasets like VQA that requires reasoning ability. Image and text retrieval. We first conduct experiments on Flickr30k [37] and COCO Caption [8] dataset to show the performances of our model on image-text retrieval tasks. Given input sets X and Y of images and texts, we use Knowledge-CLIP to extract features for each input, and model the joint probability with the cosine similarity between image and text pairs. We summarize the comparison results of Knowledge-CLIP with competitive baselines in Tab. 1. It is shown that our model consistently achieves better results over the original CLIP on both datasets, while comparable with competitive baselines like OSCAR. 5.3 Uni-modal Tasks Image Classification. 
To further demonstrate the generalization power of Knowledge-CLIP, we compare the performances of pre-train models on the ImageNet classification task [10]. We summarize the comparison results in Tab. 3, and show that Knowledge-CLIP can also handle vision tasks well. We argue the improvements over baselines may attribute to the scene graphs in our pre-training dataset, which emphasize the visual concepts in the images. Language Understanding. We validate the generalization performance of Knowledge-CLIP for language understanding tasks on the widely adopted GLUE dataset [55]. Specifically, we conduct experiments on 7 tasks in GLUE and summarize the comparison results in Tab. 4. It is shown that our model achieves comparable performances with competitive baseline models. Also, for tasks like QQP and MNLI that require sentence-pair matching, Knowledge-CLIP shows higher performances, due to the existence of language triplets in the pre-training dataset. 5.4 Ablation Studies To validate the effectiveness of the components in our work, we carefully design several settings, including (1) CLIP+continuous learning: we train vanilla CLIP (pretrained weights as initialization) on knowledge datasets adopted in our work; (2) Knowledge-CLIP-(t1, t2, t3): we remove the training objectives respectively in our work to analyze the contribution of each loss. For all experiments, we adopt a smaller model (ViT-B/32) as the image encoder of CLIP in the ablation study. Also, it is worth noticing that KD loss plays a vital role in the continuous learning scheme, without which will lead to a significant performance drop due to the model forgetting problem. Therefore, we use KD loss in all the ablation settings for a fair comparison. We show the comparison results on two representative tasks in Tab. 5, including the image/text retrieval task on Flickr30K, and the visual question answering task in VQA. Several observations can be made from the ablation: (1) All three training objectives (E2E, E2R, G2E) contribute to improving the model performance. Training the model without any of the objectives leads to inferior performances on downstream tasks. We argue that the E2E, E2R, and G2E loss promote the model from different perspectives by focusing on semantic understanding of concepts, complicated relations between entities, and structural information. Therefore, all three objectives are necessary for the framework and contribute to the improvement respectively. (2) By comparing the first and second row, we can see that simply training the CLIP model with extra time and data fails to improve the generalization performance. It also demonstrates that the improvements mainly come from the injected knowledge information rather than the continuous learning scheme. We also conduct an ablation study on the KD loss adopted for continuous learning and summarize the results in Tab. 6. The model achieves lower results after removing the KD loss, indicating its vital role in the continuous learning scheme. We argue the reason for this phenomenon is that the model suffers from the forgetting problem, which is widely spotted in the field of lifelong learning and continuous learning. 5.5 Analysis on particular semantics We also conduct experiments on carefully selected data which may better reflect how a visionlanguage model understands a particular type of input. Specifically, we select questions in the VQA dataset that contains (1) Negations; (2) Color attributes; (3) Position attributes; (4) Sizes. 
We summarize the comparison results of CLIP and our model on these sub-datasets in Tab. 7. As we can observe, our model achieves consistent and significantly better results than CLIP on these specially designed sub-datasets. Regarding questions with negation, our model achieves 2.1% higher accuracy. Regarding color and position attributes, our model shows even larger improvements. We believe these comparisons across different 'semantic domains' demonstrate the effectiveness of injecting knowledge information into the current vision-language pre-training framework, which substantially enhances the model's semantic understanding. 6 Conclusion In this paper, we propose a novel vision-language pre-training framework that incorporates knowledge information to model the semantic connections between vision and language entities. We introduce three types of graph-structured datasets into the training process, and adopt a multi-modal encoder to model the joint distribution of entities and their semantic relations. Extensive experiments on various downstream tasks including multi-modal, uni-modal, and graph-based tasks validate the transfer and generalization ability of our model. Our approach is currently limited to injecting knowledge information into CLIP models. However, our training objectives and new knowledge graph datasets are technically compatible with other large-scale pre-training frameworks. We will explore the possibility of further applications in the future. 7 Acknowledgement This work is supported in part by the National Key R&D Program of China under Grant 2020AAA0105200, the National Natural Science Foundation of China under Grant 62022048, the Guoqiang Institute of Tsinghua University, and the Beijing Academy of Artificial Intelligence. We also appreciate the generous donation of computing resources by High-Flyer AI.
1. What is the focus and contribution of the paper regarding vision-language pre-training? 2. What are the strengths of the proposed approach, particularly in encoding multiple modalities? 3. What are the weaknesses of the paper, especially regarding the lack of analysis? 4. Do you have any questions or concerns regarding the training objectives and their effectiveness? 5. What are the limitations of the paper, and what additional analysis would be helpful to provide a better understanding of the approach's effectiveness?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper presents a vision-language pre-training framework that incorporates knowledge information by pre-training on multiple knowledge graph datasets (i.e., VisualSem, Visual Genome, and ConceptNet). They unify all the possible triples of different modalities (e.g., image-relation-text, image-relation-image, text-relation-text), encode each modality with its corresponding CLIP encoder (i.e., image encoder, text encoder), and concatenate all the embeddings with the special token <head>. Then, a multi-modal encoder (i.e., a 4-layer transformer model) encodes the sequence and uses the embedding of the <head> token as a representation. For the training objectives, the paper utilizes three different objectives including entity-entity, entity-relation, and graph-entity loss. The pre-trained model consistently outperforms baselines including UNITER, OSCAR, CLIP, and ALBEF on various multi-modal tasks.

Strengths And Weaknesses
Strengths
- It is interesting how the framework encodes multiple features of different modalities into one representation.
- The performance shows the effectiveness of the approach.
Weaknesses
- Lack of analysis
  - Want to see the performance with each training objective for understanding the effectiveness of each objective (e.g., without L_E2E, without L_E2R, without L_G2E).
  - Want to see the impact of L_KD, which is the loss term of KL distance between the original CLIP and the proposed model.
- Lack of intuition for each objective
  - E2E: Want to see the intuition behind exploiting contrastive learning between the (head, relation)-masked triple and the (tail)-masked triple.
  - G2E: Want to see the intuition behind applying contrastive learning between before and after graph propagation on (tail).

Questions
Figure 2: L_E2L -> L_E2R ?

Limitations
I really like the approach but the lack of analysis is the limitation of this paper. I am willing to increase the score when more analysis of the training objectives is provided.
NIPS
Title Contrastive Language-Image Pre-Training with Knowledge Graphs Abstract Recent years have witnessed the fast development of large-scale pre-training frameworks that can extract multi-modal representations in a unified form and achieve promising performances when transferred to downstream tasks. Nevertheless, existing approaches mainly focus on pre-training with simple image-text pairs, while neglecting the semantic connections between concepts from different modalities. In this paper, we propose a knowledge-based pre-training framework, dubbed Knowledge-CLIP, which injects semantic information into the widely used CLIP model [38]. Through introducing knowledge-based objectives in the pre-training process and utilizing different types of knowledge graphs as training data, our model can semantically align the representations in vision and language with higher quality, and enhance the reasoning ability across scenarios and modalities. Extensive experiments on various vision-language downstream tasks demonstrate the effectiveness of Knowledge-CLIP compared with the original CLIP and competitive baselines. 1 Introduction Large-scale vision-language pre-training has attracted wide research interests in recent years [9, 26, 38, 72]. Different from training independent models for each specific task, pre-trained models take the analogy of human biological intelligence system, trying to perceive the world from various data modalities and handle comprehensive tasks. Specifically, it aims to provide a unified inference paradigm that simultaneously learns representations for multi-modal data and can easily transfer to a variety of downstream tasks. Benefiting from the accessibility of massive image-text pairs from the web, the vision-language pre-training can leverage a broader source of supervision, and effectively improves the model’s generalization power. Early attempts on vision-language pre-training mainly focus on detecting objects in the images and aligning the corresponding word tokens with object regions [9, 28, 50]. Though effective, the entanglement with the concept of objects, and the additional resources for pre-trained object detectors impose restrictions on real-world applications. One of the pioneer works, CLIP [38], extends the scale of the pre-training dataset to 400 million image-text pairs, and learns representations by directly matching raw text with the corresponding image. Through a contrastive-based training scheme, CLIP learns visual concepts under a large vocabulary which significantly improves the model performances on various downstream tasks. Taking inspiration from CLIP, the following researches further extend the work from several perspectives, including data modality [72], downstream tasks [57], and training data efficiency [19, 44]. ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Although showing promising results, the current pre-training frameworks also suffer from limitations. Specifically, the data pairs for pre-training are organized in the simplest manner, where only the descriptions of matched and unmatched are used to represent the relation between a given image and text pair. This usually leads to a degenerated scenario, where the model tends to rely on the co-occurrence of inputs instead of their semantic meanings. We give a toy example in Fig. 1 by evaluating the zero-shot transfer performance of CLIP on the ImageNet dataset [10] with the templates ’a photo of a {}’ and ’not a photo of a {}’. 
It is shown that the distributions of CLIP outputs under two templates are quite similar, suggesting that the current model fails to understand the semantic meaning of word tokens. As a result, the transferability of the model is restricted, and tends to show worse performances on tasks that require reasoning ability, e.g., visual question answering. To address the limitation of pre-trained models on semantic perceiving, we resort to the technique of knowledge graph, which has been widely studied in the field of natural language processing [7, 58]. Knowledge graph (KG) is a large-scale semantic network that comprises entities as nodes and semantic relations as edges. Through organizing data in a graph structure, knowledge graphs provide rich information on describing the relations between entities and enable a reasoning process through the whole graph. These advantages over regular-structured data are favorable on various tasks including question-answering [18, 70], relation prediction [29, 43] and knowledge reasoning [6, 59]. In recent years, knowledge graph has also been investigated in the field of computer vision, e.g., scene graph [65], and the integration of both language and image [2]. This bridges the gap between different modalities in the knowledge graph, which inspires us to explore a new knowledge-based pre-training framework, and inject semantic information into simple image-text pairs. In this paper, we propose a novel vision-language pre-training approach, dubbed Knowledge-CLIP, by constructing a knowledge augmented pre-training framework based on the widely used CLIP models. As illustrated in Fig. 2, we follow the structure of CLIP, and use two Transformer-based models as image and text encoders respectively. These two encoders take entities and relations in the knowledge graph as input and extract raw features for both entities and relations. Notably, entities can be in the form of image/text, while the relations are constantly described by language tokens. Then, a multi-modal Transformer encoder is adopted to fuse the entity features conditioned on their relations. In this way, the pre-trained model is pushed to concentrate on understanding semantic relations between visual and word concepts, thereby establishing strong semantic connections between vision and language modalities. To additionally improve the training efficiency and avoid the massive computation cost in the pretraining procedure, we adopt a simple continuous learning strategy by training our model based on the pre-trained weights of CLIP. This provides a possibility of efficiently promoting the model performance of CLIP with low training resources. We train our model on three knowledge graph datasets, namely Visual-Genome [24] (scene graph), ConceptNet [46] (language-based graph), and VisualSem [2] (multi-modal graph), and also adopt part of datasets from CLIP to avoid the model forgetting problem. With the knowledge-enhanced pre-training, Knowledge-CLIP achieves consistent improvements over the original CLIP models on various vision and language downstream tasks. 2 Related works Large-scale pre-training. Benefited from the development of Transformer in both vision [35, 63, 36] and language [54] tasks, large-scale pre-training framework has received wide concerns in recent years and shown promising results in the field of computer vision and natural language processing. 
GPT [39] is one of the pioneer works for language pre-training which optimizes the probability of output based on previous words in the sequence. BERT [11] adopts the masked language modeling technique and predicts the masked tokens conditioned on the unmasked ones. Similarly, computer vision society also witnesses the development of pre-training models thanks to the emergence of large-scale image datasets. IGPT [5] proposes a generative pre-training technique and shows promising results on classification task. MAE [17] adopts a similar pre-training scheme as BERT and predicts the masked regions of an image with unmasked ones. Multi-modal pre-training bears differences from the aforementioned frameworks and requires the alignment between various data modalities. Using enormous image-text pairs collected from Internet, vision-language models show significant improvements on various downstream tasks. Among these approaches, various pre-training scheme is adopted, including contrastive learning [1, 27, 31], masked language modeling [47, 51], and masked region modeling [9]. The problem of semantic misunderstanding has also been investigated by previous works. EICLIP [33] considers the problem of cross-modal retrieval in the field of E-commerce. Sharing similar insight with our work, the authors notice the model bias towards some specific word tokens in CLIP, and introduce causal inference to align the text encoder with e-commerce domain knowledge. K3M [73] focuses on the modality-missing and modality-noise problem and introduces knowledge modality into E-commerce tasks. DeVLBert [69] studies the spurious correlations between different modalities and adjusts the conditional probability of image tokens and word tokens. KaleidoBERT [74] focuses on image-text coherence by introducing several novel self-supervised tasks. Compared to previous approaches, we are the first to incorporate multi-modal knowledge graphs into the pre-training process, and effectively enhance the model perception on semantic relations between visual and language concepts. Knowledge Graph. Knowledge graph is first introduced in the field of natural language processing, and the knowledge graph embedding approaches have been successful on capturing the semantics of symbols (entities and relations) and achieving impressive results on a wide range of real-world applications including text understanding [13, 66], recommendation system [16, 56] and natural language question answering [18, 70]. On the other hand, scene graphs represent a type of graphstructured data in computer vision, where the visual concepts in the image are connected with semantic relations. Scene graphs emphasize the fine-grained semantic features for images and are widely adopted in various downstream tasks, including scene graph generation [65], and Scene Graph Parsing [68]. Besides scene graph, knowledge graph is also adopted in other computer vision tasks, including image classification [22], panoptic segmentation [62], and image captioning [71]. On this basis, multi-modal knowledge graph earns wide concerns in recent years. Considering the natural alignment between different data modalities, multi-modal knowledge graphs have been widely adopted in various graph-based tasks including link prediction [3, 30], entity classification [61], while also showing great potential on out of graph applications like visual question answering [20, 41] and recommendation systems [49, 52]. 
3 Contrastive Language-Image Pre-training (CLIP) We first provide a brief review of model architectures and training settings in CLIP. CLIP uses two separate models for the image encoder and text encoder respectively. For text inputs, a 12-layer Transformer is adopted with 512 width and 8 attention heads. Raw texts are first converted using the byte pair encoding [40] technique under a vocabulary size of 49,152. The text sequence length is capped at 76, and a positional encoding is added before the sequence is sent into the text encoder. On the other hand, CLIP has different versions of the image encoder with ResNet-based and Vision Transformer-based architectures. As subsequent studies have demonstrated the better performance of Vision Transformer models, we only consider Transformer-based image encoders in this paper. Similar to the text input, images are first converted to patches, and a positional encoding is added. At the last stage of both encoders, a global pooling function is adopted to compress the feature map into a single feature, which serves as the representation of the whole image/text sequence. The cosine distance of the image and text features is computed as the similarity of the data pair. For training supervision, a contrastive loss is adopted to maximize the similarity of matched pairs while minimizing the similarity of unmatched pairs. Given a batch of N data pairs $\{I_i, T_i\}_{i=1}^{N}$, where $I_i$ and $T_i$ represent the $i$-th image and text respectively, the loss function can be parameterized as:

$$\mathcal{L} = -\frac{1}{2}\sum_{i=1}^{N}\left(\log\frac{\exp(\cos(f_I(I_i), f_T(T_i))/\tau)}{\sum_{j=1}^{N}\exp(\cos(f_I(I_i), f_T(T_j))/\tau)} + \log\frac{\exp(\cos(f_I(I_i), f_T(T_i))/\tau)}{\sum_{j=1}^{N}\exp(\cos(f_I(I_j), f_T(T_i))/\tau)}\right), \quad (1)$$

where $f_I$ and $f_T$ correspond to the image and text encoders respectively, $\cos(\cdot)$ denotes the cosine similarity between the inputs, and $\tau$ is a learnable temperature initialized at 0.07. This simple training framework actually brings several concerns that need to be addressed. First, the pre-training framework fails to model the semantic information of inputs due to the simplicity of the data structure. This results in inferior performances on tasks that require reasoning ability, e.g., visual question answering and visual commonsense reasoning. Second, the image and text features reside in separate spaces, which makes it difficult to model the interactions between different modalities. Third, the massive time and resource consumption in the training procedure sets restrictions on performing a full pre-training schedule from scratch. 4 Knowledge-CLIP As we have summarized above, there are several concerns that hinder the transferability of CLIP and potential improvements on model performances. In this paper, we propose a novel pre-training framework based on knowledge graphs that addresses the limitations of the original CLIP model from several perspectives: (1) we introduce knowledge graphs into the training dataset, where the graph-structured data and semantic relations between concepts enable the model to extract semantic features and establish semantic connections across inputs; (2) a multi-modal encoder is added on top of the current image and text encoders to fuse the features from different modalities and model the joint distribution between inputs; (3) a continuous learning strategy based on the pre-trained model of CLIP is adopted, which avoids the massive computation cost of the pre-training procedure and enhances the generalization power of the model efficiently. We introduce our framework in detail in the following sections, and show the overview in Fig. 2.
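For reference, below is a minimal PyTorch sketch of the symmetric contrastive objective in Eq. (1). The function name and the use of a cross-entropy over the similarity matrix are standard-practice assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats: torch.Tensor,
                          text_feats: torch.Tensor,
                          tau: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss of Eq. (1) for N matched image-text pairs.

    image_feats, text_feats: (N, d) pooled features; the i-th image matches the i-th text.
    Note: Eq. (1) sums over the batch while cross_entropy averages, so the two
    differ only by a constant factor of N.
    """
    image_feats = F.normalize(image_feats, dim=-1)  # cosine similarity = dot product of unit vectors
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / tau     # (N, N) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)     # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets) # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

# Example: loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```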
4.1 Data Preparation Different from the raw image-text pairs adopted in the original CLIP, our model takes knowledge graphs as input. A knowledge graph can be defined as a directed graph G = {ξ, R, TR}, where ξ and R correspond to sets of entities and relations, and TR represents the set of relation triplets. A triplet (h, r, t) ∈ TR denotes that entity h ∈ ξ has relation r ∈ R with entity t ∈ ξ. As illustrated in Fig. 3, we pre-train our model on three types of knowledge graphs, including multi-modal knowledge graph, scene graph, and language-based knowledge graph. Among these, relations are consistently described by language tokens, while the entities come from different modalities in different forms. For the multi-modal knowledge graph, the entities contain both illustrative images and language descriptions. Through representing the same entity under various modalities and connecting entities with relations, it helps to build semantic connections between vision and language concepts. In practice, language and vision descriptions are randomly chosen for each entity. In this way, the triplet set TR contains different forms including (Img, Rel, Img), (Img, Rel, Text), and (Text, Rel, Text), providing rich information across modalities while also enhancing perception within modalities. Different from the multi-modal knowledge graph, a scene graph extracts visual concepts (mainly objects) for each image and connects them with predefined semantic relations describing relative locations, actions, etc. Therefore, the entities in the scene graph correspond to a certain region in an image, with the triplet form of (Img, Rel, Img). We practically use the selected regions as the input and discard the irrelevant parts. As two entities in the same triplet denote different regions in the same image, this forces the model to extract more fine-grained features. Lastly, the language-based knowledge graph connects words and phrases of natural language with labeled edges. It is built on the language modality only, with the triplet form of (Text, Rel, Text), while helping to build semantic alignment within word tokens. 4.2 Model Architecture The model architecture and the training framework are illustrated in Fig. 2(A). Specifically, we first process the inputs into token sequences with modality-specific tokenizers. The BPE tokenizer [40] is adopted for language inputs, while image inputs are sliced into non-overlapping patches and converted into a sequence of patches following ViT [12]. For convenient processing, we set the lengths of the image sequence and text sequence as lI and lT respectively for all inputs. To preserve the relative position information in the input, learnable positional encodings are added to the corresponding sequences before being sent to the model. Two separate encoders, an image encoder fI(·) and a text encoder fT(·), are then adopted to extract features from the raw inputs. For a given triplet (h, r, t), the entities h and t are sent to the encoders corresponding to their modalities (image or text). The relation r, which is represented by language tokens, is sent to the text encoder in the same way as a text entity. Compared to the model structure in CLIP, we introduce a modification to better adapt it to our framework. Specifically, vanilla CLIP models use a pooling function at the last layer of the two encoders to compress the feature map into a global representation.
Namely, for an input $u \in \mathbb{R}^{L \times d_i}$, where $L$ and $d_i$ denote the sequence length and feature dimension, the output of the encoder can be formulated as:

$$x_u = f(u) \in \mathbb{R}^{L \times d_o}, \qquad \bar{x}_u = \mathrm{Pool}(x_u) \in \mathbb{R}^{d_o}, \quad (2)$$

where $f$ represents the feature extraction module, $\mathrm{Pool}(\cdot)$ denotes the pooling function, and $d_o$ is the output dimension. Though efficient, it also leads to inevitable information loss in local regions, especially for image inputs. Therefore, we remove the pooling functions for image and text entities to preserve the local information, and use $x_u \in \mathbb{R}^{L \times d_o}$ as the extracted feature. The relation, on the other hand, normally has a limited sequence length, e.g., one or two word tokens, so its information density is lower than that of the entities. Therefore, we retain the pooling function for the relation input and use $\bar{x}_u \in \mathbb{R}^{d_o}$ as the extracted feature. In this way, we have extracted the features $(x_h, \bar{x}_r, x_t)$, which correspond to the elements of the input triplet $(h, r, t)$. To model the joint distribution of the different elements in the triplet, we consider a multi-modal encoder $\mathrm{TransEncoder}(\cdot)$ to fuse the features from different sources. Specifically, we first concatenate all the features in the triplet into a single sequence and place a head token <head> at the beginning of the sequence. To emphasize the status of the tokens in the sequence, we add learnable encodings for each element $h, r, t$ in the triplet:

$$X(h, r, t) = [\text{<head>},\; x_h + PE_h,\; \bar{x}_r + PE_r,\; x_t + PE_t]. \quad (3)$$

After processing by the multi-modal encoder, the feature of the head token <head> finally serves as the representation of the whole sequence:

$$Y(h, r, t) = \mathrm{TransEncoder}(X(h, r, t))[0, :]. \quad (4)$$

Also, the representation of the relation is extracted from the corresponding token:

$$R(h, r, t) = \mathrm{TransEncoder}(X(h, r, t))[1 + \mathrm{len}(x_h), :]. \quad (5)$$

4.3 Training Targets Considering the unique data structure of knowledge graphs, we mainly adopt two types of training targets in our framework, namely triplet-based losses and a graph-based loss, as illustrated in Fig. 2(B). Besides, a knowledge distillation loss is also considered due to the continuous learning strategy adopted in our framework. Triplet-based loss considers a batch of triplets as input and supervises the training of our model by estimating the joint distribution of elements in the triplets. Inspired by the mask prediction technique that models the distribution of masked tokens conditioned on the unmasked regions, we similarly mask elements in the triplets and predict their distribution with the help of the multi-modal encoder. Specifically, for incomplete triplets where certain elements are missing from the input, the concatenated sequence can be derived as in Eq. 3 by masking the corresponding feature. For example, the concatenated sequence for an input $(h, r, \text{-})$ can be represented as:

$$X(h, r, \text{-}) = [\text{<head>},\; x_h + PE_h,\; \bar{x}_r + PE_r,\; 0]. \quad (6)$$

On this basis, given a set of inputs $D = \{(h_i, r_i, t_i)\}_{i=1}^{N}$, we first model the distribution when one of the entities, i.e., $t_i$, is masked, and derive the Entity-Entity (E2E) loss by minimizing the negative log-likelihood:

$$-\mathbb{E}_{(h,r)\sim D}\,\log P(x_t \mid x_h, \bar{x}_r). \quad (7)$$

We practically approximate the distribution $P(x_t \mid x_h, \bar{x}_r)$ with the cosine similarity between the representations of the tail-only and tail-masked triplets, and define the loss function as:

$$\mathcal{L}_{E2E} = -\sum_{i=1}^{N} \log\frac{\exp(\cos(Y(\text{-}, \text{-}, t_i), Y(h_i, r_i, \text{-}))/\tau)}{\sum_{j}\exp(\cos(Y(\text{-}, \text{-}, t_i), Y(h_j, r_j, \text{-}))/\tau)}. \quad (8)$$
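A minimal sketch of the E2E objective in Eq. (8) follows, assuming the two inputs are the <head>-token outputs Y(-, -, t_i) and Y(h_i, r_i, -) already produced by the multi-modal encoder; the tensor names and batching scheme are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def e2e_loss(y_tail_only: torch.Tensor,
             y_tail_masked: torch.Tensor,
             tau: float = 0.07) -> torch.Tensor:
    """Entity-Entity loss of Eq. (8).

    y_tail_only:   (N, d) head-token outputs Y(-, -, t_i), keeping only the tail entity.
    y_tail_masked: (N, d) head-token outputs Y(h_i, r_i, -), with the tail masked.
    Row i of both tensors comes from the same triplet (h_i, r_i, t_i).
    """
    y_tail_only = F.normalize(y_tail_only, dim=-1)
    y_tail_masked = F.normalize(y_tail_masked, dim=-1)
    logits = y_tail_only @ y_tail_masked.t() / tau     # (N, N) cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Softmax over all (h_j, r_j, -) candidates in the batch; the matching pair lies on the diagonal.
    return F.cross_entropy(logits, targets, reduction="sum")
```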
We also model the distribution when the relation in the triplet is masked, and similarly derive the Entity-Relation (E2R) loss:

$$-\mathbb{E}_{(h,t)\sim D}\,\log P(\bar{x}_r \mid x_h, x_t). \quad (9)$$

Different from the E2E loss, the relations in the triplets are defined in a limited set of relation groups. Therefore, we instead extract the representation of the relation through an auxiliary two-layer MLP network, and model the objective as a classification problem over a predefined set of relation labels $R$. In this way, the loss function can be defined as:

$$\mathcal{L}_{E2R} = -\sum_{i=1}^{N}\sum_{r\in R} \mathbb{1}(r = r_i)\,\log y(\bar{x}_{r_i}), \qquad \text{where } y(\bar{x}_{r_i}) = \mathrm{MLP}(R(h_i, \text{-}, t_i)) \quad (10)$$

is obtained by applying an MLP to the relation output of the multi-modal encoder defined in Eq. (5). Graph-based loss. We also take advantage of the graph structure in knowledge graph datasets, and adopt a graph neural network to extract deeper structural information among entities. We propagate information through the connected edges in the graph, and update entity representations with the aggregated features. Specifically, for a graph neural network with $L$ layers, the update function for the $l$-th layer can be formulated as:

$$G^{(l)}(t) = \mathbb{E}_{\{h_i, r_i, t\}\in T_R}\left[g^{(l-1)}(R(h_i, \text{-}, t))\, G^{(l-1)}(h_i)\right], \qquad G^{(0)}(t) = Y(\text{-}, \text{-}, t), \quad (11)$$

where

$$g^{(l)}(R(h_i, \text{-}, t)) = W^{(l)} R(h_i, \text{-}, t) \quad (12)$$

calculates the aggregation weights from the relation representation $R(h_i, \text{-}, t)$ with a learnable matrix $W^{(l)}$. Finally, we define the Graph-Entity (G2E) loss by computing the cosine similarity of entity features before and after the propagation procedure in the graph (a simplified sketch of this propagation and the combined objective appears below):

$$\mathcal{L}_{G2E} = -\frac{1}{N_\xi}\sum_{t_i\in\xi} \log\frac{\exp(\cos(Y(\text{-}, \text{-}, t_i), G^{(L)}(t_i))/\tau)}{\sum_{t_j}\exp(\cos(Y(\text{-}, \text{-}, t_i), G^{(L)}(t_j))/\tau)}. \quad (13)$$

Continuous Learning. Large-scale pre-training usually requires massive computation resources, which makes it highly inefficient to train from scratch. Therefore, to inject the semantic information in an efficient manner, we consider training our model from the pre-trained weights of the original CLIP. This powerful initialization promotes the convergence of our model and greatly enhances the training efficiency. However, naively extending the training process with new data leads to a severe forgetting problem that hampers the performance of the original model. To address this limitation, we adopt simple solutions to maintain CLIP's performance while improving its ability to extract semantic features from knowledge graphs. (1) Besides the knowledge graph datasets, we also train our model on several widely adopted image-text datasets that share a similar data distribution with the training data of CLIP. To better fit our pre-training framework, we convert the original image-text pairs into the form of triplets, with the specifically designed relations 'image of' and 'caption of'. (2) We also use the original CLIP model as the teacher, and use an auxiliary loss $\mathcal{L}_{KD}$ to measure the KL distance between the outputs of CLIP and our model. Overall, the final pre-training objective of Knowledge-CLIP is formulated as:

$$\mathcal{L} = \mathcal{L}_{E2E} + \mathcal{L}_{E2R} + \mathcal{L}_{G2E} + \mathcal{L}_{KD}. \quad (14)$$

5 Experiments 5.1 Implementation Details Experimental Setup. In all the experiments, we use the same model structure as CLIP [38]. A 12-layer Transformer model with 512 width is adopted for the text encoder, and ViT-L/14 is adopted for the image encoder. For the text and image encoders, we use the pre-trained weights of the original CLIP as the initialization. For the multi-modal encoder, we consider a 4-layer Transformer model with 1024 width. The drop path rate is set to 0.1 during training.
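Returning briefly to the training objectives above, here is a simplified, illustrative sketch of the relation-weighted propagation in Eqs. (11)-(12), the G2E loss of Eq. (13), and the combined objective of Eq. (14). The scalar aggregation weights, the mean over incoming edges as a stand-in for the expectation over triplets, and all tensor and class names are assumptions made for illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class RelationPropagation(nn.Module):
    """Simplified message passing in the spirit of Eqs. (11)-(12): each tail entity
    aggregates features of its head neighbours, weighted by a scalar computed from
    the relation representation."""

    def __init__(self, dim: int, num_layers: int = 2):
        super().__init__()
        self.rel_weight = nn.ModuleList([nn.Linear(dim, 1, bias=False) for _ in range(num_layers)])

    def forward(self, entity_feats, rel_feats, head_idx, tail_idx):
        # entity_feats: (Ne, d) initial features G^(0)(t) = Y(-, -, t)
        # rel_feats:    (E, d)  relation representations R(h_i, -, t), one per edge
        # head_idx, tail_idx: (E,) entity indices of each edge's head and tail
        g = entity_feats
        for layer in self.rel_weight:
            w = layer(rel_feats)                               # (E, 1) aggregation weights
            messages = w * g[head_idx]                         # weighted head-entity features
            agg = torch.zeros_like(g).index_add_(0, tail_idx, messages)
            deg = torch.zeros(g.size(0), 1, device=g.device).index_add_(
                0, tail_idx, torch.ones(tail_idx.size(0), 1, device=g.device))
            g = agg / deg.clamp(min=1.0)                       # mean over incoming edges
        return g

def g2e_loss(y_entities: torch.Tensor, g_entities: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Graph-Entity loss of Eq. (13): contrast entity features before and after propagation."""
    y = F.normalize(y_entities, dim=-1)
    g = F.normalize(g_entities, dim=-1)
    logits = y @ g.t() / tau
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Combined objective of Eq. (14), with l_e2e, l_e2r, l_kd computed analogously:
# loss = l_e2e + l_e2r + l_g2e + l_kd
```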
As the added multi-modal encoder is trained from random initialization, we decrease the learning rate for the pre-trained weights from CLIP to achieve a more balanced step in the optimization. We train Knowledge-CLIP with an initial learning rate of 1e-5 for image and text encoders, and 1e-3 for the multi-modal encoder. Cosine learning rate with linear warmup is used in the training schedule. Weight decay and gradient clip are also adopted. See more details in the supplemental material. Pre-train Dataset. Three knowledge graph datasets are adopted in the pre-training process. VisualSem [2] is a high-quality multi-modal knowledge graph dataset for vision and language concepts, including entities with multilingual glosses, multiple illustrative images, and visually relevant relations, covering a total number of 90k nodes, 1.3M glosses and 938k images. 13 semantic relations are used to connect different entities in the graph, while the entities in VisualSem are linked to Wikipedia articles, WordNet [34], and high-quality images from ImageNet [10]. Visual Genome [24] is a knowledge-based scene graph dataset that connects structured image concepts with semantic relations. Visual Genome serves as the benchmark for various vision tasks, e.g., visual grounding, and scene graph generation. ConceptNet [46] is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources including expert-created resources and crowd-sourcing built on only language modality. Besides the three knowledge graph datasets, we also train our model on two widely adopted imagetext datasets that share the similar data distribution with the training data in CLIP. We practically add COCO Caption [8] and CC3M [42] to the training set, while large-scale datasets like CC12M [4] or YFCC [21] are not considered to maintain training efficiency. Downstream Task. To validate the effectiveness of our framework, we conduct experiments on various downstream tasks, including multi-modal tasks like text and image retrieval, visual question answering, and uni-modal tasks like image classification and natural language understanding. 5.2 Multi-modal Tasks Visual question answering / Visual Entailment. We also validate the effectiveness of Knowledge-CLIP on other vision-language tasks, including VQA [15] and SNLI-VE [64]. We show the comparison results in Tab. 2. Compared to competitive baselines including VILLA [14] and ALBEF [26], Knowledge-CLIP with ViT-L/14 shows better performances under all settings, while the smaller model also achieves competitive re- sults. Compared to the original CLIP model, our pre-trained model practically improves its transferability on downstream tasks, especially on the datasets like VQA that requires reasoning ability. Image and text retrieval. We first conduct experiments on Flickr30k [37] and COCO Caption [8] dataset to show the performances of our model on image-text retrieval tasks. Given input sets X and Y of images and texts, we use Knowledge-CLIP to extract features for each input, and model the joint probability with the cosine similarity between image and text pairs. We summarize the comparison results of Knowledge-CLIP with competitive baselines in Tab. 1. It is shown that our model consistently achieves better results over the original CLIP on both datasets, while comparable with competitive baselines like OSCAR. 5.3 Uni-modal Tasks Image Classification. 
To further demonstrate the generalization power of Knowledge-CLIP, we compare the performances of pre-train models on the ImageNet classification task [10]. We summarize the comparison results in Tab. 3, and show that Knowledge-CLIP can also handle vision tasks well. We argue the improvements over baselines may attribute to the scene graphs in our pre-training dataset, which emphasize the visual concepts in the images. Language Understanding. We validate the generalization performance of Knowledge-CLIP for language understanding tasks on the widely adopted GLUE dataset [55]. Specifically, we conduct experiments on 7 tasks in GLUE and summarize the comparison results in Tab. 4. It is shown that our model achieves comparable performances with competitive baseline models. Also, for tasks like QQP and MNLI that require sentence-pair matching, Knowledge-CLIP shows higher performances, due to the existence of language triplets in the pre-training dataset. 5.4 Ablation Studies To validate the effectiveness of the components in our work, we carefully design several settings, including (1) CLIP+continuous learning: we train vanilla CLIP (pretrained weights as initialization) on knowledge datasets adopted in our work; (2) Knowledge-CLIP-(t1, t2, t3): we remove the training objectives respectively in our work to analyze the contribution of each loss. For all experiments, we adopt a smaller model (ViT-B/32) as the image encoder of CLIP in the ablation study. Also, it is worth noticing that KD loss plays a vital role in the continuous learning scheme, without which will lead to a significant performance drop due to the model forgetting problem. Therefore, we use KD loss in all the ablation settings for a fair comparison. We show the comparison results on two representative tasks in Tab. 5, including the image/text retrieval task on Flickr30K, and the visual question answering task in VQA. Several observations can be made from the ablation: (1) All three training objectives (E2E, E2R, G2E) contribute to improving the model performance. Training the model without any of the objectives leads to inferior performances on downstream tasks. We argue that the E2E, E2R, and G2E loss promote the model from different perspectives by focusing on semantic understanding of concepts, complicated relations between entities, and structural information. Therefore, all three objectives are necessary for the framework and contribute to the improvement respectively. (2) By comparing the first and second row, we can see that simply training the CLIP model with extra time and data fails to improve the generalization performance. It also demonstrates that the improvements mainly come from the injected knowledge information rather than the continuous learning scheme. We also conduct an ablation study on the KD loss adopted for continuous learning and summarize the results in Tab. 6. The model achieves lower results after removing the KD loss, indicating its vital role in the continuous learning scheme. We argue the reason for this phenomenon is that the model suffers from the forgetting problem, which is widely spotted in the field of lifelong learning and continuous learning. 5.5 Analysis on particular semantics We also conduct experiments on carefully selected data which may better reflect how a visionlanguage model understands a particular type of input. Specifically, we select questions in the VQA dataset that contains (1) Negations; (2) Color attributes; (3) Position attributes; (4) Sizes. 
We summarize the comparison results of CLIP and our model on these sub-datasets in Tab. 7. As we can observe, our model achieves consistent improvements over CLIP on these specially designed datasets and shows significantly better results. Regarding questions with negation, our model achieves 2.1% higher accuracy. Regarding color and position attributes, our model shows even higher improvements. We believe these comparisons on different ’semantic domains’ demonstrate the effectiveness of injecting knowledge information into the current vision-language pretraining framework which practically enhances the model perception of semantic understanding. 6 Conclusion In this paper, we propose a novel vision-language pretraining framework that incorporates knowledge information to model the semantic connections between vision and language entities. We introduce three types of graph-structured datasets into the training process, and adopt a multi-modal encoder to model the joint distribution of entities and their semantic relations. Extensive experiments on various downstream tasks including multi-modal, uni-modal, and graph-based tasks validate the transfer and generalization ability of our model. Our approach is now limited in injecting knowledge information into the CLIP models. However, our training objectives and new knowledge graph datasets are technically compatible with other large-scale pretraining frameworks. We will explore the possibility of further applications in the future. 7 Acknowledgement This work is supported in part by the National Key R&D Program of China under Grant 2020AAA0105200, the National Natural Science Foundation of China under Grants 62022048, Guoqiang Institute of Tsinghua University and Beijing Academy of Artificial Intelligence. We also appreciate the generous donation of computing resources by High-Flyer AI.
1. What is the main contribution of the paper, and how does it address the issue of semantic alignment in multimodal tasks? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to improve multimodal learning? 3. How does the use of graphs in the model benefit its reasoning skills, and what are the limitations of this approach? 4. Are there any concerns regarding the experiments conducted in the paper, such as the choice of loss functions or the lack of visual graphs/tables? 5. How does the paper's reference section address earlier works in the field, such as VLPs in e-commerce, and what are the limitations of these references? 6. What is the reviewer's overall assessment of the paper's quality and novelty, and how does it compare to other works in the field?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In order to address the semantic alignment issue in multimodal tasks, this paper suggests the Knowledge-CLIP framework. Experiments reveal that the proposed approach is practical.

Strengths And Weaknesses
(a) Model
The use of "graphs" for multimodal pre-training is practical. In essence, the model benefits from having strong reasoning skills. However, the details of the model are not well presented, including data processing, sample visualization, etc.
(b) Experiments
The experiments in the article are adequate and verify that the learning graph can help the model improve multimodal learning. Which of these loss functions affects the results the most, and is it the graph-based loss? If not, the model appears to be oriented east-west. In Figure 1, the template "not a photo of {}" is inherently misleading. In addition, the car below is between yellow and green in color. As a result, this is not a particularly good example to demonstrate what the author is trying to say. There is not much evidence to demonstrate that the proposed methods can really help the model with modal alignment. It is difficult to verify this conclusion just by showing only the SOTA results (without some of the newest SOTA baselines) and lacking visual graphs/tables. Except for VQA, there is no obvious advantage of the effect of the model in this paper.
(c) Writing
The reviewer likes the motivation in the Introduction section. There aren't enough ablation experiments to confirm that every design is rational. Many earlier works, such as VLPs in e-commerce, also address the issue of alignment with different techniques. The "co-occurrence" is the key factor that is the same as in this article (no matter the word-patch alignment, or using RoI-tags). The paper's references, however, are insufficient.
(d) Others
The SOTA performance was obtained by ignoring a large number of SOTA models. The authors should give a justifiable explanation.
Update after rebuttal: Thanks for your detailed response. I stick to my score.

Questions
Please see and answer "Strengths and Weaknesses" for the issues I mentioned. Additionally, the reviewer suggests the authors take the time to consider why multimodal pre-training requires graphs (or knowledge). The authors claimed at first that focusing on the graph can help semantic alignment, but the second half of the paper just shows that continued learning with the graph brings some improvement to CLIP.

Limitations
I recommend the authors give more robust explanations. Besides, this article still has many self-justifying explanations so far, and I hope the author will give more clear and reasonable illustrations. Overall, this article does a lot of experimenting, which is why I gave it an original POSITIVE rating. However, the article's flaws are really obvious, and I hope that the author will thoroughly improve them based on the comments. I will further check the rebuttal.
NIPS
Title Contrastive Language-Image Pre-Training with Knowledge Graphs Abstract Recent years have witnessed the fast development of large-scale pre-training frameworks that can extract multi-modal representations in a unified form and achieve promising performances when transferred to downstream tasks. Nevertheless, existing approaches mainly focus on pre-training with simple image-text pairs, while neglecting the semantic connections between concepts from different modalities. In this paper, we propose a knowledge-based pre-training framework, dubbed Knowledge-CLIP, which injects semantic information into the widely used CLIP model [38]. Through introducing knowledge-based objectives in the pre-training process and utilizing different types of knowledge graphs as training data, our model can semantically align the representations in vision and language with higher quality, and enhance the reasoning ability across scenarios and modalities. Extensive experiments on various vision-language downstream tasks demonstrate the effectiveness of Knowledge-CLIP compared with the original CLIP and competitive baselines. 1 Introduction Large-scale vision-language pre-training has attracted wide research interests in recent years [9, 26, 38, 72]. Different from training independent models for each specific task, pre-trained models take the analogy of human biological intelligence system, trying to perceive the world from various data modalities and handle comprehensive tasks. Specifically, it aims to provide a unified inference paradigm that simultaneously learns representations for multi-modal data and can easily transfer to a variety of downstream tasks. Benefiting from the accessibility of massive image-text pairs from the web, the vision-language pre-training can leverage a broader source of supervision, and effectively improves the model’s generalization power. Early attempts on vision-language pre-training mainly focus on detecting objects in the images and aligning the corresponding word tokens with object regions [9, 28, 50]. Though effective, the entanglement with the concept of objects, and the additional resources for pre-trained object detectors impose restrictions on real-world applications. One of the pioneer works, CLIP [38], extends the scale of the pre-training dataset to 400 million image-text pairs, and learns representations by directly matching raw text with the corresponding image. Through a contrastive-based training scheme, CLIP learns visual concepts under a large vocabulary which significantly improves the model performances on various downstream tasks. Taking inspiration from CLIP, the following researches further extend the work from several perspectives, including data modality [72], downstream tasks [57], and training data efficiency [19, 44]. ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Although showing promising results, the current pre-training frameworks also suffer from limitations. Specifically, the data pairs for pre-training are organized in the simplest manner, where only the descriptions of matched and unmatched are used to represent the relation between a given image and text pair. This usually leads to a degenerated scenario, where the model tends to rely on the co-occurrence of inputs instead of their semantic meanings. We give a toy example in Fig. 1 by evaluating the zero-shot transfer performance of CLIP on the ImageNet dataset [10] with the templates ’a photo of a {}’ and ’not a photo of a {}’. 
It is shown that the distributions of CLIP outputs under two templates are quite similar, suggesting that the current model fails to understand the semantic meaning of word tokens. As a result, the transferability of the model is restricted, and tends to show worse performances on tasks that require reasoning ability, e.g., visual question answering. To address the limitation of pre-trained models on semantic perceiving, we resort to the technique of knowledge graph, which has been widely studied in the field of natural language processing [7, 58]. Knowledge graph (KG) is a large-scale semantic network that comprises entities as nodes and semantic relations as edges. Through organizing data in a graph structure, knowledge graphs provide rich information on describing the relations between entities and enable a reasoning process through the whole graph. These advantages over regular-structured data are favorable on various tasks including question-answering [18, 70], relation prediction [29, 43] and knowledge reasoning [6, 59]. In recent years, knowledge graph has also been investigated in the field of computer vision, e.g., scene graph [65], and the integration of both language and image [2]. This bridges the gap between different modalities in the knowledge graph, which inspires us to explore a new knowledge-based pre-training framework, and inject semantic information into simple image-text pairs. In this paper, we propose a novel vision-language pre-training approach, dubbed Knowledge-CLIP, by constructing a knowledge augmented pre-training framework based on the widely used CLIP models. As illustrated in Fig. 2, we follow the structure of CLIP, and use two Transformer-based models as image and text encoders respectively. These two encoders take entities and relations in the knowledge graph as input and extract raw features for both entities and relations. Notably, entities can be in the form of image/text, while the relations are constantly described by language tokens. Then, a multi-modal Transformer encoder is adopted to fuse the entity features conditioned on their relations. In this way, the pre-trained model is pushed to concentrate on understanding semantic relations between visual and word concepts, thereby establishing strong semantic connections between vision and language modalities. To additionally improve the training efficiency and avoid the massive computation cost in the pretraining procedure, we adopt a simple continuous learning strategy by training our model based on the pre-trained weights of CLIP. This provides a possibility of efficiently promoting the model performance of CLIP with low training resources. We train our model on three knowledge graph datasets, namely Visual-Genome [24] (scene graph), ConceptNet [46] (language-based graph), and VisualSem [2] (multi-modal graph), and also adopt part of datasets from CLIP to avoid the model forgetting problem. With the knowledge-enhanced pre-training, Knowledge-CLIP achieves consistent improvements over the original CLIP models on various vision and language downstream tasks. 2 Related works Large-scale pre-training. Benefited from the development of Transformer in both vision [35, 63, 36] and language [54] tasks, large-scale pre-training framework has received wide concerns in recent years and shown promising results in the field of computer vision and natural language processing. 
GPT [39] is one of the pioneer works for language pre-training which optimizes the probability of output based on previous words in the sequence. BERT [11] adopts the masked language modeling technique and predicts the masked tokens conditioned on the unmasked ones. Similarly, computer vision society also witnesses the development of pre-training models thanks to the emergence of large-scale image datasets. IGPT [5] proposes a generative pre-training technique and shows promising results on classification task. MAE [17] adopts a similar pre-training scheme as BERT and predicts the masked regions of an image with unmasked ones. Multi-modal pre-training bears differences from the aforementioned frameworks and requires the alignment between various data modalities. Using enormous image-text pairs collected from Internet, vision-language models show significant improvements on various downstream tasks. Among these approaches, various pre-training scheme is adopted, including contrastive learning [1, 27, 31], masked language modeling [47, 51], and masked region modeling [9]. The problem of semantic misunderstanding has also been investigated by previous works. EICLIP [33] considers the problem of cross-modal retrieval in the field of E-commerce. Sharing similar insight with our work, the authors notice the model bias towards some specific word tokens in CLIP, and introduce causal inference to align the text encoder with e-commerce domain knowledge. K3M [73] focuses on the modality-missing and modality-noise problem and introduces knowledge modality into E-commerce tasks. DeVLBert [69] studies the spurious correlations between different modalities and adjusts the conditional probability of image tokens and word tokens. KaleidoBERT [74] focuses on image-text coherence by introducing several novel self-supervised tasks. Compared to previous approaches, we are the first to incorporate multi-modal knowledge graphs into the pre-training process, and effectively enhance the model perception on semantic relations between visual and language concepts. Knowledge Graph. Knowledge graph is first introduced in the field of natural language processing, and the knowledge graph embedding approaches have been successful on capturing the semantics of symbols (entities and relations) and achieving impressive results on a wide range of real-world applications including text understanding [13, 66], recommendation system [16, 56] and natural language question answering [18, 70]. On the other hand, scene graphs represent a type of graphstructured data in computer vision, where the visual concepts in the image are connected with semantic relations. Scene graphs emphasize the fine-grained semantic features for images and are widely adopted in various downstream tasks, including scene graph generation [65], and Scene Graph Parsing [68]. Besides scene graph, knowledge graph is also adopted in other computer vision tasks, including image classification [22], panoptic segmentation [62], and image captioning [71]. On this basis, multi-modal knowledge graph earns wide concerns in recent years. Considering the natural alignment between different data modalities, multi-modal knowledge graphs have been widely adopted in various graph-based tasks including link prediction [3, 30], entity classification [61], while also showing great potential on out of graph applications like visual question answering [20, 41] and recommendation systems [49, 52]. 
3 Contrastive Language-Image Pre-training (CLIP) We first provide a brief review of model architectures and training settings in CLIP. CLIP uses two separate models for image encoder and text encoder respectively. For text inputs, a 12-layer Transformer is adopted with 512 width and 8 attention heads. Raw texts are first converted using byte pair encoding [40] technique under a vocabulary size of 49,152. The text sequence length is capped at 76 and added by a positional encoding before being sent into the text encoder. On the other hand, CLIP has different versions of image encoder with ResNet-based and Vision Transformer-based architectures. As the following researches have demonstrated the better performances of Vision Transformer models, we only consider Transformer-based image encoders in this paper. Similar to the text input, images are first converted to patches, and added by a positional encoding. At the last stage of both encoders, a global pooling function is adopted to compress the feature map into a single feature, which serves as the representation of the whole image/text sequence. The cosine distance of the image and text features is computed as the similarity of the data pair. For training supervision, a contrastive loss is adopted to maximize the similarity of matched pairs while minimizing the similarity of unmatched pairs. Given a batch of N data pairs {Ii,Ti}Ni=1, where Ii and T represents the ith image and text respectively, the loss function can be parameterized as: L = −1 2 N∑ i=1 ( log exp(cos(fI(Ii), fT(Ti))/τ)∑N j=1 exp(cos(fI(Ii), fT(Tj))/τ) + log exp(cos(fI(Ii), fT(Ti))/τ)∑N j=1 exp(cos(fI(Ij), fT(Ti))/τ) ) , (1) where fI and fT correspond to image and text encoders respectively, cos(·) denotes the cosine similarity between the inputs, and τ is a learnable temperature initialized at 0.07. This simple training framework actually brings several concerns that need to be addressed. First, the pre-training framework fails to model the semantic information of inputs due to the simplicity of the data structure. This results in inferior performances on tasks that require reasoning ability, e.g., visual question answering and visual commonsense reasoning. Second, the image and text features reside in separate spaces, which makes it difficult to model the interactions between different modalities. Third, the massive time and resource consumption in the training procedure set restrictions on performing a full pre-training schedule from scratch. 4 Knowledge-CLIP As we have summarized above, there are several concerns that hinder the transferability of CLIP and potential improvements on model performances. In this paper, we propose a novel pre-training framework based on knowledge graphs, that addresses the limitation of the original CLIP model from several perspectives: (1) we introduce knowledge graphs into the training dataset where the graph-structured data and semantic relations between concepts enable the model to extract semantic features and establish semantic connection across inputs; (2) A multi-modal encoder is added on top of the current image and text encoders to fuse the features from different modalities, and model the joint distribution between inputs; (3) A continuous learning strategy based on the pre-trained model of CLIP is adopted which avoids the massive computation cost in the pre-training procedure, and enhance the generalization power of the model efficiently. We introduce our framework in detail in the following sections, and show the overview in Fig. 
2. 4.1 Data Preparation Different from raw image-text pairs adopted in the original CLIP, our model takes knowledge graphs as input. A knowledge graph can be defined as a directed graph G = {ξ,R, TR}, where ξ, R correspond to sets of entities and relations, and TR represent the set of relation triplets. A triplet (h, r, t) ∈ TR denotes that entity h ∈ ξ has relation r ∈ R with entity t ∈ ξ. As illustrated in Fig. 3, we pre-train our model on three types of knowledge graphs, including multi-modal knowledge graph, scene graph, and language-based knowledge graph. Among these, relations are constantly described in language tokens, where the entities are from different modalities in different forms. For multi-modal knowledge graph, the entities contain both illustrative images and language descriptions. Through representing the same entity under various modalities and connecting entities with relations, it helps to build semantic connections between vision and language concepts. In practice, language and vision descriptions are randomly chosen for each entity. In this way, the triplet set TR contains different forms including (Img, Rel, Img), (Img, Rel, Text), and (Text, Rel, Text), providing rich information across modalities while also enhancing perceptions within modalities. Different from multi-modal knowledge graph, scene graph extracts visual concepts (mainly objects) for each image, and connects them with predefined semantic relations describing relative locations, actions, etc. Therefore, the entities in the scene graph correspond to a certain region in an image, with the triplet form of (Img, Rel, Img). We practically use the selected regions as the input and discard the irrelevant parts. As two entities in the same triplet denote different regions in the same image, it forces the model to extract more fine-grained features. Lastly, language-based knowledge graph connects words and phrases of natural language with labeled edges. It is built on only language modality with the triplet form of (Text, Rel, Text), while helping to build semantic alignment within word tokens. 4.2 Model Architecture The model architecture and the training framework are illustrated in Fig. 2(A). Specifically, we first process the inputs into token sequences with modality-specific tokenizers. The BPE tokenzier [40] is adopted for language inputs, while image inputs are sliced into non-overlapped patches and converted into a sequence of patches following ViT [12]. For convenient processing, we set the length of the image sequence and text sequence as lI and lT respectively for all inputs. To preserve the relative position information in the input, learnable positional encodings are added to the corresponding sequences before being sent to the model. Two separate image encoder fI(·) and text encoder fT(·) are then adopted to extract features from raw inputs. For a given triplet (h, r, t), the entities h and t are sent to the encoders with respect to their modalities (image or text). The relation r, which is represented by language tokens, is sent to text encoder similar to text entity. Compared to the model structure in CLIP, we introduce a modification to better adapt our framework. Specifically, vanilla CLIP models use a pooling function at the last layer of two encoders to compress the feature map into a global representation. 
Namely, for an input u ∈ RL×di , where L and di denote the sequence length and feature dimension, the output of the encoder can be formulated as: xu = f(u) ∈ RL×do , x̄u = Pool(xu) ∈ Rdo , (2) where f represents the feature extraction module, Pool(·) denotes the pooling function, and do is the output dimension. Though efficient, it also leads to inevitable information loss in the local region, especially for the image inputs. Therefore, we remove the pooling functions for image and text entities to preserve the local information, and use xu ∈ RL×do as the extracted feature. The relation, on the other hand, is normally under a limited sequence length, e.g., one or two word tokens, where the information density is smaller than entities. Therefore, we retain the pooling function for relation input and use x̄u ∈ Rdo as the extracted features. In this way, we have extracted the features defined as (xh, x̄r, xt), which correspond to the elements in the input triplet (h, r, t). To model the joint distribution of different elements in the triplet, we consider a multi-modal encoder TransEncoder(·) to fuse the features from different sources. Specifically, we first concatenate all the features in the triplet into a single sequence and use a head token <head> at the beginning of the sequence. To emphasize the status of the tokens in the sequence, we consider additional learnable encodings for each element h, r, t in the triplet: X(h, r, t) = [<head>, xh+PEh, x̄r+PEr, xt+PEt]. (3) After processing by the multi-modal encoder, the feature of the head token <head> finally serves as the representation of the whole sequence: Y (h, r, t) = TransEncoder(X(h, r, t))[0, :]. (4) Also, representation for relation is extracted from the corresponding token: R(h, r, t) = TransEncoder(X(h, r, t))[1 + len(xh), :]. (5) 4.3 Training Targets Considering the unique data structure of knowledge graphs, we mainly adopt two types of training targets in our framework, including triplet-based loss and graph-based loss as illustrated in Fig. 2(B). Besides, a knowledge distillation loss is also considered due to the continuous learning strategy adopted in our framework. Triplet-based loss considers a batch of triplets as the input and supervises the training of our model by estimating the joint distribution of elements in the triplets. Inspired by the mask prediction technique that models the distribution of masked tokens conditioned on the unmasked regions, we similarly mask the elements in the triplets and predict the distribution with the help of a multi-modal encoder. Specifically, for incomplete triplets where certain elements are missing in the input, the concatenated sequence can be similarly derived as in Eq. 3 by masking the corresponding feature. For example, the concatenated sequence for an input (h, r, -) can be represented as: X(h, r, -) = [<head>, xh+PEh, x̄r+PEr, 0]. (6) On this basis, given a set of input D = {(hi, ri, ti)}Ni=1, we first model the distribution when one of the entities, i.e., ti, is masked, and derive the Entity-Entity (E2E) Loss by minimizing the negative log-likelihood: −E(h,r)∼Dlog(P (xt|xh, x̄r)). (7) We practically approximate the distribution P (xt|xh, x̄r) as the cosine similarity of P (xt) and P (xh, x̄r), and defined the loss function as: LE2E = − N∑ i=1 log( exp(cos(Y (-, -, ti), Y (hi, ri, -))/τ)∑ j exp(cos(Y (-, -, ti), Y (hj , rj , -))/τ) ). 
(8) We also model the distribution when the relation in the triplet is masked, and similarly derive the Entity-Relation (E2R) Loss: −E(h,t)∼Dlog(P (x̄r|xh, xt)). (9) Different from E2E loss, the relations in the triplets are defined in a limited set of relation groups. Therefore, we instead extract the representation of relation through an auxiliary two-layer MLP network, and model the objective as a classification problem from a predefined set of relation labels R. In this way, the loss function can be defined as: LE2R = − N∑ i=1 ∑ r∈R 1(r=ri)log(y(x̄ri)), where y(x̄ri) = MLP(R(hi, -, ti)), (10) is extracted from an MLP model followed by the output of multi-modal encoder defined in Eq. (5). Graph-based loss. We also take advantage of the graph structure in knowledge graph datasets, and adopt a graph neural network to extract deeper structural information among entities. We propagate information through connected edges in the graph, and update entity representations with aggregated feature. Specifically, for a graph neural network with L layers, the update function for the lth layer can be formulated as: G(l)(t) = E{hi,ri,t}∈TR g (l−1)(R(hi, -, t))G(l−1)(hi), G0(t) = Y (-, -, t), (11) where g(l)(R(hi, -, t)) = W (l)R(hi, -, t), (12) calculates the aggregation weights by relation representation R(hi, -, t) with a learnable matrix W (l). Finally, we define the Graph-Entity(G2E) Loss by computing the cosine similarity of entity features before and after the propagation procedure in the graph: LG2E = − 1 Nξ ∑ ti∈ξ log( exp(cos(Y (-, -, ti), G(L)(ti))/τ)∑ tj exp(cos(Y (-, -, ti), G(L)(tj))/τ) ). (13) Continuous Learning. Large-scale pre-training usually requires massive computation resources which makes it highly inefficient when training from scratch. Therefore, to inject the semantic information in an efficient manner, we consider training our model based on the pre-trained weights from the original CLIP. This powerful initialization promotes the convergence of our model and greatly enhances the training efficiency. However, naively extending the training process with new data leads to severe forgetting problem that hampers the performance of the original models. To address this limitation, we adopt simple solutions to maintain CLIP performances while improving its ability to extract semantic features from knowledge graphs. (1) Besides the knowledge graph datasets, we also train our model on several widely adopted image-text datasets that share a similar data distribution with the training data in CLIP. To better fit our pre-training framework, we convert the original image-text pair into the form of triplets, with specifically designed relations ’image of’ and ’caption of’. (2) We also use the original CLIP model as the teacher, and use an auxiliary loss LKD to measure the KL distance between the output of CLIP and our model. Overall, the final pre-training objective of Knowledge-CLIP is formulated as: L = LE2E + LE2R + LG2E + LKD. (14) 5 Experiments 5.1 Implementation Details Experimental Setup. In all the experiments, we use the same model structure as CLIP [38]. A 12-layer Transformer model with 512 width is adopted for text encoder, and ViT-L/14 is adopted for image encoder. For text and image encoder, we use the pre-trained weights in the original CLIP as the initialization. For the multi-modal encoder, we consider a 4 layer Transformer model with 1024 width. The rate for drop path is set as 0.1 during training. 
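Putting Sections 4.2 and 4.3 together with the configuration just given (a 4-layer, 1024-wide multi-modal encoder), the sketch below assembles a triplet sequence with a head token (Eq. 3), reads out Y and R (Eqs. 4 and 5), and computes a simplified E2E loss (Eq. 8). It is a PyTorch-style illustration, not the released implementation; the sequence lengths, the fixed temperature, and the way the element encodings are broadcast over tokens are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripletFusion(nn.Module):
    """Sketch of the multi-modal encoder over a triplet (h, r, t), Eqs. (3)-(5)."""
    def __init__(self, d=1024, n_layers=4, n_heads=8, l_h=49):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head_tok = nn.Parameter(torch.randn(1, 1, d) * 0.02)   # <head> token
        # Learnable element encodings PE_h, PE_r, PE_t (broadcast over tokens).
        self.pe = nn.ParameterDict({k: nn.Parameter(torch.zeros(1, 1, d)) for k in "hrt"})
        self.l_h = l_h                                               # length of x_h

    def forward(self, x_h, x_r_bar, x_t):
        # x_h, x_t: (B, L, d) token features (pooling removed, Eq. 2); x_r_bar: (B, d).
        B = x_h.size(0)
        seq = torch.cat([self.head_tok.expand(B, -1, -1),
                         x_h + self.pe["h"],
                         x_r_bar.unsqueeze(1) + self.pe["r"],
                         x_t + self.pe["t"]], dim=1)                 # Eq. (3)
        z = self.encoder(seq)
        y = z[:, 0]                    # Eq. (4): head-token feature Y(h, r, t)
        r = z[:, 1 + self.l_h]         # Eq. (5): relation-position feature R(h, r, t)
        return y, r

def e2e_loss(y_t, y_hr, tau=0.07):
    """Simplified Entity-Entity loss, Eq. (8): contrast Y(-,-,t_i) against Y(h_j,r_j,-)."""
    logits = F.normalize(y_t, dim=-1) @ F.normalize(y_hr, dim=-1).t() / tau
    return F.cross_entropy(logits, torch.arange(logits.size(0), device=logits.device))

# Usage: masked elements are zeroed out, as in Eq. (6).
fusion = TripletFusion()
x_h, x_r, x_t = torch.randn(8, 49, 1024), torch.randn(8, 1024), torch.randn(8, 49, 1024)
y_hr, _ = fusion(x_h, x_r, torch.zeros_like(x_t))                    # (h, r, -)
y_t, _ = fusion(torch.zeros_like(x_h), torch.zeros_like(x_r), x_t)   # (-, -, t)
loss = e2e_loss(y_t, y_hr)
```

The E2R head (Eq. 10) would read R(h, -, t) through a small MLP classifier over the relation set, and the graph loss (Eq. 13) would aggregate the Y(-, -, t) features along edges; both follow the same pattern, and the four terms are summed as in Eq. (14).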
As the added multi-modal encoder is trained from random initialization, we decrease the learning rate for the pre-trained weights from CLIP to achieve a more balanced step in the optimization. We train Knowledge-CLIP with an initial learning rate of 1e-5 for image and text encoders, and 1e-3 for the multi-modal encoder. Cosine learning rate with linear warmup is used in the training schedule. Weight decay and gradient clip are also adopted. See more details in the supplemental material. Pre-train Dataset. Three knowledge graph datasets are adopted in the pre-training process. VisualSem [2] is a high-quality multi-modal knowledge graph dataset for vision and language concepts, including entities with multilingual glosses, multiple illustrative images, and visually relevant relations, covering a total number of 90k nodes, 1.3M glosses and 938k images. 13 semantic relations are used to connect different entities in the graph, while the entities in VisualSem are linked to Wikipedia articles, WordNet [34], and high-quality images from ImageNet [10]. Visual Genome [24] is a knowledge-based scene graph dataset that connects structured image concepts with semantic relations. Visual Genome serves as the benchmark for various vision tasks, e.g., visual grounding, and scene graph generation. ConceptNet [46] is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources including expert-created resources and crowd-sourcing built on only language modality. Besides the three knowledge graph datasets, we also train our model on two widely adopted imagetext datasets that share the similar data distribution with the training data in CLIP. We practically add COCO Caption [8] and CC3M [42] to the training set, while large-scale datasets like CC12M [4] or YFCC [21] are not considered to maintain training efficiency. Downstream Task. To validate the effectiveness of our framework, we conduct experiments on various downstream tasks, including multi-modal tasks like text and image retrieval, visual question answering, and uni-modal tasks like image classification and natural language understanding. 5.2 Multi-modal Tasks Visual question answering / Visual Entailment. We also validate the effectiveness of Knowledge-CLIP on other vision-language tasks, including VQA [15] and SNLI-VE [64]. We show the comparison results in Tab. 2. Compared to competitive baselines including VILLA [14] and ALBEF [26], Knowledge-CLIP with ViT-L/14 shows better performances under all settings, while the smaller model also achieves competitive re- sults. Compared to the original CLIP model, our pre-trained model practically improves its transferability on downstream tasks, especially on the datasets like VQA that requires reasoning ability. Image and text retrieval. We first conduct experiments on Flickr30k [37] and COCO Caption [8] dataset to show the performances of our model on image-text retrieval tasks. Given input sets X and Y of images and texts, we use Knowledge-CLIP to extract features for each input, and model the joint probability with the cosine similarity between image and text pairs. We summarize the comparison results of Knowledge-CLIP with competitive baselines in Tab. 1. It is shown that our model consistently achieves better results over the original CLIP on both datasets, while comparable with competitive baselines like OSCAR. 5.3 Uni-modal Tasks Image Classification. 
To further demonstrate the generalization power of Knowledge-CLIP, we compare the performances of pre-train models on the ImageNet classification task [10]. We summarize the comparison results in Tab. 3, and show that Knowledge-CLIP can also handle vision tasks well. We argue the improvements over baselines may attribute to the scene graphs in our pre-training dataset, which emphasize the visual concepts in the images. Language Understanding. We validate the generalization performance of Knowledge-CLIP for language understanding tasks on the widely adopted GLUE dataset [55]. Specifically, we conduct experiments on 7 tasks in GLUE and summarize the comparison results in Tab. 4. It is shown that our model achieves comparable performances with competitive baseline models. Also, for tasks like QQP and MNLI that require sentence-pair matching, Knowledge-CLIP shows higher performances, due to the existence of language triplets in the pre-training dataset. 5.4 Ablation Studies To validate the effectiveness of the components in our work, we carefully design several settings, including (1) CLIP+continuous learning: we train vanilla CLIP (pretrained weights as initialization) on knowledge datasets adopted in our work; (2) Knowledge-CLIP-(t1, t2, t3): we remove the training objectives respectively in our work to analyze the contribution of each loss. For all experiments, we adopt a smaller model (ViT-B/32) as the image encoder of CLIP in the ablation study. Also, it is worth noticing that KD loss plays a vital role in the continuous learning scheme, without which will lead to a significant performance drop due to the model forgetting problem. Therefore, we use KD loss in all the ablation settings for a fair comparison. We show the comparison results on two representative tasks in Tab. 5, including the image/text retrieval task on Flickr30K, and the visual question answering task in VQA. Several observations can be made from the ablation: (1) All three training objectives (E2E, E2R, G2E) contribute to improving the model performance. Training the model without any of the objectives leads to inferior performances on downstream tasks. We argue that the E2E, E2R, and G2E loss promote the model from different perspectives by focusing on semantic understanding of concepts, complicated relations between entities, and structural information. Therefore, all three objectives are necessary for the framework and contribute to the improvement respectively. (2) By comparing the first and second row, we can see that simply training the CLIP model with extra time and data fails to improve the generalization performance. It also demonstrates that the improvements mainly come from the injected knowledge information rather than the continuous learning scheme. We also conduct an ablation study on the KD loss adopted for continuous learning and summarize the results in Tab. 6. The model achieves lower results after removing the KD loss, indicating its vital role in the continuous learning scheme. We argue the reason for this phenomenon is that the model suffers from the forgetting problem, which is widely spotted in the field of lifelong learning and continuous learning. 5.5 Analysis on particular semantics We also conduct experiments on carefully selected data which may better reflect how a visionlanguage model understands a particular type of input. Specifically, we select questions in the VQA dataset that contains (1) Negations; (2) Color attributes; (3) Position attributes; (4) Sizes. 
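The paper does not spell out how these question subsets were selected, so the sketch below shows one plausible keyword-based filter purely as an illustration; the keyword lists and the substring-matching rule are assumptions.

```python
# Hypothetical illustration only: the selection rule is not specified in the paper.
KEYWORDS = {
    "negation": ["not ", "no ", "never", "without"],
    "color":    ["color", "red", "green", "blue", "yellow", "black", "white"],
    "position": ["left", "right", "behind", "front of", "above", "below", "next to"],
    "size":     ["big", "small", "large", "tiny", "tall", "short"],
}

def split_by_semantics(questions):
    """questions: iterable of dicts with a 'question' string; returns keyword-based subsets."""
    subsets = {name: [] for name in KEYWORDS}
    for q in questions:
        text = q["question"].lower()
        for name, words in KEYWORDS.items():
            if any(w in text for w in words):   # crude substring matching
                subsets[name].append(q)
    return subsets
```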
We summarize the comparison results of CLIP and our model on these sub-datasets in Tab. 7. As we can observe, our model achieves consistent improvements over CLIP on these specially designed datasets and shows significantly better results. Regarding questions with negation, our model achieves 2.1% higher accuracy. Regarding color and position attributes, our model shows even higher improvements. We believe these comparisons on different ’semantic domains’ demonstrate the effectiveness of injecting knowledge information into the current vision-language pretraining framework which practically enhances the model perception of semantic understanding. 6 Conclusion In this paper, we propose a novel vision-language pretraining framework that incorporates knowledge information to model the semantic connections between vision and language entities. We introduce three types of graph-structured datasets into the training process, and adopt a multi-modal encoder to model the joint distribution of entities and their semantic relations. Extensive experiments on various downstream tasks including multi-modal, uni-modal, and graph-based tasks validate the transfer and generalization ability of our model. Our approach is now limited in injecting knowledge information into the CLIP models. However, our training objectives and new knowledge graph datasets are technically compatible with other large-scale pretraining frameworks. We will explore the possibility of further applications in the future. 7 Acknowledgement This work is supported in part by the National Key R&D Program of China under Grant 2020AAA0105200, the National Natural Science Foundation of China under Grants 62022048, Guoqiang Institute of Tsinghua University and Beijing Academy of Artificial Intelligence. We also appreciate the generous donation of computing resources by High-Flyer AI.
1. What is the focus and contribution of the paper regarding incorporating semantic knowledge graphs into language-image models? 2. What are the strengths of the proposed approach, particularly in terms of its motivation, methodology, and experiments? 3. Do you have any concerns or weaknesses regarding the paper, such as discussing qualitative improvements or performing specific analyses? 4. Are there any questions regarding the code, data, trained models, figures, losses, computation, or presentation? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a new Contrastive Language-Image Pre-Training method that incorporates semantic knowledge graphs (knowledge-CLIP). The method builds on the existing CLIP architecture and introduces three KG-aware pretraining objectives on top: Entity-Entity loss, Entity-Relation loss, Graph-Entity loss. The authors experiment with this method using multiple knowledge graphs and show improved performance across multimodal, unimodal, and KG-based downstream tasks. Strengths And Weaknesses Strengths The paper is generally well-written and easy to understand. The problem they are solving (incorporating background semantic knowledge into language-image model) and their proposed method (KG-aware model architecture and training objectives) are well motivated. The idea of considering a triplet set that contains different forms, (Img, Rel, Img), (Img, Rel, Text), (Text, Rel, Text), is interesting. The experiments are solid (covers three types of KGs; covers both multimodal and unimodal downstream tasks) and shows that the proposed method makes moderate improvements over the baseline. Weaknesses While the authors show improved numbers on benchmark datasets, it would be nice to also show and discuss how the proposed knowledge-CLIP model is qualitatively improving over the baseline CLIP. For example, in Intro and Figure 1, the authors motivates this paper by arguing that the baseline CLIP only captures text-image co-occurrence and fails to adjust for negation in text, etc. - is this issue solved in the proposed knowledge-CLIP model? Some existing work that combines text and KG (e.g. https://arxiv.org/abs/2104.06378) has done closely-related analyses such as adding negation or changing entities in text to see if the KG-augmented method can robustly handle them. It would be very interesting if the authors perform such analysis on the proposed knowledge-CLIP model that combines image, text and KGs. Questions Please find my main questions and suggestions mentioned in the "Weaknesses" section above. Will the code/data/trained models be released? For Figure 1b, what is the expected prediction when changing "yellow" to "green"? I wonder "a photo of green __" may not really make sense for the given image because there is no green object in the image? For graph-based loss (Equation 11), do you compute Y(-,-,t) for all the triplets using knowledge-CLIP? Would that be very expensive? Also, what is the size of the graph considered in the GNN - is it the entire KG or some subgraph? A suggestion for the presentation: In Figure 2, maybe make the font consistent across A (left panel) and B (right panel)? Limitations The authors addressed the potential negative societal impact (in Appendix)
NIPS
Title Principal Components Bias in Deep Neural Networks

Abstract Recent work suggests that convolutional neural networks of different architectures learn to classify images in the same order. To understand this phenomenon, we revisit the over-parametrized deep linear network model. Our asymptotic analysis, assuming that the hidden layers are wide enough, reveals that the convergence rate of this model's parameters is exponentially faster along directions corresponding to the larger principal components of the data, at a rate governed by the singular values. We term this convergence pattern the Principal Components bias (PC-bias). We show how the PC-bias streamlines the order of learning of both linear and non-linear networks, more prominently at earlier stages of learning. We then compare our results to the spectral bias, showing that both biases can be seen independently, and affect the order of learning in different ways. Finally, we discuss how the PC-bias may explain some benefits of early stopping and its connection to PCA, and why deep networks converge more slowly when given random labels.

1 Introduction The dynamics of learning in deep neural networks is an intriguing subject, not yet sufficiently understood. Diverse empirical data seems to support the hypothesis that neural networks start by learning a simple model, which then gains complexity as learning proceeds (Gunasekar et al., 2018; Soudry et al., 2018; Hu et al., 2020; Nakkiran et al., 2019; Gissin et al., 2019; Heckel & Soltanolkotabi, 2019; Ulyanov et al., 2018; Valle-Perez et al., 2018). This phenomenon is sometimes called simplicity bias (Dingle et al., 2018; Shah et al., 2020). Recent work additionally shows that neural networks learn the training examples of natural datasets in a consistent order, and further impose a consistent order on the test set (Hacohen et al., 2020; Pliushch et al., 2021). Below we call this effect Learning Order Constancy (LOC). Currently, the characteristics of visual data, which may explain this consistently imposed order, remain unclear. Surprisingly, this universal order persists despite the variability introduced into the training of different models and architectures. To understand this phenomenon, we start by analyzing the deep linear network model (Saxe et al., 2013, 2019), defined by the concatenation of linear operators (a minimal sketch of this model is given below).
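A minimal sketch of the deep linear model and its compact representation W (the notation follows Section 2 below); the layer widths and the width-dependent initialization scale here are illustrative choices.

```python
import numpy as np

def init_deep_linear(widths, rng):
    """Deep linear network f(x) = W_L ... W_1 x.
    widths = [q, m_1, ..., m_{L-1}, K]; weights drawn i.i.d. with a small,
    width-dependent variance (an illustrative stand-in for the O(1/m) scheme of Section 2)."""
    std = 1.0 / np.sqrt(max(widths[1:-1]))           # m = width of the hidden layers
    return [rng.normal(0.0, std, size=(d_out, d_in))
            for d_in, d_out in zip(widths[:-1], widths[1:])]

def compact(Ws):
    """Compact representation W = W_L ... W_1, a K x q matrix (Def. 2 below)."""
    W = Ws[0]
    for Wl in Ws[1:]:
        W = Wl @ W
    return W

rng = np.random.default_rng(0)
Ws = init_deep_linear([784, 1024, 1024, 1024, 10], rng)   # q=784, K=10, L=4
W = compact(Ws)                                           # f(x) = W @ x
```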
While not a universal approximator, it28 is nevertheless trained by minimizing a non-convex objective function with a multitude of minima.29 The investigation of such networks is often employed to shed light on the learning dynamics when30 complex geometric landscapes are explored by GD (Fukumizu, 1998; Arora et al., 2018).31 In Section 2, we prove that the convergence of the weights of deep linear networks is governed32 by the eigendecomposition of the raw data in a phenomenon we term PC-bias. These asymptotic33 results, valid when the hidden layers are wide enough, can be seen as an extension of the known34 behavior of the single-layer convex linear model (Le Cun et al., 1991). Our work is closely related to35 (Saxe et al., 2013, 2019), where the deep linear model’s dynamics is analyzed as a function of the36 input and input-output statistics. Importantly, the analysis in (Saxe et al., 2013, 2019; Arora et al.,37 Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. 2018) incorporates the simplifying assumption that the data’s singular values are identical (whitened38 data), an assumption which unfortunately obscures the main result of our analysis – the direct39 dependence of convergence rate on the singular values of the data.40 In Section 3, we empirically show that this pattern of convergence is indeed observed in deep linear41 networks, validating the plausibility of our assumptions. We continue by showing that the LOC-effect42 in deep linear network is determined solely by their PC-bias. We prove a similar (weaker) result for43 the non-linear two-layer ReLU model introduced by Allen-Zhu et al. (2018), where this model is44 presented as a certain extension of NTK (Jacot et al., 2020). In this framework, convergence is fastest45 along the largest kernel’s principal components, a result related to the Spectral bias discussed below.46 In Section 4, we extend the study empirically to non-linear networks, and investigate the relation47 between the PC-bias and the LOC-effect in general deep networks. We first show that the order48 by which examples are learned by linear networks is highly correlated with the order induced by49 prevalent deep CNN models. We then show directly that the learning order of non-linear CNN models50 is affected by the principal decomposition of the data. Moreover, the LOC-effect diminishes when51 data is whitened, indicating a tight connection between the PC-bias and the LOC-effect.52 Our results are reminiscent of another phenomenon, termed Spectral bias (Rahaman et al., 2019;53 Cao et al., 2019), which associates the learning dynamics of neural networks with the Fourier54 decomposition of functions in the hypothesis space. Rahaman et al. (2019) empirically demonstrated55 that the complexity of classifiers learned by ReLU networks increases with time. Basri et al. (2019,56 2020) showed theoretically, by way of analyzing elementary neural network models, that these models57 first fit the data with low-frequency functions, and gradually add higher frequencies to improve the fit.58 Nevertheless, the spectral bias and PC-bias are inherently different. Indeed, the eigendecomposition59 of raw images is closely related to the Fourier analysis of images as long as the statistical properties60 of images are (approximately) translation-invariant (Simoncelli & Olshausen, 2001; Torralba & Oliva,61 2003). Still, the PC-bias is guided by spectral properties of the raw data and is additionally blind to62 class labels. 
On the other hand, the spectral bias, as well as the related frequency bias that has been63 shown to characterize NTK models (Basri et al., 2020), are all guided by spectral properties of the64 learned hypothesis, which strongly depends on label assignment.65 In Section 4.3 we investigate the relation between the PC-bias, spectral bias, and the LOC-effect.66 We find that the LOC-effect is very robust: (i) when we neutralize the spectral bias by using low67 complexity models such as deep linear networks, the effect is still observed; (ii) when we neutralize68 the PC-bias by using whitened data, the LOC-effect persists. We hypothesize that at the beginning of69 learning, the learning dynamics of neural models is controlled by the eigendecomposition of the raw70 data. As learning proceeds, control of the dynamics slowly shifts to other factors.71 The PC-bias has implications beyond the LOC-effect, as expanded in Section 5 and Suppl. §A:72 1. Early stopping. It is often observed that when training deep networks with real data, the highest73 generalization accuracy is obtained before convergence. Consequently, early stopping is often74 prescribed to improve generalization. Following the commonly used assumption that in natural75 images the lowest principal components correspond to noise (Torralba & Oliva, 2003), our results76 predict the benefits of early stopping, and relate it to PCA. In Section 5 we investigate the relevance77 of this conclusion to real non-linear networks (see, e.g., Basri et al. (2019); Li et al. (2020) for78 complementary accounts).79 2. Slower convergence with random labels. Zhang et al. (2016) showed that neural networks80 can learn any label assignment. However, training with random label assignments is known to81 converge slower as compared to training with the original labels (Krueger et al., 2017). We report a82 similar phenomenon when training deep linear networks. Our analysis shows that when the principal83 eigenvectors are correlated with class identity, as is often the case in natural images, the loss decreases84 faster when given true label assignments as against random label assignments. In Section 5 we85 investigate this hypothesis empirically in linear and non-linear networks.86 3. Weight initialization. Different weight initialization schemes have been proposed to stabilize the87 learning and minimize the hazard of "exploding gradients" (e.g., Glorot & Bengio, 2010; He et al.,88 2015). Our analysis (see Suppl. §A) identifies a related variant, which eliminates the hazard when89 all the hidden layers are roughly of equal width. In the deep linear model, it can be proven that the90 proposed normalization variant in a sense minimizes repeated gradient amplification.91 2 Theoretical analysis92 Notations. Let X = {(xi,yi)}ni=1 denote the training data, where x ∈ Rq denotes the i-th data93 point and y ∈ {0, 1}K its corresponding label. Let 1nimi denote the centroid (mean) of class i with94 ni points, and M = [m1 . . .mK ]>. Finally, let X and Y denote the matrices whose ith column95 is xi and yi respectively. ΣXX = XX> and ΣY X = Y X> denote the covariance matrix of X96 and cross-covariance of X and Y respectively. We note that ΣXX captures the structure of the data97 irrespective of class identity.98 Definition 1 (Principal coordinate system). The coordinate system obtained by rotating the data in Rq99 by an orthonormal matrixU>, where SV D(ΣXX)=UDU>. 
Now ΣXX =D, a diagnoal matrix whose100 elements are the singular values of XX>, arranged in decreasing order d1 ≥ d2 ≥ . . . ≥ dq ≥ 0.101 Definition 2 (Compact representation). Let f(x) denote a deep linear network. Then f(x) =102 (∏1 l=LWl ) x = Wx, where W ∈ RK×q is called the compact representation of the network.103 Definition 3 (Error matrix). For a deep linear network whose compact representation is W , the104 error matrix is Er = WΣXX − ΣY X . In the principal coordinate system, Er = WD −M .105 Assumptions. Our analysis assumes that the learning rate µ is infinitesimal, and therefore terms106 of size O(µ2) can be neglected. We further assume that the width of the hidden layers lies in107 [m,m+Mb], wherem→∞ denotes a very large number and Mb is fixed. Thus terms of size O( 1m )108 can also be neglected. In Fig. 1 we show the plausibility of these assumptions, where the predicted109 dynamics is seen throughout the training of deep linear networks, even for small values ofm.110 2.1 The dynamics of deep over-parametrized linear networks111 Consider a deep linear network with L layers, and let112 L(X) = 1 2 ‖WX − Y ‖2F W := 1∏ l=L Wl, Wl ∈ Rml×ml−1 (1) Above ml denotes the number of neurons in layer l, where m0 = q and mL = K.113 Theorem 1. In each time point s, the compact matrix representation W obeys the following dynamics,114 when using the notation Ers defined in Def. 3:115 W s+1 = W s − µ L∑ l=1 Asl · Ers ·Bsl +O(µ2) (2) Above µ denotes the learning rate. Asl and B s l are called gradient scale matrices, and are defined as116 Asl := ( l+1∏ j=L W sj )( l+1∏ j=L W sj )> ∈ RK×K Bsl := ( 1∏ j=l−1 W sj )>( 1∏ j=l−1 W sj ) ∈ Rq×q (3) The proof can be found in Suppl. §B.117 Gradient scale matrices. Some statistical properties of such matrices are established in Suppl. §A.118 Note that when the number of hidden layers is 0 (L = 1), both gradient scale matrices reduce to the119 identity matrix and the dynamics in (2) is reduced to the following known result (e.g., Le Cun et al.,120 1991): W s+1 = W s−µErs. Recall, however, that the focus of this paper is the over-parameterized121 linear model with L > 1, in which the loss is not convex. Since the difference between the convex122 linear model and the over-parametrized deep model boils down to these matrices, our convergence123 analysis henceforth focuses on the dynamics of the gradient scale matrices.124 In accordance, we analyze the evolution of the gradient scale matrices as learning proceeds. Let125 m = min (m1, ...,mL−1) denote the size of the smallest hidden layer. Initially for s = 0, all weight126 matrices W 0l are assumed to be initialized by sampling from a distribution with mean 0 and variance127 σ2l = O( 1 m ). The specific normalization factor, alluded to in O( 1 m ), is a variant of the Glorot128 initialization. Details and justification can be found in Suppl. §A.1.129 At time s, letAsl (m) andB s l (m) denote a sequence of random gradient scale matrices, corresponding130 to networks whose smallest hidden layer hasm neurons. From Suppl. §A we deduce that:131 Theorem 2. Using p−→ to denote convergence in probability asm→∞, and ∀s, l:132 Bsl (m) p−→ I, var[Bl(m)] = O ( 1 m ) Asl (m) p−→ I, var[Al(m)] = O ( 1 m ) Proof. Proof by induction on s. Initially when s = 0, the claim follows from Thm 4 and Corr 5.1.133 The induction step validity follows from Thm 6 and Thm 7 (see Suppl. §A.2).134 The detailed proof shows that the relevant constants are amplified with s. 
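As a quick numerical illustration of this behaviour at initialization, the sketch below builds a wide deep linear network with the variance-O(1/m) initialization discussed above and measures how far the gradient scale matrices of Theorem 1 are from the identity; the specific widths are illustrative, and the deviations shrink as m grows.

```python
import numpy as np

def grad_scale_matrices(Ws, l):
    """Gradient scale matrices of Thm 1 for layer l (1-indexed).
    Ws = [W_1, ..., W_L], with W_j of shape (m_j, m_{j-1})."""
    K, q = Ws[-1].shape[0], Ws[0].shape[1]
    above = np.eye(K)
    for Wj in reversed(Ws[l:]):        # W_L ... W_{l+1}
        above = above @ Wj
    below = np.eye(q)
    for Wj in Ws[:l - 1]:              # builds W_{l-1} ... W_1
        below = Wj @ below
    return above @ above.T, below.T @ below      # A_l (K x K), B_l (q x q)

rng = np.random.default_rng(0)
q, K, m, L = 20, 5, 1024, 4
widths = [q] + [m] * (L - 1) + [K]
Ws = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(d_out, d_in))        # variance O(1/m)
      for d_in, d_out in zip(widths[:-1], widths[1:])]
for l in range(1, L + 1):
    A_l, B_l = grad_scale_matrices(Ws, l)
    print(l, np.abs(A_l - np.eye(K)).max(), np.abs(B_l - np.eye(q)).max())
```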
While they remain moderate135 andm is sufficiently large, Bsl (m) ≈ I and Asl (m) ≈ I ∀l. In this case, the dynamics of the over-136 parameterized model is identical to the dynamics of the convex linear model, W s+1 = W s − µErs.137 Convergence rate. In §A.2 we show that the convergence of Bsl (m) to I is governed to some extent138 by O ( K m ) , while the convergence of Asl (m) is governed by O ( q m ) . Recall that while m → ∞,139 q is the dimension of the data space which is fixed in advance and can be fairly large, while K is140 the number of classes which is fixed and quite small. Typically, K q. Thus we expect the right141 gradient scale matrices Bsl (m) to remain approximately I much longer than the left matrices A s l (m).142 Empirical validation. Since the results above are asymptotic, and to envision the difference between143 convergence governed by O ( K m ) vs. O ( q m ) , we resort to simulations whose results are shown in144 Fig. 1. These empirical results, recounting linear networks with 4 hidden layers of width 1024, clearly145 show that during a significant part of the training both gradient scale matrices remain approximately146 I . The difference between the convergence rate of Bsl and A s l is seen later on, when ∆A s l starts to147 increase shortly before convergence, while ∆Bsl remains essentially 0 throughout.148 2.2 Weight evolution149 K q entails that Bsl (m) remains approximately equal to I much longer than Asl (m). This is150 substantiated by the simulation results in Fig. 1. Consequently, while earlier on it is safe to assume151 that both Asl ≈ I and Bsl ≈ I , as learning proceeds only Bsl ≈ I is safe to assume.152 With this in mind, we obtain expressions for the evolution of W s separately for earlier and later in153 learning. We first shift to the principal coordinate system defined in Def 1. In this system we can154 analyze each column of W s separately, where wsj and mj denote the respective columns of W s and155 M . At the beginning of learning when both Asl ≈ I and Bsl ≈ I (see §B.3 for a detailed derivation):156 ws+1j = (λj) sw0j + [1− (λj)s] mj dj λj = 1− µdjL (4) 157 Eq. 4 is reminiscent of the well understood dynamics of training the convex one layer linear model. It158 is composed of two additive terms, revealing two parallel and independent processes:159 1. The dependence on random initialization tends to 0 exponentially with decline rate λj .160 2. The final value is the sum of a geometrical series with a common ratio λj .161 In either case, convergence is fastest for the largest singular eigenvalue, or the first column of W ,162 and slowest for the smallest singular value. This behavior is visualized in Fig. 2a. Importantly, the163 rate of convergence depends on the singular value dj , the number of layers L, and the learning rate µ.164 In later stages of learning, when we can only assume that Bsl ≈ I , the dynamic becomes:165 ws+1j = s∏ ν=1 (I − µdjAν)w0j + µ [ s∑ ν=1 s∏ ρ=ν+1 (I − µdjAρ)Aν ] mj (5) where As = ∑L l=1A s l . The proof is provided in §B.3. Although the dynamics now depends on166 matrices As as well, it is still the case that the convergence of each column is governed by its singular167 value dj . This suggests that while the PC-bias is more pronounced in earlier stages of learning, its168 effect persists throughout.169 The analysis above is extended to a simple non-linear ReLU model (cf. Arora et al., 2019) as detailed170 in §B.2, with qualitatively similar results (albeit under unrealistic assumptions). 
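The early-phase dynamics of Eq. (4) is simple enough to simulate directly. The sketch below uses illustrative values of the learning rate, depth, and singular values; it only restates the closed form, not the full training procedure.

```python
import numpy as np

def early_dynamics(w0, m_j, d_j, mu=0.01, L=5, steps=300):
    """Per-column dynamics of Eq. (4) while A_l, B_l are still close to I:
    w^s = lambda^s * w^0 + (1 - lambda^s) * m_j / d_j,  with lambda = 1 - mu * d_j * L."""
    lam = 1.0 - mu * d_j * L
    w, traj = w0, []
    for _ in range(steps):
        traj.append(w)
        w = lam * w + (1.0 - lam) * (m_j / d_j)   # unrolls to the closed form above
    return np.array(traj)

# Columns associated with larger singular values converge faster (illustrative d_j's):
fast = early_dynamics(w0=1.0, m_j=2.0, d_j=5.0)   # lambda = 0.75
slow = early_dynamics(w0=1.0, m_j=2.0, d_j=0.5)   # lambda = 0.975
```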
Empirical results,171 shown in Fig. 2b, indicate that the results are indicative beyond the assumed circumstances.172 3 PC-bias: empirical study173 In this section, we first analyze deep linear networks, showing that the convergence rate is indeed174 governed by the principal singular values of the data, which demonstrates the plausibility of the175 assumptions made in Section 2. We continue by extending the scope of the investigation to non-linear176 neural networks, finding there evidence for the PC-bias mostly in the earlier stages of learning.177 3.1 Methodology178 We say that a linear network is L-layered when it has L − 1 hidden fully connected (FC) layers179 (without convolutional layers). In our empirical study we relaxed some assumptions of the theoretical180 study, in order to increase the resemblance of the trained networks to networks in common use.181 Specifically, we changed the initialization to the commonly used Glorot initialization, replaced the182 L2 loss with the cross-entropy loss, and employed SGD instead of the deterministic GD. Notably,183 the original assumptions yielded similar results. The results presented summarize experiments with184 networks of equal width across all hidden layers, specifically the moderate value of m = 1024,185 keeping in mind that we test the relevance of asymptotic results form→∞. Using a different width186 for each layer yielded similar qualitative results. Details regarding the hyper-parameters, architectures,187 and datasets can be found in §D.1, §D.3 and §D.4 respectively.188 3.2 PC-bias in deep linear networks189 In this section, we train L-layered linear networks, then compute their compact representations190 W rotated to align with the canonical coordinate system (Def. 1). Note that each row wr in W191 essentially defines the one-vs-all separating hyper-plane corresponding to class r.192 To examine both the variability between models and their convergence rate, we inspect wr at different193 time points during learning. The rate of convergence can be measured directly, by observing the194 changes in the weights of each element in wr. These weight values1 should be compared with195 the optimal values in each row wr of Wopt = Y XT (XXT ). The variability between models is196 measured by calculating the standard deviation (std) of each wr across N models.197 We begin with linear networks. We trained 10 5-layered FC linear networks, and 10 linear st-VGG198 convolutional networks. When analyzing the compact representation of such networks we observe199 similar behavior – weights corresponding to larger principal components converge faster to the200 optimal value, and their variability across models converges faster to 0 (Figs. 3a,3b). Thus, while the201 theoretical results are asymptotic, PC-bias is empirically seen throughout the entire learning process202 of deep linear networks.203 Whitened data. The PC-bias is neutralized when the data is whitened, at which point ΣXX is the204 scaled identity matrix. In Fig. 3c, we plot the results of the same experimental protocol while using a205 ZCA-whitened dataset. As predicted, the networks no longer show any bias towards any principal206 direction. Weights in all directions are scaled similarly, and the std over all models is the same in207 each epoch, irrespective of the principal direction. 
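The ZCA whitening used for this control can be sketched in a few lines; the small regularizer eps and the flattened-image data layout are implementation assumptions.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X (n_samples x n_features). After the transform the
    covariance is approximately the identity, so all principal directions share the
    same singular value and the PC-bias is neutralised."""
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)                       # cov = U diag(S) U^T
    W_zca = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W_zca
```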
(Additional experiments show that this is not an208 artifact of the lack of uniqueness when deriving the principal components of a white signal).209 1We note that the weights tend to start larger for smaller principal components, as can be seen in Fig. 3a left. 3.3 PC-bias in general CNNs210 In this section, we investigate the manifestation of the PC-bias in non-linear deep convolutional211 networks. As we cannot directly track the learning dynamics separately in each principal direction of212 non-linear networks, we adopt two different evaluation mechanisms:213 Linear approximation. We considered several linear approximations, but since all of them showed214 the same qualitative behavior, we report results with the simplest one. Specifically, to obtain a linear215 approximation of a non-linear network, without max-pooling or batch-normalization layers, we216 follow the definition of the compact representation from Section 2 while ignoring any non-linear217 activation. We then align this matrix with the canonical coordinate system (Def. 1), and observe the218 evolution of the weights and their std across models along the principal directions during learning.219 Note that now the networks do not converge to the same compact representation, which is not unique.220 Nevertheless, we see that the PC-bias governs the weight dynamics to a noticeable extent.221 More specifically, in these networks a large fraction of the lowest principal components hardly changes222 during learning, as good as being ignored. Nevertheless, the PC-bias affects the higher principal223 components, most notably at the beginning of training (see Fig. 3d). Thus weights corresponding to224 higher principal components converge faster, and the std across models of such weights decreases225 faster for higher principal components.226 Projection to higher PC’s. We created a modified test-set, by project-227 ing each test example on the span of the first P principal components.228 This is equivalent to reducing the dimensionality of the test set to P us-229 ing PCA. We trained an ensemble of N=100 st-VGG networks on the230 original small mammals training set, then evaluated these networks dur-231 ing training on 4 versions of the test-set, reduced to P=1,10,100,1000232 dimensions respectively. Mean accuracy is plotted in Fig. 4. Similar233 results are obtained when training VGG-19 networks on CIFAR-10,234 see §C.3.235 Taking a closer look at Fig. 4, we see that when evaluated on lower236 dimensionality test-data (P=1,10), the networks’ accuracy peaks after237 a few epochs, at which point performance starts to decrease. This result suggests that the networks238 rely more heavily on these dimensions in the earlier phases of learning, and then continue to learn239 other things. In contrast, when evaluated on higher dimensionality test-data (P=100,1000), accuracy240 continues to rise, longer so for larger P . This suggests that significant learning of the additional241 dimensions continues in later stages of the learning.242 4 PC-bias: Learning Order Constancy243 In this section, we show that the PC-bias is significantly correlated with the learning order of deep244 neural networks, and can therefore partially account for the LOC-effect described in Section 1.245 Following Hacohen et al. (2020), we measure the "speed of learning" of each example by computing246 its accessibility score. This score is given per example, and characterizes how fast an ensemble of247 N networks learns it. 
Formally, accessibility(x) = E [1(fei (x) = y(x))], where fei (x) denotes248 the outcome of the i-th network trained over e epochs, and the mean is taken over networks and249 epochs. For the set of datapoints {(xj ,yj)}nj=1, Learning Order Constancy is manifested by the high250 correlation between 2 instances of accessibility(x), each computed from a different ensemble.251 PC-bias is shown to pertain to LOC in two ways: First, in Section 4.1 we show high correlation252 between the learning order in deep linear and non-linear networks. Since the PC-bias fully accounts253 for LOC in deep linear networks, this suggests it also accounts (at least partially) for the observed254 LOC in non-linear networks. Comparison with the critical principal component verifies this assertion.255 Second, we show in Section 4.2 that when the PC-bias is neutralized, LOC diminishes as well. In256 Section 4.3 we discuss the relationship between the spectral bias, PC-bias and the LOC-effect.257 4.1 PC-Bias is correlated with LOC258 We first compare the order of learning of non-linear models and deep linear networks by computing259 the correlation between the accessibility scores of both models. This comparison reveals high260 correlation (r = 0.85, p < 10−45), as seen in Fig. 5a. To investigate directly the connection between261 the PC-bias and LOC, we define the critical principal component of an example to be the first262 principal component P , such that a linear classifier trained on the original data can classify the263 example correctly when projected to P principal components. We trained N=100 st-VGG networks264 on the cats and dogs dataset, and computed for each example its accessibility score and critical265 principal component. In Fig. 5b we see strong negative correlation between the two scores (p=−0.93,266 r<10−4), suggesting that the PC-bias affects the order of learning as measured by accessibility.267 4.2 Neutralizing the PC-bias leads to diminishing LOC268 Whitening the data eliminates the PC-bias as shown in Fig. 3c, since all the singular values are now269 identical. Here we use this observation to further probe into the dependency of the Learning Order270 Constancy on the PC-bias. Starting with the linear case, we train 4 ensembles of N=10 2-layered271 linear networks on the cats and dogs dataset, 2 with and 2 without ZCA-whitening. We compute the272 accessibility score for each ensemble separately, and correlate the scores of the 2 ensembles in each273 test case. Each correlation captures the consistency of the LOC-effect for the respective condition.274 This correlation is expected to be very high for natural images. Low correlation implies that the275 LOC-effect is weak, as training the same network multiple times yields a different learning order.276 2As non-linear models achieve the accuracy of linear models within an epoch or 2, low learning rate is used. Fig. 6a shows the results for deep linear networks. As expected, the correlation when using natural277 images is very high. However, when using whitened images, correlation plummets, indicating that278 the LOC-effect is highly dependent on the PC-bias. We note that the drop in the correlation is much279 higher when considering only the 20% "fastest learned" examples, suggesting that the PC-bias affects280 learning order more evidently at earlier stages of learning.281 Fig. 6b shows the results when repeating this experiment with non-linear networks, training 2282 collections of N=10 VGG-19 networks on CIFAR-10. 
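The accessibility score defined at the beginning of this section, and the between-ensemble correlation used to quantify the LOC-effect, can be computed as in the following sketch; the array layout and the choice of Pearson r as the correlation statistic are assumptions consistent with, but not dictated by, the text.

```python
import numpy as np

def accessibility(preds, labels):
    """preds: (n_networks, n_epochs, n_examples) predicted labels of an ensemble across
    training; returns the per-example score E[1(f_i^e(x) = y(x))], averaged over
    networks and epochs."""
    return (preds == labels[None, None, :]).mean(axis=(0, 1))

def loc_correlation(preds_a, preds_b, labels):
    """Learning Order Constancy: correlation between the accessibility scores induced
    by two independently trained ensembles."""
    a, b = accessibility(preds_a, labels), accessibility(preds_b, labels)
    return np.corrcoef(a, b)[0, 1]
```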
We find that the elimination of the PC-bias283 in this case affects LOC much less, suggesting that the PC-bias can only partially account for the284 LOC-effect in the non-linear case. However, we note that at the beginning of learning, when the285 PC-bias is most pronounced, once again the drop is much larger and very significant (half).286 4.3 Spectral bias, PC-bias and LOC287 The spectral bias (Rahaman et al., 2019) characterizes the dynamics of learning in neural networks288 differently, asserting that initially neural models can be described by low frequencies only. This may289 provide an alternative explanation to LOC. Recall that LOC is manifested in the consistency of the290 accessibility score across networks. To compare between the spectral bias and accessibility score,291 we first need to estimate for each example whether it can be correctly classified by a low frequency292 model. Accordingly, we define for each example a discriminability measure – the percentage out293 of its k neighbors that share with it class identity. Intuitively, an example has a low discriminability294 score when it is surrounded by examples from other classes, which forces the learned boundary to295 incorporate high frequencies. In §C.2 we show that in the 2D case analyzed by Rahaman et al. (2019),296 this measure strongly correlates (r=−0.8, p < 10−2) with the spectral bias.297 We trained several networks (VGG-19 and st-VGG) on several real datasets, including small-298 mammals, STL-10, CIFAR-10/100 and a subset of ImageNet-20. For each network and dataset,299 we computed the accessibility score as well as the discriminability of each example. The vector300 space, in which discriminability is evaluated, is either the raw data or the network’s perceptual space301 (penultimate layer activation). The correlation between these scores is shown in Table 1.302 Using raw data, low correlation is still seen between the accessibility and discriminability scores303 when inspecting the smaller datasets (small mammals, CIFAR-100 and STL10). This correlation304 vanishes when considering the larger ImageNet-20 dataset. It would appear that on its own, the305 spectral bias cannot adequately explain the LOC-effect. On the other hand, in the perceptual space,306 the correlation between discriminability and accessibility is quite significant for all datasets. Contrary307 to our supposition, it seems that networks learn a representation where the spectral bias is evident,308 but this bias does not necessarily govern its learning before the representation has been learned.309 5 PC-bias: further implications310 Early Stopping and the Generalization Gap. Considering natural images, it is often assumed that311 the least significant principal components of the data represent noise (Torralba & Oliva, 2003). In312 such cases, our analysis predicts that as noise dominates the components learned later in learning,313 early stopping is likely to be beneficial. To test this hypothesis directly, we manipulated CIFAR-10314 to amplify the signal in either the 1.5% most significant (higher) or 1.5% least significant (lower)315 principal components (see examples in Fig. 16, Suppl. §D). Accuracy over the original test set,316 after training 10 st-VGG and linear st-VGG networks on these manipulated images, can be seen317 in Fig. 7. 
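A minimal sketch of the manipulation just described, rescaling the signal that lies in the most or least significant principal components, is given below; the amplification factor is an assumption, since the text only states which 1.5% of the components are amplified.

```python
import numpy as np

def amplify_pcs(X, frac=0.015, which="low", gain=5.0):
    """Boost the part of each sample lying in the top ('high') or bottom ('low')
    `frac` of the principal components; `gain` is an illustrative choice."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)     # principal directions as rows
    k = max(1, int(frac * Vt.shape[0]))
    rows = np.arange(k) if which == "high" else np.arange(Vt.shape[0] - k, Vt.shape[0])
    V = Vt[rows].T                                        # (d, k) selected directions
    return X + (gain - 1.0) * (Xc @ V) @ V.T              # amplify the selected subspace
```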
Both in linear and non-linear networks, early stopping is more beneficial when lower318 principal components are amplified, and significantly less so when higher components are amplified,319 as predicted by the PC-bias.320 Slower Convergence with Random Labels. Deep neural models can learn any random label321 assignment to a given training set (Zhang et al., 2016). However, when trained on randomly labeled322 data, convergence appears to be much slower (Krueger et al., 2017). Assume, as before, that in natural323 images the lower principal components are dominated by noise. We argue that the PC-bias now324 predicts this empirical result, since learning randomly labeled examples requires signal present in325 lower principal components. To test this hypothesis directly, we trained 10 2-layered linear networks326 on datasets of natural images. Indeed, these networks converge slower with random labels (see327 Fig. 8a). In Fig. 8b we repeat this experiment after having whitened the images, to neutralize the328 PC-bias. Now convergence rate is identical, whether the labels are original or shuffled. Clearly, in329 deep linear networks the PC-bias gives a full account of this phenomenon.330 To further check the relevance of this account to non-linear networks, we artificially generate datasets331 where only the first P principal components are discriminative, while the remaining components332 become noise by design. We constructed two such datasets: in one the labels are correlated with the333 original labels, in the other they are not. Specifically, PCA is used to reduce the dimensionality of a334 two-class dataset to P , and the optimal linear separator in the reduced representation is computed.335 Next, all the labels of points that are incorrectly classified by the optimal linear separator are switched,336 so that the train and test sets are linearly separable by this separator. Note that the modified labels337 are still highly correlated with the original labels (for P = 500: p = 0.82, r < 10−10). The338 second dataset is generated by repeating the process while starting from randomly shuffled labels.339 This dataset is likewise fully separable when projected to the first P components, but its labels are340 uncorrelated with the original labels (for P = 500: p = 0.06, r < 10−10).341 The mean training accuracy of 10 non-linear networks with P=10,50,500 is plotted in Fig. 9a (first342 dataset) and Fig. 9b (second dataset). In both cases, the lower P is (namely, only the first few principal343 components are discriminative), the faster the data is learned by the non-linear network. Whether the344 labels are real or shuffled makes little qualitative difference, as predicted by the PC-bias.345 6 Summary and discussion346 When trained with gradient descent, the convergence rate of the over-parameterized deep linear347 network model is provably governed by the eigendecomposition of the data, and specifically, pa-348 rameters corresponding to the most significant principal components converge faster than the least349 significant components. Empirical evidence is provided for the relevance of these results to more350 realistic non-linear networks. We term this effect PC-bias. 
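For reference, the label construction used in Section 5 above, in which only the first P principal components are discriminative, can be sketched as follows; a least-squares classifier stands in for the "optimal linear separator" of the text, which is an assumption.

```python
import numpy as np

def make_p_separable(X, y, P):
    """Relabel a two-class dataset (y in {0, 1}) so that it is linearly separable
    using only its first P principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = np.c_[Xc @ Vt[:P].T, np.ones(len(X))]            # reduce to P dims (+ bias)
    w, *_ = np.linalg.lstsq(Z, 2.0 * y - 1.0, rcond=None)
    pred = (Z @ w > 0).astype(int)
    # Switching every misclassified label is, for two classes, the same as adopting the
    # separator's own labels, so the result is separable by construction.
    return pred

# Correlated variant: start from the original labels.
# Uncorrelated variant: start from a random permutation, e.g. rng.permutation(y).
```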
This result provides a complementary351 account for some prevalent empirical observations, including the benefit of early stopping and the352 slower convergence rate with shuffled labels.353 We use the PC-bias to explicate the Learning Order Constancy (LOC), showing that examples354 learned at earlier stages are more distinguishable by the higher principal components, demonstrating355 that networks’ training relies more heavily on higher principal components early on. A causal link356 between the PC-bias and the LOC-effect is demonstrated, as the LOC-effect diminishes when the357 PC-bias is eliminated by whitening the images. We analyze these findings in view of a related358 phenomenon termed spectral bias. While the PC-bias may be more prominent early on, the spectral359 bias may be more important in later stages of learning.360 References361 Allen-Zhu, Z., Li, Y., and Liang, Y. Learning and generalization in overparameterized neural362 networks, going beyond two layers. arXiv preprint arXiv:1811.04918, 2018.363 Arora, S., Cohen, N., and Hazan, E. On the optimization of deep networks: Implicit acceleration by364 overparameterization. In International Conference on Machine Learning, pp. 244–253, 2018.365 Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generaliza-366 tion for overparameterized two-layer neural networks. In International Conference on Machine367 Learning, pp. 322–332, 2019.368 Basri, R., Jacobs, D. W., Kasten, Y., and Kritchman, S. The convergence rate of neural networks for369 learned functions of different frequencies. In Advances in Neural Information Processing Systems,370 pp. 4761–4771, 2019.371 Basri, R., Galun, M., Geifman, A., Jacobs, D., Kasten, Y., and Kritchman, S. Frequency bias in neural372 networks for input of non-uniform density. In International Conference on Machine Learning, pp.373 685–694. PMLR, 2020.374 Cao, Y., Fang, Z., Wu, Y., Zhou, D.-X., and Gu, Q. Towards understanding the spectral bias of deep375 learning. arXiv preprint arXiv:1912.01198, 2019.376 Dingle, K., Camargo, C. Q., and Louis, A. A. Input–output maps are strongly biased towards simple377 outputs. Nature communications, 9(1):1–7, 2018.378 Fukumizu, K. Effect of batch learning in multilayer neural networks. Gen, 1(04):1E–03, 1998.379 Gissin, D., Shalev-Shwartz, S., and Daniely, A. The implicit bias of depth: How incremental learning380 drives generalization. arXiv preprint arXiv:1909.12051, 2019.381 Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks.382 In Proceedings of the thirteenth international conference on artificial intelligence and statistics,383 pp. 249–256, 2010.384 Gunasekar, S., Lee, J., Soudry, D., and Srebro, N. Implicit bias of gradient descent on linear385 convolutional networks. arXiv preprint arXiv:1806.00468, 2018.386 Hacohen, G., Choshen, L., and Weinshall, D. Let’s agree to agree: Neural networks share classification387 order on real datasets. In International Conference on Machine Learning, pp. 3950–3960. PMLR,388 2020.389 He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level390 performance on imagenet classification. In Proceedings of the IEEE International Conference on391 Computer Vision (ICCV), December 2015.392 Heckel, R. and Soltanolkotabi, M. Denoising and regularization via exploiting the structural bias of393 convolutional generators. arXiv preprint arXiv:1910.14634, 2019.394 Hu, W., Xiao, L., Adlam, B., and Pennington, J. 
References
Allen-Zhu, Z., Li, Y., and Liang, Y. Learning and generalization in overparameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918, 2018.
Arora, S., Cohen, N., and Hazan, E. On the optimization of deep networks: Implicit acceleration by overparameterization. In International Conference on Machine Learning, pp. 244–253, 2018.
Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning, pp. 322–332, 2019.
Basri, R., Jacobs, D. W., Kasten, Y., and Kritchman, S. The convergence rate of neural networks for learned functions of different frequencies. In Advances in Neural Information Processing Systems, pp. 4761–4771, 2019.
Basri, R., Galun, M., Geifman, A., Jacobs, D., Kasten, Y., and Kritchman, S. Frequency bias in neural networks for input of non-uniform density. In International Conference on Machine Learning, pp. 685–694. PMLR, 2020.
Cao, Y., Fang, Z., Wu, Y., Zhou, D.-X., and Gu, Q. Towards understanding the spectral bias of deep learning. arXiv preprint arXiv:1912.01198, 2019.
Dingle, K., Camargo, C. Q., and Louis, A. A. Input–output maps are strongly biased towards simple outputs. Nature Communications, 9(1):1–7, 2018.
Fukumizu, K. Effect of batch learning in multilayer neural networks. Gen, 1(04):1E–03, 1998.
Gissin, D., Shalev-Shwartz, S., and Daniely, A. The implicit bias of depth: How incremental learning drives generalization. arXiv preprint arXiv:1909.12051, 2019.
Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
Gunasekar, S., Lee, J., Soudry, D., and Srebro, N. Implicit bias of gradient descent on linear convolutional networks. arXiv preprint arXiv:1806.00468, 2018.
Hacohen, G., Choshen, L., and Weinshall, D. Let's agree to agree: Neural networks share classification order on real datasets. In International Conference on Machine Learning, pp. 3950–3960. PMLR, 2020.
He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), December 2015.
Heckel, R. and Soltanolkotabi, M. Denoising and regularization via exploiting the structural bias of convolutional generators. arXiv preprint arXiv:1910.14634, 2019.
Hu, W., Xiao, L., Adlam, B., and Pennington, J. The surprising simplicity of the early-time learning dynamics of neural networks. arXiv preprint arXiv:2006.14599, 2020.
Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks, 2020.
Krueger, D., Ballas, N., Jastrzebski, S., Arpit, D., Kanwal, M. S., Maharaj, T., Bengio, E., Fischer, A., and Courville, A. Deep nets don't learn via memorization. 2017.
Le Cun, Y., Kanter, I., and Solla, A. S. Second order properties of error surfaces: learning time and generalization. Advances in Neural Information Processing Systems, 3:918–924, 1991.
Li, M., Soltanolkotabi, M., and Oymak, S. Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 4313–4324. PMLR, 2020.
Nakkiran, P., Kaplun, G., Kalimeris, D., Yang, T., Edelman, B. L., Zhang, F., and Barak, B. SGD on neural networks learns functions of increasing complexity. arXiv preprint arXiv:1905.11604, 2019.
Pliushch, I., Mundt, M., Lupp, N., and Ramesh, V. When deep classifiers agree: Analyzing correlations between learning order and image statistics. arXiv preprint arXiv:2105.08997, 2021.
Rahaman, N., Baratin, A., Arpit, D., Draxler, F., Lin, M., Hamprecht, F., Bengio, Y., and Courville, A. On the spectral bias of neural networks. In International Conference on Machine Learning, pp. 5301–5310. PMLR, 2019.
Saxe, A. M., McClelland, J. L., and Ganguli, S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
Saxe, A. M., McClelland, J. L., and Ganguli, S. A mathematical theory of semantic development in deep neural networks. Proceedings of the National Academy of Sciences, 116(23):11537–11546, 2019.
Shah, H., Tamuly, K., Raghunathan, A., Jain, P., and Netrapalli, P. The pitfalls of simplicity bias in neural networks. arXiv preprint arXiv:2006.07710, 2020.
Simoncelli, E. P. and Olshausen, B. A. Natural image statistics and neural representation. Annual Review of Neuroscience, 24(1):1193–1216, 2001.
Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S., and Srebro, N. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018.
Torralba, A. and Oliva, A. Statistics of natural image categories. Network: Computation in Neural Systems, 14(3):391–412, 2003.
Ulyanov, D., Vedaldi, A., and Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454, 2018.
Valle-Perez, G., Camargo, C. Q., and Louis, A. A. Deep learning generalizes because the parameter-function map is biased towards simple functions. arXiv preprint arXiv:1805.08522, 2018.
Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] See the "Assumptions" paragraph in Section 2.
(b) Did you include complete proofs of all theoretical results? [Yes] Each theorem references its proof. Proofs can be found in Suppl. §A, B.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] All data, instructions and hyper-parameters are explicitly written in the main paper and/or in the Suppl. (see §D.4). The code itself will be provided once anonymity is lifted.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus of the paper regarding deep linear networks?
2. What are the strengths of the paper, particularly in terms of theoretical analysis and empirical results?
3. Are there any limitations or weaknesses in the paper, such as the scope of the study or connections to prior works?
4. How does the reviewer assess the novelty and significance of the proposed PC-bias convergence pattern and its relation to the LOC-effect?
5. What are some interesting observations made by the authors in their empirical studies?
Summary Of The Paper Review
Summary Of The Paper
This work studies the training dynamics of over-parameterized deep linear networks. The authors propose the Principal Components bias (PC-bias) convergence pattern, which characterizes the convergence behavior of deep linear network training and is supported by theoretical analysis and empirical results. This work also investigates the Learning Order Constancy effect (LOC-effect) and identifies the connection between the PC-bias and the LOC-effect. Empirically, the authors also study several implications of the PC-bias, including early stopping and convergence behavior under label noise, and provide interesting observations.

Review
This paper identifies several interesting phenomena for deep (non-)linear neural networks, including the effect of whitened data and the connections between the PC-bias, LOC-effect, and spectral bias.

Pros:
- Theoretically, this paper provides precise characterizations of the training dynamics of deep linear networks in Section 2.
- Empirically, the authors investigate the proposed PC-bias in several settings, as well as its connections to the LOC-effect and the spectral bias.

Cons:
- The theoretical and empirical results are somewhat limited to simple linear models, and some of the interesting observations have been studied in previous works; for example, [1] studied the dynamics of deep linear network training and found that the 'networks sequentially learn the solutions of a reduced-rank regression with a gradually increasing rank'.
- The connection between whitened data, label noise, and convergence speed is interesting; it would be better to precisely characterize this phenomenon in a lemma/theorem by making some reasonable assumptions.

[1] Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks. Gauthier Gidel, Francis Bach, and Simon Lacoste-Julien, NeurIPS 2019.
NIPS
1. What is the focus of the paper regarding deep over-parametrized linear networks under gradient descent?
2. What are the strengths and weaknesses of the paper's theoretical analysis, particularly Thm 1 and Thm 2?
3. How does the reviewer assess the novelty of the proof techniques used in the paper compared to standard over-parametrized literature?
4. What suggestions does the reviewer have for improving the paper's organization and content, such as rearranging section 2.2 and including nonlinear case results?
5. What concerns does the reviewer have regarding the empirical study, specifically the changes made in the experimental setup?
6. Does the reviewer think that the paper's contribution is sufficient for a top-tier conference?
Summary Of The Paper Review
Summary Of The Paper
The paper studies the evolution of a deep over-parametrized linear network under gradient descent. Its main claim is that the convergence rate of the weights is faster along directions corresponding to the larger principal components of the data, at a rate governed by the singular values. The paper also supports this argument with an extensive experimental study.

Review
The paper touches on an important question about the inductive bias of neural nets, a problem of paramount importance to the deep learning community. Moreover, the motivation of the results is clearly explained in the paper. However, I'm not sure whether the theoretical side is strong enough to justify a new publication. Specifically:
- Thm 1 is a direct calculation of the update at each iteration, so its contribution is limited. Moreover, it seems to me like a minor modification of the update presented in "Width Provably Matters in Optimization for Deep Linear Neural Networks" by Du and Hu.
- Thm 2 (which is the main theoretical claim of the paper) presents a new result as far as I know. However, I'm not sure about its novelty given the techniques used in the standard over-parametrized literature (for example "Width Provably Matters in Optimization for Deep Linear Neural Networks" by Du and Hu, or "Gradient descent finds global minima of deep neural networks" by Du et al.). I'll be happy if the authors can address the novelty of their proof techniques and compare them to standard lazy-training techniques.
- I would suggest rearranging Section 2.2 into formal theorems where the conditions and results are rigorously organized. Moreover, separating more accurately between the two phases of learning might add to the paper's contribution.
- I think it is important to include the results of the nonlinear case (Section B.2 in the appendix) in the main paper, although additional assumptions are required. These results might hint that the phenomena presented in the paper also occur outside the linear-network world.
- Regarding the empirical side, I think that the empirical study is impressive and definitely contributes to the paper. My main concern is why the settings were changed in the empirical part (specifically the loss and initialization). I suspect that the theory doesn't hold in practice, which might weaken the paper.
To summarize my review, I didn't find major flaws in the paper. In my opinion, each part alone (the empirical and the theoretical) is not enough for a new publication. However, since both parts are combined, I find the total contribution of the paper to be borderline for a top-tier conference.
NIPS
Title Principal Components Bias in Deep Neural Networks Abstract Recent work suggests that convolutional neural networks of different architectures 1 learn to classify images in the same order. To understand this phenomenon, we 2 revisit the over-parametrized deep linear network model. Our asymptotic analysis, 3 assuming that the hidden layers are wide enough, reveals that the convergence rate 4 of this model’s parameters is exponentially faster along directions corresponding 5 to the larger principal components of the data, at a rate governed by the singular 6 values. We term this convergence pattern the Principal Components bias (PC-bias). 7 We show how the PC-bias streamlines the order of learning of both linear and non8 linear networks, more prominently at earlier stages of learning. We then compare 9 our results to the spectral bias, showing that both biases can be seen independently, 10 and affect the order of learning in different ways. Finally, we discuss how the 11 PC-bias may explain some benefits of early stopping and its connection to PCA, 12 and why deep networks converge more slowly when given random labels. 13 N/A Recent work suggests that convolutional neural networks of different architectures1 learn to classify images in the same order. To understand this phenomenon, we2 revisit the over-parametrized deep linear network model. Our asymptotic analysis,3 assuming that the hidden layers are wide enough, reveals that the convergence rate4 of this model’s parameters is exponentially faster along directions corresponding5 to the larger principal components of the data, at a rate governed by the singular6 values. We term this convergence pattern the Principal Components bias (PC-bias).7 We show how the PC-bias streamlines the order of learning of both linear and non-8 linear networks, more prominently at earlier stages of learning. We then compare9 our results to the spectral bias, showing that both biases can be seen independently,10 and affect the order of learning in different ways. Finally, we discuss how the11 PC-bias may explain some benefits of early stopping and its connection to PCA,12 and why deep networks converge more slowly when given random labels.13 1 Introduction14 The dynamics of learning in deep neural networks is an intriguing subject, not yet sufficiently15 understood. Diverse empirical data seems to support the hypothesis that neural networks start by16 learning a simple model, which then gains complexity as learning proceeds (Gunasekar et al., 2018;17 Soudry et al., 2018; Hu et al., 2020; Nakkiran et al., 2019; Gissin et al., 2019; Heckel & Soltanolkotabi,18 2019; Ulyanov et al., 2018; Valle-Perez et al., 2018). This phenomenon is sometimes called simplicity19 bias (Dingle et al., 2018; Shah et al., 2020).20 Recent work additionally shows that neural networks learn the training examples of natural datasets21 in a consistent order, and further impose a consistent order on the test set (Hacohen et al., 2020;22 Pliushch et al., 2021). Below we call this effect Learning Order Constancy (LOC). Currently, the23 characteristics of visual data, which may explain this consistently imposed order, remain unclear.24 Surprisingly, this universal order persists despite the variability introduced into the training of different25 models and architectures.26 To understand this phenomenon, we start by analyzing the deep linear network model (Saxe et al.,27 2013, 2019), defined by the concatenation of linear operators. 
While not a universal approximator, it is nevertheless trained by minimizing a non-convex objective function with a multitude of minima. The investigation of such networks is often employed to shed light on the learning dynamics when complex geometric landscapes are explored by GD (Fukumizu, 1998; Arora et al., 2018).
In Section 2, we prove that the convergence of the weights of deep linear networks is governed by the eigendecomposition of the raw data, in a phenomenon we term PC-bias. These asymptotic results, valid when the hidden layers are wide enough, can be seen as an extension of the known behavior of the single-layer convex linear model (Le Cun et al., 1991). Our work is closely related to (Saxe et al., 2013, 2019), where the deep linear model's dynamics is analyzed as a function of the input and input-output statistics. Importantly, the analysis in (Saxe et al., 2013, 2019; Arora et al., 2018) incorporates the simplifying assumption that the data's singular values are identical (whitened data), an assumption which unfortunately obscures the main result of our analysis – the direct dependence of convergence rate on the singular values of the data.
In Section 3, we empirically show that this pattern of convergence is indeed observed in deep linear networks, validating the plausibility of our assumptions. We continue by showing that the LOC-effect in deep linear networks is determined solely by their PC-bias. We prove a similar (weaker) result for the non-linear two-layer ReLU model introduced by Allen-Zhu et al. (2018), where this model is presented as a certain extension of NTK (Jacot et al., 2020). In this framework, convergence is fastest along the kernel's largest principal components, a result related to the spectral bias discussed below.
In Section 4, we extend the study empirically to non-linear networks, and investigate the relation between the PC-bias and the LOC-effect in general deep networks. We first show that the order by which examples are learned by linear networks is highly correlated with the order induced by prevalent deep CNN models. We then show directly that the learning order of non-linear CNN models is affected by the principal decomposition of the data. Moreover, the LOC-effect diminishes when data is whitened, indicating a tight connection between the PC-bias and the LOC-effect.
Our results are reminiscent of another phenomenon, termed spectral bias (Rahaman et al., 2019; Cao et al., 2019), which associates the learning dynamics of neural networks with the Fourier decomposition of functions in the hypothesis space. Rahaman et al. (2019) empirically demonstrated that the complexity of classifiers learned by ReLU networks increases with time. Basri et al. (2019, 2020) showed theoretically, by way of analyzing elementary neural network models, that these models first fit the data with low-frequency functions, and gradually add higher frequencies to improve the fit. Nevertheless, the spectral bias and PC-bias are inherently different. Indeed, the eigendecomposition of raw images is closely related to the Fourier analysis of images as long as the statistical properties of images are (approximately) translation-invariant (Simoncelli & Olshausen, 2001; Torralba & Oliva, 2003). Still, the PC-bias is guided by spectral properties of the raw data and is additionally blind to class labels.
On the other hand, the spectral bias, as well as the related frequency bias that has been shown to characterize NTK models (Basri et al., 2020), are all guided by spectral properties of the learned hypothesis, which strongly depends on label assignment.
In Section 4.3 we investigate the relation between the PC-bias, spectral bias, and the LOC-effect. We find that the LOC-effect is very robust: (i) when we neutralize the spectral bias by using low complexity models such as deep linear networks, the effect is still observed; (ii) when we neutralize the PC-bias by using whitened data, the LOC-effect persists. We hypothesize that at the beginning of learning, the learning dynamics of neural models is controlled by the eigendecomposition of the raw data. As learning proceeds, control of the dynamics slowly shifts to other factors.
The PC-bias has implications beyond the LOC-effect, as expanded in Section 5 and Suppl. §A:
1. Early stopping. It is often observed that when training deep networks with real data, the highest generalization accuracy is obtained before convergence. Consequently, early stopping is often prescribed to improve generalization. Following the commonly used assumption that in natural images the lowest principal components correspond to noise (Torralba & Oliva, 2003), our results predict the benefits of early stopping, and relate it to PCA. In Section 5 we investigate the relevance of this conclusion to real non-linear networks (see, e.g., Basri et al. (2019); Li et al. (2020) for complementary accounts).
2. Slower convergence with random labels. Zhang et al. (2016) showed that neural networks can learn any label assignment. However, training with random label assignments is known to converge slower as compared to training with the original labels (Krueger et al., 2017). We report a similar phenomenon when training deep linear networks. Our analysis shows that when the principal eigenvectors are correlated with class identity, as is often the case in natural images, the loss decreases faster when given true label assignments as against random label assignments. In Section 5 we investigate this hypothesis empirically in linear and non-linear networks.
3. Weight initialization. Different weight initialization schemes have been proposed to stabilize the learning and minimize the hazard of "exploding gradients" (e.g., Glorot & Bengio, 2010; He et al., 2015). Our analysis (see Suppl. §A) identifies a related variant, which eliminates the hazard when all the hidden layers are roughly of equal width. In the deep linear model, it can be proven that the proposed normalization variant in a sense minimizes repeated gradient amplification.

2 Theoretical analysis
Notations. Let $X = \{(x_i, y_i)\}_{i=1}^{n}$ denote the training data, where $x_i \in \mathbb{R}^q$ denotes the $i$-th data point and $y_i \in \{0,1\}^K$ its corresponding label. Let $\frac{1}{n_i} m_i$ denote the centroid (mean) of class $i$ with $n_i$ points, and $M = [m_1 \ldots m_K]^\top$. Finally, let $X$ and $Y$ denote the matrices whose $i$-th column is $x_i$ and $y_i$ respectively. $\Sigma_{XX} = XX^\top$ and $\Sigma_{YX} = YX^\top$ denote the covariance matrix of $X$ and the cross-covariance of $X$ and $Y$ respectively. We note that $\Sigma_{XX}$ captures the structure of the data irrespective of class identity.
Definition 1 (Principal coordinate system). The coordinate system obtained by rotating the data in $\mathbb{R}^q$ by an orthonormal matrix $U^\top$, where $SVD(\Sigma_{XX}) = U D U^\top$.
Now $\Sigma_{XX} = D$, a diagonal matrix whose elements are the singular values of $XX^\top$, arranged in decreasing order $d_1 \ge d_2 \ge \ldots \ge d_q \ge 0$.
Definition 2 (Compact representation). Let $f(x)$ denote a deep linear network. Then $f(x) = \left(\prod_{l=L}^{1} W_l\right) x = Wx$, where $W \in \mathbb{R}^{K \times q}$ is called the compact representation of the network.
Definition 3 (Error matrix). For a deep linear network whose compact representation is $W$, the error matrix is $Er = W\Sigma_{XX} - \Sigma_{YX}$. In the principal coordinate system, $Er = WD - M$.
Assumptions. Our analysis assumes that the learning rate $\mu$ is infinitesimal, and therefore terms of size $O(\mu^2)$ can be neglected. We further assume that the width of the hidden layers lies in $[m, m + M_b]$, where $m \to \infty$ denotes a very large number and $M_b$ is fixed. Thus terms of size $O(\frac{1}{m})$ can also be neglected. In Fig. 1 we show the plausibility of these assumptions, where the predicted dynamics is seen throughout the training of deep linear networks, even for small values of $m$.

2.1 The dynamics of deep over-parametrized linear networks
Consider a deep linear network with $L$ layers, and let
$$\mathcal{L}(X) = \frac{1}{2}\,\|WX - Y\|_F^2 \qquad W := \prod_{l=L}^{1} W_l,\quad W_l \in \mathbb{R}^{m_l \times m_{l-1}} \qquad (1)$$
Above $m_l$ denotes the number of neurons in layer $l$, where $m_0 = q$ and $m_L = K$.
Theorem 1. At each time point $s$, the compact matrix representation $W$ obeys the following dynamics, using the notation $Er^s$ defined in Def. 3:
$$W^{s+1} = W^s - \mu \sum_{l=1}^{L} A_l^s \cdot Er^s \cdot B_l^s + O(\mu^2) \qquad (2)$$
Above $\mu$ denotes the learning rate. $A_l^s$ and $B_l^s$ are called gradient scale matrices, and are defined as
$$A_l^s := \left(\prod_{j=L}^{l+1} W_j^s\right)\left(\prod_{j=L}^{l+1} W_j^s\right)^{\!\top} \in \mathbb{R}^{K \times K} \qquad B_l^s := \left(\prod_{j=l-1}^{1} W_j^s\right)^{\!\top}\left(\prod_{j=l-1}^{1} W_j^s\right) \in \mathbb{R}^{q \times q} \qquad (3)$$
The proof can be found in Suppl. §B.
Gradient scale matrices. Some statistical properties of such matrices are established in Suppl. §A. Note that when the number of hidden layers is 0 ($L = 1$), both gradient scale matrices reduce to the identity matrix and the dynamics in (2) is reduced to the following known result (e.g., Le Cun et al., 1991): $W^{s+1} = W^s - \mu\, Er^s$. Recall, however, that the focus of this paper is the over-parameterized linear model with $L > 1$, in which the loss is not convex. Since the difference between the convex linear model and the over-parametrized deep model boils down to these matrices, our convergence analysis henceforth focuses on the dynamics of the gradient scale matrices.
In accordance, we analyze the evolution of the gradient scale matrices as learning proceeds. Let $m = \min(m_1, \ldots, m_{L-1})$ denote the size of the smallest hidden layer. Initially, for $s = 0$, all weight matrices $W_l^0$ are assumed to be initialized by sampling from a distribution with mean 0 and variance $\sigma_l^2 = O(\frac{1}{m})$. The specific normalization factor, alluded to in $O(\frac{1}{m})$, is a variant of the Glorot initialization. Details and justification can be found in Suppl. §A.1.
At time $s$, let $A_l^s(m)$ and $B_l^s(m)$ denote a sequence of random gradient scale matrices, corresponding to networks whose smallest hidden layer has $m$ neurons. From Suppl. §A we deduce that:
Theorem 2. Using $\xrightarrow{p}$ to denote convergence in probability as $m \to \infty$, and $\forall s, l$:
$$B_l^s(m) \xrightarrow{p} I,\quad \mathrm{var}[B_l(m)] = O\!\left(\tfrac{1}{m}\right) \qquad A_l^s(m) \xrightarrow{p} I,\quad \mathrm{var}[A_l(m)] = O\!\left(\tfrac{1}{m}\right)$$
Proof. By induction on $s$. Initially, when $s = 0$, the claim follows from Thm 4 and Corr 5.1. The validity of the induction step follows from Thm 6 and Thm 7 (see Suppl. §A.2).
The detailed proof shows that the relevant constants are amplified with $s$.
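To make the role of the gradient scale matrices concrete, the following minimal numerical sketch (not taken from the paper) builds randomly initialized deep linear networks and measures how far $A_l^0$ and $B_l^0$ are from the identity as the hidden width grows. The per-layer standard deviation $1/\sqrt{m}$ is our reading of the $O(1/m)$ initialization variance; the data dimension, class count and depth below are assumed values for illustration only.

```python
# Sketch: gradient scale matrices A_l, B_l of Eq. (3) at initialization (s = 0).
import numpy as np

def gradient_scale_matrices(weights):
    """Return the lists A_l and B_l of Eq. (3) for weight matrices W_1..W_L."""
    L = len(weights)
    A, B = [], []
    for l in range(1, L + 1):
        # prefix product W_{l-1} ... W_1 (identity when l = 1)
        pre = np.eye(weights[0].shape[1])
        for j in range(l - 1):
            pre = weights[j] @ pre
        # suffix product W_L ... W_{l+1} (identity when l = L)
        post = np.eye(weights[-1].shape[0])
        for j in range(L - 1, l - 1, -1):
            post = post @ weights[j]
        A.append(post @ post.T)
        B.append(pre.T @ pre)
    return A, B

rng = np.random.default_rng(0)
q, K, L = 100, 10, 5                                   # data dim, #classes, depth (assumed)
for m in (64, 256, 1024, 2048):
    dims = [q] + [m] * (L - 1) + [K]
    Ws = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(dims[l + 1], dims[l])) for l in range(L)]
    A, B = gradient_scale_matrices(Ws)
    dev_A = max(np.abs(a - np.eye(a.shape[0])).max() for a in A)
    dev_B = max(np.abs(b - np.eye(b.shape[0])).max() for b in B)
    print(f"m={m:5d}  max|A_l - I| = {dev_A:.3f}   max|B_l - I| = {dev_B:.3f}")
```

With this initialization, the printed deviations shrink as $m$ grows, consistent with the $s=0$ case of Theorem 2.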
While these constants remain moderate and $m$ is sufficiently large, $B_l^s(m) \approx I$ and $A_l^s(m) \approx I$ for all $l$. In this case, the dynamics of the over-parameterized model is identical to the dynamics of the convex linear model, $W^{s+1} = W^s - \mu\, Er^s$.
Convergence rate. In §A.2 we show that the convergence of $B_l^s(m)$ to $I$ is governed to some extent by $O\left(\frac{K}{m}\right)$, while the convergence of $A_l^s(m)$ is governed by $O\left(\frac{q}{m}\right)$. Recall that while $m \to \infty$, $q$ is the dimension of the data space, which is fixed in advance and can be fairly large, while $K$ is the number of classes, which is fixed and quite small. Typically, $K \ll q$. Thus we expect the right gradient scale matrices $B_l^s(m)$ to remain approximately $I$ much longer than the left matrices $A_l^s(m)$.
Empirical validation. Since the results above are asymptotic, and to envision the difference between convergence governed by $O\left(\frac{K}{m}\right)$ vs. $O\left(\frac{q}{m}\right)$, we resort to simulations whose results are shown in Fig. 1. These empirical results, obtained with linear networks with 4 hidden layers of width 1024, clearly show that during a significant part of the training both gradient scale matrices remain approximately $I$. The difference between the convergence rates of $B_l^s$ and $A_l^s$ is seen later on, when $\Delta A_l^s$ starts to increase shortly before convergence, while $\Delta B_l^s$ remains essentially 0 throughout.

2.2 Weight evolution
$K \ll q$ entails that $B_l^s(m)$ remains approximately equal to $I$ much longer than $A_l^s(m)$. This is substantiated by the simulation results in Fig. 1. Consequently, while earlier on it is safe to assume that both $A_l^s \approx I$ and $B_l^s \approx I$, as learning proceeds only $B_l^s \approx I$ is safe to assume.
With this in mind, we obtain expressions for the evolution of $W^s$ separately for earlier and later stages of learning. We first shift to the principal coordinate system defined in Def. 1. In this system we can analyze each column of $W^s$ separately, where $w_j^s$ and $m_j$ denote the respective columns of $W^s$ and $M$. At the beginning of learning, when both $A_l^s \approx I$ and $B_l^s \approx I$ (see §B.3 for a detailed derivation):
$$w_j^{s+1} = (\lambda_j)^s\, w_j^0 + \left[1 - (\lambda_j)^s\right]\frac{m_j}{d_j} \qquad \lambda_j = 1 - \mu\, d_j\, L \qquad (4)$$
Eq. 4 is reminiscent of the well understood dynamics of training the convex one-layer linear model. It is composed of two additive terms, revealing two parallel and independent processes:
1. The dependence on the random initialization tends to 0 exponentially, with decline rate $\lambda_j$.
2. The final value is the sum of a geometric series with common ratio $\lambda_j$.
In either case, convergence is fastest for the largest singular value, i.e. the first column of $W$, and slowest for the smallest singular value. This behavior is visualized in Fig. 2a. Importantly, the rate of convergence depends on the singular value $d_j$, the number of layers $L$, and the learning rate $\mu$.
In later stages of learning, when we can only assume that $B_l^s \approx I$, the dynamics becomes:
$$w_j^{s+1} = \prod_{\nu=1}^{s}\left(I - \mu\, d_j A^{\nu}\right) w_j^0 + \mu \left[\sum_{\nu=1}^{s} \prod_{\rho=\nu+1}^{s}\left(I - \mu\, d_j A^{\rho}\right) A^{\nu}\right] m_j \qquad (5)$$
where $A^s = \sum_{l=1}^{L} A_l^s$. The proof is provided in §B.3. Although the dynamics now depends on the matrices $A^s$ as well, it is still the case that the convergence of each column is governed by its singular value $d_j$. This suggests that while the PC-bias is more pronounced in earlier stages of learning, its effect persists throughout.
The analysis above is extended to a simple non-linear ReLU model (cf. Arora et al., 2019), as detailed in §B.2, with qualitatively similar results (albeit under unrealistic assumptions).
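The early-phase dynamics of Eq. (4) can be simulated directly. The sketch below is illustrative only: the singular values, depth and learning rate are made-up numbers, not values from the paper. It tracks three columns of $W$ and shows that the column associated with the largest $d_j$ reaches its asymptotic value $m_j/d_j$ first.

```python
# Illustrative simulation of Eq. (4): w^{s+1} = lambda*w^s + (1 - lambda)*m/d,
# with lambda_j = 1 - mu*d_j*L (all numbers below are assumed).
import numpy as np

mu, L = 1e-3, 5                               # learning rate and depth (assumed)
d = np.array([100.0, 10.0, 1.0])              # three singular values, largest to smallest
m_col = np.array([1.0, 1.0, 1.0])             # corresponding entries of M (scalar case, K = 1)
w = np.full(3, 0.3)                           # initial weights
lam = 1.0 - mu * d * L
target = m_col / d

for s in range(1, 1001):
    w = lam * w + (1.0 - lam) * target        # one gradient step, equivalent to Eq. (4)
    if s in (10, 100, 1000):
        rel_err = np.abs(w - target) / np.abs(target)
        print(f"step {s:4d}  relative error (largest to smallest d): {np.round(rel_err, 4)}")
```

At every printed step, the relative error is smallest for the largest singular value and largest for the smallest one, mirroring the ordering predicted by Eq. (4).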
Empirical results for this model, shown in Fig. 2b, indicate that the analysis is indicative beyond the assumed circumstances.

3 PC-bias: empirical study
In this section, we first analyze deep linear networks, showing that the convergence rate is indeed governed by the principal singular values of the data, which demonstrates the plausibility of the assumptions made in Section 2. We continue by extending the scope of the investigation to non-linear neural networks, finding there evidence for the PC-bias mostly in the earlier stages of learning.

3.1 Methodology
We say that a linear network is L-layered when it has $L - 1$ hidden fully connected (FC) layers (without convolutional layers). In our empirical study we relaxed some assumptions of the theoretical study, in order to increase the resemblance of the trained networks to networks in common use. Specifically, we changed the initialization to the commonly used Glorot initialization, replaced the L2 loss with the cross-entropy loss, and employed SGD instead of the deterministic GD. Notably, the original assumptions yielded similar results. The results presented summarize experiments with networks of equal width across all hidden layers, specifically the moderate value of $m = 1024$, keeping in mind that we test the relevance of asymptotic results for $m \to \infty$. Using a different width for each layer yielded similar qualitative results. Details regarding the hyper-parameters, architectures, and datasets can be found in §D.1, §D.3 and §D.4 respectively.

3.2 PC-bias in deep linear networks
In this section, we train L-layered linear networks, then compute their compact representations $W$, rotated to align with the canonical coordinate system (Def. 1). Note that each row $w_r$ in $W$ essentially defines the one-vs-all separating hyper-plane corresponding to class $r$.
To examine both the variability between models and their convergence rate, we inspect $w_r$ at different time points during learning. The rate of convergence can be measured directly, by observing the changes in the weights of each element in $w_r$. These weight values¹ should be compared with the optimal values in each row $w_r$ of $W_{opt} = YX^\top (XX^\top)^{-1}$. The variability between models is measured by calculating the standard deviation (std) of each $w_r$ across $N$ models.
We begin with linear networks. We trained 10 5-layered FC linear networks, and 10 linear st-VGG convolutional networks. When analyzing the compact representation of such networks we observe similar behavior – weights corresponding to larger principal components converge faster to the optimal value, and their variability across models converges faster to 0 (Figs. 3a, 3b). Thus, while the theoretical results are asymptotic, the PC-bias is seen empirically throughout the entire learning process of deep linear networks.
Whitened data. The PC-bias is neutralized when the data is whitened, at which point $\Sigma_{XX}$ is the scaled identity matrix. In Fig. 3c, we plot the results of the same experimental protocol while using a ZCA-whitened dataset. As predicted, the networks no longer show any bias towards any principal direction. Weights in all directions are scaled similarly, and the std over all models is the same in each epoch, irrespective of the principal direction.
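For reference, a minimal ZCA-whitening sketch of the kind used here to neutralize the PC-bias is given below. This is an assumed implementation (the paper's exact preprocessing, e.g. its centering and regularization constant, may differ); after whitening, the data covariance has approximately identical singular values, so no principal direction is preferred.

```python
# Minimal ZCA whitening sketch (assumed implementation, for illustration only).
import numpy as np

def zca_whiten(X, eps=1e-5):
    """X: (q, n) data matrix with one example per column; returns the whitened data."""
    Xc = X - X.mean(axis=1, keepdims=True)            # center each feature
    cov = Xc @ Xc.T / Xc.shape[1]                     # Sigma_XX (up to scale)
    U, d, _ = np.linalg.svd(cov)                      # cov = U diag(d) U^T
    W_zca = U @ np.diag(1.0 / np.sqrt(d + eps)) @ U.T
    return W_zca @ Xc

rng = np.random.default_rng(0)
# toy data whose principal components have very different variances
X = rng.normal(size=(50, 10000)) * np.linspace(10.0, 0.1, 50)[:, None]
Xw = zca_whiten(X)
d_w = np.linalg.svd(Xw @ Xw.T / Xw.shape[1], compute_uv=False)
print("largest / smallest singular value after whitening:", round(d_w[0] / d_w[-1], 3))
```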
Additional experiments show that this uniform behavior is not an artifact of the lack of uniqueness when deriving the principal components of a white signal.
¹ We note that the weights tend to start larger for smaller principal components, as can be seen in Fig. 3a, left.

3.3 PC-bias in general CNNs
In this section, we investigate the manifestation of the PC-bias in non-linear deep convolutional networks. As we cannot directly track the learning dynamics separately in each principal direction of non-linear networks, we adopt two different evaluation mechanisms:
Linear approximation. We considered several linear approximations, but since all of them showed the same qualitative behavior, we report results with the simplest one. Specifically, to obtain a linear approximation of a non-linear network, without max-pooling or batch-normalization layers, we follow the definition of the compact representation from Section 2 while ignoring any non-linear activation. We then align this matrix with the canonical coordinate system (Def. 1), and observe the evolution of the weights and their std across models along the principal directions during learning. Note that now the networks do not converge to the same compact representation, which is not unique. Nevertheless, we see that the PC-bias governs the weight dynamics to a noticeable extent.
More specifically, in these networks a large fraction of the lowest principal components hardly changes during learning, as good as being ignored. Nevertheless, the PC-bias affects the higher principal components, most notably at the beginning of training (see Fig. 3d). Thus weights corresponding to higher principal components converge faster, and the std across models of such weights decreases faster for higher principal components.
Projection to higher PC's. We created a modified test set, by projecting each test example on the span of the first P principal components. This is equivalent to reducing the dimensionality of the test set to P using PCA. We trained an ensemble of N=100 st-VGG networks on the original small-mammals training set, then evaluated these networks during training on 4 versions of the test set, reduced to P=1, 10, 100, 1000 dimensions respectively. Mean accuracy is plotted in Fig. 4. Similar results are obtained when training VGG-19 networks on CIFAR-10, see §C.3.
Taking a closer look at Fig. 4, we see that when evaluated on lower dimensionality test data (P=1, 10), the networks' accuracy peaks after a few epochs, at which point performance starts to decrease. This result suggests that the networks rely more heavily on these dimensions in the earlier phases of learning, and then continue to learn other things. In contrast, when evaluated on higher dimensionality test data (P=100, 1000), accuracy continues to rise, longer so for larger P. This suggests that significant learning of the additional dimensions continues in later stages of the learning.

4 PC-bias: Learning Order Constancy
In this section, we show that the PC-bias is significantly correlated with the learning order of deep neural networks, and can therefore partially account for the LOC-effect described in Section 1. Following Hacohen et al. (2020), we measure the "speed of learning" of each example by computing its accessibility score. This score is given per example, and characterizes how fast an ensemble of N networks learns it.
Formally, $accessibility(x) = \mathbb{E}\left[\mathbb{1}\left(f_i^e(x) = y(x)\right)\right]$, where $f_i^e(x)$ denotes the outcome of the $i$-th network trained over $e$ epochs, and the mean is taken over networks and epochs. For the set of datapoints $\{(x_j, y_j)\}_{j=1}^{n}$, Learning Order Constancy is manifested by the high correlation between 2 instances of $accessibility(x)$, each computed from a different ensemble.
The PC-bias is shown to pertain to LOC in two ways. First, in Section 4.1 we show high correlation between the learning order in deep linear and non-linear networks. Since the PC-bias fully accounts for LOC in deep linear networks, this suggests it also accounts (at least partially) for the observed LOC in non-linear networks. Comparison with the critical principal component verifies this assertion. Second, we show in Section 4.2 that when the PC-bias is neutralized, LOC diminishes as well. In Section 4.3 we discuss the relationship between the spectral bias, PC-bias and the LOC-effect.

4.1 PC-Bias is correlated with LOC
We first compare the order of learning of non-linear models and deep linear networks by computing the correlation between the accessibility scores of both models. This comparison reveals high correlation ($r = 0.85$, $p < 10^{-45}$), as seen in Fig. 5a. To investigate directly the connection between the PC-bias and LOC, we define the critical principal component of an example to be the first principal component P such that a linear classifier trained on the original data can classify the example correctly when projected to P principal components. We trained N=100 st-VGG networks on the cats and dogs dataset, and computed for each example its accessibility score and critical principal component. In Fig. 5b we see a strong negative correlation between the two scores ($r = -0.93$, $p < 10^{-4}$), suggesting that the PC-bias affects the order of learning as measured by accessibility.

4.2 Neutralizing the PC-bias leads to diminishing LOC
Whitening the data eliminates the PC-bias, as shown in Fig. 3c, since all the singular values are now identical. Here we use this observation to further probe into the dependency of the Learning Order Constancy on the PC-bias. Starting with the linear case, we train 4 ensembles of N=10 2-layered linear networks on the cats and dogs dataset, 2 with and 2 without ZCA-whitening. We compute the accessibility score for each ensemble separately, and correlate the scores of the 2 ensembles in each test case. Each correlation captures the consistency of the LOC-effect for the respective condition. This correlation is expected to be very high for natural images. Low correlation implies that the LOC-effect is weak, as training the same network multiple times yields a different learning order.
² As non-linear models achieve the accuracy of linear models within an epoch or 2, a low learning rate is used.
Fig. 6a shows the results for deep linear networks. As expected, the correlation when using natural images is very high. However, when using whitened images, the correlation plummets, indicating that the LOC-effect is highly dependent on the PC-bias. We note that the drop in the correlation is much larger when considering only the 20% "fastest learned" examples, suggesting that the PC-bias affects learning order more evidently at earlier stages of learning.
Fig. 6b shows the results when repeating this experiment with non-linear networks, training 2 collections of N=10 VGG-19 networks on CIFAR-10.
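For concreteness, the LOC measurement used in this section can be sketched as follows: compute per-example accessibility scores for two independent ensembles and correlate them. This is an assumed implementation built on synthetic predictions, not the authors' code.

```python
# Sketch: accessibility(x) = mean over networks i and epochs e of 1[f_i^e(x) == y(x)];
# the LOC-effect is the correlation of these scores across two independent ensembles.
import numpy as np
from scipy.stats import pearsonr

def accessibility(preds, labels):
    """preds: (num_networks, num_epochs, num_examples) predicted labels; labels: (num_examples,)."""
    return (preds == labels[None, None, :]).mean(axis=(0, 1))

# toy stand-in for two ensembles' recorded predictions on the same 1000 test examples
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
easy = rng.random(1000) < 0.5                       # pretend half the examples are "easy"

def fake_ensemble():
    p_correct = np.where(easy, 0.9, 0.3)            # easy examples are answered correctly more often
    correct = rng.random((10, 5, 1000)) < p_correct
    return np.where(correct, labels, (labels + 1) % 10)

score_a = accessibility(fake_ensemble(), labels)
score_b = accessibility(fake_ensemble(), labels)
r, p = pearsonr(score_a, score_b)                   # high r indicates Learning Order Constancy
print(f"LOC correlation between the two ensembles: r={r:.2f}, p={p:.1e}")
```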
We find that the elimination of the PC-bias in this case affects LOC much less, suggesting that the PC-bias can only partially account for the LOC-effect in the non-linear case. However, we note that at the beginning of learning, when the PC-bias is most pronounced, once again the drop is much larger and very significant (half).

4.3 Spectral bias, PC-bias and LOC
The spectral bias (Rahaman et al., 2019) characterizes the dynamics of learning in neural networks differently, asserting that initially neural models can be described by low frequencies only. This may provide an alternative explanation to LOC. Recall that LOC is manifested in the consistency of the accessibility score across networks. To compare between the spectral bias and the accessibility score, we first need to estimate for each example whether it can be correctly classified by a low-frequency model. Accordingly, we define for each example a discriminability measure – the percentage of its k neighbors that share with it class identity. Intuitively, an example has a low discriminability score when it is surrounded by examples from other classes, which forces the learned boundary to incorporate high frequencies. In §C.2 we show that in the 2D case analyzed by Rahaman et al. (2019), this measure strongly correlates ($r = -0.8$, $p < 10^{-2}$) with the spectral bias.
We trained several networks (VGG-19 and st-VGG) on several real datasets, including small-mammals, STL-10, CIFAR-10/100 and a subset of ImageNet-20. For each network and dataset, we computed the accessibility score as well as the discriminability of each example. The vector space in which discriminability is evaluated is either the raw data or the network's perceptual space (penultimate layer activation). The correlation between these scores is shown in Table 1.
Using raw data, low correlation is still seen between the accessibility and discriminability scores when inspecting the smaller datasets (small-mammals, CIFAR-100 and STL-10). This correlation vanishes when considering the larger ImageNet-20 dataset. It would appear that on its own, the spectral bias cannot adequately explain the LOC-effect. On the other hand, in the perceptual space, the correlation between discriminability and accessibility is quite significant for all datasets. Contrary to our supposition, it seems that networks learn a representation in which the spectral bias is evident, but this bias does not necessarily govern their learning before the representation has been learned.

5 PC-bias: further implications
Early Stopping and the Generalization Gap. Considering natural images, it is often assumed that the least significant principal components of the data represent noise (Torralba & Oliva, 2003). In such cases, our analysis predicts that as noise dominates the components learned later in learning, early stopping is likely to be beneficial. To test this hypothesis directly, we manipulated CIFAR-10 to amplify the signal in either the 1.5% most significant (higher) or 1.5% least significant (lower) principal components (see examples in Fig. 16, Suppl. §D). Accuracy over the original test set, after training 10 st-VGG and linear st-VGG networks on these manipulated images, can be seen in Fig. 7.
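A sketch of the PC-amplification manipulation just described is given below. The 1.5% fraction follows the text above; the amplification factor, the stand-in data and everything else are assumed for illustration and may differ from the paper's exact procedure.

```python
# Sketch: amplify the signal carried by a chosen band of principal components.
import numpy as np

def amplify_pcs(X, frac=0.015, which="lowest", alpha=10.0):
    """X: (q, n) data, one flattened image per column. Multiplies the chosen band
    of principal components by `alpha` and maps the data back to pixel space."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    U, _, _ = np.linalg.svd(Xc @ Xc.T / Xc.shape[1])   # columns of U = principal directions
    k = max(1, int(round(frac * X.shape[0])))
    idx = np.arange(X.shape[0] - k, X.shape[0]) if which == "lowest" else np.arange(k)
    scale = np.ones(X.shape[0])
    scale[idx] = alpha
    Z = U.T @ Xc                                       # coordinates in the principal system
    return U @ (scale[:, None] * Z) + mean

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5000))       # small stand-in; real CIFAR-10 images would be 3*32*32-dim
X_low = amplify_pcs(X, which="lowest")                 # amplify least significant components
X_high = amplify_pcs(X, which="highest")               # amplify most significant components
print(X_low.shape, X_high.shape)
```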
Both in linear and non-linear networks, early stopping is more beneficial when the lower principal components are amplified, and significantly less so when the higher components are amplified, as predicted by the PC-bias.
Slower Convergence with Random Labels. Deep neural models can learn any random label assignment to a given training set (Zhang et al., 2016). However, when trained on randomly labeled data, convergence appears to be much slower (Krueger et al., 2017). Assume, as before, that in natural images the lower principal components are dominated by noise. We argue that the PC-bias now predicts this empirical result, since learning randomly labeled examples requires signal present in the lower principal components. To test this hypothesis directly, we trained 10 2-layered linear networks on datasets of natural images. Indeed, these networks converge slower with random labels (see Fig. 8a). In Fig. 8b we repeat this experiment after having whitened the images, to neutralize the PC-bias. Now the convergence rate is identical, whether the labels are original or shuffled. Clearly, in deep linear networks the PC-bias gives a full account of this phenomenon.
To further check the relevance of this account to non-linear networks, we artificially generate datasets where only the first P principal components are discriminative, while the remaining components become noise by design. We constructed two such datasets: in one the labels are correlated with the original labels, in the other they are not. Specifically, PCA is used to reduce the dimensionality of a two-class dataset to P, and the optimal linear separator in the reduced representation is computed. Next, all the labels of points that are incorrectly classified by the optimal linear separator are switched, so that the train and test sets are linearly separable by this separator. Note that the modified labels are still highly correlated with the original labels (for P = 500: $r = 0.82$, $p < 10^{-10}$). The second dataset is generated by repeating the process while starting from randomly shuffled labels. This dataset is likewise fully separable when projected to the first P components, but its labels are uncorrelated with the original labels (for P = 500: $r = 0.06$, $p < 10^{-10}$).
The mean training accuracy of 10 non-linear networks with P=10, 50, 500 is plotted in Fig. 9a (first dataset) and Fig. 9b (second dataset). In both cases, the lower P is (namely, only the first few principal components are discriminative), the faster the data is learned by the non-linear network. Whether the labels are real or shuffled makes little qualitative difference, as predicted by the PC-bias.

6 Summary and discussion
When trained with gradient descent, the convergence rate of the over-parameterized deep linear network model is provably governed by the eigendecomposition of the data, and specifically, parameters corresponding to the most significant principal components converge faster than those corresponding to the least significant components. Empirical evidence is provided for the relevance of these results to more realistic non-linear networks. We term this effect PC-bias.
This result provides a complementary account for some prevalent empirical observations, including the benefit of early stopping and the slower convergence rate with shuffled labels.
We use the PC-bias to explicate the Learning Order Constancy (LOC), showing that examples learned at earlier stages are more distinguishable by the higher principal components, demonstrating that networks' training relies more heavily on higher principal components early on. A causal link between the PC-bias and the LOC-effect is demonstrated, as the LOC-effect diminishes when the PC-bias is eliminated by whitening the images. We analyze these findings in view of a related phenomenon termed spectral bias. While the PC-bias may be more prominent early on, the spectral bias may be more important in later stages of learning.

References
Allen-Zhu, Z., Li, Y., and Liang, Y. Learning and generalization in overparameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918, 2018.
Arora, S., Cohen, N., and Hazan, E. On the optimization of deep networks: Implicit acceleration by overparameterization. In International Conference on Machine Learning, pp. 244–253, 2018.
Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning, pp. 322–332, 2019.
Basri, R., Jacobs, D. W., Kasten, Y., and Kritchman, S. The convergence rate of neural networks for learned functions of different frequencies. In Advances in Neural Information Processing Systems, pp. 4761–4771, 2019.
Basri, R., Galun, M., Geifman, A., Jacobs, D., Kasten, Y., and Kritchman, S. Frequency bias in neural networks for input of non-uniform density. In International Conference on Machine Learning, pp. 685–694. PMLR, 2020.
Cao, Y., Fang, Z., Wu, Y., Zhou, D.-X., and Gu, Q. Towards understanding the spectral bias of deep learning. arXiv preprint arXiv:1912.01198, 2019.
Dingle, K., Camargo, C. Q., and Louis, A. A. Input–output maps are strongly biased towards simple outputs. Nature Communications, 9(1):1–7, 2018.
Fukumizu, K. Effect of batch learning in multilayer neural networks. Gen, 1(04):1E–03, 1998.
Gissin, D., Shalev-Shwartz, S., and Daniely, A. The implicit bias of depth: How incremental learning drives generalization. arXiv preprint arXiv:1909.12051, 2019.
Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
Gunasekar, S., Lee, J., Soudry, D., and Srebro, N. Implicit bias of gradient descent on linear convolutional networks. arXiv preprint arXiv:1806.00468, 2018.
Hacohen, G., Choshen, L., and Weinshall, D. Let's agree to agree: Neural networks share classification order on real datasets. In International Conference on Machine Learning, pp. 3950–3960. PMLR, 2020.
He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), December 2015.
Heckel, R. and Soltanolkotabi, M. Denoising and regularization via exploiting the structural bias of convolutional generators. arXiv preprint arXiv:1910.14634, 2019.
Hu, W., Xiao, L., Adlam, B., and Pennington, J. The surprising simplicity of the early-time learning dynamics of neural networks. arXiv preprint arXiv:2006.14599, 2020.
Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks, 2020.
Krueger, D., Ballas, N., Jastrzebski, S., Arpit, D., Kanwal, M. S., Maharaj, T., Bengio, E., Fischer, A., and Courville, A. Deep nets don't learn via memorization. 2017.
Le Cun, Y., Kanter, I., and Solla, A. S. Second order properties of error surfaces: learning time and generalization. Advances in Neural Information Processing Systems, 3:918–924, 1991.
Li, M., Soltanolkotabi, M., and Oymak, S. Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 4313–4324. PMLR, 2020.
Nakkiran, P., Kaplun, G., Kalimeris, D., Yang, T., Edelman, B. L., Zhang, F., and Barak, B. SGD on neural networks learns functions of increasing complexity. arXiv preprint arXiv:1905.11604, 2019.
Pliushch, I., Mundt, M., Lupp, N., and Ramesh, V. When deep classifiers agree: Analyzing correlations between learning order and image statistics. arXiv preprint arXiv:2105.08997, 2021.
Rahaman, N., Baratin, A., Arpit, D., Draxler, F., Lin, M., Hamprecht, F., Bengio, Y., and Courville, A. On the spectral bias of neural networks. In International Conference on Machine Learning, pp. 5301–5310. PMLR, 2019.
Saxe, A. M., McClelland, J. L., and Ganguli, S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
Saxe, A. M., McClelland, J. L., and Ganguli, S. A mathematical theory of semantic development in deep neural networks. Proceedings of the National Academy of Sciences, 116(23):11537–11546, 2019.
Shah, H., Tamuly, K., Raghunathan, A., Jain, P., and Netrapalli, P. The pitfalls of simplicity bias in neural networks. arXiv preprint arXiv:2006.07710, 2020.
Simoncelli, E. P. and Olshausen, B. A. Natural image statistics and neural representation. Annual Review of Neuroscience, 24(1):1193–1216, 2001.
Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S., and Srebro, N. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018.
Torralba, A. and Oliva, A. Statistics of natural image categories. Network: Computation in Neural Systems, 14(3):391–412, 2003.
Ulyanov, D., Vedaldi, A., and Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454, 2018.
Valle-Perez, G., Camargo, C. Q., and Louis, A. A. Deep learning generalizes because the parameter-function map is biased towards simple functions. arXiv preprint arXiv:1805.08522, 2018.
Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.

Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] See the "Assumptions" paragraph in Section 2.
(b) Did you include complete proofs of all theoretical results? [Yes] Each theorem references its proof. Proofs can be found in Suppl. §A, B.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] All data, instructions and hyper-parameters are explicitly written in the main paper and/or in the Suppl. (see §D.4). The code itself will be provided once anonymity is lifted.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the main focus of the paper regarding deep wide nets? 2. What is the author's concern about the theoretical analysis in the paper? 3. How does the paper connect PC bias with common techniques like PC amplification, whitening, and random labeling? 4. What are some potential issues with the proof of Theorem 5 in the paper? 5. How does the reviewer assess the significance of the paper's contributions despite some limitations in the theoretical part?
Summary Of The Paper Review
Summary Of The Paper
The paper argues for the existence of the so-called "principal component bias" in the learning of deep wide nets, with the main focus on linear networks. This refers to the phenomenon that the learning associated with larger PCs of the data is typically faster. The paper does so via a theoretical analysis of the early stage of learning in linear nets and a series of experiments, involving both linear and nonlinear models.

Review
This is a paper with quite interesting experiments. Understanding deep nets is difficult, so the ability to have a partial understanding via a simple pattern like PC bias is valuable, even though PC bias does not give the full picture, as admitted in the paper. The theoretical analysis is non-rigorous and appears to be the weaker part; it is the series of small but informative experiments that gives useful insights. Specifically, the paper draws a nice connection between PC bias and common techniques like PC amplification, whitening and random labelling, demonstrating that PC bias can be a useful guide for subsequent studies that involve these techniques.

My main criticism lies in the theoretical part. Certain parts are not rigorous; for example:

The big-O notation hides a lot of important dependencies on e.g. time, weight magnitude, etc. This has serious implications for Theorem 2; for example, if the conclusion can only hold until time $\sim O(1/m^{99})$, this implies Theorem 2 is not very useful (even though Fig. 1 is).

The proof of Theorem 5 is actually flawed: from line 515 (which is an entry-wise bound as written) to line 516, one requires a certain union bound over $m^2$ entries, and so the stated probability bound becomes void. This would have been avoided if the metric of interest (denoted by $|\cdot|$ as in the paper) were not the entry-wise absolute value, but some appropriate norm.

These issues can be fixed, though I think this is the less important part of the paper. Moreover, there have previously been rigorous theoretical results that convey a certain sense of PC bias, by solving for the complete solution of the learning dynamics, in linear networks (e.g. Gidel et al. 2020) as well as nonlinear networks (e.g. Nguyen 2021), without the whitened data assumption, just like this paper, but of course in tractable theoretical settings. As such, despite the weakness in the theoretical part, I would not say PC bias has no (or an unreliable) theoretical basis, and I would rather emphasize the experimental findings.

While I do not know the experimental literature sufficiently well to judge novelty, supposing there is no issue with that and unless there are flaws in the experimental designs spotted by other reviewers, I think the paper is a worthy contribution to the broader community.

References:
Gidel, Bach, Lacoste-Julien, "Implicit regularization of discrete gradient dynamics in linear neural networks", 2020.
Nguyen, "Analysis of feature learning in weight-tied autoencoders via the mean field lens", 2021.
NIPS
Title Principal Components Bias in Deep Neural Networks Abstract Recent work suggests that convolutional neural networks of different architectures 1 learn to classify images in the same order. To understand this phenomenon, we 2 revisit the over-parametrized deep linear network model. Our asymptotic analysis, 3 assuming that the hidden layers are wide enough, reveals that the convergence rate 4 of this model’s parameters is exponentially faster along directions corresponding 5 to the larger principal components of the data, at a rate governed by the singular 6 values. We term this convergence pattern the Principal Components bias (PC-bias). 7 We show how the PC-bias streamlines the order of learning of both linear and non8 linear networks, more prominently at earlier stages of learning. We then compare 9 our results to the spectral bias, showing that both biases can be seen independently, 10 and affect the order of learning in different ways. Finally, we discuss how the 11 PC-bias may explain some benefits of early stopping and its connection to PCA, 12 and why deep networks converge more slowly when given random labels. 13 N/A Recent work suggests that convolutional neural networks of different architectures1 learn to classify images in the same order. To understand this phenomenon, we2 revisit the over-parametrized deep linear network model. Our asymptotic analysis,3 assuming that the hidden layers are wide enough, reveals that the convergence rate4 of this model’s parameters is exponentially faster along directions corresponding5 to the larger principal components of the data, at a rate governed by the singular6 values. We term this convergence pattern the Principal Components bias (PC-bias).7 We show how the PC-bias streamlines the order of learning of both linear and non-8 linear networks, more prominently at earlier stages of learning. We then compare9 our results to the spectral bias, showing that both biases can be seen independently,10 and affect the order of learning in different ways. Finally, we discuss how the11 PC-bias may explain some benefits of early stopping and its connection to PCA,12 and why deep networks converge more slowly when given random labels.13 1 Introduction14 The dynamics of learning in deep neural networks is an intriguing subject, not yet sufficiently15 understood. Diverse empirical data seems to support the hypothesis that neural networks start by16 learning a simple model, which then gains complexity as learning proceeds (Gunasekar et al., 2018;17 Soudry et al., 2018; Hu et al., 2020; Nakkiran et al., 2019; Gissin et al., 2019; Heckel & Soltanolkotabi,18 2019; Ulyanov et al., 2018; Valle-Perez et al., 2018). This phenomenon is sometimes called simplicity19 bias (Dingle et al., 2018; Shah et al., 2020).20 Recent work additionally shows that neural networks learn the training examples of natural datasets21 in a consistent order, and further impose a consistent order on the test set (Hacohen et al., 2020;22 Pliushch et al., 2021). Below we call this effect Learning Order Constancy (LOC). Currently, the23 characteristics of visual data, which may explain this consistently imposed order, remain unclear.24 Surprisingly, this universal order persists despite the variability introduced into the training of different25 models and architectures.26 To understand this phenomenon, we start by analyzing the deep linear network model (Saxe et al.,27 2013, 2019), defined by the concatenation of linear operators. 
While not a universal approximator, it28 is nevertheless trained by minimizing a non-convex objective function with a multitude of minima.29 The investigation of such networks is often employed to shed light on the learning dynamics when30 complex geometric landscapes are explored by GD (Fukumizu, 1998; Arora et al., 2018).31 In Section 2, we prove that the convergence of the weights of deep linear networks is governed32 by the eigendecomposition of the raw data in a phenomenon we term PC-bias. These asymptotic33 results, valid when the hidden layers are wide enough, can be seen as an extension of the known34 behavior of the single-layer convex linear model (Le Cun et al., 1991). Our work is closely related to35 (Saxe et al., 2013, 2019), where the deep linear model’s dynamics is analyzed as a function of the36 input and input-output statistics. Importantly, the analysis in (Saxe et al., 2013, 2019; Arora et al.,37 Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. 2018) incorporates the simplifying assumption that the data’s singular values are identical (whitened38 data), an assumption which unfortunately obscures the main result of our analysis – the direct39 dependence of convergence rate on the singular values of the data.40 In Section 3, we empirically show that this pattern of convergence is indeed observed in deep linear41 networks, validating the plausibility of our assumptions. We continue by showing that the LOC-effect42 in deep linear network is determined solely by their PC-bias. We prove a similar (weaker) result for43 the non-linear two-layer ReLU model introduced by Allen-Zhu et al. (2018), where this model is44 presented as a certain extension of NTK (Jacot et al., 2020). In this framework, convergence is fastest45 along the largest kernel’s principal components, a result related to the Spectral bias discussed below.46 In Section 4, we extend the study empirically to non-linear networks, and investigate the relation47 between the PC-bias and the LOC-effect in general deep networks. We first show that the order48 by which examples are learned by linear networks is highly correlated with the order induced by49 prevalent deep CNN models. We then show directly that the learning order of non-linear CNN models50 is affected by the principal decomposition of the data. Moreover, the LOC-effect diminishes when51 data is whitened, indicating a tight connection between the PC-bias and the LOC-effect.52 Our results are reminiscent of another phenomenon, termed Spectral bias (Rahaman et al., 2019;53 Cao et al., 2019), which associates the learning dynamics of neural networks with the Fourier54 decomposition of functions in the hypothesis space. Rahaman et al. (2019) empirically demonstrated55 that the complexity of classifiers learned by ReLU networks increases with time. Basri et al. (2019,56 2020) showed theoretically, by way of analyzing elementary neural network models, that these models57 first fit the data with low-frequency functions, and gradually add higher frequencies to improve the fit.58 Nevertheless, the spectral bias and PC-bias are inherently different. Indeed, the eigendecomposition59 of raw images is closely related to the Fourier analysis of images as long as the statistical properties60 of images are (approximately) translation-invariant (Simoncelli & Olshausen, 2001; Torralba & Oliva,61 2003). Still, the PC-bias is guided by spectral properties of the raw data and is additionally blind to62 class labels. 
On the other hand, the spectral bias, as well as the related frequency bias that has been63 shown to characterize NTK models (Basri et al., 2020), are all guided by spectral properties of the64 learned hypothesis, which strongly depends on label assignment.65 In Section 4.3 we investigate the relation between the PC-bias, spectral bias, and the LOC-effect.66 We find that the LOC-effect is very robust: (i) when we neutralize the spectral bias by using low67 complexity models such as deep linear networks, the effect is still observed; (ii) when we neutralize68 the PC-bias by using whitened data, the LOC-effect persists. We hypothesize that at the beginning of69 learning, the learning dynamics of neural models is controlled by the eigendecomposition of the raw70 data. As learning proceeds, control of the dynamics slowly shifts to other factors.71 The PC-bias has implications beyond the LOC-effect, as expanded in Section 5 and Suppl. §A:72 1. Early stopping. It is often observed that when training deep networks with real data, the highest73 generalization accuracy is obtained before convergence. Consequently, early stopping is often74 prescribed to improve generalization. Following the commonly used assumption that in natural75 images the lowest principal components correspond to noise (Torralba & Oliva, 2003), our results76 predict the benefits of early stopping, and relate it to PCA. In Section 5 we investigate the relevance77 of this conclusion to real non-linear networks (see, e.g., Basri et al. (2019); Li et al. (2020) for78 complementary accounts).79 2. Slower convergence with random labels. Zhang et al. (2016) showed that neural networks80 can learn any label assignment. However, training with random label assignments is known to81 converge slower as compared to training with the original labels (Krueger et al., 2017). We report a82 similar phenomenon when training deep linear networks. Our analysis shows that when the principal83 eigenvectors are correlated with class identity, as is often the case in natural images, the loss decreases84 faster when given true label assignments as against random label assignments. In Section 5 we85 investigate this hypothesis empirically in linear and non-linear networks.86 3. Weight initialization. Different weight initialization schemes have been proposed to stabilize the87 learning and minimize the hazard of "exploding gradients" (e.g., Glorot & Bengio, 2010; He et al.,88 2015). Our analysis (see Suppl. §A) identifies a related variant, which eliminates the hazard when89 all the hidden layers are roughly of equal width. In the deep linear model, it can be proven that the90 proposed normalization variant in a sense minimizes repeated gradient amplification.91 2 Theoretical analysis92 Notations. Let X = {(xi,yi)}ni=1 denote the training data, where x ∈ Rq denotes the i-th data93 point and y ∈ {0, 1}K its corresponding label. Let 1nimi denote the centroid (mean) of class i with94 ni points, and M = [m1 . . .mK ]>. Finally, let X and Y denote the matrices whose ith column95 is xi and yi respectively. ΣXX = XX> and ΣY X = Y X> denote the covariance matrix of X96 and cross-covariance of X and Y respectively. We note that ΣXX captures the structure of the data97 irrespective of class identity.98 Definition 1 (Principal coordinate system). The coordinate system obtained by rotating the data in Rq99 by an orthonormal matrixU>, where SV D(ΣXX)=UDU>. 
Now ΣXX =D, a diagnoal matrix whose100 elements are the singular values of XX>, arranged in decreasing order d1 ≥ d2 ≥ . . . ≥ dq ≥ 0.101 Definition 2 (Compact representation). Let f(x) denote a deep linear network. Then f(x) =102 (∏1 l=LWl ) x = Wx, where W ∈ RK×q is called the compact representation of the network.103 Definition 3 (Error matrix). For a deep linear network whose compact representation is W , the104 error matrix is Er = WΣXX − ΣY X . In the principal coordinate system, Er = WD −M .105 Assumptions. Our analysis assumes that the learning rate µ is infinitesimal, and therefore terms106 of size O(µ2) can be neglected. We further assume that the width of the hidden layers lies in107 [m,m+Mb], wherem→∞ denotes a very large number and Mb is fixed. Thus terms of size O( 1m )108 can also be neglected. In Fig. 1 we show the plausibility of these assumptions, where the predicted109 dynamics is seen throughout the training of deep linear networks, even for small values ofm.110 2.1 The dynamics of deep over-parametrized linear networks111 Consider a deep linear network with L layers, and let112 L(X) = 1 2 ‖WX − Y ‖2F W := 1∏ l=L Wl, Wl ∈ Rml×ml−1 (1) Above ml denotes the number of neurons in layer l, where m0 = q and mL = K.113 Theorem 1. In each time point s, the compact matrix representation W obeys the following dynamics,114 when using the notation Ers defined in Def. 3:115 W s+1 = W s − µ L∑ l=1 Asl · Ers ·Bsl +O(µ2) (2) Above µ denotes the learning rate. Asl and B s l are called gradient scale matrices, and are defined as116 Asl := ( l+1∏ j=L W sj )( l+1∏ j=L W sj )> ∈ RK×K Bsl := ( 1∏ j=l−1 W sj )>( 1∏ j=l−1 W sj ) ∈ Rq×q (3) The proof can be found in Suppl. §B.117 Gradient scale matrices. Some statistical properties of such matrices are established in Suppl. §A.118 Note that when the number of hidden layers is 0 (L = 1), both gradient scale matrices reduce to the119 identity matrix and the dynamics in (2) is reduced to the following known result (e.g., Le Cun et al.,120 1991): W s+1 = W s−µErs. Recall, however, that the focus of this paper is the over-parameterized121 linear model with L > 1, in which the loss is not convex. Since the difference between the convex122 linear model and the over-parametrized deep model boils down to these matrices, our convergence123 analysis henceforth focuses on the dynamics of the gradient scale matrices.124 In accordance, we analyze the evolution of the gradient scale matrices as learning proceeds. Let125 m = min (m1, ...,mL−1) denote the size of the smallest hidden layer. Initially for s = 0, all weight126 matrices W 0l are assumed to be initialized by sampling from a distribution with mean 0 and variance127 σ2l = O( 1 m ). The specific normalization factor, alluded to in O( 1 m ), is a variant of the Glorot128 initialization. Details and justification can be found in Suppl. §A.1.129 At time s, letAsl (m) andB s l (m) denote a sequence of random gradient scale matrices, corresponding130 to networks whose smallest hidden layer hasm neurons. From Suppl. §A we deduce that:131 Theorem 2. Using p−→ to denote convergence in probability asm→∞, and ∀s, l:132 Bsl (m) p−→ I, var[Bl(m)] = O ( 1 m ) Asl (m) p−→ I, var[Al(m)] = O ( 1 m ) Proof. Proof by induction on s. Initially when s = 0, the claim follows from Thm 4 and Corr 5.1.133 The induction step validity follows from Thm 6 and Thm 7 (see Suppl. §A.2).134 The detailed proof shows that the relevant constants are amplified with s. 
While they remain moderate135 andm is sufficiently large, Bsl (m) ≈ I and Asl (m) ≈ I ∀l. In this case, the dynamics of the over-136 parameterized model is identical to the dynamics of the convex linear model, W s+1 = W s − µErs.137 Convergence rate. In §A.2 we show that the convergence of Bsl (m) to I is governed to some extent138 by O ( K m ) , while the convergence of Asl (m) is governed by O ( q m ) . Recall that while m → ∞,139 q is the dimension of the data space which is fixed in advance and can be fairly large, while K is140 the number of classes which is fixed and quite small. Typically, K q. Thus we expect the right141 gradient scale matrices Bsl (m) to remain approximately I much longer than the left matrices A s l (m).142 Empirical validation. Since the results above are asymptotic, and to envision the difference between143 convergence governed by O ( K m ) vs. O ( q m ) , we resort to simulations whose results are shown in144 Fig. 1. These empirical results, recounting linear networks with 4 hidden layers of width 1024, clearly145 show that during a significant part of the training both gradient scale matrices remain approximately146 I . The difference between the convergence rate of Bsl and A s l is seen later on, when ∆A s l starts to147 increase shortly before convergence, while ∆Bsl remains essentially 0 throughout.148 2.2 Weight evolution149 K q entails that Bsl (m) remains approximately equal to I much longer than Asl (m). This is150 substantiated by the simulation results in Fig. 1. Consequently, while earlier on it is safe to assume151 that both Asl ≈ I and Bsl ≈ I , as learning proceeds only Bsl ≈ I is safe to assume.152 With this in mind, we obtain expressions for the evolution of W s separately for earlier and later in153 learning. We first shift to the principal coordinate system defined in Def 1. In this system we can154 analyze each column of W s separately, where wsj and mj denote the respective columns of W s and155 M . At the beginning of learning when both Asl ≈ I and Bsl ≈ I (see §B.3 for a detailed derivation):156 ws+1j = (λj) sw0j + [1− (λj)s] mj dj λj = 1− µdjL (4) 157 Eq. 4 is reminiscent of the well understood dynamics of training the convex one layer linear model. It158 is composed of two additive terms, revealing two parallel and independent processes:159 1. The dependence on random initialization tends to 0 exponentially with decline rate λj .160 2. The final value is the sum of a geometrical series with a common ratio λj .161 In either case, convergence is fastest for the largest singular eigenvalue, or the first column of W ,162 and slowest for the smallest singular value. This behavior is visualized in Fig. 2a. Importantly, the163 rate of convergence depends on the singular value dj , the number of layers L, and the learning rate µ.164 In later stages of learning, when we can only assume that Bsl ≈ I , the dynamic becomes:165 ws+1j = s∏ ν=1 (I − µdjAν)w0j + µ [ s∑ ν=1 s∏ ρ=ν+1 (I − µdjAρ)Aν ] mj (5) where As = ∑L l=1A s l . The proof is provided in §B.3. Although the dynamics now depends on166 matrices As as well, it is still the case that the convergence of each column is governed by its singular167 value dj . This suggests that while the PC-bias is more pronounced in earlier stages of learning, its168 effect persists throughout.169 The analysis above is extended to a simple non-linear ReLU model (cf. Arora et al., 2019) as detailed170 in §B.2, with qualitatively similar results (albeit under unrealistic assumptions). 
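As a sanity check on this predicted per-direction behaviour, the following minimal numpy sketch (our illustration, not the authors' code) trains a deep linear network with full-batch gradient descent on synthetic data with a decaying spectrum, and tracks the convergence of each column of the compact representation in the principal coordinate system. The data distribution, widths, step counts, and step size are illustrative assumptions; columns aligned with larger singular values should approach their optimal value $m_j/d_j$ markedly faster, at roughly the rate $\lambda_j = 1 - \mu d_j L$ of Eq. 4.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, K = 4000, 20, 3            # samples, input dimension, number of classes
L, m = 4, 256                    # depth and hidden width (smaller than the paper's 1024, for speed)

# Synthetic inputs with a decaying spectrum, one-hot labels.
X = rng.standard_normal((q, n)) * np.linspace(3.0, 0.3, q)[:, None]
Y = np.eye(K)[rng.integers(0, K, n)].T             # K x n

# Rotate to the principal coordinate system (Def. 1), so that Sigma_XX becomes diagonal.
U, d, _ = np.linalg.svd(X @ X.T)
X = U.T @ X
M = Y @ X.T                                        # Sigma_YX in this coordinate system
W_opt = M / d                                      # optimal compact representation: column j is m_j / d_j

dims = [q] + [m] * (L - 1) + [K]
# Initialisation with variance O(1/m) in every layer, the Glorot variant assumed by the analysis.
Ws = [rng.standard_normal((dims[l + 1], dims[l])) / np.sqrt(m) for l in range(L)]
mu = 0.2 / (d[0] * L)                              # small step size: lambda_1 = 1 - mu*d_1*L = 0.8

def chain(mats, dim):
    """Product mats[-1] @ ... @ mats[0], starting from an identity of size `dim`."""
    out = np.eye(dim)
    for A in mats:
        out = A @ out
    return out

for step in range(301):
    W = chain(Ws, q)                               # compact representation W = W_L ... W_1  (K x q)
    if step % 100 == 0:
        err = np.linalg.norm(W - W_opt, axis=0) / np.linalg.norm(W_opt, axis=0)
        print(f"step {step:3d}   relative error: top PC {err[0]:.3f}, "
              f"middle PC {err[q // 2]:.3f}, last PC {err[-1]:.3f}")
    R = W @ X - Y                                  # residual on the training set
    grads = []
    for l in range(L):
        above = chain(Ws[l + 1:], dims[l + 1])     # W_L ... W_{l+1}
        below = chain(Ws[:l], q)                   # W_{l-1} ... W_1
        grads.append(above.T @ R @ X.T @ below.T)  # gradient of the L2 loss w.r.t. W_l
    for l in range(L):
        Ws[l] -= mu * grads[l]

print("predicted decay rates 1 - mu*d_j*L (top, middle, last):",
      1 - mu * d[[0, q // 2, -1]] * L)
```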
Empirical results,171 shown in Fig. 2b, indicate that the results are indicative beyond the assumed circumstances.172 3 PC-bias: empirical study173 In this section, we first analyze deep linear networks, showing that the convergence rate is indeed174 governed by the principal singular values of the data, which demonstrates the plausibility of the175 assumptions made in Section 2. We continue by extending the scope of the investigation to non-linear176 neural networks, finding there evidence for the PC-bias mostly in the earlier stages of learning.177 3.1 Methodology178 We say that a linear network is L-layered when it has L − 1 hidden fully connected (FC) layers179 (without convolutional layers). In our empirical study we relaxed some assumptions of the theoretical180 study, in order to increase the resemblance of the trained networks to networks in common use.181 Specifically, we changed the initialization to the commonly used Glorot initialization, replaced the182 L2 loss with the cross-entropy loss, and employed SGD instead of the deterministic GD. Notably,183 the original assumptions yielded similar results. The results presented summarize experiments with184 networks of equal width across all hidden layers, specifically the moderate value of m = 1024,185 keeping in mind that we test the relevance of asymptotic results form→∞. Using a different width186 for each layer yielded similar qualitative results. Details regarding the hyper-parameters, architectures,187 and datasets can be found in §D.1, §D.3 and §D.4 respectively.188 3.2 PC-bias in deep linear networks189 In this section, we train L-layered linear networks, then compute their compact representations190 W rotated to align with the canonical coordinate system (Def. 1). Note that each row wr in W191 essentially defines the one-vs-all separating hyper-plane corresponding to class r.192 To examine both the variability between models and their convergence rate, we inspect wr at different193 time points during learning. The rate of convergence can be measured directly, by observing the194 changes in the weights of each element in wr. These weight values1 should be compared with195 the optimal values in each row wr of Wopt = Y XT (XXT ). The variability between models is196 measured by calculating the standard deviation (std) of each wr across N models.197 We begin with linear networks. We trained 10 5-layered FC linear networks, and 10 linear st-VGG198 convolutional networks. When analyzing the compact representation of such networks we observe199 similar behavior – weights corresponding to larger principal components converge faster to the200 optimal value, and their variability across models converges faster to 0 (Figs. 3a,3b). Thus, while the201 theoretical results are asymptotic, PC-bias is empirically seen throughout the entire learning process202 of deep linear networks.203 Whitened data. The PC-bias is neutralized when the data is whitened, at which point ΣXX is the204 scaled identity matrix. In Fig. 3c, we plot the results of the same experimental protocol while using a205 ZCA-whitened dataset. As predicted, the networks no longer show any bias towards any principal206 direction. Weights in all directions are scaled similarly, and the std over all models is the same in207 each epoch, irrespective of the principal direction. 
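For reference, the following is a minimal sketch of ZCA whitening (our illustration, not the authors' preprocessing code; the small eps regulariser is an added assumption for numerical stability). After whitening, $\Sigma_{XX}$ is approximately the identity, so no principal direction is preferred and the PC-bias is neutralized.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """X: q x n data matrix (columns are examples). Returns ZCA-whitened data of the same shape."""
    Xc = X - X.mean(axis=1, keepdims=True)           # centre each feature
    cov = Xc @ Xc.T / Xc.shape[1]
    U, d, _ = np.linalg.svd(cov)
    W_zca = U @ np.diag(1.0 / np.sqrt(d + eps)) @ U.T
    return W_zca @ Xc

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5000)) * np.linspace(3.0, 0.3, 20)[:, None]
Xw = zca_whiten(X)
cov_w = Xw @ Xw.T / Xw.shape[1]
print("max deviation of Sigma_XX from I after whitening:",
      np.abs(cov_w - np.eye(20)).max())              # ~0, up to the eps regulariser
```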
(Additional experiments show that this is not an208 artifact of the lack of uniqueness when deriving the principal components of a white signal).209 1We note that the weights tend to start larger for smaller principal components, as can be seen in Fig. 3a left. 3.3 PC-bias in general CNNs210 In this section, we investigate the manifestation of the PC-bias in non-linear deep convolutional211 networks. As we cannot directly track the learning dynamics separately in each principal direction of212 non-linear networks, we adopt two different evaluation mechanisms:213 Linear approximation. We considered several linear approximations, but since all of them showed214 the same qualitative behavior, we report results with the simplest one. Specifically, to obtain a linear215 approximation of a non-linear network, without max-pooling or batch-normalization layers, we216 follow the definition of the compact representation from Section 2 while ignoring any non-linear217 activation. We then align this matrix with the canonical coordinate system (Def. 1), and observe the218 evolution of the weights and their std across models along the principal directions during learning.219 Note that now the networks do not converge to the same compact representation, which is not unique.220 Nevertheless, we see that the PC-bias governs the weight dynamics to a noticeable extent.221 More specifically, in these networks a large fraction of the lowest principal components hardly changes222 during learning, as good as being ignored. Nevertheless, the PC-bias affects the higher principal223 components, most notably at the beginning of training (see Fig. 3d). Thus weights corresponding to224 higher principal components converge faster, and the std across models of such weights decreases225 faster for higher principal components.226 Projection to higher PC’s. We created a modified test-set, by project-227 ing each test example on the span of the first P principal components.228 This is equivalent to reducing the dimensionality of the test set to P us-229 ing PCA. We trained an ensemble of N=100 st-VGG networks on the230 original small mammals training set, then evaluated these networks dur-231 ing training on 4 versions of the test-set, reduced to P=1,10,100,1000232 dimensions respectively. Mean accuracy is plotted in Fig. 4. Similar233 results are obtained when training VGG-19 networks on CIFAR-10,234 see §C.3.235 Taking a closer look at Fig. 4, we see that when evaluated on lower236 dimensionality test-data (P=1,10), the networks’ accuracy peaks after237 a few epochs, at which point performance starts to decrease. This result suggests that the networks238 rely more heavily on these dimensions in the earlier phases of learning, and then continue to learn239 other things. In contrast, when evaluated on higher dimensionality test-data (P=100,1000), accuracy240 continues to rise, longer so for larger P . This suggests that significant learning of the additional241 dimensions continues in later stages of the learning.242 4 PC-bias: Learning Order Constancy243 In this section, we show that the PC-bias is significantly correlated with the learning order of deep244 neural networks, and can therefore partially account for the LOC-effect described in Section 1.245 Following Hacohen et al. (2020), we measure the "speed of learning" of each example by computing246 its accessibility score. This score is given per example, and characterizes how fast an ensemble of247 N networks learns it. 
Formally, accessibility(x) = E [1(fei (x) = y(x))], where fei (x) denotes248 the outcome of the i-th network trained over e epochs, and the mean is taken over networks and249 epochs. For the set of datapoints {(xj ,yj)}nj=1, Learning Order Constancy is manifested by the high250 correlation between 2 instances of accessibility(x), each computed from a different ensemble.251 PC-bias is shown to pertain to LOC in two ways: First, in Section 4.1 we show high correlation252 between the learning order in deep linear and non-linear networks. Since the PC-bias fully accounts253 for LOC in deep linear networks, this suggests it also accounts (at least partially) for the observed254 LOC in non-linear networks. Comparison with the critical principal component verifies this assertion.255 Second, we show in Section 4.2 that when the PC-bias is neutralized, LOC diminishes as well. In256 Section 4.3 we discuss the relationship between the spectral bias, PC-bias and the LOC-effect.257 4.1 PC-Bias is correlated with LOC258 We first compare the order of learning of non-linear models and deep linear networks by computing259 the correlation between the accessibility scores of both models. This comparison reveals high260 correlation (r = 0.85, p < 10−45), as seen in Fig. 5a. To investigate directly the connection between261 the PC-bias and LOC, we define the critical principal component of an example to be the first262 principal component P , such that a linear classifier trained on the original data can classify the263 example correctly when projected to P principal components. We trained N=100 st-VGG networks264 on the cats and dogs dataset, and computed for each example its accessibility score and critical265 principal component. In Fig. 5b we see strong negative correlation between the two scores (p=−0.93,266 r<10−4), suggesting that the PC-bias affects the order of learning as measured by accessibility.267 4.2 Neutralizing the PC-bias leads to diminishing LOC268 Whitening the data eliminates the PC-bias as shown in Fig. 3c, since all the singular values are now269 identical. Here we use this observation to further probe into the dependency of the Learning Order270 Constancy on the PC-bias. Starting with the linear case, we train 4 ensembles of N=10 2-layered271 linear networks on the cats and dogs dataset, 2 with and 2 without ZCA-whitening. We compute the272 accessibility score for each ensemble separately, and correlate the scores of the 2 ensembles in each273 test case. Each correlation captures the consistency of the LOC-effect for the respective condition.274 This correlation is expected to be very high for natural images. Low correlation implies that the275 LOC-effect is weak, as training the same network multiple times yields a different learning order.276 2As non-linear models achieve the accuracy of linear models within an epoch or 2, low learning rate is used. Fig. 6a shows the results for deep linear networks. As expected, the correlation when using natural277 images is very high. However, when using whitened images, correlation plummets, indicating that278 the LOC-effect is highly dependent on the PC-bias. We note that the drop in the correlation is much279 higher when considering only the 20% "fastest learned" examples, suggesting that the PC-bias affects280 learning order more evidently at earlier stages of learning.281 Fig. 6b shows the results when repeating this experiment with non-linear networks, training 2282 collections of N=10 VGG-19 networks on CIFAR-10. 
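The sketch below (our illustration, not the authors' code) shows how the accessibility score and the resulting LOC correlation can be computed, assuming boolean correctness tensors of shape [networks, epochs, examples] recorded for two independently trained ensembles; the toy simulation at the end, with its per-example "difficulty" variable, is purely illustrative.

```python
import numpy as np

def accessibility(correct):
    """correct: boolean array [num_networks, num_epochs, num_examples]; entry (i, e, j) says
    whether network i classified example j correctly after e epochs. Averaging over networks
    and epochs yields one score per example; examples learned earlier score higher."""
    return correct.mean(axis=(0, 1))

def loc_correlation(correct_a, correct_b):
    """LOC is manifested as a high correlation between accessibility scores
    computed from two independently trained ensembles."""
    return np.corrcoef(accessibility(correct_a), accessibility(correct_b))[0, 1]

# Toy demonstration with synthetic learning curves that share a per-example "difficulty".
rng = np.random.default_rng(0)
n_nets, n_epochs, n_examples = 10, 50, 1000
difficulty = rng.uniform(0, n_epochs, n_examples)      # epoch at which each example gets learned

def simulate_ensemble(seed):
    noise = np.random.default_rng(seed).normal(0, 5, (n_nets, 1, n_examples))
    epochs = np.arange(n_epochs)[None, :, None]
    return epochs >= (difficulty[None, None, :] + noise)

r = loc_correlation(simulate_ensemble(1), simulate_ensemble(2))
print(f"LOC correlation between the two ensembles: r = {r:.2f}")
```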
We find that the elimination of the PC-bias283 in this case affects LOC much less, suggesting that the PC-bias can only partially account for the284 LOC-effect in the non-linear case. However, we note that at the beginning of learning, when the285 PC-bias is most pronounced, once again the drop is much larger and very significant (half).286 4.3 Spectral bias, PC-bias and LOC287 The spectral bias (Rahaman et al., 2019) characterizes the dynamics of learning in neural networks288 differently, asserting that initially neural models can be described by low frequencies only. This may289 provide an alternative explanation to LOC. Recall that LOC is manifested in the consistency of the290 accessibility score across networks. To compare between the spectral bias and accessibility score,291 we first need to estimate for each example whether it can be correctly classified by a low frequency292 model. Accordingly, we define for each example a discriminability measure – the percentage out293 of its k neighbors that share with it class identity. Intuitively, an example has a low discriminability294 score when it is surrounded by examples from other classes, which forces the learned boundary to295 incorporate high frequencies. In §C.2 we show that in the 2D case analyzed by Rahaman et al. (2019),296 this measure strongly correlates (r=−0.8, p < 10−2) with the spectral bias.297 We trained several networks (VGG-19 and st-VGG) on several real datasets, including small-298 mammals, STL-10, CIFAR-10/100 and a subset of ImageNet-20. For each network and dataset,299 we computed the accessibility score as well as the discriminability of each example. The vector300 space, in which discriminability is evaluated, is either the raw data or the network’s perceptual space301 (penultimate layer activation). The correlation between these scores is shown in Table 1.302 Using raw data, low correlation is still seen between the accessibility and discriminability scores303 when inspecting the smaller datasets (small mammals, CIFAR-100 and STL10). This correlation304 vanishes when considering the larger ImageNet-20 dataset. It would appear that on its own, the305 spectral bias cannot adequately explain the LOC-effect. On the other hand, in the perceptual space,306 the correlation between discriminability and accessibility is quite significant for all datasets. Contrary307 to our supposition, it seems that networks learn a representation where the spectral bias is evident,308 but this bias does not necessarily govern its learning before the representation has been learned.309 5 PC-bias: further implications310 Early Stopping and the Generalization Gap. Considering natural images, it is often assumed that311 the least significant principal components of the data represent noise (Torralba & Oliva, 2003). In312 such cases, our analysis predicts that as noise dominates the components learned later in learning,313 early stopping is likely to be beneficial. To test this hypothesis directly, we manipulated CIFAR-10314 to amplify the signal in either the 1.5% most significant (higher) or 1.5% least significant (lower)315 principal components (see examples in Fig. 16, Suppl. §D). Accuracy over the original test set,316 after training 10 st-VGG and linear st-VGG networks on these manipulated images, can be seen317 in Fig. 7. 
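A sketch of this data manipulation is given below (our illustration; the 1.5% fraction follows the text, while the amplification factor is an assumption, since the exact value is not stated here).

```python
import numpy as np

def amplify_components(X, frac=0.015, which="lower", factor=10.0):
    """X: q x n data matrix (columns are flattened images). Rescales the projection of the data
    onto either the `frac` most significant ("higher") or least significant ("lower") principal
    components, leaving the remaining directions untouched."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    U, d, _ = np.linalg.svd(Xc @ Xc.T / Xc.shape[1])   # principal directions, descending variance
    k = max(1, int(round(frac * X.shape[0])))
    scale = np.ones(X.shape[0])
    if which == "higher":
        scale[:k] = factor
    else:
        scale[-k:] = factor
    # Rotate to the principal coordinate system, rescale the chosen directions, rotate back.
    return U @ (scale[:, None] * (U.T @ Xc)) + mean

rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 500))                      # e.g. 500 flattened 32x32 grayscale images
X_low_amplified = amplify_components(X, which="lower")    # noise-dominated directions boosted
X_high_amplified = amplify_components(X, which="higher")  # signal-dominated directions boosted
```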
Both in linear and non-linear networks, early stopping is more beneficial when lower318 principal components are amplified, and significantly less so when higher components are amplified,319 as predicted by the PC-bias.320 Slower Convergence with Random Labels. Deep neural models can learn any random label321 assignment to a given training set (Zhang et al., 2016). However, when trained on randomly labeled322 data, convergence appears to be much slower (Krueger et al., 2017). Assume, as before, that in natural323 images the lower principal components are dominated by noise. We argue that the PC-bias now324 predicts this empirical result, since learning randomly labeled examples requires signal present in325 lower principal components. To test this hypothesis directly, we trained 10 2-layered linear networks326 on datasets of natural images. Indeed, these networks converge slower with random labels (see327 Fig. 8a). In Fig. 8b we repeat this experiment after having whitened the images, to neutralize the328 PC-bias. Now convergence rate is identical, whether the labels are original or shuffled. Clearly, in329 deep linear networks the PC-bias gives a full account of this phenomenon.330 To further check the relevance of this account to non-linear networks, we artificially generate datasets331 where only the first P principal components are discriminative, while the remaining components332 become noise by design. We constructed two such datasets: in one the labels are correlated with the333 original labels, in the other they are not. Specifically, PCA is used to reduce the dimensionality of a334 two-class dataset to P , and the optimal linear separator in the reduced representation is computed.335 Next, all the labels of points that are incorrectly classified by the optimal linear separator are switched,336 so that the train and test sets are linearly separable by this separator. Note that the modified labels337 are still highly correlated with the original labels (for P = 500: p = 0.82, r < 10−10). The338 second dataset is generated by repeating the process while starting from randomly shuffled labels.339 This dataset is likewise fully separable when projected to the first P components, but its labels are340 uncorrelated with the original labels (for P = 500: p = 0.06, r < 10−10).341 The mean training accuracy of 10 non-linear networks with P=10,50,500 is plotted in Fig. 9a (first342 dataset) and Fig. 9b (second dataset). In both cases, the lower P is (namely, only the first few principal343 components are discriminative), the faster the data is learned by the non-linear network. Whether the344 labels are real or shuffled makes little qualitative difference, as predicted by the PC-bias.345 6 Summary and discussion346 When trained with gradient descent, the convergence rate of the over-parameterized deep linear347 network model is provably governed by the eigendecomposition of the data, and specifically, pa-348 rameters corresponding to the most significant principal components converge faster than the least349 significant components. Empirical evidence is provided for the relevance of these results to more350 realistic non-linear networks. We term this effect PC-bias. 
This result provides a complementary351 account for some prevalent empirical observations, including the benefit of early stopping and the352 slower convergence rate with shuffled labels.353 We use the PC-bias to explicate the Learning Order Constancy (LOC), showing that examples354 learned at earlier stages are more distinguishable by the higher principal components, demonstrating355 that networks’ training relies more heavily on higher principal components early on. A causal link356 between the PC-bias and the LOC-effect is demonstrated, as the LOC-effect diminishes when the357 PC-bias is eliminated by whitening the images. We analyze these findings in view of a related358 phenomenon termed spectral bias. While the PC-bias may be more prominent early on, the spectral359 bias may be more important in later stages of learning.360 References361 Allen-Zhu, Z., Li, Y., and Liang, Y. Learning and generalization in overparameterized neural362 networks, going beyond two layers. arXiv preprint arXiv:1811.04918, 2018.363 Arora, S., Cohen, N., and Hazan, E. On the optimization of deep networks: Implicit acceleration by364 overparameterization. In International Conference on Machine Learning, pp. 244–253, 2018.365 Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generaliza-366 tion for overparameterized two-layer neural networks. In International Conference on Machine367 Learning, pp. 322–332, 2019.368 Basri, R., Jacobs, D. W., Kasten, Y., and Kritchman, S. The convergence rate of neural networks for369 learned functions of different frequencies. In Advances in Neural Information Processing Systems,370 pp. 4761–4771, 2019.371 Basri, R., Galun, M., Geifman, A., Jacobs, D., Kasten, Y., and Kritchman, S. Frequency bias in neural372 networks for input of non-uniform density. In International Conference on Machine Learning, pp.373 685–694. PMLR, 2020.374 Cao, Y., Fang, Z., Wu, Y., Zhou, D.-X., and Gu, Q. Towards understanding the spectral bias of deep375 learning. arXiv preprint arXiv:1912.01198, 2019.376 Dingle, K., Camargo, C. Q., and Louis, A. A. Input–output maps are strongly biased towards simple377 outputs. Nature communications, 9(1):1–7, 2018.378 Fukumizu, K. Effect of batch learning in multilayer neural networks. Gen, 1(04):1E–03, 1998.379 Gissin, D., Shalev-Shwartz, S., and Daniely, A. The implicit bias of depth: How incremental learning380 drives generalization. arXiv preprint arXiv:1909.12051, 2019.381 Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks.382 In Proceedings of the thirteenth international conference on artificial intelligence and statistics,383 pp. 249–256, 2010.384 Gunasekar, S., Lee, J., Soudry, D., and Srebro, N. Implicit bias of gradient descent on linear385 convolutional networks. arXiv preprint arXiv:1806.00468, 2018.386 Hacohen, G., Choshen, L., and Weinshall, D. Let’s agree to agree: Neural networks share classification387 order on real datasets. In International Conference on Machine Learning, pp. 3950–3960. PMLR,388 2020.389 He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level390 performance on imagenet classification. In Proceedings of the IEEE International Conference on391 Computer Vision (ICCV), December 2015.392 Heckel, R. and Soltanolkotabi, M. Denoising and regularization via exploiting the structural bias of393 convolutional generators. arXiv preprint arXiv:1910.14634, 2019.394 Hu, W., Xiao, L., Adlam, B., and Pennington, J. 
The surprising simplicity of the early-time learning395 dynamics of neural networks. arXiv preprint arXiv:2006.14599, 2020.396 Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in397 neural networks, 2020.398 Krueger, D., Ballas, N., Jastrzebski, S., Arpit, D., Kanwal, M. S., Maharaj, T., Bengio, E., Fischer, A.,399 and Courville, A. Deep nets don’t learn via memorization. 2017.400 Le Cun, Y., Kanter, I., and Solla, A. S. Second order properties of error surfaces learning time and401 generalization. Advances in neural information processing systems, 3:918–924, 1991.402 Li, M., Soltanolkotabi, M., and Oymak, S. Gradient descent with early stopping is provably robust403 to label noise for overparameterized neural networks. In International Conference on Artificial404 Intelligence and Statistics, pp. 4313–4324. PMLR, 2020.405 Nakkiran, P., Kaplun, G., Kalimeris, D., Yang, T., Edelman, B. L., Zhang, F., and Barak, B. Sgd406 on neural networks learns functions of increasing complexity. arXiv preprint arXiv:1905.11604,407 2019.408 Pliushch, I., Mundt, M., Lupp, N., and Ramesh, V. When deep classifiers agree: Analyzing409 correlations between learning order and image statistics. arXiv preprint arXiv:2105.08997, 2021.410 Rahaman, N., Baratin, A., Arpit, D., Draxler, F., Lin, M., Hamprecht, F., Bengio, Y., and Courville,411 A. On the spectral bias of neural networks. In International Conference on Machine Learning, pp.412 5301–5310. PMLR, 2019.413 Saxe, A. M., McClelland, J. L., and Ganguli, S. Exact solutions to the nonlinear dynamics of learning414 in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.415 Saxe, A. M., McClelland, J. L., and Ganguli, S. A mathematical theory of semantic development in416 deep neural networks. Proceedings of the National Academy of Sciences, 116(23):11537–11546,417 2019.418 Shah, H., Tamuly, K., Raghunathan, A., Jain, P., and Netrapalli, P. The pitfalls of simplicity bias in419 neural networks. arXiv preprint arXiv:2006.07710, 2020.420 Simoncelli, E. P. and Olshausen, B. A. Natural image statistics and neural representation. Annual421 review of neuroscience, 24(1):1193–1216, 2001.422 Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S., and Srebro, N. The implicit bias of gradient423 descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018.424 Torralba, A. and Oliva, A. Statistics of natural image categories. Network: computation in neural425 systems, 14(3):391–412, 2003.426 Ulyanov, D., Vedaldi, A., and Lempitsky, V. Deep image prior. In Proceedings of the IEEE conference427 on computer vision and pattern recognition, pp. 9446–9454, 2018.428 Valle-Perez, G., Camargo, C. Q., and Louis, A. A. Deep learning generalizes because the parameter-429 function map is biased towards simple functions. arXiv preprint arXiv:1805.08522, 2018.430 Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires431 rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.432 Checklist433 1. For all authors...434 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s435 contributions and scope? [Yes]436 (b) Did you describe the limitations of your work? [Yes]437 (c) Did you discuss any potential negative societal impacts of your work? [N/A]438 (d) Have you read the ethics review guidelines and ensured that your paper conforms to439 them? [Yes]440 2. 
If you are including theoretical results...441 (a) Did you state the full set of assumptions of all theoretical results? [Yes] See the442 "Assumptions" paragraph as Section 2443 (b) Did you include complete proofs of all theoretical results? [Yes] Each theorem reference444 to its proof. Proofs can be found in Suppl. §A,B445 3. If you ran experiments...446 (a) Did you include the code, data, and instructions needed to reproduce the main ex-447 perimental results (either in the supplemental material or as a URL)? [No] All data,448 instructions and hyper-parameters are explictly written in the main paper and/or in the449 Suppl. (see §D.4,D.4). The code itself will be provided once the anonymity will be450 lifted.451 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they452 were chosen)? [Yes]453 (c) Did you report error bars (e.g., with respect to the random seed after running experi-454 ments multiple times)? [Yes]455 (d) Did you include the total amount of compute and the type of resources used (e.g., type456 of GPUs, internal cluster, or cloud provider)? [N/A]457 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...458 (a) If your work uses existing assets, did you cite the creators? [Yes]459 (b) Did you mention the license of the assets? [N/A]460 (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]461 462 (d) Did you discuss whether and how consent was obtained from people whose data you’re463 using/curating? [N/A]464 (e) Did you discuss whether the data you are using/curating contains personally identifiable465 information or offensive content? [N/A]466 5. If you used crowdsourcing or conducted research with human subjects...467 (a) Did you include the full text of instructions given to participants and screenshots, if468 applicable? [N/A]469 (b) Did you describe any potential participant risks, with links to Institutional Review470 Board (IRB) approvals, if applicable? [N/A]471 (c) Did you include the estimated hourly wage paid to participants and the total amount472 spent on participant compensation? [N/A]473
1. What is the focus of the paper regarding deep linear neural networks and their training using gradient descent?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its incremental nature compared to previous works?
3. How does the reviewer assess the novelty and significance of the PC-bias described in the paper?
4. What are the concerns regarding the experimental section and its connection to the theoretical analysis?
5. How does the reviewer evaluate the overall quality and impact of the paper?
Summary Of The Paper
The authors study a principal component bias of deep linear neural networks when trained with gradient descent using a small learning rate. They show that when the network is sufficiently wide, deep linear networks behave like single-layer linear networks during the first phase of training, where the rate of convergence is governed by the largest principal components of the data. It is further shown that at later stages of training, under some assumptions, the PC-bias remains to some extent.

Review
My main concern with the paper is that it is overall quite incremental over previous results: it has been known for some time at this point that sufficiently wide networks evolve linearly, where the convergence rate is governed by the principal components of the neural tangent kernel (NTK). For deep linear networks, the NTK itself is given by the Gram matrix of the data (\Sigma_xx using the notations in the paper) scaled by the depth L, which is identical to the NTK of a single-layer linear network up to scale. Hence, it is trivial that when the width M is large, the convergence rate in both shallow and deep linear models will exhibit the PC-bias as detailed by the authors. In other words, the large-width assumption, when applied to linear models, strips the model of all the interesting properties provided by depth and relegates it to a one-layer linear network.

For the second phase of training, I find the assumptions made by the authors quite unrealistic. I would argue that in most tasks of interest k/m cannot be considered small. Moreover, the presented results show no explicit dependency on training time, which makes it difficult to understand the regimes in which the results hold.

In general, I find the PC-bias as described in the paper lacking in its ability to explain any interesting phenomena in deep learning, since it is inherently present in shallow networks as well. In addition, it does not trivially apply to deep non-linear models. Even when only considering linear models, the incremental technical novelty in the paper does not meet the threshold for acceptance in my opinion.

Question: The experimental section includes results on VGG-style linear architectures; however, the theoretical section does not deal with convolutions. How should the PC-bias be interpreted in this case? Shouldn't the Gram matrix \Sigma_xx be computed differently for convolutional architectures?

Post Rebuttal: I appreciate the authors' response to my review. However, my score remains unchanged for the following reasons. In my view the theoretical contribution is a restatement of known results: Theorem 2 discusses leading corrections to the infinite-width limit, which have been discussed in numerous papers [1, 2], and Theorem 1 should be relegated to a proposition. Moreover, the actual claimed theoretical contribution, namely the O(q/m) and O(K/m) corrections, is not formally presented, and seems to be brushed aside due to technical issues (it is not clearly presented in the supplementary either). In general I find the empirical section intriguing; however, I doubt that the claims of the paper carry over to larger datasets, and I feel this should be addressed in future versions of the paper.

[1] Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
[2] Asymptotics of Wide Networks from Feynman Diagrams
NIPS
Title SeqPATE: Differentially Private Text Generation via Knowledge Distillation Abstract Protecting the privacy of user data is crucial for text generation models, which can leak sensitive information during generation. Differentially private (DP) learning methods provide guarantees against identifying the existence of a training sample from model outputs. PATE is a recent DP learning algorithm that achieves high utility with strong privacy protection on training samples. However, text generation models output tokens sequentially in a large output space; the classic PATE algorithm is not customized for this setting. Furthermore, PATE works well to protect sample-level privacy, but is not designed to protect phrases in samples. In this paper, we propose SeqPATE, an extension of PATE to text generation that protects the privacy of individual training samples and sensitive phrases in training data. To adapt PATE to text generation, we generate pseudo-contexts and reduce the sequence generation problem to a next-word prediction problem. To handle the large output space, we propose a candidate filtering strategy to dynamically reduce the output space, and refine the teacher aggregation of PATE to avoid low agreement due to voting for a large number of candidates. To further reduce privacy losses, we use knowledge distillation to reduce the number of teacher queries. The experiments verify the effectiveness of SeqPATE in protecting both training samples and sensitive phrases. 1 Introduction Recent work has shown that sensitive user information in training corpora, such as addresses and names, can be extracted from text generation models [6]. Providing privacy guarantees to the training corpora of text generation models has become a critical problem. Differential privacy (DP) provides provable guarantees against detecting individuals in datasets. Deep learning models with DP guarantees ensure that the existence of a specific training sample cannot be detected. NoisySGD [42, 3, 1] is a popular DP algorithm for deep learning that adds noise to the gradients. PATE [31] is another type of DP learning algorithm that transfers knowledge from teachers trained on private data to a student model, where noises are added to teacher predictions to satisfy DP. PATE is model-agnostic, and its privacy cost derives from the knowledge distillation process instead of the model gradients in NoisySGD [42, 24]. Therefore, the noises required by PATE do not scale with model size. Given this benefit, PATE has great potential for text generation, since large language ∗This paper was partially done when Zhiliang Tian was a Ph.D. student at HKUST and a visiting scholar at NYU. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). models (e.g., GPT-2 [35]) have become the backbone of most text generation models. However, NoisySGD and PATE are used to protect sample-level privacy [51, 24] and not customized to protect sensitive phrases in the data with a low privacy cost [22, 39, 50]. Additionally, PATE, originally designed for classification tasks, is not customized for sequential generation on a large output space (i.e., the natural language vocabulary), which is very common in text generation. In this paper, we propose SeqPATE, a DP learning algorithm for text generation to protect the privacy of training corpora. By satisfying DP, SeqPATE has the guarantee of preventing the existence of training samples and sensitive phrases in the training corpora from being detected. 
Similarly to PATE, SeqPATE employs a teacher-student framework: (i) a student model learns to generate text from nonsensitive samples; and (ii) a number of teacher models, trained on sensitive text, supervise the student through noised outputs of aggregated teachers. The calibrated noise added to the output ensures that SeqPATE satisfies the DP requirements. This framework still faces some challenges in text generation. First, it suffers from the high costs of GPU memory and time. To obtain sentence-level supervision for text generation, the model needs to roll out all teachers to produce a sentence (i.e. all teachers vote to generate a word, which is then used as the input for the next word prediction). It results in a high inference cost with a large number of teachers (e.g. 2k teachers which are common in PATE). Second, the large output space (i.e., the vocabulary) in text generation leads to (i) low agreement rates among teachers and (ii) large noises required by DP, both of which significantly hurt the task performance. To address the challenges, we generate pseudo-data using a pre-trained language model so that teachers only need to provide token-level supervision given the pseudo inputs. To handle the large output space and reduce the noise, we propose to dynamically filter the candidate words and select only words with high probabilities. Also, we aggregate teachers’ outputs by interpolating their output distributions instead of voting with argmax predictions. DP learning methods provide privacy protection by adding noise, which also reduces the utility of the model. To reduce utility loss, we avoid unnecessary knowledge distillation by selectively applying knowledge distillation to generation steps where the student struggles. Most DP learning methods, including SeqPATE, prevent samples from being extracted. SeqPATE has further advantages in protecting users’ secret phrases that occur multiple times in the corpora. We evaluate SeqPATE on a sentence completion task, which demonstrates its advantage in protecting samples and phrases compared to the baselines. Our contribution is twofold: (i) We propose SeqPATE that provides privacy at both the sample level and the phrase level with theoretical analyses. (ii) We propose several strategies for SeqPATE to handle autoregressive text generation models with a large vocabulary. 2 Problem Setup Our goal is to achieve the privacy protection quantified by DP in text generation to prevent attackers from inferring whether a sample or an n-gram appears in the training set. Our setting contains two types of textual datasets: (1) a private set Dpri from a corpus with sensitive information, (2) a public set Dpub that contains no sensitive information or comes from data contributors (e.g., volunteers) who have no objection to publishing their data. We aim to protect the privacy on the private set and can ignore the privacy protection on the public set. Our application, sentence completion, aims to complete the whole sentence given the prefix. We train a language model to accomplish the task. The public set Dpub consists of prefixes, which can hardly contain sensitive information. The private set Dpri consists of whole sentences. Such a setting fits some real-world text generation applications: in dialog systems, the training samples from online services consist of questions and responses. The questions from customer service staff or service robots can be public, and the response from users carrying individual information should be private. 
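As a toy illustration of this split (an assumption about the data layout, not the authors' pipeline), prefixes go to the public set while full sentences remain private:

```python
def split_public_private(sentences, prefix_len=4):
    """Toy split for sentence completion: short prefixes form D_pub, full sentences form D_pri."""
    d_pub = [" ".join(s.split()[:prefix_len]) for s in sentences]   # non-sensitive prefixes
    d_pri = list(sentences)                                         # sensitive full sentences
    return d_pub, d_pri

d_pub, d_pri = split_public_private(["my ticket number is 12345 for the flight to NYC"])
print(d_pub)   # ['my ticket number is']
```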
3 Background on DP and PATE Definition 3.1. [Differential privacy (DP) [13, 14]] For any two neighboring datasets D,D′ (differ in only one individual), a randomized algorithm M : Xn → Y is (ε, δ)-differentially private if, Pr[M(D) ∈ S] ≤ eε · Pr[M(D′) ∈ S] + δ, ∀S ⊆ Y, where ε > 0, δ ≥ 0. (1) By definition, DP is a quantifiable definition of privacy that provides guarantees on identifications of individual data (preventing an adversary from inferring whether the input is D or D′). ML models with DP ensure that each training sample has a degree of plausible deniability, i.e., the trained model is just as likely as to be trained on an alternative dataset without that sample. In SeqPATE, M is the entire training and inference process, S is the vocabulary, and Pr[·] denotes the output distribution of generating a word. Attackers cannot tell whether a sample is in the training set or not, since the output distributions of the datasets with or without that sample are very similar (bounded by Eq. 1). PATE [31], designed for classification tasks, takes advantage of an unlabeled public dataset Dpub and also trains on a labeled private set Dpri in a semi-supervised scenario. PATE achieves DP through a teacher-student framework with M teacher models and a student model, where the student learns from the private set via knowledge distillation through teachers. PATE has three parts: (i) The teacher models are trained on the private set Dpri, which is shuffled and divided into M disjoint subsets. Each teacher is trained on one subset. (ii) Teacher aggregation merges the teachers’ outputs. Each of the trained teachers then provides supervision to the student’s unlabeled public set Dpub. We use noised majority votes from teachers as labels to supervise the student. (iii) A student model is trained on the public set Dpub with the supervision of the aggregated teachers. 4 Approach Fig. 1 shows an overview of SeqPATE. Given the public prefix (e.g., “Cats sit”), we first obtain the pseudo-inputs by completing the sentence (e.g., “Cats sit on the mats”) using a pre-trained language model (Sec. 4.1). At each word, we then aggregate the teachers’ prediction of the next word as supervision for training the student model (Sec. 4.2). To reduce the noise required by DP for a large output space of the size of the vocabulary, we reduce the output space by dynamically filtering unimportant words. To reduce the number of teacher queries that incur privacy losses, we propose an efficient knowledge distillation strategy that only queries teacher labels on uncertain examples (Sec. 4.3). We show the training algorithm in App. B and a running example in App. K. 4.1 Pseudo Input Generation Conventional text generation models generate words sequentially from left to right. Thus, naively applying PATE to text generation requires rolling out all teachers word by word, i.e., iteratively sampling the next word from the aggregated teacher prediction. This is costly in both computation (running inference for hundreds of teacher models) and privacy costs (querying teachers at every step). To tackle this challenge, we use a pre-trained language model to complete the public prefixes into pseudo sentences; thus, we only need to query teachers on the next word given a (pseudo) context. 4.2 Teacher Aggregation PATE aggregates teacher predictions by majority vote. While it works for classification problems with a relatively small number of classes, the output space of text generation models contains all words in the vocabulary. 
As a result, the number of votes for each candidate word may be very low without a clear winner. For example, multiple candidates may tie for the top-1 prediction. Inspired by Chen et al. [9, 17], we aggregate teacher results by averaging their output distributions. We first train M teacher models on disjoint subsets of the private data. To produce the aggregated next word distribution given a context c, we average the teachers’ output distributions, add calibrated noises, and then renormalize the results into a proper distribution. Following Papernot et al. [32], we apply the Gaussian mechanism. Formally, let pmϕ (· | c) be the prediction of the m-th teacher. The aggregated distribution is pagg(· | c) ∝ 1M ∑M m=1(p m ϕ (· | c)+N (0, σ2)), 2 where the Gaussian noise is added to the aggregated output distribution. The way of SeqPATE satisfies DP guarantee (Eq. 1) is to add that calibrated noise to the teachers’ output as mentioned above (detailed analyses in Sec. 5). 4.3 Training of the Student Model The student model is trained on public pseudo-data and also supervised by the aggregated teachers. Training objectives. The student model is a language model that predicts the next word given prior contexts. Given contexts from the (public) pseudo-data autocompleted by a pre-trained language model (GPT-2), the student is supervised by both the aggregated teacher predictions and the next word in the pseudo-data (i.e. pseudo label). The pseudo-data acts as a prior for the student given that the number of teacher queries is limited due to privacy concerns. The student’s loss function has two parts: • Lteacher denotes the loss with respect to teacher supervision. Note that the aggregated teacher output is a distribution over words. Therefore, we minimize the forward KL divergence between the aggregated teacher distribution pagg and the student output distribution pθ: Lteacher(c, pagg) = KL (pagg(· | c) ∥ pθ(· | c)) . (2) • Lpseudo denotes the loss with respect to the pseudo-labels w from D̃pub (i.e. next words generated by a generic language model). Similar to standard language modeling, we use the negative log-likelihood: Lpseudo(c, w) = − log pθ(w | c). (3) Eq. 4 shows the complete loss. (λ balances the two terms and we discuss the noise scale σ in Sec. 5.) L(pagg, D̃pub) = ∑ (c,w)∈D̃pub Lpseudo(c, w) + λLteacher(c, pagg), (4) Reducing the output space via candidate filtering. The high-dimensionality of the output of text generation models results in large noise (which is added to each coordinate). To reduce the output dimension (hence the amount of noise), we filter words on the tail of the distribution of the student model (i.e. set their probability to zero), and renormalize the teacher’s aggregated distribution and the student output distribution over the rest words. Note that the candidate filtering is based on the student’s outputs on public or already released inputs, thus it does not affect the privacy guarantee. This choice improves the privacy-utility tradeoff by adaptively allocating the privacy budget to release the information most helpful to the task. We experiment with two filtering strategies: top-k and top-p. In top-k filtering, we retain only the top-k most likely candidates and filter the rest according to the student model. In top-p filtering [18], 2Mathematically, the aggregated distribution with noises may be negative. If so, we renormalize the negative value to 0. 
Practically, we observed that being negative is an extremely rare event, since the M is usually very large (e.g., 2k) and the first term dominates the above equation. k is chosen dynamically such that the top-k words are the minimum set whose cumulative probability is at least p. The strategy seldom loses good candidates because the student usually does well on top-k predictions since the beginning of the training. 3 Reducing the number of teacher queries via efficient knowledge distillation. While the aggregated teacher model satisfies DP, each query from the student incurs some privacy loss. Therefore, we obtain teacher supervision only on “hard” examples when training the student. Note that the student is trained on both the pseudo-data and local supervision from the teachers. We consider an example to be hard if the student cannot imitate the pseudo-label, in which case distilling knowledge from the teachers that are trained on large private data is helpful. Concretely, we query teachers only when the rank of the pseudo-label is below a certain threshold among words ordered by descending probabilities under the student model. If we query the teachers, the student is trained via complete loss L(pagg, D̃pub) (Eq. 4); otherwise, the student is trained via the Lpseudo (Eq. 3). We note that the selection of tokens relies only on the student and is independent of the teachers; thus, the selection does not cause any additional privacy loss. 5 Privacy Analyses 5.1 Preliminary of Differential Privacy Lemma 5.1 (Analytical Gaussian mechanism [2]). For a numeric query f : Xn → Rd over a dataset D, the randomized algorithm that outputs f(D) + Z where Z ∼ N (0, σ2Id) satisfies (ε, δ(ε))-DP for all ε ≥ 0 and δ(ε) = Φ( ∆2σ − εσ ∆ ) − e εΦ(− ∆2σ − εσ ∆ ). where ∆ := ∆ (f) 2 = maxD∼D′ ∥f(D)− f(D′)∥2 is the global L2 sensitivity of f and Φ is the CDF function of N (0, 1). We can use the same result for an adaptive composition of a sequence of Gaussian mechanisms. Lemma 5.2 (Composition of Gaussian mechanisms [11]). The adaptive composition of a sequence of Gaussian mechanisms with a noise level σ1, σ2, . . . and global L2 sensitivity ∆1,∆2, . . . satisfies (ε, δ(ε))-DP for all ε ≥ 0 and δ(ε) ≤ δM(ε) where M is a Gaussian mechanism with noise multiplier σ/∆ = (∑ i(∆i/σi) 2 )−1/2 . Specifically, the adaptive composition of a k identical Gaussian mechanism with a noise multiplier σ satisfies the same privacy guarantee of a single Gaussian mechanism with a noise multiplier σ/ √ k. By fixing k and ε, we can calibrate the noise by choosing an appropriate σ in Sec. 4.2. 5.2 Differential Privacy for Language Models at the Sample Level Recall that we partition the private dataset into M disjoint subsets, and train each teacher model on one of the subsets. Let vector xi ∈ R|V| denote the probability distribution predicted by the i-th teacher model given some context, where |V| is the vocabulary size. The aggregation function f(D) := ∑M i=1 xi is the sum of the probability distributions predicted by all teachers. Since the datasets are disjoint, changing one sample affects only one teacher model. For neighboring datasets D, D′, let j denote the index of each teacher model; the probability distributions xj and x′j (derived from D and D′ respectively) are different. Then, the sensitivity ∆ in Lemma 5.1 & 5.2 is (See detailed deductions in App. C), ∆ := ∆ (f) 2 = ∥f(D)− f(D′)∥2 ≤ ∥xj − x′j∥2 ≤ √ 2. 
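To make the calibration and the noisy aggregation concrete, here is a minimal sketch (our illustration, not the released SeqPATE code). It computes δ(ε) for the analytical Gaussian mechanism and its k-fold composition, and aggregates teacher distributions by adding a single calibrated Gaussian noise vector to their sum, which is the mechanism analysed in this section; the exact placement of the noise in the implementation may differ, and σ, the number of queries, the vocabulary size, and the top-k threshold are toy assumptions.

```python
import numpy as np
from scipy.stats import norm

def delta_gaussian(eps, sigma, sensitivity):
    """delta(eps) of the analytical Gaussian mechanism (Lemma 5.1)."""
    s = sensitivity
    return norm.cdf(s / (2 * sigma) - eps * sigma / s) \
        - np.exp(eps) * norm.cdf(-s / (2 * sigma) - eps * sigma / s)

def delta_composed(eps, sigma, sensitivity, k):
    """Adaptive composition of k identical Gaussian mechanisms (Lemma 5.2): equivalent to a
    single mechanism whose noise multiplier sigma/sensitivity is divided by sqrt(k)."""
    return delta_gaussian(eps, sigma / np.sqrt(k), sensitivity)

sens = np.sqrt(2)                      # L2 sensitivity of the summed teacher distributions
sigma, n_queries = 100.0, 1000         # toy noise level and number of teacher queries
print("delta at eps = 3 after composition:", delta_composed(3.0, sigma, sens, n_queries))

def aggregate_teachers(teacher_probs, sigma, student_probs=None, top_k=50):
    """teacher_probs: M x |V| matrix of per-teacher next-word distributions for one context.
    Returns the noisy aggregated distribution, optionally restricted to the student's top-k
    candidates (candidate filtering relies only on the student, so it adds no privacy cost)."""
    M, V = teacher_probs.shape
    noisy_sum = teacher_probs.sum(axis=0) + np.random.normal(0.0, sigma, size=V)
    p_agg = np.clip(noisy_sum, 0.0, None) / M          # renormalise rare negative entries to 0
    if student_probs is not None:
        keep = np.argsort(student_probs)[-top_k:]
        mask = np.zeros(V)
        mask[keep] = 1.0
        p_agg = p_agg * mask
    return p_agg / p_agg.sum()
```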
Adding the noises given by Lemma 5.2 to each coordinate (each candidate at each generation step of SeqPATE) preserves (ε, δ(ε))-DP for f(D). Finally, when we extract top-k coordinates by top-k candidate filtering (Sec. 4.3), the privacy guarantee also holds due to the post-processing property [14]. Therefore, the fact about whether a sample is in SeqPATE’s private sets is protected (satisfying (ε, δ(ε))-DP). 3In the first 10 training batches, the top-50 predictions of the student cover 94% “true” labels of pseudo samples. 5.3 Differential Privacy of Users’ Secret Phrases The above analyses show that we can protect the privacy of each sample (i.e., one occurrence of a sentence). However, in practice, we may want to protect all occurrences of some secret phrases specific to a user (e.g., names and addresses).4 Consider a secret phrase s that occurs ns times (ns ≥ 1) in the private set. According to group privacy [14], the protection on phrase s satisfies (nε, e nε−1 eε−1 δ)-DP [22], where the privacy loss scales linearly with the number of occurrences of s (We discuss and analyze a better strategy to reduce the privacy loss of baselines in App. M). Naively applying a DP algorithm requires larger noise to protect phrases that may occur multiple times. SeqPATE enjoys a stronger guarantee by assigning all data of a single user to one or a few teachers, such that any user-specific phrase occurs in the training data of only one or a few teachers. We denote ñs as the number of teachers whose data contain the phrase s. Since adding or removing the phrase s affects only ñs teachers (ñs is usually 1 or 2) and thus results in a sensitivity of √ 2ñs (See App. D for details). In this way, the strength of protection on secret phrases is roughly equal to that we have derived for sample-level DP. The exact (ε, δ(ε, ñs))-DP for the phrase s can be obtained according to Lemma 5.1 & 5.2, where δ(ε, ñs) = Φ( ñs√2σ − εσ√ 2ñs )− eεΦ(− ñs√ 2σ − εσ√ 2ñs ). Unlike other generic DP algorithms such as NoisySGD, SeqPATE avoids a linear increase in privacy loss (i.e., a linear increase in ε) on user phrases by careful partitioning of the private data. This effect is complimentary to other generic, but more intrusive, techniques such as redaction and deduplication [50] for addressing the same issue. Finally, a user-specific partitioning with SeqPATE also protects multiple secret phrases of the same user (e.g., a combination of SSN, credit card numbers, address, day of birth) jointly without incurring a larger privacy loss — a benefit that deduplication does not provide. 5.4 How does DP prevent memorization in SeqPATE? In practice, the privacy of the language model is usually interpreted as not generating a secret phrase in the training data as-is during inference. Thus, one may wonder how DP prevents such unintended memorization of the training data. We remark that the protection against memorization follows the definition of DP. Consider the attack by Carlini et al. [6], which uses a language model to predict a secret phrase s given a prefix. By the closure to post-processing [14], the prediction also satisfies DP. We denote W as the undesirable event where SeqPATE generates the phrase s verbatim. The DP definition implies that the probability of W to happen when s is in the SeqPATE’s private sets is at most eε larger than the probability of an alternative SeqPATE model trained without s in those sets. The chances for the latter model to generate text with s are astronomically small. 
Hence, DP implies that the probability of W under the former model (i.e. any SeqPATE model in general) is small. 6 Experiments 6.1 Experimental Settings Datasets. We evaluate our model on two datasets. AirDialog [47] consists of 1M utterances from customer service dialog on flight booking; Europarl_v6 consists of 2M English sentences collected from European Parliament.5 (See details about datasets in App. E.) Baselines. We compare SeqPATE with two DP baselines: (1) standard NoisySGD trained on the private data with calibrated noise on clipped gradients [1, 22] and further trained on public set Dpub without protection; (2) based on NoisySGD, NoisySGD+GC [24] applies a ghost clipping which enables large batch size with memory saving techniques. Additionally, we use two non-DP methods as reference: (1) Pri-GPT trained on the private set without any privacy protection; (2) the public pre-trained GPT-2 model Pub-GPT without access to private data. For all methods, we can optionally fine-tune on the generated pseudo-data as a warm-up, and the operation is denoted as +D̃pub. 4A formal definition of this is called personalized differential privacy, first seen in [16]. 5www.statmt.org/europarl Implementation details. All models are fine-tuned from the (public) pre-trained GPT-2 model [35]. The batch size is 32 for all comparing methods except the GC [24] (GC [24] requires 2048). We use Adam [23] and adjust the initial learning rate with a range of 10−3 to 10−6 for all methods. The δ mentioned in Sec. 5 for all DP methods is 10−6. For SeqPATE, before training the student model with teacher supervision, we first fine-tune it on the public pseudo-data D̃pub as a warm-up. The coefficient λ that balances supervision for the teacher and the pseudo-data (Eq. 4) is set to 20, where we have tuned it on the validation set of the public pseudo-data. The default number of teacher models is 2k, where our model works well according to the experiments in App. H. We designed some strategies 6 to reduce memory and disk usage (See strategies and the computational cost in App. I). We run SeqPATE with 2k teachers on a single GPU in 3 days. Our code is publicly accessible. 7. (See details about hyperparameters in App. G.) Evaluation Metrics. We evaluate the generated text by perplexity (PPL) and Bleu (Bleu-n) [33]. 6.2 Overall Performance Protection at the sample level. Tab. 1 show the performance on the two datasets. Among the non-DP baselines, Pri-GPT acts as an upper bound on the performance, since it can fully utilize the private set by discarding privacy protection. Pub-GPT+D̃pub outperforms Pub-GPT on both datasets, showing that the pseudo data is helpful (additional ablation study on the pseudo data in App. J also verifies this). NoisySGD+GC+D̃pub surpasses the above two methods, since it uses a much larger batch size (2048 vs 32) than NoisySGD. Our method, SeqPATE, significantly outperforms NoisySGD+GC+D̃pub (+59% in Bleu4 on AirDialog and +7.0% in Bleu4 on Europarl_v6) while ensuring the same level of privacy protection in terms of ε. Protection on the user’s secret phrases. We evaluate our method for privacy protection of secret phrases mentioned in Sec 5.3. The key step is to partition the data such that each phrase only occurs in the training data of very few teachers, which is straightforward given the user ID associated with the private data. In general, SeqPATE works with any set of secret phrases. 
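A sketch of such a user-level partition is shown below (our illustration, not the released code; the greedy balancing heuristic is an assumption). It keeps ñs close to 1 by assigning all of a user's sentences to a single teacher shard.

```python
from collections import defaultdict

def partition_by_user(samples, num_teachers):
    """samples: list of (user_id, sentence) pairs. Returns a list of per-teacher shards such
    that every user's sentences land in exactly one shard."""
    by_user = defaultdict(list)
    for uid, sent in samples:
        by_user[uid].append(sent)
    shards = [[] for _ in range(num_teachers)]
    # Greedy balancing: assign the next (largest) user to the currently smallest shard.
    for uid, sents in sorted(by_user.items(), key=lambda kv: -len(kv[1])):
        smallest = min(range(num_teachers), key=lambda t: len(shards[t]))
        shards[smallest].extend(sents)
    return shards

samples = [("u1", "my name is Alice Smith"), ("u1", "Alice Smith, seat 12A"),
           ("u2", "call me Bob"), ("u3", "ticket for Carol")]
shards = partition_by_user(samples, num_teachers=2)
print([len(s) for s in shards])   # both of u1's sentences sit in the same shard
```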
In our experiments, we consider a user’s full name as their secret phrase, since it can be easily recognized from the data. We partition AirDialog’s private data according to the accompanying user IDs. As a result, 96.6% of users have all of their data assigned to a single teacher (details about the data partition are in App. F). As described in Sec. 5.3, standard DP methods incur a larger privacy loss on secret phrases. In Tab. 2, we see that NoisySGD+GC+D̃pub needs large noise to achieve a satisfactory level of protection on phrases, because ε increases linearly with the frequency of the phrase (group privacy [14]). “Batching users” indicates partitioning the data into batches according to users, which helps NoisySGD protect users’ phrases (more analyses in App. M). For SeqPATE, the number of teachers trained on data containing the phrase, ñs, is close to 1 on average after our partitioning. Thus, SeqPATE provides the same level of protection on users’ secret phrases with smaller noise and therefore achieves better performance (+70% and +36% in Bleu4) (see more about the protection level on users’ secret phrases in App. F).

6 We train and run inference on the teachers one by one and cache the teachers’ outputs.
7 https://github.com/tianzhiliang/SeqPATE

Privacy-utility tradeoff. In Fig. 2, we show the privacy-utility tradeoff curves of all DP algorithms.8 Typically, DP with ε ∈ [0.1, 10] is considered to provide meaningful protection [45]. We observe that SeqPATE outperforms NoisySGD and NoisySGD+GC+D̃pub in this range. However, SeqPATE does not work better than the two methods when ε > 10. The reason is that NoisySGD+GC+D̃pub approaches Pri-GPT as ε approaches infinity (i.e., the noise approaches 0), whereas SeqPATE with an infinite ε is still weaker than Pri-GPT because distillation still incurs a performance loss: the teachers cannot completely transfer knowledge from the private data to the student. Therefore, we suggest using SeqPATE when strong privacy protection is desirable.

6.3 Ablation Studies

There are several design choices in SeqPATE, and we study the importance of each of them. In Tab. 3, we consider the following variants of SeqPATE: (1) −Merge_P: aggregating the teachers by voting instead of averaging their output distributions; (2) −KL: training the student with the cross-entropy loss on the teachers’ top-1 prediction instead of the KL divergence; (3) −Lpseudo: not learning from the pseudo label (Eq. 3); (4) −Effi KD: querying teachers on all samples without selection; (5) −Gaussian: using the Laplace mechanism, as in the original PATE algorithm, instead of the Gaussian mechanism; and (6) −All: using none of the above strategies, which is similar (although not equivalent) to the original PATE (the difference is that PATE needs to roll out all teachers (Sec. 4.1)). Aggregating the teachers by averaging their output distributions and training with the KL loss are the most important strategies for SeqPATE. The poor performance of −Merge_P shows that voting is not suitable for text generation: voting over a large output space leads to low agreement rates (the two aggregation rules are sketched below). The results show that the Lpseudo loss makes little contribution to SeqPATE, because the student has already been pre-trained on its training set via Lpseudo before the teacher-supervised training. The improvement from efficient knowledge distillation (Effi KD) on AirDialog is larger than that on Europarl_v6, which shows that a “clever” student (e.g., the models on AirDialog with low PPL and high Bleu) benefits more from this strategy. This is because the “clever” student can dramatically save privacy cost and reallocate it to where it benefits the student most. The poor performance of −All verifies that the original PATE is not suitable for text generation.
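The two aggregation rules compared by the −Merge_P ablation can be written down in a few lines. The sketch below is a minimal illustration under our own assumptions (function names, toy sizes, and the example noise scale are ours); the first rule, averaging the teachers' output distributions, adding Gaussian noise to each coordinate, clipping negatives to zero, and renormalising, follows the aggregation described in Sec. 4.2, while the voting variant is the PATE-style alternative that performs poorly here.

```python
import numpy as np

def aggregate_by_averaging(teacher_probs: np.ndarray, sigma: float, rng) -> np.ndarray:
    """teacher_probs: (M, |V|) next-token distributions from M teachers.
    Add per-coordinate Gaussian noise, average, clip negatives to 0, renormalise."""
    noisy = teacher_probs + rng.normal(0.0, sigma, size=teacher_probs.shape)
    agg = np.clip(noisy.mean(axis=0), 0.0, None)
    return agg / agg.sum()

def aggregate_by_voting(teacher_probs: np.ndarray) -> int:
    """PATE-style alternative (-Merge_P): each teacher votes for its argmax token.
    Over a vocabulary-sized output space the votes spread thin, so agreement is low."""
    votes = np.bincount(teacher_probs.argmax(axis=1), minlength=teacher_probs.shape[1])
    return int(votes.argmax())

rng = np.random.default_rng(0)
M, V = 8, 5  # toy sizes; the paper uses ~2k teachers and a full GPT-2 vocabulary
teachers = rng.dirichlet(np.ones(V), size=M)
print(aggregate_by_averaging(teachers, sigma=0.05, rng=rng))
print(aggregate_by_voting(teachers))
```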
6.4 Analyses on Candidate Filtering and Teacher Numbers

To analyze candidate filtering under different strategies, we conduct experiments with top-p and top-k filtering (a minimal sketch of the two rules follows at the end of this subsection). As shown in Tab. 4, our full model, which employs top-p filtering (threshold p = 0.95), surpasses most variants with a manually chosen k. Top-k filtering (k = 50 or 100) also works well. Filtering with too small a k (k = 1 or k = 10) discards too much useful information from the supervision (k = 1 differs from −KL in Tab. 3, which uses the top-1 of the teachers’ results). Filtering with an oversized k results in unnecessarily large noise: candidates with very small probabilities should be filtered during generation, but random noise may increase their probabilities, so the model may be misled by the noise into generating those words. The results in App. H show that more teachers lead to better results when the number of teachers is in the range of 1 ∼ 2k. This is because the noise assigned to each teacher drops linearly as the number of teachers increases. Note that SeqPATE cannot always benefit from increasing the number of teachers, because the amount of data per teacher decreases linearly as the number of teachers goes up. We use ε = 3 for sample-level protection for all results in Tabs. 3, 4, and App. H. Additionally, we conduct empirical comparisons and analyses of SeqPATE versus the original PATE in App. N. We show the effects of the protections on users’ secret phrases in App. O. We compare SeqPATE with another non-DP baseline (i.e., blacklist-based filtering) in App. P. We also conduct a case study in App. Q.
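As referenced above, the two filtering rules can be sketched as follows. This is an illustrative sketch only; the function names, the renormalisation helper, and the toy vocabulary are our assumptions, while the rules themselves (keep the top-k tokens of the student's distribution, or the smallest set whose cumulative student probability reaches p, then renormalise the distributions over the survivors) are the ones defined in Sec. 4.3.

```python
import numpy as np

def top_k_mask(student_probs: np.ndarray, k: int) -> np.ndarray:
    """Boolean mask keeping the k most likely tokens under the student."""
    keep = np.zeros_like(student_probs, dtype=bool)
    keep[np.argsort(student_probs)[-k:]] = True
    return keep

def top_p_mask(student_probs: np.ndarray, p: float) -> np.ndarray:
    """Mask keeping the smallest set of tokens whose cumulative student
    probability reaches p (i.e., k is chosen dynamically)."""
    order = np.argsort(student_probs)[::-1]
    k = int(np.searchsorted(np.cumsum(student_probs[order]), p)) + 1
    keep = np.zeros_like(student_probs, dtype=bool)
    keep[order[:k]] = True
    return keep

def filter_and_renormalize(dist: np.ndarray, keep: np.ndarray) -> np.ndarray:
    """Zero out the filtered tokens and renormalise over the survivors."""
    out = np.where(keep, dist, 0.0)
    return out / out.sum()

# Toy example over a 5-word vocabulary.
student = np.array([0.50, 0.30, 0.15, 0.04, 0.01])
teacher_agg = np.array([0.40, 0.35, 0.10, 0.10, 0.05])
keep = top_p_mask(student, p=0.95)  # keeps the 3 most likely student tokens here
print(filter_and_renormalize(teacher_agg, keep))
```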
7 Related Work

Text generation models may leak user information through the generated texts [19, 7]. One direction of privacy protection is to protect author-level (user-level) information. These methods prevent attackers from inferring author attributes (e.g., gender, age) [25] and the relationship between information and authors [29]. Some researchers [40, 41] infer membership (whether samples from a given author are used to train the model) given a black-box model. Some papers protect the user privacy of training data against untrusted servers via federated learning [27, 10]. Another direction is to prevent attackers from extracting sensitive information in training sets by analyzing the outputs [30, 22], which is urgently needed [7]. Our SeqPATE focuses on this direction. In this direction, regularization methods [6, 43, 20] restrict the model capacity and prevent the model from memorizing exact training samples. Anonymization methods [26, 44] detect sensitive text and replace it with non-sensitive text. Unlike DP [14] methods, the above methods do not provide a quantifiable guarantee of privacy protection.

Some researchers apply DP to text generation. For user-level privacy, ER-AE [4] augments the semantic information in the generated text to hide authors’ writing styles from attackers. McMahan et al. [28] propose a recurrent language model with a DP guarantee against the identification of users. Note that user-level privacy (the relationship between users and their information) is different from the privacy of users’ secret phrases in our model: our model prevents individual user phrases from being detected. Some researchers apply NoisySGD to text generation to prevent sensitive training samples from being extracted: some of them [37, 39, 50] employ DP to protect a subset of selected tokens; others [22, 49, 24] apply DP to protect both samples and all tokens, but the privacy cost on tokens is very high (Sec. 5.3). Our model falls into the latter category and reduces the privacy cost of tokens. Kerrigan et al. [22] apply NoisySGD [1] to text generation. Yu et al. [49] investigate fine-tuning strategies on pre-trained language models with NoisySGD. Li et al. [24] apply ghost clipping to pre-trained language models with NoisySGD and reduce memory usage. Shi et al. [38] apply DP to particular generation steps instead of training samples or n-grams. Brown et al. [5] analyze DP-based methods versus data sanitization for text generation models. Brown et al. [12] propose an efficient NoisySGD to speed up model training.

Differential privacy (DP) [13, 14] formally defines and quantifies privacy. ML models with DP guarantees [46, 15, 52] prevent the existence of individual training examples from being detected [6]. Some researchers protect the privacy of empirical risk minimization classifiers [8] and SVMs [36] with DP. Following Song et al. [42], NoisySGD [1] achieves DP on deep learning models by adding noise to gradients. Pichapati et al. [34] adaptively clip the gradients in NoisySGD. PATE [31, 32] transfers knowledge from teacher models trained on private sets, with added noise, to a student model. KNN-PATE [51] refines PATE by accessing only the k-nearest neighbors from the private set. Jordon et al. [21] adversarially learn to generate synthetic data with discriminators trained by PATE. These methods are not customized for text generation models. Xie et al. [48] propose DPGAN to adversarially learn with a generator and a discriminator.

8 Conclusion

In this paper, we propose a novel framework, SeqPATE, to protect the privacy of the training data of text generation models with DP guarantees. SeqPATE achieves a good privacy-utility trade-off by leveraging both private and public data. As an extension of PATE, SeqPATE can handle the sequential generation paradigm with a large output space at each step and is therefore well suited to text generation models. We avoid rolling out teachers by providing pseudo-inputs for the teachers’ inference and the student’s training. We further reduce the output space by candidate filtering and limit privacy losses via efficient knowledge distillation. SeqPATE achieves better performance under sample-level protection and further provides much stronger protection of users’ secret phrases. The limitations, ethical considerations, and social impacts of this paper are discussed in App. A and L.

9 Acknowledgement

Research in this paper was supported by the Hong Kong Research Grants Council under grant No. 16204920. HH is partly supported by the Samsung Advanced Institute of Technology (Next Generation Deep Learning: From Pattern Recognition to AI). YW is partially supported by NSF Award #2048091. The authors thank Mr. Wei Dong and Dr. Yiping Song for their help and insights on this paper.
1. What is the focus and contribution of the paper on text generation?
2. What are the strengths of the proposed approach, particularly in terms of its extension to the text generation problem?
3. What are the weaknesses of the paper, especially regarding its novelty and comparisons with other works?
4. Do you have any concerns or criticisms regarding the approach taken for users' secret phrases?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper extends the PATE approach to the text generation problem. The authors introduce additional steps to help boost the performance of PATE in the text generation setting, as the original approach does not cope well with the large output space of the vocabulary. The paper further studies phrase-level privacy beyond the regularly studied sample-level privacy.
Strengths And Weaknesses
The paper is well-written and easy to follow. Unfortunately, my main concern is the novelty of the paper. The approach is heavily based on the PATE algorithm, with a few tricks to make it work better for the text generation task. It utilizes a pre-trained LM to generate pseudo completions, reduces the output space by filtering the tail of the distribution without a privacy requirement, and finally reduces the privacy loss by acquiring teacher supervision only when the student is not good at a certain prediction. The latter idea has also appeared in the more recent PATE paper. While I believe these extensions are valuable in improving the performance of PATE in this scenario, I do not think they provide sufficient novelty for this venue. I have one critical comment about the users' secret phrases section. The authors took the route of group privacy for this scenario, which I do not think is the most effective way with DP. The DP-SGD algorithm can easily be adapted to provide "user-level privacy" by batching users instead of samples. I find it an unfair comparison in the sense that the authors have not employed this approach but took the naive way of applying group privacy at the user level.
Questions
See strengths and weaknesses.
Limitations
NA.
NIPS
1. What is the focus and contribution of the paper on text generation tasks?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper regarding its experiments and applications?
4. Are there any concerns or suggestions regarding the societal impact and limitations of the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes an extension of PATE, a private learning algorithm, to text generation tasks. The extensions are simple yet effective: they generate pseudo inputs and reduce the sequence generation problem to next-word prediction. The authors also propose a strategy to dynamically filter out candidates to reduce the large output space of the text decoder. Experiments on the sentence completion task show that the proposed model is effective in protecting samples and sensitive phrases.
Strengths And Weaknesses
Strengths: The proposed extension is very simple yet intuitive and effective for differentially private text generation.
Weaknesses: The paper could have been more convincing if the model were tested on multiple text generation tasks, such as dialog response generation (generating a response given previous utterances), where privacy is more crucial.
Questions
None.
Limitations
As the work focuses on privacy, I think it would be nice to have a specific section on limitations and societal impact. This is currently absent in the main paper.
NIPS
Title SeqPATE: Differentially Private Text Generation via Knowledge Distillation Abstract Protecting the privacy of user data is crucial for text generation models, which can leak sensitive information during generation. Differentially private (DP) learning methods provide guarantees against identifying the existence of a training sample from model outputs. PATE is a recent DP learning algorithm that achieves high utility with strong privacy protection on training samples. However, text generation models output tokens sequentially in a large output space; the classic PATE algorithm is not customized for this setting. Furthermore, PATE works well to protect sample-level privacy, but is not designed to protect phrases in samples. In this paper, we propose SeqPATE, an extension of PATE to text generation that protects the privacy of individual training samples and sensitive phrases in training data. To adapt PATE to text generation, we generate pseudo-contexts and reduce the sequence generation problem to a next-word prediction problem. To handle the large output space, we propose a candidate filtering strategy to dynamically reduce the output space, and refine the teacher aggregation of PATE to avoid low agreement due to voting for a large number of candidates. To further reduce privacy losses, we use knowledge distillation to reduce the number of teacher queries. The experiments verify the effectiveness of SeqPATE in protecting both training samples and sensitive phrases. 1 Introduction Recent work has shown that sensitive user information in training corpora, such as addresses and names, can be extracted from text generation models [6]. Providing privacy guarantees to the training corpora of text generation models has become a critical problem. Differential privacy (DP) provides provable guarantees against detecting individuals in datasets. Deep learning models with DP guarantees ensure that the existence of a specific training sample cannot be detected. NoisySGD [42, 3, 1] is a popular DP algorithm for deep learning that adds noise to the gradients. PATE [31] is another type of DP learning algorithm that transfers knowledge from teachers trained on private data to a student model, where noises are added to teacher predictions to satisfy DP. PATE is model-agnostic, and its privacy cost derives from the knowledge distillation process instead of the model gradients in NoisySGD [42, 24]. Therefore, the noises required by PATE do not scale with model size. Given this benefit, PATE has great potential for text generation, since large language ∗This paper was partially done when Zhiliang Tian was a Ph.D. student at HKUST and a visiting scholar at NYU. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). models (e.g., GPT-2 [35]) have become the backbone of most text generation models. However, NoisySGD and PATE are used to protect sample-level privacy [51, 24] and not customized to protect sensitive phrases in the data with a low privacy cost [22, 39, 50]. Additionally, PATE, originally designed for classification tasks, is not customized for sequential generation on a large output space (i.e., the natural language vocabulary), which is very common in text generation. In this paper, we propose SeqPATE, a DP learning algorithm for text generation to protect the privacy of training corpora. By satisfying DP, SeqPATE has the guarantee of preventing the existence of training samples and sensitive phrases in the training corpora from being detected. 
Similarly to PATE, SeqPATE employs a teacher-student framework: (i) a student model learns to generate text from nonsensitive samples; and (ii) a number of teacher models, trained on sensitive text, supervise the student through noised outputs of aggregated teachers. The calibrated noise added to the output ensures that SeqPATE satisfies the DP requirements. This framework still faces some challenges in text generation. First, it suffers from the high costs of GPU memory and time. To obtain sentence-level supervision for text generation, the model needs to roll out all teachers to produce a sentence (i.e. all teachers vote to generate a word, which is then used as the input for the next word prediction). It results in a high inference cost with a large number of teachers (e.g. 2k teachers which are common in PATE). Second, the large output space (i.e., the vocabulary) in text generation leads to (i) low agreement rates among teachers and (ii) large noises required by DP, both of which significantly hurt the task performance. To address the challenges, we generate pseudo-data using a pre-trained language model so that teachers only need to provide token-level supervision given the pseudo inputs. To handle the large output space and reduce the noise, we propose to dynamically filter the candidate words and select only words with high probabilities. Also, we aggregate teachers’ outputs by interpolating their output distributions instead of voting with argmax predictions. DP learning methods provide privacy protection by adding noise, which also reduces the utility of the model. To reduce utility loss, we avoid unnecessary knowledge distillation by selectively applying knowledge distillation to generation steps where the student struggles. Most DP learning methods, including SeqPATE, prevent samples from being extracted. SeqPATE has further advantages in protecting users’ secret phrases that occur multiple times in the corpora. We evaluate SeqPATE on a sentence completion task, which demonstrates its advantage in protecting samples and phrases compared to the baselines. Our contribution is twofold: (i) We propose SeqPATE that provides privacy at both the sample level and the phrase level with theoretical analyses. (ii) We propose several strategies for SeqPATE to handle autoregressive text generation models with a large vocabulary. 2 Problem Setup Our goal is to achieve the privacy protection quantified by DP in text generation to prevent attackers from inferring whether a sample or an n-gram appears in the training set. Our setting contains two types of textual datasets: (1) a private set Dpri from a corpus with sensitive information, (2) a public set Dpub that contains no sensitive information or comes from data contributors (e.g., volunteers) who have no objection to publishing their data. We aim to protect the privacy on the private set and can ignore the privacy protection on the public set. Our application, sentence completion, aims to complete the whole sentence given the prefix. We train a language model to accomplish the task. The public set Dpub consists of prefixes, which can hardly contain sensitive information. The private set Dpri consists of whole sentences. Such a setting fits some real-world text generation applications: in dialog systems, the training samples from online services consist of questions and responses. The questions from customer service staff or service robots can be public, and the response from users carrying individual information should be private. 
3 Background on DP and PATE Definition 3.1. [Differential privacy (DP) [13, 14]] For any two neighboring datasets D,D′ (differ in only one individual), a randomized algorithm M : Xn → Y is (ε, δ)-differentially private if, Pr[M(D) ∈ S] ≤ eε · Pr[M(D′) ∈ S] + δ, ∀S ⊆ Y, where ε > 0, δ ≥ 0. (1) By definition, DP is a quantifiable definition of privacy that provides guarantees on identifications of individual data (preventing an adversary from inferring whether the input is D or D′). ML models with DP ensure that each training sample has a degree of plausible deniability, i.e., the trained model is just as likely as to be trained on an alternative dataset without that sample. In SeqPATE, M is the entire training and inference process, S is the vocabulary, and Pr[·] denotes the output distribution of generating a word. Attackers cannot tell whether a sample is in the training set or not, since the output distributions of the datasets with or without that sample are very similar (bounded by Eq. 1). PATE [31], designed for classification tasks, takes advantage of an unlabeled public dataset Dpub and also trains on a labeled private set Dpri in a semi-supervised scenario. PATE achieves DP through a teacher-student framework with M teacher models and a student model, where the student learns from the private set via knowledge distillation through teachers. PATE has three parts: (i) The teacher models are trained on the private set Dpri, which is shuffled and divided into M disjoint subsets. Each teacher is trained on one subset. (ii) Teacher aggregation merges the teachers’ outputs. Each of the trained teachers then provides supervision to the student’s unlabeled public set Dpub. We use noised majority votes from teachers as labels to supervise the student. (iii) A student model is trained on the public set Dpub with the supervision of the aggregated teachers. 4 Approach Fig. 1 shows an overview of SeqPATE. Given the public prefix (e.g., “Cats sit”), we first obtain the pseudo-inputs by completing the sentence (e.g., “Cats sit on the mats”) using a pre-trained language model (Sec. 4.1). At each word, we then aggregate the teachers’ prediction of the next word as supervision for training the student model (Sec. 4.2). To reduce the noise required by DP for a large output space of the size of the vocabulary, we reduce the output space by dynamically filtering unimportant words. To reduce the number of teacher queries that incur privacy losses, we propose an efficient knowledge distillation strategy that only queries teacher labels on uncertain examples (Sec. 4.3). We show the training algorithm in App. B and a running example in App. K. 4.1 Pseudo Input Generation Conventional text generation models generate words sequentially from left to right. Thus, naively applying PATE to text generation requires rolling out all teachers word by word, i.e., iteratively sampling the next word from the aggregated teacher prediction. This is costly in both computation (running inference for hundreds of teacher models) and privacy costs (querying teachers at every step). To tackle this challenge, we use a pre-trained language model to complete the public prefixes into pseudo sentences; thus, we only need to query teachers on the next word given a (pseudo) context. 4.2 Teacher Aggregation PATE aggregates teacher predictions by majority vote. While it works for classification problems with a relatively small number of classes, the output space of text generation models contains all words in the vocabulary. 
4.2 Teacher Aggregation
PATE aggregates teacher predictions by majority vote. While this works for classification problems with a relatively small number of classes, the output space of text generation models contains all words in the vocabulary. As a result, the number of votes for each candidate word may be very low without a clear winner; for example, multiple candidates may tie for the top-1 prediction. Inspired by Chen et al. [9, 17], we aggregate teacher results by averaging their output distributions. We first train M teacher models on disjoint subsets of the private data. To produce the aggregated next-word distribution given a context c, we average the teachers' output distributions, add calibrated noise, and then renormalize the result into a proper distribution. Following Papernot et al. [32], we apply the Gaussian mechanism. Formally, let $p_\phi^m(\cdot \mid c)$ be the prediction of the m-th teacher. The aggregated distribution is
$$p_{\mathrm{agg}}(\cdot \mid c) \propto \frac{1}{M}\sum_{m=1}^{M}\left(p_\phi^m(\cdot \mid c) + \mathcal{N}(0, \sigma^2)\right),$$
where the Gaussian noise is added to the aggregated output distribution. (Footnote 2: Mathematically, the aggregated distribution with noise may be negative; if so, we renormalize the negative values to 0. Practically, we observed that a negative value is an extremely rare event, since M is usually very large (e.g., 2k) and the first term dominates the above equation.) SeqPATE satisfies the DP guarantee (Eq. 1) by adding this calibrated noise to the teachers' outputs, as mentioned above (detailed analyses in Sec. 5).
4.3 Training of the Student Model
The student model is trained on public pseudo-data and also supervised by the aggregated teachers.
Training objectives. The student model is a language model that predicts the next word given prior contexts. Given contexts from the (public) pseudo-data autocompleted by a pre-trained language model (GPT-2), the student is supervised by both the aggregated teacher predictions and the next word in the pseudo-data (i.e., the pseudo label). The pseudo-data acts as a prior for the student, given that the number of teacher queries is limited due to privacy concerns. The student's loss function has two parts:
• $\mathcal{L}_{\mathrm{teacher}}$ denotes the loss with respect to teacher supervision. Note that the aggregated teacher output is a distribution over words. Therefore, we minimize the forward KL divergence between the aggregated teacher distribution $p_{\mathrm{agg}}$ and the student output distribution $p_\theta$:
$$\mathcal{L}_{\mathrm{teacher}}(c, p_{\mathrm{agg}}) = \mathrm{KL}\!\left(p_{\mathrm{agg}}(\cdot \mid c) \,\|\, p_\theta(\cdot \mid c)\right). \quad (2)$$
• $\mathcal{L}_{\mathrm{pseudo}}$ denotes the loss with respect to the pseudo-labels w from $\tilde{D}_{\mathrm{pub}}$ (i.e., next words generated by a generic language model). Similar to standard language modeling, we use the negative log-likelihood:
$$\mathcal{L}_{\mathrm{pseudo}}(c, w) = -\log p_\theta(w \mid c). \quad (3)$$
Eq. 4 shows the complete loss (λ balances the two terms; we discuss the noise scale σ in Sec. 5):
$$\mathcal{L}(p_{\mathrm{agg}}, \tilde{D}_{\mathrm{pub}}) = \sum_{(c,w)\in \tilde{D}_{\mathrm{pub}}} \mathcal{L}_{\mathrm{pseudo}}(c, w) + \lambda \,\mathcal{L}_{\mathrm{teacher}}(c, p_{\mathrm{agg}}). \quad (4)$$
Reducing the output space via candidate filtering. The high dimensionality of the output of text generation models results in large noise (which is added to each coordinate). To reduce the output dimension (and hence the amount of noise), we filter words on the tail of the distribution of the student model (i.e., set their probability to zero), and renormalize the teachers' aggregated distribution and the student output distribution over the remaining words. Note that the candidate filtering is based on the student's outputs on public or already released inputs, so it does not affect the privacy guarantee. This choice improves the privacy-utility tradeoff by adaptively allocating the privacy budget to release the information most helpful to the task. We experiment with two filtering strategies: top-k and top-p. In top-k filtering, we retain only the top-k most likely candidates and filter the rest according to the student model. In top-p filtering [18], k is chosen dynamically such that the top-k words are the minimum set whose cumulative probability is at least p. This strategy seldom loses good candidates because the student usually does well on top-k predictions from the beginning of training (in the first 10 training batches, the top-50 predictions of the student already cover 94% of the "true" labels of the pseudo samples). A compact sketch of the aggregation, filtering, and student losses follows.
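To make these steps concrete, the following PyTorch-style sketch combines the noisy aggregation of Sec. 4.2, top-p candidate filtering, and the two losses of Eqs. 2-4 for a single context. It is an illustration under assumptions made here (tensor shapes, noise added to the summed distributions, a fixed λ), not the authors' implementation; the efficient knowledge distillation selection described next is omitted.

```python
import torch

def aggregate_teachers(teacher_probs, sigma):
    """teacher_probs: (M, V) tensor of per-teacher next-word distributions.
    Returns a noisy aggregated distribution over the vocabulary (sketch of Sec. 4.2)."""
    summed = teacher_probs.sum(dim=0)                    # f(D) = sum_i x_i
    noisy = summed + sigma * torch.randn_like(summed)    # Gaussian mechanism
    noisy = noisy.clamp_min(0.0)                         # negative values renormalized to 0
    return noisy / noisy.sum()

def top_p_mask(student_probs, p=0.95):
    """Boolean mask keeping the smallest word set with cumulative prob >= p under the
    student model (public information, so filtering adds no extra privacy cost)."""
    sorted_probs, order = student_probs.sort(descending=True)
    cum = sorted_probs.cumsum(dim=0)
    keep_sorted = cum - sorted_probs < p                 # keep until the threshold is reached
    mask = torch.zeros_like(student_probs, dtype=torch.bool)
    mask[order[keep_sorted]] = True
    return mask

def student_step(student_logits, teacher_probs, pseudo_label, sigma=1.0, lam=20.0, p=0.95):
    """One supervised step for a single context c (sketch of Eqs. 2-4)."""
    student_probs = student_logits.softmax(dim=0)
    mask = top_p_mask(student_probs.detach(), p)

    # Renormalize both distributions over the kept candidates.
    p_agg = aggregate_teachers(teacher_probs, sigma) * mask.float()
    p_agg = p_agg / p_agg.sum().clamp_min(1e-12)
    log_p_student = student_logits.masked_fill(~mask, float("-inf")).log_softmax(dim=0)

    # L_teacher: forward KL(p_agg || p_student), restricted to kept candidates (Eq. 2).
    kept = mask & (p_agg > 0)
    l_teacher = (p_agg[kept] * (p_agg[kept].log() - log_p_student[kept])).sum()

    # L_pseudo: NLL of the pseudo label under the unfiltered student distribution (Eq. 3).
    l_pseudo = -student_probs.clamp_min(1e-12).log()[pseudo_label]

    return l_pseudo + lam * l_teacher                    # complete loss (Eq. 4)

# Toy usage with a vocabulary of 8 words and 4 teachers.
V, M = 8, 4
student_logits = torch.randn(V, requires_grad=True)      # stand-in for the student's output
teacher_probs = torch.rand(M, V).softmax(dim=1)          # stand-in for teacher predictions
loss = student_step(student_logits, teacher_probs, pseudo_label=3)
loss.backward()  # would backpropagate into the student in a real training loop
```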
Reducing the number of teacher queries via efficient knowledge distillation. While the aggregated teacher model satisfies DP, each query from the student incurs some privacy loss. Therefore, we obtain teacher supervision only on "hard" examples when training the student. Note that the student is trained on both the pseudo-data and local supervision from the teachers. We consider an example to be hard if the student cannot imitate the pseudo-label, in which case distilling knowledge from the teachers, which are trained on large private data, is helpful. Concretely, we query teachers only when the rank of the pseudo-label is below a certain threshold among words ordered by descending probabilities under the student model. If we query the teachers, the student is trained via the complete loss $\mathcal{L}(p_{\mathrm{agg}}, \tilde{D}_{\mathrm{pub}})$ (Eq. 4); otherwise, the student is trained via $\mathcal{L}_{\mathrm{pseudo}}$ only (Eq. 3). We note that the selection of tokens relies only on the student and is independent of the teachers; thus, the selection does not cause any additional privacy loss.
5 Privacy Analyses
5.1 Preliminary of Differential Privacy
Lemma 5.1 (Analytical Gaussian mechanism [2]). For a numeric query f : X^n → R^d over a dataset D, the randomized algorithm that outputs f(D) + Z, where $Z \sim \mathcal{N}(0, \sigma^2 I_d)$, satisfies (ε, δ(ε))-DP for all ε ≥ 0 and
$$\delta(\varepsilon) = \Phi\!\left(\frac{\Delta}{2\sigma} - \frac{\varepsilon\sigma}{\Delta}\right) - e^{\varepsilon}\,\Phi\!\left(-\frac{\Delta}{2\sigma} - \frac{\varepsilon\sigma}{\Delta}\right),$$
where $\Delta := \Delta_2^{(f)} = \max_{D \sim D'} \|f(D) - f(D')\|_2$ is the global L2 sensitivity of f and Φ is the CDF of $\mathcal{N}(0, 1)$.
We can use the same result for an adaptive composition of a sequence of Gaussian mechanisms.
Lemma 5.2 (Composition of Gaussian mechanisms [11]). The adaptive composition of a sequence of Gaussian mechanisms with noise levels σ₁, σ₂, . . . and global L2 sensitivities Δ₁, Δ₂, . . . satisfies (ε, δ(ε))-DP for all ε ≥ 0 with δ(ε) ≤ δ_M(ε), where M is a Gaussian mechanism with noise multiplier
$$\sigma/\Delta = \Big(\sum_i (\Delta_i/\sigma_i)^2\Big)^{-1/2}.$$
Specifically, the adaptive composition of k identical Gaussian mechanisms with noise multiplier σ satisfies the same privacy guarantee as a single Gaussian mechanism with noise multiplier $\sigma/\sqrt{k}$. By fixing k and ε, we can calibrate the noise by choosing an appropriate σ in Sec. 4.2.
5.2 Differential Privacy for Language Models at the Sample Level
Recall that we partition the private dataset into M disjoint subsets and train each teacher model on one of the subsets. Let the vector $x_i \in \mathbb{R}^{|V|}$ denote the probability distribution predicted by the i-th teacher model given some context, where |V| is the vocabulary size. The aggregation function $f(D) := \sum_{i=1}^{M} x_i$ is the sum of the probability distributions predicted by all teachers. Since the datasets are disjoint, changing one sample affects only one teacher model. For neighboring datasets D, D′, let j denote the index of that teacher model; the probability distributions $x_j$ and $x'_j$ (derived from D and D′, respectively) are different. Then, the sensitivity Δ in Lemmas 5.1 & 5.2 is (see detailed deductions in App. C)
$$\Delta := \Delta_2^{(f)} = \|f(D) - f(D')\|_2 \le \|x_j - x'_j\|_2 \le \sqrt{2}.$$
Adding the noise given by Lemma 5.2 to each coordinate (each candidate at each generation step of SeqPATE) preserves (ε, δ(ε))-DP for f(D). Finally, when we extract the top-k coordinates by top-k candidate filtering (Sec. 4.3), the privacy guarantee also holds due to the post-processing property [14]. Therefore, the fact of whether a sample is in SeqPATE's private set is protected (satisfying (ε, δ(ε))-DP).
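To illustrate how this accounting can be used in practice, the sketch below evaluates δ(ε) from Lemma 5.1 for a composed mechanism as in Lemma 5.2 and numerically calibrates the per-query σ for a target (ε, δ). It is an illustrative aid rather than part of the paper's analysis; the target values and the number of queries are placeholders.

```python
from math import exp, sqrt
from scipy.stats import norm

def delta_for(eps, sigma, delta_sens, k):
    """delta(eps) of k adaptively composed Gaussian mechanisms (Lemmas 5.1 & 5.2).
    sigma: per-query noise std; delta_sens: per-query L2 sensitivity, e.g. sqrt(2) at the
    sample level, or sqrt(2) * n_s_tilde for a user phrase seen by n_s_tilde teachers (Sec. 5.3)."""
    mu = (sigma / delta_sens) / sqrt(k)   # effective noise multiplier after composition
    return norm.cdf(1 / (2 * mu) - eps * mu) - exp(eps) * norm.cdf(-1 / (2 * mu) - eps * mu)

def calibrate_sigma(eps, target_delta, delta_sens, k, lo=1e-3, hi=1e6):
    """Binary-search an (approximately) smallest per-query sigma meeting (eps, target_delta)-DP."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if delta_for(eps, mid, delta_sens, k) > target_delta:
            lo = mid   # too little noise, increase sigma
        else:
            hi = mid
    return hi

# Placeholder numbers: eps = 3, delta = 1e-6, sensitivity sqrt(2), 10k teacher queries.
sigma = calibrate_sigma(eps=3.0, target_delta=1e-6, delta_sens=sqrt(2), k=10_000)
print(round(sigma, 2), delta_for(3.0, sigma, sqrt(2), 10_000))
```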
5.3 Differential Privacy of Users' Secret Phrases
The above analyses show that we can protect the privacy of each sample (i.e., one occurrence of a sentence). However, in practice, we may want to protect all occurrences of some secret phrases specific to a user (e.g., names and addresses). (Footnote 4: A formal definition of this is called personalized differential privacy, first seen in [16].) Consider a secret phrase s that occurs $n_s$ times ($n_s \ge 1$) in the private set. According to group privacy [14], the protection on phrase s satisfies $\left(n_s\varepsilon,\ \frac{e^{n_s\varepsilon}-1}{e^{\varepsilon}-1}\,\delta\right)$-DP [22], where the privacy loss scales linearly with the number of occurrences of s (we discuss and analyze a better strategy to reduce the privacy loss of baselines in App. M). Naively applying a DP algorithm therefore requires larger noise to protect phrases that may occur multiple times.
SeqPATE enjoys a stronger guarantee by assigning all data of a single user to one or a few teachers, such that any user-specific phrase occurs in the training data of only one or a few teachers. We denote by $\tilde{n}_s$ the number of teachers whose data contain the phrase s. Adding or removing the phrase s affects only $\tilde{n}_s$ teachers ($\tilde{n}_s$ is usually 1 or 2) and thus results in a sensitivity of $\sqrt{2}\,\tilde{n}_s$ (see App. D for details). In this way, the strength of protection on secret phrases is roughly equal to that derived above for sample-level DP. The exact $(\varepsilon, \delta(\varepsilon, \tilde{n}_s))$-DP for the phrase s can be obtained according to Lemmas 5.1 & 5.2, where
$$\delta(\varepsilon, \tilde{n}_s) = \Phi\!\left(\frac{\tilde{n}_s}{\sqrt{2}\,\sigma} - \frac{\varepsilon\sigma}{\sqrt{2}\,\tilde{n}_s}\right) - e^{\varepsilon}\,\Phi\!\left(-\frac{\tilde{n}_s}{\sqrt{2}\,\sigma} - \frac{\varepsilon\sigma}{\sqrt{2}\,\tilde{n}_s}\right).$$
Unlike other generic DP algorithms such as NoisySGD, SeqPATE avoids a linear increase in privacy loss (i.e., a linear increase in ε) on user phrases by carefully partitioning the private data. This effect is complementary to other generic, but more intrusive, techniques such as redaction and deduplication [50] for addressing the same issue. Finally, a user-specific partitioning with SeqPATE also protects multiple secret phrases of the same user (e.g., a combination of SSN, credit card number, address, and date of birth) jointly without incurring a larger privacy loss, a benefit that deduplication does not provide.
5.4 How does DP prevent memorization in SeqPATE?
In practice, the privacy of a language model is usually interpreted as not generating a secret phrase from the training data as-is during inference. Thus, one may wonder how DP prevents such unintended memorization of the training data. We remark that the protection against memorization follows from the definition of DP. Consider the attack by Carlini et al. [6], which uses a language model to predict a secret phrase s given a prefix. By closure under post-processing [14], the prediction also satisfies DP. We denote by W the undesirable event that SeqPATE generates the phrase s verbatim. The DP definition implies that the probability of W when s is in SeqPATE's private set is at most $e^{\varepsilon}$ times larger than the probability under an alternative SeqPATE model trained without s in that set. The chances for the latter model to generate text with s are astronomically small.
Hence, DP implies that the probability of W under the former model (i.e., any SeqPATE model in general) is also small.
6 Experiments
6.1 Experimental Settings
Datasets. We evaluate our model on two datasets. AirDialog [47] consists of 1M utterances from customer service dialogs on flight booking; Europarl_v6 consists of 2M English sentences collected from the European Parliament (www.statmt.org/europarl). (See details about the datasets in App. E.)
Baselines. We compare SeqPATE with two DP baselines: (1) standard NoisySGD, trained on the private data with calibrated noise on clipped gradients [1, 22] and further trained on the public set Dpub without protection; (2) NoisySGD+GC [24], which builds on NoisySGD and applies ghost clipping, enabling a large batch size with memory-saving techniques. Additionally, we use two non-DP methods as references: (1) Pri-GPT, trained on the private set without any privacy protection; (2) the public pre-trained GPT-2 model, Pub-GPT, without access to private data. For all methods, we can optionally fine-tune on the generated pseudo-data as a warm-up, and this operation is denoted as +D̃pub.
Implementation details. All models are fine-tuned from the (public) pre-trained GPT-2 model [35]. The batch size is 32 for all compared methods except GC [24], which requires 2048. We use Adam [23] and adjust the initial learning rate within a range of 10^-3 to 10^-6 for all methods. The δ mentioned in Sec. 5 is 10^-6 for all DP methods. For SeqPATE, before training the student model with teacher supervision, we first fine-tune it on the public pseudo-data D̃pub as a warm-up. The coefficient λ that balances the teacher supervision and the pseudo-data (Eq. 4) is set to 20; we tuned it on the validation set of the public pseudo-data. The default number of teacher models is 2k, with which our model works well according to the experiments in App. H. We designed some strategies to reduce memory and disk usage: we train and run inference on the teachers one by one and cache the teachers' outputs (see the strategies and the computational cost in App. I). We run SeqPATE with 2k teachers on a single GPU in 3 days. Our code is publicly accessible at https://github.com/tianzhiliang/SeqPATE. (See details about hyperparameters in App. G.)
Evaluation Metrics. We evaluate the generated text by perplexity (PPL) and Bleu (Bleu-n) [33].
6.2 Overall Performance
Protection at the sample level. Tab. 1 shows the performance on the two datasets. Among the non-DP baselines, Pri-GPT acts as an upper bound on the performance, since it can fully utilize the private set by discarding privacy protection. Pub-GPT+D̃pub outperforms Pub-GPT on both datasets, showing that the pseudo-data is helpful (an additional ablation study on the pseudo-data in App. J also verifies this). NoisySGD+GC+D̃pub surpasses the above two methods, since it uses a much larger batch size (2048 vs. 32) than NoisySGD. Our method, SeqPATE, significantly outperforms NoisySGD+GC+D̃pub (+59% in Bleu4 on AirDialog and +7.0% in Bleu4 on Europarl_v6) while ensuring the same level of privacy protection in terms of ε.
Protection on users' secret phrases. We evaluate our method for the privacy protection of secret phrases mentioned in Sec. 5.3. The key step is to partition the data such that each phrase only occurs in the training data of very few teachers, which is straightforward given the user ID associated with the private data (a minimal partitioning sketch is given below). In general, SeqPATE works with any set of secret phrases.
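The following is a minimal sketch of such a user-level partition; the grouping key and the greedy assignment rule are illustrative assumptions rather than the paper's exact procedure.

```python
# Illustrative sketch: assign each user's private sentences to exactly one teacher shard,
# so that a user-specific phrase appears in the training data of only one (or a few) teachers.
from collections import defaultdict

def partition_by_user(private_samples, num_teachers):
    """private_samples: iterable of (user_id, sentence) pairs."""
    by_user = defaultdict(list)
    for user_id, sentence in private_samples:
        by_user[user_id].append(sentence)

    shards = [[] for _ in range(num_teachers)]
    # Greedy balancing: place each user's whole bundle on the currently smallest shard.
    for user_id, sentences in sorted(by_user.items(), key=lambda kv: -len(kv[1])):
        smallest = min(range(num_teachers), key=lambda i: len(shards[i]))
        shards[smallest].extend(sentences)   # all of this user's data stays together
    return shards

shards = partition_by_user(
    [("u1", "I live at 12 Oak St."), ("u1", "Call me Jane."), ("u2", "My flight is AA101.")],
    num_teachers=2,
)
print([len(s) for s in shards])  # e.g. [2, 1]
```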
In our experiments, we consider a user's full name as their secret phrase, since it can be easily recognized from the data. We partition AirDialog's private data according to the accompanying user IDs. As a result, 96.6% of users have all their data assigned to a single teacher (details about the data partition are in App. F). As described in Sec. 5.3, standard DP methods incur a larger privacy loss on secret phrases. In Tab. 2, we see that NoisySGD+GC+D̃pub needs large noise to achieve a satisfactory level of protection on phrases, because ε increases linearly with the frequency of the phrase (group privacy [14]). "Batching users" indicates partitioning data into batches according to users, which helps NoisySGD protect users' phrases (more analyses in App. M). For SeqPATE, the number of teachers trained on data containing the phrase, ñs, is close to 1 on average after our partition. Thus, SeqPATE provides the same level of protection on users' secret phrases with smaller noise and thereby achieves better performance (+70% and +36% in Bleu4) (see more about the protection level on users' secret phrases in App. F).
Privacy-utility tradeoff. In Fig. 2, we show the privacy-utility tradeoff curves of all DP algorithms. Typically, DP with ε ∈ [0.1, 10] is considered to provide meaningful protection [45]. We observe that SeqPATE outperforms NoisySGD and NoisySGD+GC+D̃pub in this range. However, SeqPATE does not work better than these two methods when ε > 10. The reason is that NoisySGD+GC+D̃pub approaches Pri-GPT as ε approaches infinity (i.e., as the noise approaches 0), whereas SeqPATE with an infinite ε is still weaker than Pri-GPT because distillation still incurs a performance loss: the teachers cannot completely transfer knowledge from the private data to the student. Therefore, we suggest using SeqPATE when strong privacy protection is desirable.
6.3 Ablation Studies
There are several design choices in SeqPATE, and we study the importance of each of them. In Tab. 3, we consider the following variants of SeqPATE: (1) −Merge_P: aggregating the teachers by voting instead of averaging their output distributions; (2) −KL: training the student using the cross-entropy loss with respect to the teachers' top-1 prediction instead of the KL divergence; (3) −Lpseudo: not learning from the pseudo label (Eq. 3); (4) −Effi KD: querying teachers on all samples without selection; (5) −Gaussian: using the Laplace mechanism, as in the original PATE algorithm, instead of the Gaussian mechanism; and (6) −All: using none of the above strategies, which is similar (although not equivalent) to the original PATE (the difference is that PATE needs to roll out all teachers (Sec. 4.1)).
Aggregating the teachers by averaging their output distributions (rather than voting) and training with the KL loss are the most important strategies for SeqPATE. The poor performance of −Merge_P shows that voting is not suitable for text generation: voting over a large output space leads to low agreement rates. The results also show that the Lpseudo loss makes little contribution to SeqPATE; the reason is that we have already pre-trained on the student's training set via Lpseudo before the student's training. The improvement from efficient knowledge distillation (Effi KD) on AirDialog is larger than that on Europarl_v6, which shows that a "clever" student (e.g., the models on AirDialog with low PPL and high Bleu) benefits more from this strategy.
This is because the "clever" student can save much of the privacy cost and reallocate it to where it benefits the student most. The poor performance of −All verifies that the original PATE is not suitable for text generation.
6.4 Analyses on Candidate Filtering and Teacher Numbers
To analyze candidate filtering with different filtering strategies, we conduct experiments on top-p and top-k filtering. As shown in Tab. 4, our full model, which employs top-p filtering (with threshold p = 0.95), surpasses most variants with a manually chosen k. Top-k filtering (k = 50 or 100) also works well. Filtering with too small a k (k = 1 or k = 10) discards too much useful information from the supervision (k = 1 differs from −KL in Tab. 3, which uses the top-1 of the teachers' results). Filtering with an oversized k results in unnecessarily large noise: candidates with very small probabilities should be filtered during generation, but random noise may increase their probabilities, so the model may be misled by the noise into generating those words.
The results in App. H show that more teachers lead to better results when the number of teachers is in the range of 1 ∼ 2k. This is because the noise assigned to each teacher drops linearly as the number of teachers increases. Note that SeqPATE cannot always benefit from increasing the number of teachers, because the amount of each teacher's data decreases linearly as the number of teachers goes up. We choose ε = 3 for sample-level protection for all results in Tabs. 3, 4, and App. H. Additionally, we conduct empirical comparisons and analyses of SeqPATE versus the original PATE in App. N. We show the effects of the protections on users' secret phrases in App. O. We compare SeqPATE with another non-DP baseline (i.e., blacklist-based filtering) in App. P. We also conduct a case study in App. Q.
7 Related Work
Text generation models may leak user information through the generated texts [19, 7]. One direction of privacy protection is to protect author-level (user-level) information. These methods prevent attackers from inferring author attributes (e.g., gender, age) [25] and the relationship between information and authors [29]. Some researchers [40, 41] infer the membership (whether samples from a given author are used to train the model) given a black-box model. Some papers protect the user privacy of training data against untrusted servers via federated learning [27, 10]. Another direction is to prevent attackers from extracting sensitive information in training sets by analyzing the outputs [30, 22], which is urgently needed [7]. Our SeqPATE focuses on this direction. In this direction, regularization methods [6, 43, 20] restrict the model capacity and prevent the model from memorizing exact training samples. Anonymization methods [26, 44] detect sensitive text and replace it with non-sensitive text. Unlike DP [14] methods, the above methods do not provide a quantifiable guarantee for privacy protection.
Some researchers apply DP to text generation. For user-level privacy, ER-AE [4] augments the semantic information in the generated text to hide authors' writing styles from attackers. McMahan et al. [28] propose a recurrent language model with a DP guarantee against the identification of users.
Note that user-level privacy (the relationship between users and their information) is different from the privacy of users' secret phrases in our model: our model prevents individual user phrases from being detected. Some researchers apply NoisySGD to text generation to prevent sensitive training samples from being extracted: some of them [37, 39, 50] employ DP to protect a subset of selected tokens; others [22, 49, 24] apply DP to protect both samples and all tokens, but the privacy cost on tokens is very high (Sec. 5.3). Our model falls into the latter category and reduces the privacy cost on tokens. Kerrigan et al. [22] apply NoisySGD [1] to text generation. Yu et al. [49] investigate fine-tuning strategies on pre-trained language models with NoisySGD. Li et al. [24] apply ghost clipping to pre-trained language models with NoisySGD and reduce memory usage. Shi et al. [38] apply DP to particular generation steps instead of training samples or n-grams. Brown et al. [5] analyze DP-based methods versus data sanitization for text generation models. Brown et al. [12] propose an efficient NoisySGD to speed up model training.
Differential privacy (DP) [13, 14] formally defines and quantifies privacy. ML models with a DP guarantee [46, 15, 52] prevent the existence of individual training examples from being detected [6]. Some researchers protect the privacy of empirical risk minimization classifiers [8] and SVMs [36] with DP. Following Song et al. [42], NoisySGD [1] achieves DP on deep learning models by adding noise to gradients. Pichapati et al. [34] adaptively clip the gradients in NoisySGD. PATE [31, 32] transfers knowledge from teacher models trained on private sets, with noise, to a student model. KNN-PATE [51] refines PATE by accessing only the k-nearest neighbors from the private set. Jordon et al. [21] adversarially learn to generate synthetic data with discriminators trained by PATE. These methods are not customized for text generation models. Xie et al. [48] propose DPGAN to adversarially learn with a generator and a discriminator.
8 Conclusion
In this paper, we propose a novel framework, SeqPATE, to protect the privacy of the training data of text generation models with DP guarantees. SeqPATE achieves a good privacy-utility tradeoff by leveraging both private and public data. As an extension of PATE, SeqPATE can handle the sequential generation paradigm with a large output space at each step and is therefore adapted to text generation models. We avoid rolling out teachers by providing pseudo-inputs for the teachers' inference and the student's training. We further reduce the output space by candidate filtering and limit privacy losses via efficient knowledge distillation. SeqPATE achieves better performance with sample-level protection and further provides much stronger protection on users' secret phrases. The limitations, ethical considerations, and social impacts of this paper are discussed in App. A and L.
9 Acknowledgement
Research in this paper was supported by the Hong Kong Research Grants Council under grant No. 16204920. HH is partly supported by the Samsung Advanced Institute of Technology (Next Generation Deep Learning: From Pattern Recognition to AI). YW is partially supported by NSF Award #2048091. The authors thank Mr. Wei Dong and Dr. Yiping Song for their help and insights on this paper.
1. What is the focus and contribution of the paper on differentially private learning for text generation?
2. What are the strengths of the proposed approach, particularly in terms of privacy protection and motivation?
3. What are the weaknesses of the paper, especially regarding the lack of qualitative and error analysis and runtime analysis?
4. How does the reviewer assess the effectiveness of SeqPATE in protecting both samples and sensitive phrases, and its superiority compared to other frameworks?
5. What are the limitations of the paper regarding its claims and experimental design?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the authors propose a novel framework, SeqPATE, an extension of PATE to text generation, as a differentially private (DP) learning algorithm for text generation. SeqPATE aims to protect the privacy of both training samples and sensitive phrases in samples, and employs a teacher-student framework. Additionally, the authors propose several strategies for SeqPATE to handle text generation as a sequence of classifications over large output spaces.
Strengths And Weaknesses
Strengths
+ Privacy protection is important for text generation models and other tasks
+ Motivation and problem setting are clear
+ The survey of previous work is sufficient
Weaknesses
- Several claims have not been adequately verified, for example, "the effectiveness of SeqPATE in protecting both samples and sensitive phrases" and "training corpora with a moderate privacy cost".
- No qualitative and error analysis
- Runtime analysis is lacking.
Questions
* How about applying a simple approach such as a word blacklist to a benchmark?
* Can you explain the rationale for the superiority of this framework compared to the case where the teacher models and the student model are trained with Dpub and Dpri, respectively?
* Show some examples or a qualitative evaluation of how SeqPATE achieves utility with strong privacy protection on training samples.
Limitations
Without qualitative analysis, this is a quantitative comparison of similar models and does not support the authors' claim. Only the usual text generation metrics (i.e., PPL and Bleu) are used.
NIPS
Title SeqPATE: Differentially Private Text Generation via Knowledge Distillation
Abstract Protecting the privacy of user data is crucial for text generation models, which can leak sensitive information during generation. Differentially private (DP) learning methods provide guarantees against identifying the existence of a training sample from model outputs. PATE is a recent DP learning algorithm that achieves high utility with strong privacy protection on training samples. However, text generation models output tokens sequentially in a large output space; the classic PATE algorithm is not customized for this setting. Furthermore, PATE works well to protect sample-level privacy, but is not designed to protect phrases in samples. In this paper, we propose SeqPATE, an extension of PATE to text generation that protects the privacy of individual training samples and sensitive phrases in training data. To adapt PATE to text generation, we generate pseudo-contexts and reduce the sequence generation problem to a next-word prediction problem. To handle the large output space, we propose a candidate filtering strategy to dynamically reduce the output space, and refine the teacher aggregation of PATE to avoid low agreement due to voting for a large number of candidates. To further reduce privacy losses, we use knowledge distillation to reduce the number of teacher queries. The experiments verify the effectiveness of SeqPATE in protecting both training samples and sensitive phrases.
1 Introduction
Recent work has shown that sensitive user information in training corpora, such as addresses and names, can be extracted from text generation models [6]. Providing privacy guarantees to the training corpora of text generation models has become a critical problem. Differential privacy (DP) provides provable guarantees against detecting individuals in datasets. Deep learning models with DP guarantees ensure that the existence of a specific training sample cannot be detected. NoisySGD [42, 3, 1] is a popular DP algorithm for deep learning that adds noise to the gradients. PATE [31] is another type of DP learning algorithm that transfers knowledge from teachers trained on private data to a student model, where noise is added to the teacher predictions to satisfy DP. PATE is model-agnostic, and its privacy cost derives from the knowledge distillation process instead of the model gradients as in NoisySGD [42, 24]. Therefore, the noise required by PATE does not scale with model size. Given this benefit, PATE has great potential for text generation, since large language models (e.g., GPT-2 [35]) have become the backbone of most text generation models.
(∗This paper was partially done when Zhiliang Tian was a Ph.D. student at HKUST and a visiting scholar at NYU. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).)
However, NoisySGD and PATE are used to protect sample-level privacy [51, 24] and are not customized to protect sensitive phrases in the data with a low privacy cost [22, 39, 50]. Additionally, PATE, originally designed for classification tasks, is not customized for sequential generation over a large output space (i.e., the natural language vocabulary), which is very common in text generation. In this paper, we propose SeqPATE, a DP learning algorithm for text generation to protect the privacy of training corpora. By satisfying DP, SeqPATE has the guarantee of preventing the existence of training samples and sensitive phrases in the training corpora from being detected.
Similarly to PATE, SeqPATE employs a teacher-student framework: (i) a student model learns to generate text from nonsensitive samples; and (ii) a number of teacher models, trained on sensitive text, supervise the student through noised outputs of aggregated teachers. The calibrated noise added to the output ensures that SeqPATE satisfies the DP requirements. This framework still faces some challenges in text generation. First, it suffers from the high costs of GPU memory and time. To obtain sentence-level supervision for text generation, the model needs to roll out all teachers to produce a sentence (i.e. all teachers vote to generate a word, which is then used as the input for the next word prediction). It results in a high inference cost with a large number of teachers (e.g. 2k teachers which are common in PATE). Second, the large output space (i.e., the vocabulary) in text generation leads to (i) low agreement rates among teachers and (ii) large noises required by DP, both of which significantly hurt the task performance. To address the challenges, we generate pseudo-data using a pre-trained language model so that teachers only need to provide token-level supervision given the pseudo inputs. To handle the large output space and reduce the noise, we propose to dynamically filter the candidate words and select only words with high probabilities. Also, we aggregate teachers’ outputs by interpolating their output distributions instead of voting with argmax predictions. DP learning methods provide privacy protection by adding noise, which also reduces the utility of the model. To reduce utility loss, we avoid unnecessary knowledge distillation by selectively applying knowledge distillation to generation steps where the student struggles. Most DP learning methods, including SeqPATE, prevent samples from being extracted. SeqPATE has further advantages in protecting users’ secret phrases that occur multiple times in the corpora. We evaluate SeqPATE on a sentence completion task, which demonstrates its advantage in protecting samples and phrases compared to the baselines. Our contribution is twofold: (i) We propose SeqPATE that provides privacy at both the sample level and the phrase level with theoretical analyses. (ii) We propose several strategies for SeqPATE to handle autoregressive text generation models with a large vocabulary. 2 Problem Setup Our goal is to achieve the privacy protection quantified by DP in text generation to prevent attackers from inferring whether a sample or an n-gram appears in the training set. Our setting contains two types of textual datasets: (1) a private set Dpri from a corpus with sensitive information, (2) a public set Dpub that contains no sensitive information or comes from data contributors (e.g., volunteers) who have no objection to publishing their data. We aim to protect the privacy on the private set and can ignore the privacy protection on the public set. Our application, sentence completion, aims to complete the whole sentence given the prefix. We train a language model to accomplish the task. The public set Dpub consists of prefixes, which can hardly contain sensitive information. The private set Dpri consists of whole sentences. Such a setting fits some real-world text generation applications: in dialog systems, the training samples from online services consist of questions and responses. The questions from customer service staff or service robots can be public, and the response from users carrying individual information should be private. 
3 Background on DP and PATE Definition 3.1. [Differential privacy (DP) [13, 14]] For any two neighboring datasets D,D′ (differ in only one individual), a randomized algorithm M : Xn → Y is (ε, δ)-differentially private if, Pr[M(D) ∈ S] ≤ eε · Pr[M(D′) ∈ S] + δ, ∀S ⊆ Y, where ε > 0, δ ≥ 0. (1) By definition, DP is a quantifiable definition of privacy that provides guarantees on identifications of individual data (preventing an adversary from inferring whether the input is D or D′). ML models with DP ensure that each training sample has a degree of plausible deniability, i.e., the trained model is just as likely as to be trained on an alternative dataset without that sample. In SeqPATE, M is the entire training and inference process, S is the vocabulary, and Pr[·] denotes the output distribution of generating a word. Attackers cannot tell whether a sample is in the training set or not, since the output distributions of the datasets with or without that sample are very similar (bounded by Eq. 1). PATE [31], designed for classification tasks, takes advantage of an unlabeled public dataset Dpub and also trains on a labeled private set Dpri in a semi-supervised scenario. PATE achieves DP through a teacher-student framework with M teacher models and a student model, where the student learns from the private set via knowledge distillation through teachers. PATE has three parts: (i) The teacher models are trained on the private set Dpri, which is shuffled and divided into M disjoint subsets. Each teacher is trained on one subset. (ii) Teacher aggregation merges the teachers’ outputs. Each of the trained teachers then provides supervision to the student’s unlabeled public set Dpub. We use noised majority votes from teachers as labels to supervise the student. (iii) A student model is trained on the public set Dpub with the supervision of the aggregated teachers. 4 Approach Fig. 1 shows an overview of SeqPATE. Given the public prefix (e.g., “Cats sit”), we first obtain the pseudo-inputs by completing the sentence (e.g., “Cats sit on the mats”) using a pre-trained language model (Sec. 4.1). At each word, we then aggregate the teachers’ prediction of the next word as supervision for training the student model (Sec. 4.2). To reduce the noise required by DP for a large output space of the size of the vocabulary, we reduce the output space by dynamically filtering unimportant words. To reduce the number of teacher queries that incur privacy losses, we propose an efficient knowledge distillation strategy that only queries teacher labels on uncertain examples (Sec. 4.3). We show the training algorithm in App. B and a running example in App. K. 4.1 Pseudo Input Generation Conventional text generation models generate words sequentially from left to right. Thus, naively applying PATE to text generation requires rolling out all teachers word by word, i.e., iteratively sampling the next word from the aggregated teacher prediction. This is costly in both computation (running inference for hundreds of teacher models) and privacy costs (querying teachers at every step). To tackle this challenge, we use a pre-trained language model to complete the public prefixes into pseudo sentences; thus, we only need to query teachers on the next word given a (pseudo) context. 4.2 Teacher Aggregation PATE aggregates teacher predictions by majority vote. While it works for classification problems with a relatively small number of classes, the output space of text generation models contains all words in the vocabulary. 
As a result, the number of votes for each candidate word may be very low without a clear winner. For example, multiple candidates may tie for the top-1 prediction. Inspired by Chen et al. [9, 17], we aggregate teacher results by averaging their output distributions. We first train M teacher models on disjoint subsets of the private data. To produce the aggregated next word distribution given a context c, we average the teachers’ output distributions, add calibrated noises, and then renormalize the results into a proper distribution. Following Papernot et al. [32], we apply the Gaussian mechanism. Formally, let pmϕ (· | c) be the prediction of the m-th teacher. The aggregated distribution is pagg(· | c) ∝ 1M ∑M m=1(p m ϕ (· | c)+N (0, σ2)), 2 where the Gaussian noise is added to the aggregated output distribution. The way of SeqPATE satisfies DP guarantee (Eq. 1) is to add that calibrated noise to the teachers’ output as mentioned above (detailed analyses in Sec. 5). 4.3 Training of the Student Model The student model is trained on public pseudo-data and also supervised by the aggregated teachers. Training objectives. The student model is a language model that predicts the next word given prior contexts. Given contexts from the (public) pseudo-data autocompleted by a pre-trained language model (GPT-2), the student is supervised by both the aggregated teacher predictions and the next word in the pseudo-data (i.e. pseudo label). The pseudo-data acts as a prior for the student given that the number of teacher queries is limited due to privacy concerns. The student’s loss function has two parts: • Lteacher denotes the loss with respect to teacher supervision. Note that the aggregated teacher output is a distribution over words. Therefore, we minimize the forward KL divergence between the aggregated teacher distribution pagg and the student output distribution pθ: Lteacher(c, pagg) = KL (pagg(· | c) ∥ pθ(· | c)) . (2) • Lpseudo denotes the loss with respect to the pseudo-labels w from D̃pub (i.e. next words generated by a generic language model). Similar to standard language modeling, we use the negative log-likelihood: Lpseudo(c, w) = − log pθ(w | c). (3) Eq. 4 shows the complete loss. (λ balances the two terms and we discuss the noise scale σ in Sec. 5.) L(pagg, D̃pub) = ∑ (c,w)∈D̃pub Lpseudo(c, w) + λLteacher(c, pagg), (4) Reducing the output space via candidate filtering. The high-dimensionality of the output of text generation models results in large noise (which is added to each coordinate). To reduce the output dimension (hence the amount of noise), we filter words on the tail of the distribution of the student model (i.e. set their probability to zero), and renormalize the teacher’s aggregated distribution and the student output distribution over the rest words. Note that the candidate filtering is based on the student’s outputs on public or already released inputs, thus it does not affect the privacy guarantee. This choice improves the privacy-utility tradeoff by adaptively allocating the privacy budget to release the information most helpful to the task. We experiment with two filtering strategies: top-k and top-p. In top-k filtering, we retain only the top-k most likely candidates and filter the rest according to the student model. In top-p filtering [18], 2Mathematically, the aggregated distribution with noises may be negative. If so, we renormalize the negative value to 0. 
Practically, we observed that being negative is an extremely rare event, since the M is usually very large (e.g., 2k) and the first term dominates the above equation. k is chosen dynamically such that the top-k words are the minimum set whose cumulative probability is at least p. The strategy seldom loses good candidates because the student usually does well on top-k predictions since the beginning of the training. 3 Reducing the number of teacher queries via efficient knowledge distillation. While the aggregated teacher model satisfies DP, each query from the student incurs some privacy loss. Therefore, we obtain teacher supervision only on “hard” examples when training the student. Note that the student is trained on both the pseudo-data and local supervision from the teachers. We consider an example to be hard if the student cannot imitate the pseudo-label, in which case distilling knowledge from the teachers that are trained on large private data is helpful. Concretely, we query teachers only when the rank of the pseudo-label is below a certain threshold among words ordered by descending probabilities under the student model. If we query the teachers, the student is trained via complete loss L(pagg, D̃pub) (Eq. 4); otherwise, the student is trained via the Lpseudo (Eq. 3). We note that the selection of tokens relies only on the student and is independent of the teachers; thus, the selection does not cause any additional privacy loss. 5 Privacy Analyses 5.1 Preliminary of Differential Privacy Lemma 5.1 (Analytical Gaussian mechanism [2]). For a numeric query f : Xn → Rd over a dataset D, the randomized algorithm that outputs f(D) + Z where Z ∼ N (0, σ2Id) satisfies (ε, δ(ε))-DP for all ε ≥ 0 and δ(ε) = Φ( ∆2σ − εσ ∆ ) − e εΦ(− ∆2σ − εσ ∆ ). where ∆ := ∆ (f) 2 = maxD∼D′ ∥f(D)− f(D′)∥2 is the global L2 sensitivity of f and Φ is the CDF function of N (0, 1). We can use the same result for an adaptive composition of a sequence of Gaussian mechanisms. Lemma 5.2 (Composition of Gaussian mechanisms [11]). The adaptive composition of a sequence of Gaussian mechanisms with a noise level σ1, σ2, . . . and global L2 sensitivity ∆1,∆2, . . . satisfies (ε, δ(ε))-DP for all ε ≥ 0 and δ(ε) ≤ δM(ε) where M is a Gaussian mechanism with noise multiplier σ/∆ = (∑ i(∆i/σi) 2 )−1/2 . Specifically, the adaptive composition of a k identical Gaussian mechanism with a noise multiplier σ satisfies the same privacy guarantee of a single Gaussian mechanism with a noise multiplier σ/ √ k. By fixing k and ε, we can calibrate the noise by choosing an appropriate σ in Sec. 4.2. 5.2 Differential Privacy for Language Models at the Sample Level Recall that we partition the private dataset into M disjoint subsets, and train each teacher model on one of the subsets. Let vector xi ∈ R|V| denote the probability distribution predicted by the i-th teacher model given some context, where |V| is the vocabulary size. The aggregation function f(D) := ∑M i=1 xi is the sum of the probability distributions predicted by all teachers. Since the datasets are disjoint, changing one sample affects only one teacher model. For neighboring datasets D, D′, let j denote the index of each teacher model; the probability distributions xj and x′j (derived from D and D′ respectively) are different. Then, the sensitivity ∆ in Lemma 5.1 & 5.2 is (See detailed deductions in App. C), ∆ := ∆ (f) 2 = ∥f(D)− f(D′)∥2 ≤ ∥xj − x′j∥2 ≤ √ 2. 
Adding the noises given by Lemma 5.2 to each coordinate (each candidate at each generation step of SeqPATE) preserves (ε, δ(ε))-DP for f(D). Finally, when we extract top-k coordinates by top-k candidate filtering (Sec. 4.3), the privacy guarantee also holds due to the post-processing property [14]. Therefore, the fact about whether a sample is in SeqPATE’s private sets is protected (satisfying (ε, δ(ε))-DP). 3In the first 10 training batches, the top-50 predictions of the student cover 94% “true” labels of pseudo samples. 5.3 Differential Privacy of Users’ Secret Phrases The above analyses show that we can protect the privacy of each sample (i.e., one occurrence of a sentence). However, in practice, we may want to protect all occurrences of some secret phrases specific to a user (e.g., names and addresses).4 Consider a secret phrase s that occurs ns times (ns ≥ 1) in the private set. According to group privacy [14], the protection on phrase s satisfies (nε, e nε−1 eε−1 δ)-DP [22], where the privacy loss scales linearly with the number of occurrences of s (We discuss and analyze a better strategy to reduce the privacy loss of baselines in App. M). Naively applying a DP algorithm requires larger noise to protect phrases that may occur multiple times. SeqPATE enjoys a stronger guarantee by assigning all data of a single user to one or a few teachers, such that any user-specific phrase occurs in the training data of only one or a few teachers. We denote ñs as the number of teachers whose data contain the phrase s. Since adding or removing the phrase s affects only ñs teachers (ñs is usually 1 or 2) and thus results in a sensitivity of √ 2ñs (See App. D for details). In this way, the strength of protection on secret phrases is roughly equal to that we have derived for sample-level DP. The exact (ε, δ(ε, ñs))-DP for the phrase s can be obtained according to Lemma 5.1 & 5.2, where δ(ε, ñs) = Φ( ñs√2σ − εσ√ 2ñs )− eεΦ(− ñs√ 2σ − εσ√ 2ñs ). Unlike other generic DP algorithms such as NoisySGD, SeqPATE avoids a linear increase in privacy loss (i.e., a linear increase in ε) on user phrases by careful partitioning of the private data. This effect is complimentary to other generic, but more intrusive, techniques such as redaction and deduplication [50] for addressing the same issue. Finally, a user-specific partitioning with SeqPATE also protects multiple secret phrases of the same user (e.g., a combination of SSN, credit card numbers, address, day of birth) jointly without incurring a larger privacy loss — a benefit that deduplication does not provide. 5.4 How does DP prevent memorization in SeqPATE? In practice, the privacy of the language model is usually interpreted as not generating a secret phrase in the training data as-is during inference. Thus, one may wonder how DP prevents such unintended memorization of the training data. We remark that the protection against memorization follows the definition of DP. Consider the attack by Carlini et al. [6], which uses a language model to predict a secret phrase s given a prefix. By the closure to post-processing [14], the prediction also satisfies DP. We denote W as the undesirable event where SeqPATE generates the phrase s verbatim. The DP definition implies that the probability of W to happen when s is in the SeqPATE’s private sets is at most eε larger than the probability of an alternative SeqPATE model trained without s in those sets. The chances for the latter model to generate text with s are astronomically small. 
Hence, DP implies that the probability of W under the former model (i.e. any SeqPATE model in general) is small. 6 Experiments 6.1 Experimental Settings Datasets. We evaluate our model on two datasets. AirDialog [47] consists of 1M utterances from customer service dialog on flight booking; Europarl_v6 consists of 2M English sentences collected from European Parliament.5 (See details about datasets in App. E.) Baselines. We compare SeqPATE with two DP baselines: (1) standard NoisySGD trained on the private data with calibrated noise on clipped gradients [1, 22] and further trained on public set Dpub without protection; (2) based on NoisySGD, NoisySGD+GC [24] applies a ghost clipping which enables large batch size with memory saving techniques. Additionally, we use two non-DP methods as reference: (1) Pri-GPT trained on the private set without any privacy protection; (2) the public pre-trained GPT-2 model Pub-GPT without access to private data. For all methods, we can optionally fine-tune on the generated pseudo-data as a warm-up, and the operation is denoted as +D̃pub. 4A formal definition of this is called personalized differential privacy, first seen in [16]. 5www.statmt.org/europarl Implementation details. All models are fine-tuned from the (public) pre-trained GPT-2 model [35]. The batch size is 32 for all comparing methods except the GC [24] (GC [24] requires 2048). We use Adam [23] and adjust the initial learning rate with a range of 10−3 to 10−6 for all methods. The δ mentioned in Sec. 5 for all DP methods is 10−6. For SeqPATE, before training the student model with teacher supervision, we first fine-tune it on the public pseudo-data D̃pub as a warm-up. The coefficient λ that balances supervision for the teacher and the pseudo-data (Eq. 4) is set to 20, where we have tuned it on the validation set of the public pseudo-data. The default number of teacher models is 2k, where our model works well according to the experiments in App. H. We designed some strategies 6 to reduce memory and disk usage (See strategies and the computational cost in App. I). We run SeqPATE with 2k teachers on a single GPU in 3 days. Our code is publicly accessible. 7. (See details about hyperparameters in App. G.) Evaluation Metrics. We evaluate the generated text by perplexity (PPL) and Bleu (Bleu-n) [33]. 6.2 Overall Performance Protection at the sample level. Tab. 1 show the performance on the two datasets. Among the non-DP baselines, Pri-GPT acts as an upper bound on the performance, since it can fully utilize the private set by discarding privacy protection. Pub-GPT+D̃pub outperforms Pub-GPT on both datasets, showing that the pseudo data is helpful (additional ablation study on the pseudo data in App. J also verifies this). NoisySGD+GC+D̃pub surpasses the above two methods, since it uses a much larger batch size (2048 vs 32) than NoisySGD. Our method, SeqPATE, significantly outperforms NoisySGD+GC+D̃pub (+59% in Bleu4 on AirDialog and +7.0% in Bleu4 on Europarl_v6) while ensuring the same level of privacy protection in terms of ε. Protection on the user’s secret phrases. We evaluate our method for privacy protection of secret phrases mentioned in Sec 5.3. The key step is to partition the data such that each phrase only occurs in the training data of very few teachers, which is straightforward given the user ID associated with the private data. In general, SeqPATE works with any set of secret phrases. 
In our experiments, we consider a user’s full name as their secret phrase since it can be easily recognized from the data. We partition AirDialog’s private data according to the accompanying user IDs. As a result, there are 96.6% users whose data are assigned to a single teacher (details about the data partition in App. F). As described in Sec. 5.3, standard DP methods incur larger privacy loss on secret phrases. In Tab. 2, we see that NoisySGD+GC+D̃pub needs large noise to achieve a satisfactory level of protection on phrases, because ε increases linearly with the frequency of the phrase (group privacy [14]). “Batching users” indicates partitioning data into batches according to users, which helps NoisySGD protect users’ phrases (more analyses in App. M). For SeqPATE, the number of teachers trained on data containing the phrase ñs is close to 1 on average after our partition. Thus, SeqPATE provides the same level of protection on users’ secret phrases with a smaller noise and thus achieves better performance (+70% and +36% in Bleu4) (see more about the protection level on users’ secret phrases in App. F). 6We train and conduct the inference on the teachers one-by-one and cache the teachers’ outputs. 7https://github.com/tianzhiliang/SeqPATE Privacy-utility tradeoff. In Fig. 2, we show the private-utility tradeoff curve of all DP algorithms. 8 Typically, DP with ε ∈ [0.1, 10] is considered to provide a meaningful protection [45]. We observe that SeqPATE outperforms NoisySGD and NoisySGD+GC+D̃pub in this range. However, SeqPATE does not work better than the two methods when ε > 10. The reason is that NoisySGD+GC+D̃pub approaches Pri-GPT as ε approaches infinity (i.e. the noise approaches 0). However, SeqPATE with an infinite ε is still weaker than Pri-GPT because distillation still incurs performance loss: the teachers cannot completely transfer knowledge from the private data to the student. Therefore, we suggest using SeqPATE if strong privacy protection is desirable. 6.3 Ablation Studies There are several design choices in SeqPATE and we study the importance of each of them. In Tab. 3, we consider the following variants of SeqPATE: (1) −Merge_P: aggregating the teachers by voting instead of averaging their output distributions; (2) −KL: training the student using the cross-entropy loss with respect to teachers’ top-1 prediction instead of KL divergence; (3) −Lpseudo: not learning from the pseudo label (Eq. 3); (4) −Effi KD: querying teachers on all samples without selection; (5) −Gaussian: using the Laplace mechanism as the original PATE algorithm instead of the Gaussian mechanism; and (6) −All: using none of the above strategies, which is similar (although not equivalent) to the original PATE (the difference is that PATE needs to roll out all teachers (Sec. 4.1)). Aggregating the teachers by voting and training with KL loss are the most important strategies for SeqPATE. The poor performance on −Merge_P shows that voting is not suitable for text generation. The reason is that voting over a large output space leads to low agreement rates. The results show that the Lpseudo loss makes little contribution to SeqPATE. The reason is that we have pre-trained on the student’s training set via Lpseudo before the student’s training. The promotion caused by efficient knowledge distillation (Effi KD) on AirDialog is larger than that on Europarl_v6, which shows that the “clever” student (e.g., models on AirDialog with low PPL and high Bleu) benefits more from this strategy. 
This is because the “clever” student can dramatically save the privacy cost and transfer it to where it would benefit the student most. The poor performance of −All verifies that the original PATE is not suitable for text generation. 6.4 Analyses on Candidate Filtering and Teacher Numbers To analyze candidate filtering with different filtering strategies, we conduct experiments on top-p and top-k filtering. As shown in Tab. 4, our full model employs the top-p filtering (the threshold p is 0.95) surpasses most variants with manually chosen k. Top-k filtering (k =50 or 100) also works well. Filtering with a too small k (k = 1 or k = 10) implies discarding too much useful information from the supervision (k = 1 is different from − KL in Tab. 3, which uses the Top-1 of teachers’ results). Filtering with oversize k results in unnecessarily large noises. Candidates with very small probabilities should be filtered during generation; however, random noises may increase their probabilities, so models may generate those words that are misled by the noise. The results in App. H show that more teachers lead to better results when the number of teachers is in the range of 1 ∼ 2k. This is because the noise assigned to each teacher drops linearly as the number of teachers increases. Note that SeqPATE cannot always benefit from increasing the teacher numbers, because the scale of each teacher’s data is linearly decreased as the teacher numbers go up. We choose ε = 3 on the sample level protection for all results in Tabs. 3, 4, and App. H. Additionally, we conduct empirical comparisons and analyses of SeqPATE versus the original PATE in App. N. We show the effects of protections on users’ secret phrases in App. O. We compare SeqPATE with another non-DP based baseline (i.e. blacklist based filtering) in App. P. We also conduct a case study in App. Q. 7 Related Work Text generation models may leak user information through the generated texts [19, 7]. One direction of privacy protection is to protect author-level (user-level) information. The methods prevent attackers from inferring the author attributes (e.g., gender, age) [25] and the relationship between information and authors [29]. Some researchers [40, 41] infer the membership (whether samples from a given author are used to train the model) given a black-box model. Some papers protect user privacy of training data against untrusted servers via federated learning [27, 10]. Another direction is to prevent attackers from extracting sensitive information in training sets by analyzing the outputs [30, 22], which is urgently needed [7]. Our SeqPATE focuses on this direction. In this direction, regularization methods [6, 43, 20] restrict the model capacity and prevent the model from memorizing exact training samples. Anonymization methods [26, 44] detect sensitive text and replace it with non-sensitive text. Unlike DP [14] methods, the above methods do not provide a quantifiable guarantee for privacy protection. Some researchers focus on protecting user privacy against untrusted servers via federated learning [27, 10]. Some researchers apply DP to text generation. For user-level privacy, ER-AE [4] augments the semantic information in the generated text to hide authors’ writing styles from attackers. McMahan et al. [28] propose a recurrent language model with a DP guarantee against the identification of users. 
Note that the user-level privacy (relationships between users and their information) is different from the privacy of users’ secret phrases in our model: our model prevents individual user phrases from being detected. Some researchers apply NoisySGD to text generation to prevent sensitive training samples from being extracted: some of them [37, 39, 50] employ DP to protect a selected subset of tokens; others [22, 49, 24] apply DP to protect both samples and all tokens, but the privacy cost on tokens is very high (Sec. 5.3). Our model falls into the latter category and reduces the privacy cost of tokens. Kerrigan et al. [22] apply NoisySGD [1] to text generation. Yu et al. [49] investigate fine-tuning strategies on pre-trained language models with NoisySGD. Li et al. [24] apply ghost clipping to pre-trained language models with NoisySGD and reduce memory usage. Shi et al. [38] apply DP to particular generation steps instead of training samples or n-grams. Brown et al. [5] analyze DP-based methods versus data sanitization for text generation models. Brown et al. [12] propose an efficient NoisySGD variant to speed up model training. Differential privacy (DP) [13, 14] formally defines and quantifies privacy. ML models with a DP guarantee [46, 15, 52] prevent the existence of individual training examples from being detected [6]. Some researchers protect the privacy of empirical risk minimization classifiers [8] and SVMs [36] with DP. Following Song et al. [42], NoisySGD [1] achieves DP on deep learning models by adding noise to gradients. Pichapati et al. [34] adaptively clip the gradient in NoisySGD. PATE [31, 32] transfers the knowledge from teacher models trained on private sets to a student model via noisy aggregation. KNN-PATE [51] refines PATE by accessing only the k-nearest neighbors from the private set. Jordon et al. [21] adversarially learn to generate synthetic data with discriminators trained by PATE. These methods are not customized for text generation models. Xie et al. [48] propose DPGAN to adversarially learn with a generator and a discriminator. 8 Conclusion In this paper, we propose a novel framework, SeqPATE, to protect the privacy of the training data of text generation models with DP guarantees. SeqPATE achieves a good privacy-utility trade-off by leveraging both private and public data. As an extension of PATE, SeqPATE can handle the sequential generation paradigm with a large output space at each step and is therefore well suited to text generation models. We avoid rolling out teachers by providing pseudo-inputs for the teachers’ inference and the student’s training. We further reduce the output space by candidate filtering and limit privacy losses via efficient knowledge distillation. SeqPATE achieves better performance under sample-level protection and further provides much stronger protection on users’ secret phrases. The limitations, ethical considerations, and social impacts of this paper are discussed in App. A and L. 9 Acknowledgement Research in this paper was supported by the Hong Kong Research Grants Council under grant No. 16204920. HH is partly supported by the Samsung Advanced Institute of Technology (Next Generation Deep Learning: From Pattern Recognition to AI). YW is partially supported by NSF Award #2048091. The authors thank Mr. Wei Dong and Dr. Yiping Song for their help and insights on this paper.
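To make the per-step teacher aggregation, noising, and candidate filtering discussed in Secs. 6.3 and 6.4 concrete, the following is a minimal sketch in Python. The function name, the exact order of filtering and noising, the renormalisation step, and the noise scale are illustrative assumptions for exposition, not the authors' implementation.

import numpy as np

def aggregate_step(teacher_probs, sigma=1.0, top_p=0.95):
    # teacher_probs: (n_teachers, vocab_size), one next-token distribution per teacher
    avg = teacher_probs.mean(axis=0)                      # average the teachers' distributions
    order = np.argsort(-avg)                              # sort tokens by averaged probability
    cutoff = np.searchsorted(np.cumsum(avg[order]), top_p) + 1
    keep = order[:cutoff]                                 # top-p candidate filtering
    noisy = np.zeros_like(avg)
    noisy[keep] = avg[keep] + np.random.normal(0.0, sigma, size=len(keep))  # Gaussian mechanism
    noisy = np.clip(noisy, 0.0, None)
    return noisy / (noisy.sum() + 1e-12)                  # renormalise to a distribution

In SeqPATE a student update would then minimise a KL-divergence loss between this noisy aggregated distribution and the student's next-token distribution over the retained candidates, as in the −KL ablation discussed above.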
1. What are the main contributions and extensions of the paper regarding PATE in text generation? 2. What are the strengths of the paper regarding its writing, originality, and explanation of difficult points? 3. How could the paper improve in connecting the theory of differential privacy and the proposed SeqPATE method? 4. Are there any limitations or areas for improvement regarding the paper's focus on the technical aspects of privacy?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper extends PATE into the field of text generation. To do so, the following technical challenges must be properly addressed: 1. in addition to protecting individual words, we need to protect phrases too. 2. Compared to other tasks, the output space is huge for text generation. 3. We need to control the privacy loss. This paper has done solid work to address these challenges. Strengths And Weaknesses This paper is well written. This work is original. It is based on the theory of differential privacy (DP), so its potential and quality are pretty high. The important difficult points are well explained. But as a reader, I think one area can be improved: the connection between the theory of DP and the proposed SeqPATE method could be explained more explicitly. Doing so would greatly lower the barriers for new researchers who are interested in this area. Questions This is solid work. But making it easier to follow may greatly increase the influence of this work. Limitations Privacy is an area with great social impact. This paper focuses solely on the technical aspects of privacy, and it is too early to give an assessment of its social impact. So, in my humble opinion, it is acceptable that a discussion of the social impact is absent from the paper.
NIPS
Title Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification Abstract The Information Bottleneck (IB) objective uses information theory to formulate a task-performance versus robustness trade-off. It has been successfully applied in the standard discriminative classification setting. We pose the question whether the IB can also be used to train generative likelihood models such as normalizing flows. Since normalizing flows use invertible network architectures (INNs), they are information-preserving by construction. This seems contradictory to the idea of a bottleneck. In this work, firstly, we develop the theory and methodology of IB-INNs, a class of conditional normalizing flows where INNs are trained using the IB objective: Introducing a small amount of controlled information loss allows for an asymptotically exact formulation of the IB, while keeping the INN’s generative capabilities intact. Secondly, we investigate the properties of these models experimentally, specifically used as generative classifiers. This model class offers advantages such as improved uncertainty quantification and out-of-distribution detection, but traditional generative classifier solutions suffer considerably in classification accuracy. We find the trade-off parameter in the IB controls a mix of generative capabilities and accuracy close to standard classifiers. Empirically, our uncertainty estimates in this mixed regime compare favourably to conventional generative and discriminative classifiers. Code: github.com/VLL-HD/IB-INN 1 Introduction The Information Bottleneck (IB) objective (Tishby et al., 2000) allows for an information-theoretic view of neural networks, for the setting where we have some observed input variable X , and want to predict some Y from it. For simplicity, we limit the discussion to the common case of discrete Y (i.e. class labels), but results readily generalize. The IB postulates existence of a latent space Z, where all information flow between X and Y is channeled through (hence the method’s name). In order to optimize predictive performance, IB attempts to maximize the mutual information I(Y,Z) between Y andZ. Simultaneously, it strives to minimize the mutual information I(X,Z) betweenX and Z, forcing the model to ignore irrelevant aspects of X which do not contribute to classification performance and only increase the potential for overfitting. The objective can thus be expressed as LIB = I(X,Z)− β I(Y,Z) . (1) The trade-off parameter β is crucial to balance the two aspects. The IB was successfully applied in a variational form (Alemi et al., 2017; Kolchinsky et al., 2017) to train feed-forward classification models p(Y |X) with higher robustness to overfitting and adversarial attacks than standard ones. In this work, we consider the relationship between X and Y from the opposite perspective – using the IB, we train an invertible neural network (INN) as a conditional generative likelihood model 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. p(X|Y ), i.e. as a specific type of conditional normalizing flow. In this case, X is the variable of which the likelihood is predicted, and Y is the class condition. It is a generative model because one can sample from the learned p(X|Y ) at test time to generate new examples from any class, although we here focus on optimal likelihood estimation for existing inputs, not the generating aspect. 
We find that the IB, when applied to such a likelihood model p(X|Y ), has special implications for the use as a so-called generative classifier (GC). GCs stand in contrast to standard discriminative classifers (DCs), which directly predict the class probabilities p(Y |X). For a GC, the posterior class probabilities are indirectly inferred at test time by Bayes’ rule, cf. Fig. 1: p(Y |X) = p(X|Y )p(Y )/Ep(Y ) [p(X|Y )]. Because DCs optimize prediction performance directly, they achieve better results in this respect. However, their models for p(Y |X) tend to be most accurate near decision boundaries (where it matters), but deteriorate away from them (where deviations incur no noticeable loss). Consequently, they are poorly calibrated (Guo et al., 2017) and out-of-distribution data can not be easily recognized at test time (Ovadia et al., 2019). In contrast, GCs model full likelihoods p(X|Y ) and thus implicitly full posteriors p(Y |X), which leads to the opposite behavior – better predictive uncertainty at the price of reduced accuracy. Fig. 2 illustrates the decision process in latent space Z. In the past, deep learning models trained in a purely generative way, particularly flow-based models trained with maximum likelihood, achieved highly unsatisfactory accuracy, so that some recent work has called into question the overall effectiveness of GCs (Fetaya et al., 2019; Nalisnick et al., 2019b). In-depth studies of idealized settings (Bishop & Lasserre, 2007; Bishop, 2007) revealed the existence of a trade-off, controlling the balance between discriminative and generative performance. In this work, we find that the IB can represent this trade-off, when applied to generative likelihood models. To summarize our contributions, we combine two concepts – the Information Bottleneck (IB) objective and Invertible Neural Networks (INNs). Firstly, we derive an asymptotically exact formulation of the IB for this setting, resulting in our IB-INN model, a special type of conditional normalizing flow. Secondly, we show that this model is especially suitable for the use as a GC: the trade-off parameter β in the IB-INN’s loss smoothly interpolates between the advantages of GCs (accurate posterior calibration and outlier detection), and those of DCs (superior task performance). Empirically, at the right setting for β, our model only suffers a minor degradation in classification accuracy compared to DCs while exhibiting more accurate uncertainty quantification than pure DCs or GCs. 2 Related Work Information Bottleneck: The IB was introduced by Tishby et al. (2000) as a tool for informationtheoretic optimization of compression methods. This idea was expanded on by Chechik et al. (2005); Gilad-Bachrach et al. (2003); Shamir et al. (2010) and Friedman et al. (2013). A relationship between IB and deep learning was first proposed by Tishby & Zaslavsky (2015), and later experimentally examined by Shwartz-Ziv & Tishby (2017), who use IB for the understanding of neural network behavior and training dynamics. A close relation of IB to dropout, disentanglement, and variational autoencoding was discovered by Achille & Soatto (2018), which led them to introduce Information Dropout as a way to take advantage of IB in discriminative models. The approximation of IB in a variational setting was proposed independently by Kolchinsky et al. (2017) and Alemi et al. (2017), who especially demonstrate improved robustness against overfitting and adversarial attacks. 
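As a concrete aside on the generative-classifier decision rule recalled above (Bayes' rule over class-conditional likelihoods), the posterior can be computed stably in log space; this is a minimal sketch with illustrative names, not code from the paper.

import numpy as np
from scipy.special import logsumexp

def gc_posterior(log_lik, log_prior):
    # log_lik[k] = log p(x | y=k) from the likelihood model, log_prior[k] = log p(y=k)
    log_joint = log_lik + log_prior                  # log p(x|y) + log p(y)
    return np.exp(log_joint - logsumexp(log_joint))  # p(y|x), normalised over classes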
Generative Classification: An in-depth analysis of the trade-offs between discriminative and generative models was first performed by Ng & Jordan (2001) and was later extended by Bouchard & Triggs (2004); Bishop & Lasserre (2007); Xue & Titterington (2010), who investigated the possibility of balancing the strengths of both methods via a hyperparameter, albeit for very simple models. GCs have been used more rarely in the deep learning era, some exceptions being applications to natural language processing (Yogatama et al., 2017) and adversarial attack robustness (Li et al., 2019; Schott et al., 2019). However, Fetaya et al. (2019) found that conditional normalizing flows have poor discriminative performance, making them unsuitable as GCs. GCs should be clearly distinguished from so-called hybrid models (Raina et al., 2004): these commonly only model the marginal p(X) and jointly perform discriminative classification using shared features, with their main application being semi-supervised learning. Notable examples are Kingma et al. (2014); Chongxuan et al. (2017); Nalisnick et al. (2019c); Grathwohl et al. (2019). 3 Method Below, upper case letters denote random variables (RVs) (e.g. X) and lower case letters their instances (e.g. x). The probability density function of a RV is written as p(X), the evaluated density as p(x) or p(X=x), and all RVs are vector quantities. We distinguish true distributions from modeled ones by the letters p and q, respectively. The distributions q always depend on model parameters, but we do not make this explicit to avoid notation clutter. Assumption 1 in the appendix provides some weak assumptions about the domains of the RVs and their distributions. Full proofs for all results are also provided in the appendix. Our models have two kinds of learnable parameters. Firstly, an invertible neural network (INN) with parameters θ maps inputs X to latent variables Z bijectively: Z = g_θ(X) ⇔ X = g_θ^{-1}(Z). Assumption 2 in the Appendix provides some explicit assumptions about the network, its gradients, and the parameter space, which are largely fulfilled by standard invertible network architectures, including the affine coupling architecture we use in the experiments. Secondly, a Gaussian mixture model with class-dependent means µ_y, where y are the class labels, and unit covariance matrices is used as a reference distribution for the latent variables Z: q(Z | Y=y) = N(µ_y, I) and q(Z) = Σ_y p(y) N(µ_y, I). (2) For simplicity, we assume that the label distribution is known, i.e. q(Y) = p(Y). Our derivation rests on a quantity we call mutual cross-information CI (in analogy to the well-known cross-entropy): CI(U, V) = E_{u,v∼p(U,V)}[ log ( q(u,v) / (q(u) q(v)) ) ]. (3) Note that the expectation is taken over the true distribution p, whereas the logarithm involves model distributions q. In contrast, plain mutual information uses the same distribution in both places. Our definition is equivalent to the recently proposed predictive V-information (Xu et al., 2020), whose authors provide additional intuition and guarantees. The following proposition (proof in Appendix) clarifies the relationship between mutual information I and CI: Proposition 1. Assume that q(.) can be chosen from a sufficiently rich model family (e.g. a universal density estimator, see Assumption 2). Then for every η > 0 there is a model such that |I(U,V) − CI(U,V)| < η, and I(U,V) = CI(U,V) if p(u,v) = q(u,v). We replace both mutual information terms I(X,Z) and I(Y,Z) in Eq.
1 with the mutual cross-information CI, and derive optimization procedures for each term in the following subsections. 3.1 INN-Based Formulation of the I(X,Z)-Term in the IB Objective Estimation of the mutual cross-information CI(X,Z) between inputs and latents is problematic for deterministic mappings from X to Z (Amjad & Geiger, 2018), and specifically for INNs, which are bijective by construction. In this case, the joint distributions q(X,Z) and p(X,Z) are not valid Radon-Nikodym densities and both CI and I are undefined. Intuitively, I and CI become infinite, because p and q have an infinitely high delta-peak at Z = g_θ(X), and are otherwise 0. For the IB to be applicable, some information has to be discarded in the mapping to Z, making p and q valid Radon-Nikodym densities. In contrast, normalizing flows rely on all information to be retained for optimal generative capabilities and density estimation. Our solution to this seeming contradiction comes from the practical use of normalizing flows. Here, a small amount of noise is commonly added to dequantize X (i.e. to turn discrete pixel values into real numbers), to avoid numerical issues during training. We adopt this approach to artificially introduce a minimal amount of information loss: Instead of feeding X to the network, we input a noisy version X′ = X + E, where E ∼ N(0, σ^2 I) = p(E) is Gaussian with mean zero and covariance σ^2 I. For a quantization step size ∆X, the additional error on the estimated densities caused by the augmentation has a known bound decaying with exp(−∆X^2 / 2σ^2) (see Appendix). We are interested in the limit σ → 0, so in practice, we choose a very small fixed σ, that is smaller than ∆X. This makes the error practically indistinguishable from zero. The INN then learns the bijective mapping Z_E = g_θ(X + E), which guarantees CI(X, Z_E) to be well defined. Minimizing this CI according to the IB principle means that g_θ(X + E) is encouraged to amplify the noise E, so that X can be recovered less accurately, see Fig. 3 for illustration. If the global minimum of the loss is achieved w.r.t. θ, I and CI coincide, as CI(X, Z_E) is an upper bound (also cf. Prop. 1): Proposition 2. For the specific case that Z_E = g_θ(X + E), it holds that I(X, Z_E) ≤ CI(X, Z_E). Our approach should be clearly distinguished from applications of the IB to DCs, such as Alemi et al. (2017), which pursue a different goal. There, the model learns to ignore the vast majority of input information and keeps only enough to predict the class posterior p(Y|X). In contrast, we induce only a small, explicitly adjustable loss of information to make the IB well-defined. As a result, the amount of retained information in our generative IB-INNs is orders of magnitude larger than in DC approaches, which is necessary to represent accurate class-conditional likelihoods p(X|Y). We now derive the loss function that allows optimizing θ and µ_y to minimize the noise-augmented CI(X, Z_E) in the limit of small noise σ → 0. Full details are found in the appendix. We decompose the mutual cross-information into two terms: CI(X, Z_E) = E_{p(X),p(E)}[ −log q(Z_E = g_θ(x+ε)) ] + E_{p(X),p(E)}[ log q(Z_E = g_θ(x+ε) | x) ], where we denote the second term by A. The first expectation can be approximated by the empirical mean over a finite dataset, because the Gaussian mixture distribution q(Z_E) is known analytically.
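As an aside before the second term is treated: evaluating the first expectation only requires the GMM log-density log q(z) = log Σ_y p(y) N(z; µ_y, I) from Eq. 2, which can be computed stably with a log-sum-exp. A minimal sketch (names illustrative, not the authors' code):

import numpy as np
from scipy.special import logsumexp

def gmm_log_density(z, mus, log_py):
    # z: (d,) latent vector, mus: (K, d) class means, log_py: (K,) log p(y)
    d = z.shape[0]
    # log N(z; mu_y, I) = -0.5 * ||z - mu_y||^2 - (d/2) * log(2*pi)
    log_comp = -0.5 * np.sum((z - mus) ** 2, axis=1) - 0.5 * d * np.log(2.0 * np.pi)
    return logsumexp(log_comp + log_py)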
To approximate the second term, we first note that the condition X = x can be replaced with Z = g_θ(x), because g_θ is bijective and both conditions convey the same information: A = E_{p(X),p(E)}[ log q(Z_E = g_θ(x+ε) | Z = g_θ(x)) ]. We now linearize g_θ by its first order Taylor expansion, g_θ(x+ε) = g_θ(x) + J_x ε + O(ε^2), where J_x = ∂g_θ(X)/∂X evaluated at X = x denotes the Jacobian. Going forward, we write O(σ^2) instead of O(ε^2) for clarity, noting that both are equivalent because we can write ε = σn with n ∼ N(0, I), and ‖ε‖ = σ‖n‖. Inserting the expansion into A, the O(σ^2) can be moved outside of the expression: It can be moved outside the log, because that has a Lipschitz constant of 1 / inf q(g_θ(X+E)), which we show is uniformly bounded in the full proof. The O(σ^2) can then be exchanged with the expectation because the expectation’s argument is also uniformly bounded, finally leading to A = E_{p(X),p(E)}[ log q(g_θ(x) + J_x ε | g_θ(x)) ] + O(σ^2). Since ε is Gaussian with mean zero and covariance σ^2 I, the conditional distribution is Gaussian with mean g_θ(x) and covariance σ^2 J_x J_x^T. The expectation with respect to p(E) is thus the negative entropy of a multivariate Gaussian and can be computed analytically as well: A = E_{p(X)}[ −(1/2) log det(2πe σ^2 J_x J_x^T) ] + O(σ^2) = E_{p(X)}[ −log |det(J_x)| ] − d log(σ) − (d/2) log(2πe) + O(σ^2), with d the dimension of X. To avoid running the model twice (for x and x+ε), we approximate the expectation of the Jacobian determinant by 0th-order Taylor expansion as E_{p(X)}[ log |det(J_x)| ] = E_{p(X),p(E)}[ log |det(J_ε)| ] + O(σ), where J_ε is the Jacobian evaluated at x+ε instead of x. The residual can be moved outside of the log and the expectation because J_ε is uniformly bounded in our networks. Putting everything together, we drop terms from CI(X, Z_E) that are independent of the model or vanish with rate at least O(σ) as σ → 0. The resulting loss L_X becomes L_X = E_{p(X),p(E)}[ −log q(g_θ(x+ε)) − log |det(J_ε)| ]. (4) Since the change of variables formula defines the network’s generative distribution as q_X(x) = q(Z = g_θ(x)) |det(J_x)|, L_X is the negative log-likelihood of the perturbed data under q_X, L_X = E_{p(X),p(E)}[ −log q_X(x+ε) ]. (5) The crucial difference between CI(X, Z_E) and L_X is the elimination of the term −d log(σ). It is huge for small σ and would dominate the model-dependent terms, making minimization of CI(X, Z_E) very hard. Intuitively, the fact that CI(X, Z_E) diverges for σ → 0 highlights why CI(X,Z) is undefined for bijectively related X and Z. In practice, we estimate L_X by its empirical mean on a training set {x_i, ε_i}_{i=1}^N of size N, denoted as L_X^(N). It remains to be shown that replacing I(X, Z_E) with L_X^(N) in the IB loss Eq. 1 does not fundamentally change the solution of the learning problem in the limit of large N, small σ and sufficient model power. Sufficient model power here means that the family of generative distributions realizable by g_θ should be a universal density estimator (see Appendix, Assumption 2). This is the case if g_θ can represent increasing triangular maps (Bogachev et al., 2005), which has been proven for certain network architectures explicitly (e.g. Jaini et al., 2019; Huang et al., 2018), including the affine coupling networks we use for the experiments (Teshima et al., 2020). Propositions 1 & 2 then tell us that we may optimize CI(X, Z_E) as an estimator of I(X, Z_E). The above derivation of the loss can be strengthened into Proposition 3.
Under Assumptions 1 and 2, for any ϵ, η > 0 and 0 < δ < 1 there are σ_0 > 0 and N_0 ∈ N, such that ∀N ≥ N_0 and ∀ 0 < σ < σ_0, the following holds uniformly for all model parameters θ: Pr( | CI(X, Z_E) + d log √(2πe σ^2) − L_X^(N) | > ϵ ) < δ and Pr( ‖ ∂CI(X, Z_E)/∂θ − ∂L_X^(N)/∂θ ‖ > η ) < δ. The first statement proves consistency of L_X^(N), and the second justifies gradient-descent optimization on the basis of L_X^(N). Proofs can be found in the appendix. 3.2 GMM-Based Formulation of the I(Z,Y)-Term in the IB Objective Similarly to the first term in the IB-loss in Eq. 1, we also replace the mutual information I(Y,Z) with CI(Y, Z_E). Inserting the likelihood q(z|y) = N(z; µ_y, I) of our latent Gaussian mixture model into the definition and recalling that q(Y) = p(Y), this can be decomposed into CI(Y, Z_E) = E_{p(Y)}[ −log p(y) ] + E_{p(X,Y),p(E)}[ log ( q(g_θ(x+ε) | y) p(y) / Σ_{y′} q(g_θ(x+ε) | y′) p(y′) ) ]. (6) In this case, CI(Y, Z_E) is a lower bound on the true mutual information I(Y, Z_E), allowing for its maximization in our objective. In fact, it corresponds to a bound originally proposed by Barber & Agakov (2003) (see their Eq. 3): The first term is simply the entropy h(Y), because p(Y) is known. The second term can be rewritten as the negative cross-entropy −h_q(Y | Z_E). For I(Y, Z_E), we would have the negative entropy −h(Y | Z_E) in its place; then Gibbs’ inequality leads directly to CI(Y, Z_E) ≤ I(Y, Z_E). The first expectation can be dropped during training, as it is model-independent. Note how the second term can also be written as the expectation of the GMM’s log-posterior log q(y|z). Since all mixture components have unit covariance, the elements of Z are conditionally independent and the likelihood factorizes as q(z|y) = Π_j q(z_j|y). Thus, q(y|z) can be interpreted as a naive Bayes classifier. In contrast to naive Bayes classifiers in data space, which typically perform badly because raw features are not conditionally independent, our training enforces this property in latent space and ensures accurate classification. Defining the loss L_Y^(N) as the empirical mean of the log-posterior over a training set {x_i, y_i, ε_i}_{i=1}^N of size N, we get L_Y^(N) = (1/N) Σ_{i=1}^N log ( N(g_θ(x_i+ε_i); µ_{y_i}, I) p(y_i) / Σ_{y′} N(g_θ(x_i+ε_i); µ_{y′}, I) p(y′) ). (7) 3.3 The IB-INN-Loss and its Advantages Replacing the mutual information terms in Eq. 1 with their empirical estimates L_X^(N) and L_Y^(N), our model parameters θ and {µ_1, ..., µ_K} are trained by gradient descent of the IB-INN loss L_IB-INN^(N) = L_X^(N) − β L_Y^(N). (8) In the following, we will interpret and discuss the nature of the loss function in Eq. 8 and form an intuitive understanding of why it is more suitable than the class-conditional negative log-likelihood (‘class-NLL’) traditionally used for normalizing-flow type generative classifiers: L_class-NLL = −E[ log q_θ(x|y) ]. The findings are represented graphically in Fig. 4. L_X-term: As shown by Eq. 5, the term is the (unconditional) negative-log-likelihood loss used for normalizing flows, with the difference that q(Z) is a GMM rather than a unimodal Gaussian. We conclude that this loss term encourages the INN to become an accurate likelihood model under the marginalized latent distribution and to ignore any class information. L_Y-term: Examining Eq. 7, we see that for any pair (g_θ(x+ε), y), the cluster centers µ_{y′} of the other classes (y′ ≠ y) are repulsed (by minimizing the denominator), while g_θ(x+ε) and the correct cluster center µ_y are drawn together.
Note that the class-NLL loss only captures the second aspect and lacks repulsion, resulting in a much weaker training signal. We can also view this in a different way: by substituting q(x|y) |det(J_x)|^{-1} for q(z|y), the second summand of Eq. 6 simplifies to log q(y|x), since the Jacobian cancels out. This means that our L_Y loss directly maximizes the correct class probability, while ignoring the data likelihood. Again, this improves the training signal: as Fetaya et al. (2019) showed, the data likelihood will otherwise dominate the class-NLL loss, so that lack of classification accuracy is insufficiently penalized. Classical class-NLL loss: The class-NLL loss or an approximation thereof is used to train standard GCs. The IB-INN loss reduces to this case for β = 1, because the first summand in L_X (cf. Eq. 4) cancels with the denominator in Eq. 7. Then, the INN no longer receives a penalty when latent mixture components overlap, and the GMM loses its class discriminatory power, as Fig. 4 illustrates: points are only drawn towards the correct class, but there is no loss component repelling them from the incorrect classes. As a result, all cluster centers tend to collapse together, leading the INN to effectively just model the marginal data likelihood (as found by Fetaya et al., 2019). Similarly, Wu et al. (2019) found that β = 1 is the minimum possible value to perform classification with discriminative IB methods. 4 Experiments In the following, we examine the properties of the IB-INN used as a GC, especially the quality of uncertainty estimates and OoD detection. We construct our IB-INN by combining the design efforts of various works on INNs and normalizing flows. In brief, we use a Real-NVP architecture consisting of affine coupling blocks (Dinh et al., 2017), with added improvements from recent works (Kingma & Dhariwal, 2018; Jacobsen et al., 2019, 2018; Ardizzone et al., 2019). A detailed description of the architecture is given in the appendix. We learn the set of means µ_y as free parameters jointly with the remaining model parameters in an end-to-end fashion using the loss in Eq. 8. The practical implementation of the loss is explained in the appendix. We apply two additional techniques while learning the model, label smoothing and loss rebalancing: Label smoothing: Hard labels force the Gaussian mixture components to be maximally separated, so they drift continually further apart during training, leading to instabilities. Label smoothing (Szegedy et al., 2016) with smoothing factor 0.05 prevents this, and we also apply it to all baseline models. Loss rebalancing: The following rebalancing scheme allows us to use the same hyperparameters when changing β over 5 orders of magnitude. Firstly, we divide the loss L_X by the number of dimensions of X, which approximately matches its magnitude to the L_Y loss. We define a corresponding γ := β/dim(X) to stay consistent with the IB definition. Secondly, we scale the entire loss by a factor 2/(1+γ). This ensures that it keeps the same magnitude when changing γ: L_IB^(N) = (2/(1+γ)) ( L_X^(N)/dim(X) − γ L_Y^(N) ). (9) Finally, the noise amplitude σ should be chosen to satisfy two criteria: it should be small enough so that the Taylor expansions in the loss for σ → 0 are sufficiently accurate, and it should also not hinder the model’s performance. Our ablation provided in the Appendix indicates that both criteria are satisfied when σ ≲ 0.25 ∆X, with the quantization step size ∆X, so we fix σ = 10^-3 for the remaining experiments.
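Before turning to the experimental comparison, the following is a minimal numerical sketch of the rebalanced per-sample loss of Eq. 9, combining L_X (Eq. 4) and L_Y (Eq. 7) for a single noisy sample. It assumes the latent code and log-Jacobian determinant are already computed by the INN; all names are illustrative, and in practice the loss is averaged over a batch and the means µ_y are trained jointly with θ.

import numpy as np
from scipy.special import logsumexp

def ib_inn_loss(z, log_det_J, y, mus, log_py, gamma):
    # z: (d,) latent g_theta(x + eps); log_det_J: log|det J| at x + eps
    # y: integer class label; mus: (K, d) class means; log_py: (K,) log p(y)
    d = z.shape[0]
    log_comp = (-0.5 * np.sum((z - mus) ** 2, axis=1)
                - 0.5 * d * np.log(2.0 * np.pi) + log_py)   # log N(z; mu_k, I) + log p(k)
    log_qz = logsumexp(log_comp)                             # log q(z), the mixture density
    L_X = -(log_qz + log_det_J)                              # Eq. 4: -log q_X(x + eps)
    L_Y = log_comp[y] - log_qz                               # Eq. 7: GMM log-posterior log q(y|z)
    return 2.0 / (1.0 + gamma) * (L_X / d - gamma * L_Y)     # Eq. 9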
4.1 Comparison of Methods In addition to the IB-INN, we train several alternative methods. For each, we use exactly the same INN model, or an equivalent feed-forward ResNet model. Every method has the exact same hyperparameters and training procedure, the only difference being the loss function and invertibility. Class-NLL: As a standard generative classifier, we firstly train an INN with a GMM in latent space naively as a conditional generative model, using the class-conditional maximum likelihood loss. Secondly, we also train a regularized version, to increase the classification accuracy. The regularization consists of leaving the class centroids µY fixed on a hyper-sphere, forcing some degree of class-separation. Feed-forward As a DC baseline, we train a standard ResNet (He et al., 2016) with softmax cross entropy loss. We replace each affine coupling block by a ResNet block, leaving all other hyperparameters the same. i-RevNet (Jacobsen et al., 2018): To rule out any differences stemming from the constraint of invertibility, we additionally train the INN as a standard softmax classifier, by projecting the outputs to class logits. While the architecture is invertible, it is not a generative model and trained just like a standard feed-forward classifier. Variational Information Bottleneck (VIB): To examine which observed behaviours are due to the IB in general, and what is specific to GCs, we also train the VIB (Alemi et al., 2017), a feed-forward DC, using a ResNet. We convert the authors definition of β to our γ for consistency. 4.2 Quantitative measurements RGB rotation (CIFAR10) Small noise (CIFAR10) QuickDraw ImageNet Figure 5: Examples from each OoD dataset used in the evaluation. The inlier data are original CIFAR10 images. In the following, we describe the scores used in Table 1. Bits/dim: The bits/dim metric is common for objectively comparing the performance of density estimation models such as normalizing flows, and is closely related to the KL divergence between real and estimated distributions. Details can be found e.g. in Theis et al. (2015). Calibration error: The calibration curve measures whether the confidence of a model agrees with its actual performance. All prediction outputs are binned according to their predicted probability P (‘confidence’), and it is recorded which fraction of predictions in each bin was correct, Q. For a perfectly calibrated model, we have P = Q, e.g. predictions with 70% confidence are correct 70% of the time. We use several metrics to measure deviations from this behaviour, largely in line with Guo et al. (2017). Specifically, we consider the expected calibration error (ECE, error weighted by bin count), the maximum calibration error (MCE, max error over all bins), and the integrated calibration error (ICE, summed error per bin), as well as the geometric mean of all three: 3 √ ECE ·MCE · ICE. The geometric mean is used because it properly accounts for the different magnitudes of the metrics. Exact definitions found in appendix. Increased out-of-distribution (OoD) prediction entropy: For data that is OoD, we expect from a model that it returns uncertain class predictions, as it has not been trained on such data. In the ideal case, each class is assigned the same probability of 1/(nr. classes). Ovadia et al. (2019) quantify this through the discrete entropy of the class prediction outputs H(Y |XOod). 
To counteract the effect of less accurate models having higher prediction entropy overall, we report the difference between OoD and in-distribution test set H(Y |XOod)−H(Y |XIn distrib.). OoD detection score: We use OoD detection capabilities intrinsically built in to GCs. For this, we apply the recently proposed typicality test (Nalisnick et al., 2019a). This is a hypothesis test that sets an upper and lower threshold on the estimated likelihood, beyond which batches of inputs are classified as OoD. We apply the test to single input images (i.e. batch size 1). For quantification, we vary the detection threshold to produce a receiver operator characteristic (ROC), and compute the area under this curve (ROC-AUC) in percent. For short, we call this the OoD detection score. It will be 100 for perfectly separated in- and outliers, and 50 if each point is assigned a random likelihood. OoD datasets: The inlier dataset consist of CIFAR10/100 images, i.e. 32× 32 colour images showing 10/100 object classes. Additionally, we created four different OoD datasets, that cover different aspects, see Fig. 5. Firstly, we create a random 3D rotation matrix with a rotation angle of α = 0.3π, and apply it to the RGB color vectors of each pixel of CIFAR10 images. Secondly, we add random uniform noise with a small amplitude to CIFAR10 images, as an alteration of the image statistics. Thirdly, we use the QuickDraw dataset of hand drawn objects (Ha & Eck, 2018), and filter only the categories corresponding to CIFAR10 classes and color each grayscale line drawing randomly. Therefore the semantic content is the same, but the image modality is different. Lastly, we downscale the ImageNet validation set to 32 × 32 pixels. In this case, the semantic content is different, but the image statistics are very similar to CIFAR10. 4.3 Results Quantitative Model Comparison A comparison of all models is performed in Table 1 for CIFAR10, and in the appendix for CIFAR100. At the extreme γ →∞, the model behaves almost identically to a standard feed forward classifier using the same architecture (i-RevNet), and for γ = 0, it closely mirrors a conventionally trained GC, as the bits/dim are the same. We find the most favourable setting to be at γ = 1: Here, the classification error and the bits/dim each only suffer a 10% penalty compared to the extremes. The uncertainty quantification for IB-INN at this setting (calibration and OoD prediction entropy) is far better than for pure DCs. Against expectations, standard GCs have worse calibration error. Our hypothesis is that their predictions are too noisy and inaccurate for a positive effect to be visible. For OoD detection, the IB-INN and standard GCs are all comparable, as we would expect from the similar bits/dim. Fig. 6 shows the trade-off between the two extremes in more detail: at low γ, the OoD detection and uncertainty quantification are improved, at the cost of classification accuracy. The VIB behaves in agreement with the other DCs: it has consistently lower classification error but higher calibration error than the IB-INN. This confirms that the IB-INN’s behaviour is due to the application of IB to GCs exclusively. This does not mean that the IB-INN should be preferred over VIB, or vice versa. The main advantages of the VIB are the increased robustness to overfitting and adversarial attacks, aspects that we do not examine in this work. Latent Space Exploration To better understand what the IB-INN learns, we analyze the latent space in different ways. Firstly, Fig. 
7 shows the layout of the latent space GMM through a linear projection. We find that the clusters of ambiguous classes, e.g. truck and car, are connected in latent space, to account for uncertainty. Secondly, Fig. 9 shows interpolations in latent space between two test set images, using models trained with different values of γ. We observe that for low γ, the IB-INN has a well structured latent space, leading to good generative capabilities and plausible interpolations. For larger γ, class separation increases and interpolation quality continually degrades. Finally, generated images can give insight into the classification process, visualizing how the model understands each class. If a certain feature is not generated, this means it does not contribute positively to the likelihood, and in turn will be ignored for classification. Examples for this are shown in Fig. 8. 5 Conclusions We addressed the application of the Information Bottleneck (IB) as a loss function to Invertible Neural Networks (INNs) trained as generative models. We find that we can formulate an asymptotically exact version of the IB, which results in an INN that is a generative classifier. From our experiments, we conclude that the IB-INN provides high quality uncertainties and out-of-distribution detection, while reaching almost the same classification accuracy as standard feed-forward methods on CIFAR10 and CIFAR100. Acknowledgements LA received funding by the Federal Ministry of Education and Research of Germany project High Performance Deep Learning Framework (No 01IH17002). RM received funding from the Robert Bosch PhD scholarship. UK and CR received financial support from the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation program (grant agreement No 647769). We thank the Center for Information Services and High Performance Computing (ZIH) at Dresden University of Technology for generous allocations of computation time. Furthermore we thank our colleagues (in alphabetical order) Tim Adler, Felix Draxler, Clemens Fruböse, Jakob Kruse, Titus Leistner, Jens Müller and Peter Sorrenson for their help and fruitful discussions. Broader Impact As our IB-INN is not bound to any particular application, and applies to settings that can in principle already be solved with existing methods, we foresee no societal advantages or dangers in terms of direct application. More generally, we think accurate uncertainty quantification plays an important role in a safe and productive use of AI.
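As a supplement to the calibration metrics of Sec. 4.2, the following is a minimal sketch of the expected calibration error (ECE). The binning convention follows Guo et al. (2017) only loosely, and the paper's exact definitions (including MCE and ICE) are given in its appendix; all names are illustrative.

import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    # conf: (N,) predicted confidences (max class probability); correct: (N,) 0/1 outcomes
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, N = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(conf[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.sum() / N * gap                    # bin-count-weighted |conf - acc|
    return ece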
1. What is the main contribution of the paper regarding invertible networks? 2. How does the proposed method utilize an intermediate denoising procedure to introduce an information bottleneck? 3. What is the V-information version of the IB objective, and how does it relate to the original IB objective? 4. How does the loss function take the form of a jacobian smoothness constraint, and what is its purpose? 5. Can the authors provide more examples of generations from the model to demonstrate its capabilities? 6. Are there any limitations or potential drawbacks to the approach taken by the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper introduces and formulizes an information bottleneck objective for use with invertible networks. Normally this would be impossible because invertible transformations are information preserving, so in order to get around this issue, they introduce an intermediate denoising procedure by adding small amounts of Gaussian noise. This fundamentally allows for an information bottleneck again. Technically, instead of targeting the IB objective directly, they target the V-information version of it, a sort of cross information where the expectations are taken with respect to real data while the models density is used to compute the log density ratios. For the particular instance presented here, this gives a particularly tractable form for the loss, involving a sort of jacobian smoothness constraint on the invertible mapping taking the place of the bottleneck. They show how their proposed objective / architecture behaves showing that they IB knob can smoothly explore a tradeoff between density modelling characteristics and discriminative classification performance with a seemingly sweet spot in between. Strengths I like the paper. I think its well written, the idea is novel, the work is honest. I feel as though I have learned something new in reading the paper and at its core is a new idea I wish I had thought of myself. To me that is precisely what a paper should provide. With particular interest of late in the field for invertible models and IB type methods, this paper does a good job of exploring their intersection, providing an objective that is arguably one of the only reasonable things one could define that is IB like for invertible models. I think the comparative experiments are well done and do illustrate not only the strengths but some of the weaknesses of the approach. Weaknesses So, while proposition 2 alleviates concerns about adopting the CI rather than I objective for X, Z_epsilon, the paper doesn't similarly address the replacement of I(z,y) with CI(z,y). I suspect you can not show that CI(Z, Y) <= I(Z,Y) (as you would want for the role it plays in the objective). If one cannot, I think this should be addressed and admitted in the paper, if one can than it certainly should be added. I would have liked to see more examples of generations from the model, because it relies crucially on the addition of small amounts of noise in pixel space, I suspect the model suffers quite a bit in its generative capabilities as it attempts to bottleneck tighter this seems true in Figure 8, where for the larger values of gamma the generations are all saturating, but I think this kind of discussion or admission or more demonstrations should be included in the text.
NIPS
Title Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification Abstract The Information Bottleneck (IB) objective uses information theory to formulate a task-performance versus robustness trade-off. It has been successfully applied in the standard discriminative classification setting. We pose the question whether the IB can also be used to train generative likelihood models such as normalizing flows. Since normalizing flows use invertible network architectures (INNs), they are information-preserving by construction. This seems contradictory to the idea of a bottleneck. In this work, firstly, we develop the theory and methodology of IB-INNs, a class of conditional normalizing flows where INNs are trained using the IB objective: Introducing a small amount of controlled information loss allows for an asymptotically exact formulation of the IB, while keeping the INN’s generative capabilities intact. Secondly, we investigate the properties of these models experimentally, specifically used as generative classifiers. This model class offers advantages such as improved uncertainty quantification and out-of-distribution detection, but traditional generative classifier solutions suffer considerably in classification accuracy. We find the trade-off parameter in the IB controls a mix of generative capabilities and accuracy close to standard classifiers. Empirically, our uncertainty estimates in this mixed regime compare favourably to conventional generative and discriminative classifiers. Code: github.com/VLL-HD/IB-INN 1 Introduction The Information Bottleneck (IB) objective (Tishby et al., 2000) allows for an information-theoretic view of neural networks, for the setting where we have some observed input variable X , and want to predict some Y from it. For simplicity, we limit the discussion to the common case of discrete Y (i.e. class labels), but results readily generalize. The IB postulates existence of a latent space Z, where all information flow between X and Y is channeled through (hence the method’s name). In order to optimize predictive performance, IB attempts to maximize the mutual information I(Y,Z) between Y andZ. Simultaneously, it strives to minimize the mutual information I(X,Z) betweenX and Z, forcing the model to ignore irrelevant aspects of X which do not contribute to classification performance and only increase the potential for overfitting. The objective can thus be expressed as LIB = I(X,Z)− β I(Y,Z) . (1) The trade-off parameter β is crucial to balance the two aspects. The IB was successfully applied in a variational form (Alemi et al., 2017; Kolchinsky et al., 2017) to train feed-forward classification models p(Y |X) with higher robustness to overfitting and adversarial attacks than standard ones. In this work, we consider the relationship between X and Y from the opposite perspective – using the IB, we train an invertible neural network (INN) as a conditional generative likelihood model 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. p(X|Y ), i.e. as a specific type of conditional normalizing flow. In this case, X is the variable of which the likelihood is predicted, and Y is the class condition. It is a generative model because one can sample from the learned p(X|Y ) at test time to generate new examples from any class, although we here focus on optimal likelihood estimation for existing inputs, not the generating aspect. 
We find that the IB, when applied to such a likelihood model p(X|Y ), has special implications for the use as a so-called generative classifier (GC). GCs stand in contrast to standard discriminative classifers (DCs), which directly predict the class probabilities p(Y |X). For a GC, the posterior class probabilities are indirectly inferred at test time by Bayes’ rule, cf. Fig. 1: p(Y |X) = p(X|Y )p(Y )/Ep(Y ) [p(X|Y )]. Because DCs optimize prediction performance directly, they achieve better results in this respect. However, their models for p(Y |X) tend to be most accurate near decision boundaries (where it matters), but deteriorate away from them (where deviations incur no noticeable loss). Consequently, they are poorly calibrated (Guo et al., 2017) and out-of-distribution data can not be easily recognized at test time (Ovadia et al., 2019). In contrast, GCs model full likelihoods p(X|Y ) and thus implicitly full posteriors p(Y |X), which leads to the opposite behavior – better predictive uncertainty at the price of reduced accuracy. Fig. 2 illustrates the decision process in latent space Z. In the past, deep learning models trained in a purely generative way, particularly flow-based models trained with maximum likelihood, achieved highly unsatisfactory accuracy, so that some recent work has called into question the overall effectiveness of GCs (Fetaya et al., 2019; Nalisnick et al., 2019b). In-depth studies of idealized settings (Bishop & Lasserre, 2007; Bishop, 2007) revealed the existence of a trade-off, controlling the balance between discriminative and generative performance. In this work, we find that the IB can represent this trade-off, when applied to generative likelihood models. To summarize our contributions, we combine two concepts – the Information Bottleneck (IB) objective and Invertible Neural Networks (INNs). Firstly, we derive an asymptotically exact formulation of the IB for this setting, resulting in our IB-INN model, a special type of conditional normalizing flow. Secondly, we show that this model is especially suitable for the use as a GC: the trade-off parameter β in the IB-INN’s loss smoothly interpolates between the advantages of GCs (accurate posterior calibration and outlier detection), and those of DCs (superior task performance). Empirically, at the right setting for β, our model only suffers a minor degradation in classification accuracy compared to DCs while exhibiting more accurate uncertainty quantification than pure DCs or GCs. 2 Related Work Information Bottleneck: The IB was introduced by Tishby et al. (2000) as a tool for informationtheoretic optimization of compression methods. This idea was expanded on by Chechik et al. (2005); Gilad-Bachrach et al. (2003); Shamir et al. (2010) and Friedman et al. (2013). A relationship between IB and deep learning was first proposed by Tishby & Zaslavsky (2015), and later experimentally examined by Shwartz-Ziv & Tishby (2017), who use IB for the understanding of neural network behavior and training dynamics. A close relation of IB to dropout, disentanglement, and variational autoencoding was discovered by Achille & Soatto (2018), which led them to introduce Information Dropout as a way to take advantage of IB in discriminative models. The approximation of IB in a variational setting was proposed independently by Kolchinsky et al. (2017) and Alemi et al. (2017), who especially demonstrate improved robustness against overfitting and adversarial attacks. 
Generative Classification: An in-depth analysis of the trade-offs between discriminative and generative models was first performed by Ng & Jordan (2001) and was later extended by Bouchard & Triggs (2004); Bishop & Lasserre (2007); Xue & Titterington (2010), who investigated the possibility of balancing the strengths of both methods via a hyperparameter, albeit for very simple models. GCs have been used more rarely in the deep learning era, some exceptions being application to natural language processing (Yogatama et al., 2017), and adversarial attack robustness (Li et al., 2019; Schott et al., 2019). However, Fetaya et al. (2019) found that conditional normalizing flows have poor discriminative performance, making them unsuitable as GCs. GCs should be clearly distinguished from so-called hybrid models (Raina et al., 2004): these commonly only model the marginal p(X) and jointly perform discriminate classification using shared features, with their main application being semi-supervised learning. Notable examples are Kingma et al. (2014); Chongxuan et al. (2017); Nalisnick et al. (2019c); Grathwohl et al. (2019). 3 Method Below, upper case letters denote random variables (RVs) (e.g. X) and lower case letters their instances (e.g. x). The probability density function of a RV is written as p(X), the evaluated density as p(x) or p(X=x), and all RVs are vector quantities. We distinguish true distributions from modeled ones by the letters p and q, respectively. The distributions q always depend on model parameters, but we do not make this explicit to avoid notation clutter. Assumption 1 in the appendix provides some weak assumptions about the domains of the RVs and their distributions. Full proofs for all results are also provided in the appendix. Our models have two kinds of learnable parameters. Firstly, an invertible neural network (INN) with parameters θ maps inputs X to latent variables Z bijectively: Z = gθ(X) ⇔ X = g−1θ (Z). Assumption 2 in the Appendix provides some explicit assumptions about the network, its gradients, and the parameter space, which are largely fulfilled by standard invertible network architectures, including the affine coupling architecture we use in the experiments. Secondly, a Gaussian mixture model with class-dependent means µy , where y are the class labels, and unit covariance matrices is used as a reference distribution for the latent variables Z: q(Z |Y ) = N (µy, I) and q(Z) = ∑ y p(y)N (µy, I). (2) For simplicity, we assume that the label distribution is known, i.e. q(Y ) = p(Y ). Our derivation rests on a quantity we call mutual cross-information CI (in analogy to the well-known cross-entropy): CI(U, V ) = Eu,v∼p(U,V ) [ log q(u, v) q(u)q(v) ] . (3) Note that the expectation is taken over the true distribution p, whereas the logarithm involves model distributions q. In contrast, plain mutual information uses the same distribution in both places. Our definition is equivalent to the recently proposed predictive V-information (Xu et al., 2020), whose authors provide additional intuition and guarantees. The following proposition (proof in Appendix) clarifies the relationship between mutual information I and CI: Proposition 1. Assume that q(.) can be chosen from a sufficiently rich model family (e.g. a universal density estimator, see Assumption 2). Then for every η > 0 there is a model such that ∣∣I(U, V ) − CI(U, V ) ∣∣ < η and I(U, V ) = CI(U, V ) if p(u, v) = q(u, v). We replace both mutual information terms I(X,Z) and I(Y, Z) in Eq. 
1 with the mutual crossinformation CI , and derive optimization procedures for each term in the following subsections. 3.1 INN-Based Formulation of the I(X,Z)-Term in the IB Objective Estimation of the mutual cross-information CI(X,Z) between inputs and latents is problematic for deterministic mappings from X to Z (Amjad & Geiger, 2018), and specifically for INNs, which are bijective by construction. In this case, the joint distributions q(X,Z) and p(X,Z) are not valid Radon-Nikodym densities and both CI and I are undefined. Intuitively, I and CI become infinite, because p and q have an infinitely high delta-peak at Z = gθ(X), and are otherwise 0. For the IB to be applicable, some information has to be discarded in the mapping to Z, making p and q valid Radon-Nikodym densities. In contrast, normalizing flows rely on all information to be retained for optimal generative capabilities and density estimation. Our solution to this seeming contradiction comes from the practical use of normalizing flows. Here, a small amount of noise is commonly added to dequantize X (i.e. to turn discrete pixel values into real numbers), to avoid numerical issues during training. We adopt this approach to artificially introduce a minimal amount of information loss: Instead of feeding X to the network, we input a noisy version X ′ = X + E , where E ∼ N (0, σ2I) = p(E) is Gaussian with mean zero and covariance σ2I. For a quantization step size ∆X , the additional error on the estimated densities caused by the augmentation has a known bound decaying with exp(−∆X2/2σ2) (see Appendix). We are interested in the limit σ → 0, so in practice, we choose a very small fixed σ, that is smaller than ∆X . This makes the error practically indistinguishable from zero. The INN then learns the bijective mapping ZE = gθ(X + E), which guarantees CI(X,ZE) to be well defined. Minimizing this CI according to the IB principle means that gθ(X + E) is encouraged to amplify the noise E , so that X can be recovered less accurately, see Fig. 3 for illustration. If the global minimum of the loss is achieved w.r.t. θ, I and CI coincide, as CI(X,ZE) is an upper bound (also cf. Prop. 1): Proposition 2. For the specific case that ZE = gθ(X + E), it holds that I(X,ZE) ≤ CI(X,ZE). Our approach should be clearly distinguished from applications of the IB to DCs, such as Alemi et al. (2017), which pursue a different goal. There, the model learns to ignore the vast majority of input information and keeps only enough to predict the class posterior p(Y |X). In contrast, we induce only a small, explicitly adjustable loss of information to make the IB well-defined. As a result, the amount of retained information in our generative IB-INNs is orders of magnitude larger than in DC approaches, which is necessary to represent accurate class-conditional likelihoods p(X |Y ). We now derive the loss function that allows optimizing θ and µy to minimize the noise-augmented CI(X,ZE) in the limit of small noise σ → 0. Full details are found in appendix. We decompose the mutual cross-information into two terms CI(X,ZE) = Ep(X),p(E) [ −log q ( ZE=gθ(x+ε) ) ] + Ep(X),p(E) [ log q ( ZE=gθ(x+ ε) ∣∣x) ]︸ ︷︷ ︸ :=A . The first expectation can be approximated by the empirical mean over a finite dataset, because the Gaussian mixture distribution q(ZE) is known analytically. 
To approximate the second term, we first note that the condition X = x can be replaced with Z = gθ(x), because gθ is bijective and both conditions convey the same information A = Ep(X),p(E) [ log q ( ZE = gθ(x+ ε) ∣∣Z = gθ(x)) ]. We now linearize gθ by its first order Taylor expansion, gθ(x + ε) = gθ(x) + Jxε + O(ε2), where Jx = ∂gθ(X) ∂X ∣∣ x denotes the Jacobian at X = x. Going forward, we write O(σ2) instead of O(ε2) for clarity, noting that both are equivalent because we can write ε = σn with n ∼ N (0, I), and ‖ε‖ = σ‖n‖. Inserting the expansion into A, the O(σ2) can be moved outside of the expression: It can be moved outside the log, because that has a Lipschitz constant of 1/ inf q(gθ(X+E)), which we show is uniformly bounded in the full proof. The O(σ2) can then be exchanged with the expectation because the expectation’s argument is also uniformly bounded, finally leading to A = Ep(X),p(E) [ log q ( gθ(x) + Jxε ∣∣ gθ(x)) ]+O(σ2). Since ε is Gaussian with mean zero and covariance σ2I, the conditional distribution is Gaussian with mean gθ(x) and covariance σ2JxJTx . The expectation with respect to p(E) is thus the negative entropy of a multivariate Gaussian and can be computed analytically as well A = Ep(X) [ −1 2 log ( det(2πeσ2JxJ T x ) )] +O(σ2) = Ep(X) [ − log |det(Jx)| ] − d log(σ)− d 2 log(2πe) +O(σ2) with d the dimension of X . To avoid running the model twice (for x and x+ ε), we approximate the expectation of the Jacobian determinant by 0th-order Taylor expansion as Ep(X) [ log |det(Jx)| ] = Ep(X),p(E) [ log |det(Jε)| ] +O(σ), where Jε is the Jacobian evaluated at x + ε instead of x. The residual can be moved outside of the log and the expectation because Jε is uniformly bounded in our networks. Putting everything together, we drop terms from CI(X,ZE) that are independent of the model or vanish with rate at least O(σ) as σ → 0. The resulting loss LX becomes LX = Ep(X), p(E) [ − log q ( gθ(x+ε) ) − log ∣∣ det(Jε)∣∣ ]. (4) Since the change of variables formula defines the network’s generative distribution as qX(x) = q ( Z = gθ(x) ) ∣∣det(Jx)∣∣, LX is the negative log-likelihood of the perturbed data under qX , LX = Ep(X),p(E) [ − log qX(x+ ε) ] . (5) The crucial difference between CI(X,ZE) and LX is the elimination of the term −d log(σ). It is huge for small σ and would dominate the model-dependent terms, making minimization of CI(X,ZE) very hard. Intuitively, the fact that CI(X,ZE) diverges for σ → 0 highlights why CI(X,Z) is undefined for bijectively related X and Z. In practice, we estimate LX by its empirical mean on a training set {xi, εi}Ni=1 of size N , denoted as L (N) X . It remains to be shown that replacing I(X,ZE) withL(N)X in the IB loss Eq. 1 does not fundamentally change the solution of the learning problem in the limit of large N , small σ and sufficient model power. Sufficient model power here means that the family of generative distributions realizable by gθ should be a universal density estimator (see Appendix, Assumption 2). This is the case if gθ can represent increasing triangular maps (Bogachev et al., 2005), which has been proven for certain network architectures explicitly (e.g. Jaini et al., 2019; Huang et al., 2018), including the affine coupling networks we use for the experiments (Teshima et al., 2020). Propositions 1 & 2 then tell us that we may optimize CI(X,ZE) as an estimator of I(X,ZE). The above derivation of the loss can be strengthened into Proposition 3. 
Under Assumptions 1 and 2, for any ε, η > 0 and 0 < δ < 1 there are σ_0 > 0 and N_0 ∈ ℕ, such that for all N ≥ N_0 and all 0 < σ < σ_0, the following holds uniformly for all model parameters θ:

Pr( | CI(X, Z_E) + d log √(2π e σ²) − L_X^(N) | > ε ) < δ   and   Pr( ‖ ∂/∂θ CI(X, Z_E) − ∂/∂θ L_X^(N) ‖ > η ) < δ.

The first statement proves consistency of L_X^(N), and the second justifies gradient-descent optimization on the basis of L_X^(N). Proofs can be found in the appendix.

3.2 GMM-Based Formulation of the I(Z,Y)-Term in the IB Objective

Similarly to the first term in the IB-loss in Eq. 1, we also replace the mutual information I(Y, Z) with CI(Y, Z_E). Inserting the likelihood q(z | y) = N(z; μ_y, I) of our latent Gaussian mixture model into the definition and recalling that q(Y) = p(Y), this can be decomposed into

CI(Y, Z_E) = E_{p(Y)}[ −log p(y) ] + E_{p(X,Y), p(E)}[ log ( q(g_θ(x+ε) | y) p(y) / Σ_{y′} q(g_θ(x+ε) | y′) p(y′) ) ].   (6)

In this case, CI(Y, Z_E) is a lower bound on the true mutual information I(Y, Z_E), allowing for its maximization in our objective. In fact, it corresponds to a bound originally proposed by Barber & Agakov (2003) (see their Eq. 3): the first term is simply the entropy h(Y), because p(Y) is known. The second term can be rewritten as the negative cross-entropy −h_q(Y | Z_E). For I(Y, Z_E), we would have the negative entropy −h(Y | Z_E) in its place; Gibbs' inequality then leads directly to CI(Y, Z_E) ≤ I(Y, Z_E). The first expectation can be dropped during training, as it is model-independent. Note how the second term can also be written as the expectation of the GMM's log-posterior log q(y | z). Since all mixture components have unit covariance, the elements of Z are conditionally independent and the likelihood factorizes as q(z | y) = ∏_j q(z_j | y). Thus, q(y | z) can be interpreted as a naive Bayes classifier. In contrast to naive Bayes classifiers in data space, which typically perform badly because raw features are not conditionally independent, our training enforces this property in latent space and ensures accurate classification. Defining the loss L_Y^(N) as the empirical mean of the log-posterior on a training set {x_i, y_i, ε_i}_{i=1}^N of size N, we get

L_Y^(N) = (1/N) Σ_{i=1}^N log [ N(g_θ(x_i + ε_i); μ_{y_i}, I) p(y_i) / Σ_{y′} N(g_θ(x_i + ε_i); μ_{y′}, I) p(y′) ].   (7)

3.3 The IB-INN-Loss and its Advantages

Replacing the mutual information terms in Eq. 1 with their empirical estimates L_X^(N) and L_Y^(N), our model parameters θ and {μ_1, ..., μ_K} are trained by gradient descent of the IB-INN loss

L_IB-INN^(N) = L_X^(N) − β L_Y^(N).   (8)

In the following, we interpret and discuss the nature of the loss function in Eq. 8 and form an intuitive understanding of why it is more suitable than the class-conditional negative log-likelihood ('class-NLL') traditionally used for normalizing-flow type generative classifiers: L_class-NLL = −E[ log q_θ(x | y) ]. The findings are represented graphically in Fig. 4.

L_X-term: As shown by Eq. 5, the term is the (unconditional) negative log-likelihood loss used for normalizing flows, with the difference that q(Z) is a GMM rather than a unimodal Gaussian. We conclude that this loss term encourages the INN to become an accurate likelihood model under the marginalized latent distribution and to ignore any class information.

L_Y-term: Examining Eq. 7, we see that for any pair (g_θ(x + ε), y), the cluster centers μ_{y′≠y} of the other classes are repulsed (by minimizing the denominator), while g_θ(x + ε) and the correct cluster center μ_y are drawn together.
Note that the class-NLL loss only captures the second aspect and lacks repulsion, resulting in a much weaker training signal. We can also view this in a different way: by substituting q(x|y) ∣∣det(Jx)∣∣−1 for q(z|y), the second summand of Eq. 6 simplifies to log q(y|x), since the Jacobian cancels out. This means that our LY loss directly maximizes the correct class probability, while ignoring the data likelihood. Again, this improves the training signal: as Fetaya et al. (2019) showed, the data likelihood will otherwise dominate the class-NLL loss, so that lack of classification accuracy is insufficiently penalized. Classical class-NLL loss: The class-NLL loss or an approximation thereof is used to train standard GCs. The IB-INN loss reduces to this case for β = 1, because the first summand in LX (cf. Eq. 4) cancels with the denominator in Eq. 7. Then, the INN no longer receives a penalty when latent mixture components overlap, and the GMM looses its class discriminatory power, as Fig. 4 illustrates: Points are only drawn towards the correct class, but there is no loss component repulsing them from the incorrect classes. As a result, all cluster centers tend to collapse together, leading the INN to effectively just model the marginal data likelihood (as found by Fetaya et al., 2019). Similarly, Wu et al. (2019) found that β = 1 is the minimum possible value to perform classification with discriminative IB methods. 4 Experiments In the following, we examine the properties of the IB-INN used as a GC, especially the quality of uncertainty estimates and OoD detection. We construct our IB-INN by combining the design efforts of various works on INNs and normalizing flows. In brief, we use a Real-NVP architecture consisting of affine coupling blocks (Dinh et al., 2017), with added improvements from recent works (Kingma & Dhariwal, 2018; Jacobsen et al., 2019, 2018; Ardizzone et al., 2019). A detailed description of the architecture is given in the appendix. We learn the set of means µY as free parameters jointly with the remaining model parameters in an end-to-end fashion using the loss in Eq. 8. The practical implementation of the loss is explained in the appendix. We apply two additional techniques while learning the model, label smoothing and loss rebalancing: Label smoothing Hard labels force the Gaussian mixture components to be maximally separated, so they drift continually further apart during training, leading to instabilities. Label smoothing (Szegedy et al., 2016) with smoothing factor 0.05 prevents this, and we also apply it to all baseline models. Loss rebalancing The following rebalancing scheme allows us to use the same hyperparameters when changing β between 5 orders of magnitude. Firstly, we divide the loss LX by the number of dimensions of X , which approximately matches its magnitude to the LY loss. We define a corresponding γ := β/dim(X) to stay consistent with the IB definition. Secondly, we scale the entire loss by a factor 2/(1 + γ). This ensures that it keeps the same magnitude when changing γ. L(N)IB = 2 1 + γ ( L(N)X dim(X) − γ L(N)Y ) (9) Finally, the noise amplitude σ should be chosen to satisfy two criteria: it should be small enough so that the Taylor expansions in the loss for σ → 0 are sufficiently accurate, and it should also not hinder the model’s performance. Our ablation provided in the Appendix indicates that both criteria are satisfied when σ / 0.25∆X , with the quantization step size ∆X , so we fix σ = 10−3 for the remaining experiments. 
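Putting the pieces together, the following sketch assembles the rebalanced objective of Eq. 9 from the two loss terms, with L_Y written as a log-softmax over the logits −½‖z − μ_k‖² + log p(k) (the Gaussian normalisers cancel under unit covariance). It assumes a hypothetical INN interface that returns both the latent code and log|det J| for a noise-augmented input, as is common for coupling-block flow implementations; all names (l_y, ib_inn_loss, inn, mu, log_p_y, gamma) are illustrative and label smoothing is omitted for brevity. This is a sketch under those assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F

def log_q_z(z, mu, log_p_y):
    # Marginal GMM log-density (same helper as in the sketch after Section 3.1).
    d = z.shape[1]
    sq_dist = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(dim=2)
    log_gauss = -0.5 * sq_dist - 0.5 * d * math.log(2.0 * math.pi)
    return torch.logsumexp(log_p_y[None, :] + log_gauss, dim=1)

def l_y(z, y, mu, log_p_y):
    # Empirical L_Y (Eq. 7): GMM log-posterior; with unit covariance this is a
    # log-softmax over logits -0.5*||z - mu_k||^2 + log p(k).
    sq_dist = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(dim=2)
    logits = -0.5 * sq_dist + log_p_y[None, :]
    return F.log_softmax(logits, dim=1).gather(1, y[:, None]).mean()

def ib_inn_loss(x, y, inn, mu, log_p_y, gamma, sigma=1e-3):
    # Rebalanced objective (Eq. 9): 2/(1+gamma) * (L_X / dim(X) - gamma * L_Y).
    x_noisy = x + sigma * torch.randn_like(x)             # X' = X + E, E ~ N(0, sigma^2 I)
    z, log_jac_det = inn(x_noisy)                         # assumed interface: (latents, log|det J|)
    z = z.reshape(z.shape[0], -1)                         # flatten latents to (B, d)
    lx = (-log_q_z(z, mu, log_p_y) - log_jac_det).mean()  # Eq. 4 / Eq. 5
    ly = l_y(z, y, mu, log_p_y)                           # Eq. 7
    return 2.0 / (1.0 + gamma) * (lx / x[0].numel() - gamma * ly)
```

In this form the trade-off enters only through gamma: gamma → 0 recovers pure unconditional likelihood training, while large gamma approaches a pure classifier, matching the behaviour reported in Section 4.3.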
4.1 Comparison of Methods In addition to the IB-INN, we train several alternative methods. For each, we use exactly the same INN model, or an equivalent feed-forward ResNet model. Every method has the exact same hyperparameters and training procedure, the only difference being the loss function and invertibility. Class-NLL: As a standard generative classifier, we firstly train an INN with a GMM in latent space naively as a conditional generative model, using the class-conditional maximum likelihood loss. Secondly, we also train a regularized version, to increase the classification accuracy. The regularization consists of leaving the class centroids µY fixed on a hyper-sphere, forcing some degree of class-separation. Feed-forward As a DC baseline, we train a standard ResNet (He et al., 2016) with softmax cross entropy loss. We replace each affine coupling block by a ResNet block, leaving all other hyperparameters the same. i-RevNet (Jacobsen et al., 2018): To rule out any differences stemming from the constraint of invertibility, we additionally train the INN as a standard softmax classifier, by projecting the outputs to class logits. While the architecture is invertible, it is not a generative model and trained just like a standard feed-forward classifier. Variational Information Bottleneck (VIB): To examine which observed behaviours are due to the IB in general, and what is specific to GCs, we also train the VIB (Alemi et al., 2017), a feed-forward DC, using a ResNet. We convert the authors definition of β to our γ for consistency. 4.2 Quantitative measurements RGB rotation (CIFAR10) Small noise (CIFAR10) QuickDraw ImageNet Figure 5: Examples from each OoD dataset used in the evaluation. The inlier data are original CIFAR10 images. In the following, we describe the scores used in Table 1. Bits/dim: The bits/dim metric is common for objectively comparing the performance of density estimation models such as normalizing flows, and is closely related to the KL divergence between real and estimated distributions. Details can be found e.g. in Theis et al. (2015). Calibration error: The calibration curve measures whether the confidence of a model agrees with its actual performance. All prediction outputs are binned according to their predicted probability P (‘confidence’), and it is recorded which fraction of predictions in each bin was correct, Q. For a perfectly calibrated model, we have P = Q, e.g. predictions with 70% confidence are correct 70% of the time. We use several metrics to measure deviations from this behaviour, largely in line with Guo et al. (2017). Specifically, we consider the expected calibration error (ECE, error weighted by bin count), the maximum calibration error (MCE, max error over all bins), and the integrated calibration error (ICE, summed error per bin), as well as the geometric mean of all three: 3 √ ECE ·MCE · ICE. The geometric mean is used because it properly accounts for the different magnitudes of the metrics. Exact definitions found in appendix. Increased out-of-distribution (OoD) prediction entropy: For data that is OoD, we expect from a model that it returns uncertain class predictions, as it has not been trained on such data. In the ideal case, each class is assigned the same probability of 1/(nr. classes). Ovadia et al. (2019) quantify this through the discrete entropy of the class prediction outputs H(Y |XOod). 
To counteract the effect of less accurate models having higher prediction entropy overall, we report the difference between OoD and in-distribution test set H(Y |XOod)−H(Y |XIn distrib.). OoD detection score: We use OoD detection capabilities intrinsically built in to GCs. For this, we apply the recently proposed typicality test (Nalisnick et al., 2019a). This is a hypothesis test that sets an upper and lower threshold on the estimated likelihood, beyond which batches of inputs are classified as OoD. We apply the test to single input images (i.e. batch size 1). For quantification, we vary the detection threshold to produce a receiver operator characteristic (ROC), and compute the area under this curve (ROC-AUC) in percent. For short, we call this the OoD detection score. It will be 100 for perfectly separated in- and outliers, and 50 if each point is assigned a random likelihood. OoD datasets: The inlier dataset consist of CIFAR10/100 images, i.e. 32× 32 colour images showing 10/100 object classes. Additionally, we created four different OoD datasets, that cover different aspects, see Fig. 5. Firstly, we create a random 3D rotation matrix with a rotation angle of α = 0.3π, and apply it to the RGB color vectors of each pixel of CIFAR10 images. Secondly, we add random uniform noise with a small amplitude to CIFAR10 images, as an alteration of the image statistics. Thirdly, we use the QuickDraw dataset of hand drawn objects (Ha & Eck, 2018), and filter only the categories corresponding to CIFAR10 classes and color each grayscale line drawing randomly. Therefore the semantic content is the same, but the image modality is different. Lastly, we downscale the ImageNet validation set to 32 × 32 pixels. In this case, the semantic content is different, but the image statistics are very similar to CIFAR10. 4.3 Results Quantitative Model Comparison A comparison of all models is performed in Table 1 for CIFAR10, and in the appendix for CIFAR100. At the extreme γ →∞, the model behaves almost identically to a standard feed forward classifier using the same architecture (i-RevNet), and for γ = 0, it closely mirrors a conventionally trained GC, as the bits/dim are the same. We find the most favourable setting to be at γ = 1: Here, the classification error and the bits/dim each only suffer a 10% penalty compared to the extremes. The uncertainty quantification for IB-INN at this setting (calibration and OoD prediction entropy) is far better than for pure DCs. Against expectations, standard GCs have worse calibration error. Our hypothesis is that their predictions are too noisy and inaccurate for a positive effect to be visible. For OoD detection, the IB-INN and standard GCs are all comparable, as we would expect from the similar bits/dim. Fig. 6 shows the trade-off between the two extremes in more detail: at low γ, the OoD detection and uncertainty quantification are improved, at the cost of classification accuracy. The VIB behaves in agreement with the other DCs: it has consistently lower classification error but higher calibration error than the IB-INN. This confirms that the IB-INN’s behaviour is due to the application of IB to GCs exclusively. This does not mean that the IB-INN should be preferred over VIB, or vice versa. The main advantages of the VIB are the increased robustness to overfitting and adversarial attacks, aspects that we do not examine in this work. Latent Space Exploration To better understand what the IB-INN learns, we analyze the latent space in different ways. Firstly, Fig. 
7 shows the layout of the latent space GMM through a linear projection. We find that the clusters of ambiguous classes, e.g. truck and car, are connected in latent space, to account for uncertainty. Secondly, Fig. 9 shows interpolations in latent space between two test set images, using models trained with different values of γ. We observe that for low γ, the IB-INN has a well structured latent space, leading to good generative capabilities and plausible interpolations. For larger γ, class separation increases and interpolation quality continually degrades. Finally, generated images can give insight into the classification process, visualizing how the model understands each class. If a certain feature is not generated, this means it does not contribute positively to the likelihood, and in turn will be ignored for classification. Examples for this are shown in Fig. 8. 5 Conclusions We addressed the application of the Information Bottleneck (IB) as a loss function to Invertible Neural Networks (INNs) trained as generative models. We find that we can formulate an asymptotically exact version of the IB, which results in an INN that is a generative classifier. From our experiments, we conclude that the IB-INN provides high quality uncertainties and out-of-distribution detection, while reaching almost the same classification accuracy as standard feed-forward methods on CIFAR10 and CIFAR100. Acknowledgements LA received funding by the Federal Ministry of Education and Research of Germany project High Performance Deep Learning Framework (No 01IH17002). RM received funding from the Robert Bosch PhD scholarship. UK and CR received financial support from the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation program (grant agreement No 647769). We thank the Center for Information Services and High Performance Computing (ZIH) at Dresden University of Technology for generous allocations of computation time. Furthermore we thank our colleagues (in alphabetical order) Tim Adler, Felix Draxler, Clemens Fruböse, Jakob Kruse, Titus Leistner, Jens Müller and Peter Sorrenson for their help and fruitful discussions. Broader Impact As our IB-INN is not bound to any particular application, and applies to settings that can in principle already be solved with existing methods, we foresee no societal advantages or dangers in terms of direct application. More generally, we think accurate uncertainty quantification plays an important role in a safe and productive use of AI.
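As a small illustration of the latent-space interpolations discussed in Section 4.3 (Fig. 9), the fragment below linearly interpolates the latent codes of two inputs and maps each intermediate point back through the inverse network. The forward interface returning (z, log|det J|) and the `inverse` method are assumed for illustration and are not documented APIs of the authors' code.

```python
import torch

def interpolate_latent(inn, x_a, x_b, steps=8):
    """Decode images along a straight line between the latents of two inputs."""
    z_a, _ = inn(x_a)                               # assumed: forward returns (z, log|det J|)
    z_b, _ = inn(x_b)
    ts = torch.linspace(0.0, 1.0, steps)
    frames = [inn.inverse((1.0 - t) * z_a + t * z_b) for t in ts]   # assumed inverse interface
    return torch.stack(frames)
```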
1. What is the focus and contribution of the paper on normalizing flows? 2. What are the strengths of the proposed approach, particularly in terms of technical aspects and figure quality? 3. What are the weaknesses of the paper, especially regarding its accessibility and clarity? 4. Can you provide more information about the equations and how they contribute to the overall method? 5. How does the reviewer assess the significance and impact of the work in the context of classification problems?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper presents a technique for training normalizing flows. The paper looks really good, but it falls outside my field of expertise. Strengths Unfortunately, I can't judge the quality of this paper, as it falls outside the scope of my expertise. It looks technically sound, the equations make sense, and the figures are of high quality. Weaknesses It would be nice to add a few sentences at the beginning of the paper explaining what the paper attempts to solve in layman's terms. I guess the final goal is to solve a classification problem, right?
NIPS
1. What is the main contribution of the paper, and how does it relate to the Information Bottleneck principle? 2. What are the strengths of the proposed approach, particularly in its application and combination with Invertible Neural Networks? 3. Are there any concerns or weaknesses regarding the paper's proposal, such as the use of a variational proxy or the limitations of the empirical evaluation? 4. How does the reviewer assess the clarity and quality of the paper's content, including its novelty and significance? 5. Are there any questions or suggestions for future work related to the paper's topic or methodology?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper combines the Information Bottleneck (IB) principle with Invertible Neural Nets (INNs) in a completely new setting, competitive Generative Classification. While the standard motivation and application of the IB is supervised learning - generating efficient compressed representations Z of patterns X which maximize the information about their class Y, given a sample of P(X,Y) - this work originally proposes and applies it to the opposite task, generative classification: find an efficient generative representation Z that produces efficient class-conditional generative models p(X|Y), given the labeled sample. The paper uses a variational proxy to the mutual information, the cross-information CI(X,Y), which they show to upper bound the mutual information under their GMM quantization of Z. They show that replacing the mutual informations I(X;Z) and I(Z;Y) by CI(X;Z) and CI(Z;Y) in the IB trade-off can form a useful practical variational proxy for the IB in high dimensions. On the other hand, they use an invertible mapping from X to Z using an invertible NN (like i-RevNets) to generate a non-lossy representation Z, but turn it into a stochastic lossy map by adding Gaussian noise to each sample representation (the GMMs). They then combine these two optimization problems - finding the best bijection from X to Z and then coarsening the representation by adding Gaussian noise using the IB proxy - to obtain generative models that are in turn used for generative classification. The paper provides proofs for the validity of their bounds as a proxy to the IB. In the experimental section, they compared the model to several natural alternatives, from a simple maximum-likelihood generative model (Class-NLL), a feedforward Discriminative Classifier (DC), and a fully reversible network (i-RevNet), to the Variational Information Bottleneck (VIB), under carefully monitored, similar conditions. The classification results are surprisingly in favor of the IB trade-off coarsening, where the patterns are blurred by Gaussian noise following the trained non-linear bijection. The experimental results are diverse over several datasets and are quite compelling. Strengths This is an original (at least to me) application of the IB which is counterintuitive, as it works against the original motivation and most applications of the IB. It is combined nicely with the idea of invertible networks (INNs), which in turn enables the authors to prove rigorous variational bounds when only additive Gaussian noise is used for the lossy compression of the bottleneck. I found it rather elegant. The empirical evaluation is thorough and rather compelling: even though the actual classification test is weak and the detection score is marginal, the calibration errors are very convincing. Overall, this is an interesting paper that proposes a new method and application by combining first principles in a rather surprising way. The empirical tests are well executed and convincing. Weaknesses The paper is well written, but the overall clarity can be improved (minor). I found the fact that the cross-information is not always a bound on the mutual information - only in the additive Gaussian noise setting with differential entropies - somewhat disturbing. I would like to see a proxy to I(X;Y) that always bounds the mutual information and obeys, at least approximately, the Data Processing Inequality, which is quite fundamental to the IB.
I think these can be proved using the chain rules of the KL divergence, and one could actually obtain a stronger statement than Prop. 1, which as written is trivial. But these are minor, more theoretical critiques that don't change the quality of the paper much.
NIPS
Title Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification Abstract The Information Bottleneck (IB) objective uses information theory to formulate a task-performance versus robustness trade-off. It has been successfully applied in the standard discriminative classification setting. We pose the question whether the IB can also be used to train generative likelihood models such as normalizing flows. Since normalizing flows use invertible network architectures (INNs), they are information-preserving by construction. This seems contradictory to the idea of a bottleneck. In this work, firstly, we develop the theory and methodology of IB-INNs, a class of conditional normalizing flows where INNs are trained using the IB objective: Introducing a small amount of controlled information loss allows for an asymptotically exact formulation of the IB, while keeping the INN’s generative capabilities intact. Secondly, we investigate the properties of these models experimentally, specifically used as generative classifiers. This model class offers advantages such as improved uncertainty quantification and out-of-distribution detection, but traditional generative classifier solutions suffer considerably in classification accuracy. We find the trade-off parameter in the IB controls a mix of generative capabilities and accuracy close to standard classifiers. Empirically, our uncertainty estimates in this mixed regime compare favourably to conventional generative and discriminative classifiers. Code: github.com/VLL-HD/IB-INN 1 Introduction The Information Bottleneck (IB) objective (Tishby et al., 2000) allows for an information-theoretic view of neural networks, for the setting where we have some observed input variable X , and want to predict some Y from it. For simplicity, we limit the discussion to the common case of discrete Y (i.e. class labels), but results readily generalize. The IB postulates existence of a latent space Z, where all information flow between X and Y is channeled through (hence the method’s name). In order to optimize predictive performance, IB attempts to maximize the mutual information I(Y,Z) between Y andZ. Simultaneously, it strives to minimize the mutual information I(X,Z) betweenX and Z, forcing the model to ignore irrelevant aspects of X which do not contribute to classification performance and only increase the potential for overfitting. The objective can thus be expressed as LIB = I(X,Z)− β I(Y,Z) . (1) The trade-off parameter β is crucial to balance the two aspects. The IB was successfully applied in a variational form (Alemi et al., 2017; Kolchinsky et al., 2017) to train feed-forward classification models p(Y |X) with higher robustness to overfitting and adversarial attacks than standard ones. In this work, we consider the relationship between X and Y from the opposite perspective – using the IB, we train an invertible neural network (INN) as a conditional generative likelihood model 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. p(X|Y ), i.e. as a specific type of conditional normalizing flow. In this case, X is the variable of which the likelihood is predicted, and Y is the class condition. It is a generative model because one can sample from the learned p(X|Y ) at test time to generate new examples from any class, although we here focus on optimal likelihood estimation for existing inputs, not the generating aspect. 
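The class-conditional sampling mentioned above can be sketched as follows: draw a latent code from the class's latent distribution and push it through the inverse of the flow. The snippet assumes the Gaussian-mixture latent model introduced later in Section 3 (unit-covariance components with means μ_y) and a hypothetical `inverse` method on the network; it is an illustration under those assumptions, not the authors' implementation.

```python
import torch

def sample_class_conditional(inn, mu, y, n_samples=16):
    """Generate samples from p(X | Y=y): draw z ~ N(mu_y, I), then x = g_theta^{-1}(z)."""
    d = mu.shape[1]
    z = mu[y][None, :] + torch.randn(n_samples, d)   # class-conditional latent samples
    return inn.inverse(z)                            # assumed inverse interface of the INN
```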
We find that the IB, when applied to such a likelihood model p(X|Y ), has special implications for the use as a so-called generative classifier (GC). GCs stand in contrast to standard discriminative classifers (DCs), which directly predict the class probabilities p(Y |X). For a GC, the posterior class probabilities are indirectly inferred at test time by Bayes’ rule, cf. Fig. 1: p(Y |X) = p(X|Y )p(Y )/Ep(Y ) [p(X|Y )]. Because DCs optimize prediction performance directly, they achieve better results in this respect. However, their models for p(Y |X) tend to be most accurate near decision boundaries (where it matters), but deteriorate away from them (where deviations incur no noticeable loss). Consequently, they are poorly calibrated (Guo et al., 2017) and out-of-distribution data can not be easily recognized at test time (Ovadia et al., 2019). In contrast, GCs model full likelihoods p(X|Y ) and thus implicitly full posteriors p(Y |X), which leads to the opposite behavior – better predictive uncertainty at the price of reduced accuracy. Fig. 2 illustrates the decision process in latent space Z. In the past, deep learning models trained in a purely generative way, particularly flow-based models trained with maximum likelihood, achieved highly unsatisfactory accuracy, so that some recent work has called into question the overall effectiveness of GCs (Fetaya et al., 2019; Nalisnick et al., 2019b). In-depth studies of idealized settings (Bishop & Lasserre, 2007; Bishop, 2007) revealed the existence of a trade-off, controlling the balance between discriminative and generative performance. In this work, we find that the IB can represent this trade-off, when applied to generative likelihood models. To summarize our contributions, we combine two concepts – the Information Bottleneck (IB) objective and Invertible Neural Networks (INNs). Firstly, we derive an asymptotically exact formulation of the IB for this setting, resulting in our IB-INN model, a special type of conditional normalizing flow. Secondly, we show that this model is especially suitable for the use as a GC: the trade-off parameter β in the IB-INN’s loss smoothly interpolates between the advantages of GCs (accurate posterior calibration and outlier detection), and those of DCs (superior task performance). Empirically, at the right setting for β, our model only suffers a minor degradation in classification accuracy compared to DCs while exhibiting more accurate uncertainty quantification than pure DCs or GCs. 2 Related Work Information Bottleneck: The IB was introduced by Tishby et al. (2000) as a tool for informationtheoretic optimization of compression methods. This idea was expanded on by Chechik et al. (2005); Gilad-Bachrach et al. (2003); Shamir et al. (2010) and Friedman et al. (2013). A relationship between IB and deep learning was first proposed by Tishby & Zaslavsky (2015), and later experimentally examined by Shwartz-Ziv & Tishby (2017), who use IB for the understanding of neural network behavior and training dynamics. A close relation of IB to dropout, disentanglement, and variational autoencoding was discovered by Achille & Soatto (2018), which led them to introduce Information Dropout as a way to take advantage of IB in discriminative models. The approximation of IB in a variational setting was proposed independently by Kolchinsky et al. (2017) and Alemi et al. (2017), who especially demonstrate improved robustness against overfitting and adversarial attacks. 
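To make the Bayes-rule inference behind a generative classifier concrete, the following minimal sketch (an illustration of ours, not taken from the paper's released code) turns class-conditional log-likelihoods log p(x|y) into posterior class probabilities; the log-sum-exp trick is needed because flow likelihoods span many orders of magnitude.

```python
import numpy as np

def gc_posterior(log_px_given_y: np.ndarray, log_prior: np.ndarray) -> np.ndarray:
    """Posterior p(y|x) from class-conditional log-likelihoods via Bayes' rule.

    log_px_given_y: (batch, n_classes) array of log p(x | y)
    log_prior:      (n_classes,) array of log p(y)
    """
    log_joint = log_px_given_y + log_prior                       # log p(x, y)
    log_evidence = np.logaddexp.reduce(log_joint, axis=1, keepdims=True)
    return np.exp(log_joint - log_evidence)                      # p(y | x)

# Toy usage with two classes and a uniform prior; the numbers are made up.
log_lik = np.array([[-1052.3, -1050.1], [-998.7, -1010.2]])
print(gc_posterior(log_lik, np.log(np.array([0.5, 0.5]))))
```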
Generative Classification: An in-depth analysis of the trade-offs between discriminative and generative models was first performed by Ng & Jordan (2001) and was later extended by Bouchard & Triggs (2004); Bishop & Lasserre (2007); Xue & Titterington (2010), who investigated the possibility of balancing the strengths of both methods via a hyperparameter, albeit for very simple models. GCs have been used more rarely in the deep learning era, some exceptions being application to natural language processing (Yogatama et al., 2017), and adversarial attack robustness (Li et al., 2019; Schott et al., 2019). However, Fetaya et al. (2019) found that conditional normalizing flows have poor discriminative performance, making them unsuitable as GCs. GCs should be clearly distinguished from so-called hybrid models (Raina et al., 2004): these commonly only model the marginal p(X) and jointly perform discriminate classification using shared features, with their main application being semi-supervised learning. Notable examples are Kingma et al. (2014); Chongxuan et al. (2017); Nalisnick et al. (2019c); Grathwohl et al. (2019). 3 Method Below, upper case letters denote random variables (RVs) (e.g. X) and lower case letters their instances (e.g. x). The probability density function of a RV is written as p(X), the evaluated density as p(x) or p(X=x), and all RVs are vector quantities. We distinguish true distributions from modeled ones by the letters p and q, respectively. The distributions q always depend on model parameters, but we do not make this explicit to avoid notation clutter. Assumption 1 in the appendix provides some weak assumptions about the domains of the RVs and their distributions. Full proofs for all results are also provided in the appendix. Our models have two kinds of learnable parameters. Firstly, an invertible neural network (INN) with parameters θ maps inputs X to latent variables Z bijectively: Z = gθ(X) ⇔ X = g−1θ (Z). Assumption 2 in the Appendix provides some explicit assumptions about the network, its gradients, and the parameter space, which are largely fulfilled by standard invertible network architectures, including the affine coupling architecture we use in the experiments. Secondly, a Gaussian mixture model with class-dependent means µy , where y are the class labels, and unit covariance matrices is used as a reference distribution for the latent variables Z: q(Z |Y ) = N (µy, I) and q(Z) = ∑ y p(y)N (µy, I). (2) For simplicity, we assume that the label distribution is known, i.e. q(Y ) = p(Y ). Our derivation rests on a quantity we call mutual cross-information CI (in analogy to the well-known cross-entropy): CI(U, V ) = Eu,v∼p(U,V ) [ log q(u, v) q(u)q(v) ] . (3) Note that the expectation is taken over the true distribution p, whereas the logarithm involves model distributions q. In contrast, plain mutual information uses the same distribution in both places. Our definition is equivalent to the recently proposed predictive V-information (Xu et al., 2020), whose authors provide additional intuition and guarantees. The following proposition (proof in Appendix) clarifies the relationship between mutual information I and CI: Proposition 1. Assume that q(.) can be chosen from a sufficiently rich model family (e.g. a universal density estimator, see Assumption 2). Then for every η > 0 there is a model such that ∣∣I(U, V ) − CI(U, V ) ∣∣ < η and I(U, V ) = CI(U, V ) if p(u, v) = q(u, v). We replace both mutual information terms I(X,Z) and I(Y, Z) in Eq. 
1 with the mutual crossinformation CI , and derive optimization procedures for each term in the following subsections. 3.1 INN-Based Formulation of the I(X,Z)-Term in the IB Objective Estimation of the mutual cross-information CI(X,Z) between inputs and latents is problematic for deterministic mappings from X to Z (Amjad & Geiger, 2018), and specifically for INNs, which are bijective by construction. In this case, the joint distributions q(X,Z) and p(X,Z) are not valid Radon-Nikodym densities and both CI and I are undefined. Intuitively, I and CI become infinite, because p and q have an infinitely high delta-peak at Z = gθ(X), and are otherwise 0. For the IB to be applicable, some information has to be discarded in the mapping to Z, making p and q valid Radon-Nikodym densities. In contrast, normalizing flows rely on all information to be retained for optimal generative capabilities and density estimation. Our solution to this seeming contradiction comes from the practical use of normalizing flows. Here, a small amount of noise is commonly added to dequantize X (i.e. to turn discrete pixel values into real numbers), to avoid numerical issues during training. We adopt this approach to artificially introduce a minimal amount of information loss: Instead of feeding X to the network, we input a noisy version X ′ = X + E , where E ∼ N (0, σ2I) = p(E) is Gaussian with mean zero and covariance σ2I. For a quantization step size ∆X , the additional error on the estimated densities caused by the augmentation has a known bound decaying with exp(−∆X2/2σ2) (see Appendix). We are interested in the limit σ → 0, so in practice, we choose a very small fixed σ, that is smaller than ∆X . This makes the error practically indistinguishable from zero. The INN then learns the bijective mapping ZE = gθ(X + E), which guarantees CI(X,ZE) to be well defined. Minimizing this CI according to the IB principle means that gθ(X + E) is encouraged to amplify the noise E , so that X can be recovered less accurately, see Fig. 3 for illustration. If the global minimum of the loss is achieved w.r.t. θ, I and CI coincide, as CI(X,ZE) is an upper bound (also cf. Prop. 1): Proposition 2. For the specific case that ZE = gθ(X + E), it holds that I(X,ZE) ≤ CI(X,ZE). Our approach should be clearly distinguished from applications of the IB to DCs, such as Alemi et al. (2017), which pursue a different goal. There, the model learns to ignore the vast majority of input information and keeps only enough to predict the class posterior p(Y |X). In contrast, we induce only a small, explicitly adjustable loss of information to make the IB well-defined. As a result, the amount of retained information in our generative IB-INNs is orders of magnitude larger than in DC approaches, which is necessary to represent accurate class-conditional likelihoods p(X |Y ). We now derive the loss function that allows optimizing θ and µy to minimize the noise-augmented CI(X,ZE) in the limit of small noise σ → 0. Full details are found in appendix. We decompose the mutual cross-information into two terms CI(X,ZE) = Ep(X),p(E) [ −log q ( ZE=gθ(x+ε) ) ] + Ep(X),p(E) [ log q ( ZE=gθ(x+ ε) ∣∣x) ]︸ ︷︷ ︸ :=A . The first expectation can be approximated by the empirical mean over a finite dataset, because the Gaussian mixture distribution q(ZE) is known analytically. 
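As an illustration of this first term, the sketch below (ours, in PyTorch, with hypothetical variable names) adds the small dequantization noise E ~ N(0, σ²I) to a batch of inputs and evaluates the analytic Gaussian-mixture log-density log q(z) at z = gθ(x + ε); the INN forward pass itself is only stubbed here.

```python
import math
import torch

def gmm_marginal_log_density(z, mu, log_prior):
    """log q(z) for a mixture of unit-covariance Gaussians with means mu (Eq. 2).

    z: (batch, d) latents, mu: (n_classes, d) class means, log_prior: (n_classes,) log p(y)
    """
    d = z.shape[1]
    log_gauss = -0.5 * torch.cdist(z, mu) ** 2 - 0.5 * d * math.log(2 * math.pi)
    return torch.logsumexp(log_gauss + log_prior, dim=1)

sigma = 1e-3                                  # noise amplitude, well below the quantization step
x = torch.rand(8, 10)                         # stand-in for a batch of (flattened) inputs
x_noisy = x + sigma * torch.randn_like(x)     # X' = X + E
z = x_noisy                                   # placeholder for z = g_theta(x + eps) from the INN
mu = torch.randn(10, 10)                      # one unit-covariance component per class
log_prior = torch.full((10,), -math.log(10.0))
print(gmm_marginal_log_density(z, mu, log_prior))
```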
To approximate the second term, we first note that the condition X = x can be replaced with Z = gθ(x), because gθ is bijective and both conditions convey the same information A = Ep(X),p(E) [ log q ( ZE = gθ(x+ ε) ∣∣Z = gθ(x)) ]. We now linearize gθ by its first order Taylor expansion, gθ(x + ε) = gθ(x) + Jxε + O(ε2), where Jx = ∂gθ(X) ∂X ∣∣ x denotes the Jacobian at X = x. Going forward, we write O(σ2) instead of O(ε2) for clarity, noting that both are equivalent because we can write ε = σn with n ∼ N (0, I), and ‖ε‖ = σ‖n‖. Inserting the expansion into A, the O(σ2) can be moved outside of the expression: It can be moved outside the log, because that has a Lipschitz constant of 1/ inf q(gθ(X+E)), which we show is uniformly bounded in the full proof. The O(σ2) can then be exchanged with the expectation because the expectation’s argument is also uniformly bounded, finally leading to A = Ep(X),p(E) [ log q ( gθ(x) + Jxε ∣∣ gθ(x)) ]+O(σ2). Since ε is Gaussian with mean zero and covariance σ2I, the conditional distribution is Gaussian with mean gθ(x) and covariance σ2JxJTx . The expectation with respect to p(E) is thus the negative entropy of a multivariate Gaussian and can be computed analytically as well A = Ep(X) [ −1 2 log ( det(2πeσ2JxJ T x ) )] +O(σ2) = Ep(X) [ − log |det(Jx)| ] − d log(σ)− d 2 log(2πe) +O(σ2) with d the dimension of X . To avoid running the model twice (for x and x+ ε), we approximate the expectation of the Jacobian determinant by 0th-order Taylor expansion as Ep(X) [ log |det(Jx)| ] = Ep(X),p(E) [ log |det(Jε)| ] +O(σ), where Jε is the Jacobian evaluated at x + ε instead of x. The residual can be moved outside of the log and the expectation because Jε is uniformly bounded in our networks. Putting everything together, we drop terms from CI(X,ZE) that are independent of the model or vanish with rate at least O(σ) as σ → 0. The resulting loss LX becomes LX = Ep(X), p(E) [ − log q ( gθ(x+ε) ) − log ∣∣ det(Jε)∣∣ ]. (4) Since the change of variables formula defines the network’s generative distribution as qX(x) = q ( Z = gθ(x) ) ∣∣det(Jx)∣∣, LX is the negative log-likelihood of the perturbed data under qX , LX = Ep(X),p(E) [ − log qX(x+ ε) ] . (5) The crucial difference between CI(X,ZE) and LX is the elimination of the term −d log(σ). It is huge for small σ and would dominate the model-dependent terms, making minimization of CI(X,ZE) very hard. Intuitively, the fact that CI(X,ZE) diverges for σ → 0 highlights why CI(X,Z) is undefined for bijectively related X and Z. In practice, we estimate LX by its empirical mean on a training set {xi, εi}Ni=1 of size N , denoted as L (N) X . It remains to be shown that replacing I(X,ZE) withL(N)X in the IB loss Eq. 1 does not fundamentally change the solution of the learning problem in the limit of large N , small σ and sufficient model power. Sufficient model power here means that the family of generative distributions realizable by gθ should be a universal density estimator (see Appendix, Assumption 2). This is the case if gθ can represent increasing triangular maps (Bogachev et al., 2005), which has been proven for certain network architectures explicitly (e.g. Jaini et al., 2019; Huang et al., 2018), including the affine coupling networks we use for the experiments (Teshima et al., 2020). Propositions 1 & 2 then tell us that we may optimize CI(X,ZE) as an estimator of I(X,ZE). The above derivation of the loss can be strengthened into Proposition 3. 
Under Assumptions 1 and 2, for any , η > 0 and 0 < δ < 1 there are σ0 > 0 and N0 ∈ N, such that ∀N ≥ N0 and ∀0 < σ < σ0, the following holds uniformly for all model parameters θ: Pr (∣∣∣CI(X,ZE) + d log√2πeσ2 − L(N)X ∣∣∣ > ) < δ and Pr (∥∥∥∥ ∂∂θCI(X,ZE)− ∂∂θL(N)X ∥∥∥∥ > η) < δ The first statement proves consistence of L(N)X , and the second justifies gradient-descent optimization on the basis of L(N)X . Proofs can be found in the appendix. 3.2 GMM-Based Formulation of the I(Z,Y)-Term in the IB Objective Similarly to the first term in the IB-loss in Eq. 1, we also replace the mutual information I(Y, Z) with CI(Y, ZE). Inserting the likelihood q(z | y) = N (z;µy, I) of our latent Gaussian mixture model into the definition and recalling that q(Y ) = p(Y ), this can be decomposed into CI(Y,ZE) = Ep(Y ) [ − log p(y) ] + Ep(X,Y ),p(E) [ log q ( gθ(x+ε) | y ) p(y)∑ y′ q ( gθ(x+ε) | y′ ) p(y′) ] . (6) In this case, CI(Y,ZE) is a lower bound on the true mutual information I(Y, ZE), allowing for its maximization in our objective. In fact, it corresponds to a bound originally proposed by Barber & Agakov (2003) (see their Eq. 3): The first term is simply the entropy h(Y ), because p(Y ) is known. The second term can be rewritten as the negative cross-entropy −hq(Y | ZE). For I(Y,ZE), we would have the negative entropy −h(Y | ZE) in its place, then Gibbs’ inequality leads directly to CI(Y, ZE) ≤ I(Y,ZE). The first expectation can be dropped during training, as it is model-independent. Note how the the second term can also be written as the expectation of the GMM’s log-posterior log q(y | z). Since all mixture components have unit covariance, the elements of Z are conditionally independent and the likelihood factorizes as q(z | y) = ∏j q(zj | y). Thus, q(y | z) can be interpreted as a naive Bayes classifier. In contrast to naive Bayes classifiers in data space, which typically perform badly because raw features are not conditionally independent, our training enforces this property in latent space and ensures accurate classification. Defining the loss L(N)Y as the empirical mean of the log-posterior in a training set {xi, yi, εi}Ni=1 of size N, we get L(N)Y = 1 N N∑ i=1 log N ( gθ(xi + εi);µyi , I ) p(yi)∑ y′ N ( gθ(xi + εi);µy′ , I ) p(y′) . (7) 3.3 The IB-INN-Loss and its Advantages Replacing the mutual information terms in Eq. 1 with their empirical estimates L(N)X and L (N) Y , our model parameters θ and {µ1, ..., µK} are trained by gradient descent of the IB-INN loss L(N)IB-INN = L (N) X − β L (N) Y (8) In the following, we will interpret and discuss the nature of the loss function in Eq. 8 and form an intuitive understanding of why it is more suitable than the class-conditional negativelog-likelihood (‘class-NLL’) traditionally used for normalizing-flow type generative classifiers: Lclass-NLL = −E log ( qθ(x|y) ) . The findings are represented graphically in Fig. 4. LX -term: As shown by Eq. 5, the term is the (unconditional) negative-log-likelihood loss used for normalizing flows, with the difference that q(Z) is a GMM rather than a unimodal Gaussian. We conclude that this loss term encourages the INN to become an accurate likelihood model under the marginalized latent distribution and to ignore any class information. LY -term: Examining Eq. 7, we see that for any pair (g(x+ ε), y), the cluster centers (µY 6=y) of the other classes are repulsed (by minimizing the denominator), while gθ(x+ ε) and the correct cluster center µy are drawn together. 
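A minimal sketch of this term (Eq. 7) makes the two effects explicit: the numerator pulls gθ(x+ε) towards the mean of its own class, while the normalising denominator pushes it away from all cluster centres. The helper name is ours; since all components have unit covariance, the Gaussian normalisation constants cancel between numerator and denominator.

```python
import torch

def ly_loss(z, y, mu, log_prior):
    """Mean GMM log-posterior log q(y | z) of Eq. 7 (to be maximised).

    z: (batch, d) latents g(x + eps), y: (batch,) integer labels,
    mu: (n_classes, d) cluster means, log_prior: (n_classes,) log p(y)
    """
    log_gauss = -0.5 * torch.cdist(z, mu) ** 2        # log N(z; mu_y, I) up to a constant
    log_joint = log_gauss + log_prior                  # numerator: attraction to the true class
    log_norm = torch.logsumexp(log_joint, dim=1)       # denominator: repulsion from all classes
    return (log_joint[torch.arange(len(y)), y] - log_norm).mean()
```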
Note that the class-NLL loss only captures the second aspect and lacks repulsion, resulting in a much weaker training signal. We can also view this in a different way: by substituting q(x|y) ∣∣det(Jx)∣∣−1 for q(z|y), the second summand of Eq. 6 simplifies to log q(y|x), since the Jacobian cancels out. This means that our LY loss directly maximizes the correct class probability, while ignoring the data likelihood. Again, this improves the training signal: as Fetaya et al. (2019) showed, the data likelihood will otherwise dominate the class-NLL loss, so that lack of classification accuracy is insufficiently penalized. Classical class-NLL loss: The class-NLL loss or an approximation thereof is used to train standard GCs. The IB-INN loss reduces to this case for β = 1, because the first summand in LX (cf. Eq. 4) cancels with the denominator in Eq. 7. Then, the INN no longer receives a penalty when latent mixture components overlap, and the GMM looses its class discriminatory power, as Fig. 4 illustrates: Points are only drawn towards the correct class, but there is no loss component repulsing them from the incorrect classes. As a result, all cluster centers tend to collapse together, leading the INN to effectively just model the marginal data likelihood (as found by Fetaya et al., 2019). Similarly, Wu et al. (2019) found that β = 1 is the minimum possible value to perform classification with discriminative IB methods. 4 Experiments In the following, we examine the properties of the IB-INN used as a GC, especially the quality of uncertainty estimates and OoD detection. We construct our IB-INN by combining the design efforts of various works on INNs and normalizing flows. In brief, we use a Real-NVP architecture consisting of affine coupling blocks (Dinh et al., 2017), with added improvements from recent works (Kingma & Dhariwal, 2018; Jacobsen et al., 2019, 2018; Ardizzone et al., 2019). A detailed description of the architecture is given in the appendix. We learn the set of means µY as free parameters jointly with the remaining model parameters in an end-to-end fashion using the loss in Eq. 8. The practical implementation of the loss is explained in the appendix. We apply two additional techniques while learning the model, label smoothing and loss rebalancing: Label smoothing Hard labels force the Gaussian mixture components to be maximally separated, so they drift continually further apart during training, leading to instabilities. Label smoothing (Szegedy et al., 2016) with smoothing factor 0.05 prevents this, and we also apply it to all baseline models. Loss rebalancing The following rebalancing scheme allows us to use the same hyperparameters when changing β between 5 orders of magnitude. Firstly, we divide the loss LX by the number of dimensions of X , which approximately matches its magnitude to the LY loss. We define a corresponding γ := β/dim(X) to stay consistent with the IB definition. Secondly, we scale the entire loss by a factor 2/(1 + γ). This ensures that it keeps the same magnitude when changing γ. L(N)IB = 2 1 + γ ( L(N)X dim(X) − γ L(N)Y ) (9) Finally, the noise amplitude σ should be chosen to satisfy two criteria: it should be small enough so that the Taylor expansions in the loss for σ → 0 are sufficiently accurate, and it should also not hinder the model’s performance. Our ablation provided in the Appendix indicates that both criteria are satisfied when σ / 0.25∆X , with the quantization step size ∆X , so we fix σ = 10−3 for the remaining experiments. 
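Putting the pieces together, one training step with the rebalanced loss of Eq. 9 can be sketched as follows. This is an illustration under the assumption that `inn(x)` returns the latent code together with log |det J|, and it reuses the two helpers sketched above; it is not the authors' implementation.

```python
import torch

def ib_inn_step(inn, mu, log_prior, x, y, sigma=1e-3, gamma=1.0):
    """One IB-INN loss evaluation with the rebalancing of Eq. 9."""
    x_noisy = x + sigma * torch.randn_like(x)                                # dequantization noise
    z, log_det = inn(x_noisy)                                                # assumed INN interface
    loss_x = -(gmm_marginal_log_density(z, mu, log_prior) + log_det).mean()  # Eq. 4 / Eq. 5
    loss_y = ly_loss(z, y, mu, log_prior)                                    # Eq. 7
    dim_x = x[0].numel()
    return 2.0 / (1.0 + gamma) * (loss_x / dim_x - gamma * loss_y)           # Eq. 9

# loss = ib_inn_step(inn, mu, log_prior, x_batch, y_batch, gamma=1.0)
# loss.backward(); optimizer.step()   # mu is trained jointly with the INN parameters
```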
4.1 Comparison of Methods In addition to the IB-INN, we train several alternative methods. For each, we use exactly the same INN model, or an equivalent feed-forward ResNet model. Every method has the exact same hyperparameters and training procedure, the only difference being the loss function and invertibility. Class-NLL: As a standard generative classifier, we firstly train an INN with a GMM in latent space naively as a conditional generative model, using the class-conditional maximum likelihood loss. Secondly, we also train a regularized version, to increase the classification accuracy. The regularization consists of leaving the class centroids µY fixed on a hyper-sphere, forcing some degree of class-separation. Feed-forward As a DC baseline, we train a standard ResNet (He et al., 2016) with softmax cross entropy loss. We replace each affine coupling block by a ResNet block, leaving all other hyperparameters the same. i-RevNet (Jacobsen et al., 2018): To rule out any differences stemming from the constraint of invertibility, we additionally train the INN as a standard softmax classifier, by projecting the outputs to class logits. While the architecture is invertible, it is not a generative model and trained just like a standard feed-forward classifier. Variational Information Bottleneck (VIB): To examine which observed behaviours are due to the IB in general, and what is specific to GCs, we also train the VIB (Alemi et al., 2017), a feed-forward DC, using a ResNet. We convert the authors definition of β to our γ for consistency. 4.2 Quantitative measurements RGB rotation (CIFAR10) Small noise (CIFAR10) QuickDraw ImageNet Figure 5: Examples from each OoD dataset used in the evaluation. The inlier data are original CIFAR10 images. In the following, we describe the scores used in Table 1. Bits/dim: The bits/dim metric is common for objectively comparing the performance of density estimation models such as normalizing flows, and is closely related to the KL divergence between real and estimated distributions. Details can be found e.g. in Theis et al. (2015). Calibration error: The calibration curve measures whether the confidence of a model agrees with its actual performance. All prediction outputs are binned according to their predicted probability P (‘confidence’), and it is recorded which fraction of predictions in each bin was correct, Q. For a perfectly calibrated model, we have P = Q, e.g. predictions with 70% confidence are correct 70% of the time. We use several metrics to measure deviations from this behaviour, largely in line with Guo et al. (2017). Specifically, we consider the expected calibration error (ECE, error weighted by bin count), the maximum calibration error (MCE, max error over all bins), and the integrated calibration error (ICE, summed error per bin), as well as the geometric mean of all three: 3 √ ECE ·MCE · ICE. The geometric mean is used because it properly accounts for the different magnitudes of the metrics. Exact definitions found in appendix. Increased out-of-distribution (OoD) prediction entropy: For data that is OoD, we expect from a model that it returns uncertain class predictions, as it has not been trained on such data. In the ideal case, each class is assigned the same probability of 1/(nr. classes). Ovadia et al. (2019) quantify this through the discrete entropy of the class prediction outputs H(Y |XOod). 
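As a small illustration of how these quantities can be computed from a matrix of predicted class probabilities, consider the sketch below; the binning scheme is one common choice, the names are ours, and the precise definitions used in the paper are given in its appendix.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence, average |accuracy - confidence| weighted by bin count."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

def mean_prediction_entropy(probs):
    """Average discrete entropy H(Y | X) of the class predictions."""
    return float(-(probs * np.log(np.clip(probs, 1e-12, None))).sum(axis=1).mean())
```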
To counteract the effect of less accurate models having higher prediction entropy overall, we report the difference between OoD and in-distribution test set H(Y |XOod)−H(Y |XIn distrib.). OoD detection score: We use OoD detection capabilities intrinsically built in to GCs. For this, we apply the recently proposed typicality test (Nalisnick et al., 2019a). This is a hypothesis test that sets an upper and lower threshold on the estimated likelihood, beyond which batches of inputs are classified as OoD. We apply the test to single input images (i.e. batch size 1). For quantification, we vary the detection threshold to produce a receiver operator characteristic (ROC), and compute the area under this curve (ROC-AUC) in percent. For short, we call this the OoD detection score. It will be 100 for perfectly separated in- and outliers, and 50 if each point is assigned a random likelihood. OoD datasets: The inlier dataset consist of CIFAR10/100 images, i.e. 32× 32 colour images showing 10/100 object classes. Additionally, we created four different OoD datasets, that cover different aspects, see Fig. 5. Firstly, we create a random 3D rotation matrix with a rotation angle of α = 0.3π, and apply it to the RGB color vectors of each pixel of CIFAR10 images. Secondly, we add random uniform noise with a small amplitude to CIFAR10 images, as an alteration of the image statistics. Thirdly, we use the QuickDraw dataset of hand drawn objects (Ha & Eck, 2018), and filter only the categories corresponding to CIFAR10 classes and color each grayscale line drawing randomly. Therefore the semantic content is the same, but the image modality is different. Lastly, we downscale the ImageNet validation set to 32 × 32 pixels. In this case, the semantic content is different, but the image statistics are very similar to CIFAR10. 4.3 Results Quantitative Model Comparison A comparison of all models is performed in Table 1 for CIFAR10, and in the appendix for CIFAR100. At the extreme γ →∞, the model behaves almost identically to a standard feed forward classifier using the same architecture (i-RevNet), and for γ = 0, it closely mirrors a conventionally trained GC, as the bits/dim are the same. We find the most favourable setting to be at γ = 1: Here, the classification error and the bits/dim each only suffer a 10% penalty compared to the extremes. The uncertainty quantification for IB-INN at this setting (calibration and OoD prediction entropy) is far better than for pure DCs. Against expectations, standard GCs have worse calibration error. Our hypothesis is that their predictions are too noisy and inaccurate for a positive effect to be visible. For OoD detection, the IB-INN and standard GCs are all comparable, as we would expect from the similar bits/dim. Fig. 6 shows the trade-off between the two extremes in more detail: at low γ, the OoD detection and uncertainty quantification are improved, at the cost of classification accuracy. The VIB behaves in agreement with the other DCs: it has consistently lower classification error but higher calibration error than the IB-INN. This confirms that the IB-INN’s behaviour is due to the application of IB to GCs exclusively. This does not mean that the IB-INN should be preferred over VIB, or vice versa. The main advantages of the VIB are the increased robustness to overfitting and adversarial attacks, aspects that we do not examine in this work. Latent Space Exploration To better understand what the IB-INN learns, we analyze the latent space in different ways. Firstly, Fig. 
7 shows the layout of the latent space GMM through a linear projection. We find that the clusters of ambiguous classes, e.g. truck and car, are connected in latent space, to account for uncertainty. Secondly, Fig. 9 shows interpolations in latent space between two test set images, using models trained with different values of γ. We observe that for low γ, the IB-INN has a well structured latent space, leading to good generative capabilities and plausible interpolations. For larger γ, class separation increases and interpolation quality continually degrades. Finally, generated images can give insight into the classification process, visualizing how the model understands each class. If a certain feature is not generated, this means it does not contribute positively to the likelihood, and in turn will be ignored for classification. Examples for this are shown in Fig. 8. 5 Conclusions We addressed the application of the Information Bottleneck (IB) as a loss function to Invertible Neural Networks (INNs) trained as generative models. We find that we can formulate an asymptotically exact version of the IB, which results in an INN that is a generative classifier. From our experiments, we conclude that the IB-INN provides high quality uncertainties and out-of-distribution detection, while reaching almost the same classification accuracy as standard feed-forward methods on CIFAR10 and CIFAR100. Acknowledgements LA received funding by the Federal Ministry of Education and Research of Germany project High Performance Deep Learning Framework (No 01IH17002). RM received funding from the Robert Bosch PhD scholarship. UK and CR received financial support from the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation program (grant agreement No 647769). We thank the Center for Information Services and High Performance Computing (ZIH) at Dresden University of Technology for generous allocations of computation time. Furthermore we thank our colleagues (in alphabetical order) Tim Adler, Felix Draxler, Clemens Fruböse, Jakob Kruse, Titus Leistner, Jens Müller and Peter Sorrenson for their help and fruitful discussions. Broader Impact As our IB-INN is not bound to any particular application, and applies to settings that can in principle already be solved with existing methods, we foresee no societal advantages or dangers in terms of direct application. More generally, we think accurate uncertainty quantification plays an important role in a safe and productive use of AI.
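Returning to the latent-space interpolations of Fig. 9 discussed in the results section, these require only a few lines once a model is trained. The sketch below is ours and assumes the INN exposes a forward map returning (z, log |det J|) and an `inverse` method, as is typical for coupling-block architectures.

```python
import torch

def latent_interpolation(inn, x_a, x_b, n_steps=8):
    """Decode images along a straight line between g(x_a) and g(x_b) in latent space."""
    z_a, _ = inn(x_a.unsqueeze(0))
    z_b, _ = inn(x_b.unsqueeze(0))
    ts = torch.linspace(0.0, 1.0, n_steps).view(-1, 1)
    z_path = (1.0 - ts) * z_a + ts * z_b
    return inn.inverse(z_path)          # assumed inverse pass of the coupling-block network
```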
1. What is the main contribution of the paper regarding the connection between Information Bottleneck principle and invertible neural networks? 2. What are the strengths of the proposed framework, particularly in its application to generative classifiers? 3. What is the reviewer's concern regarding the role of σ in the generative-discriminative interpolation, and how does it relate to the paper's weaknesses?
Summary and Contributions Strengths Weaknesses
Summary and Contributions [Update after author response: Thank you for your excellent response, especially regarding the effect of the hyperparameter sigma in the proposed framework. Though I would still like to know how O(\sigma^2) can be moved out of the log q(.) in Lines 158-159, this is a small technical question and does not affect the main conclusions of the paper much. Thus, overall, I am happy to keep my score as it is.] The paper proposes a framework that connects the Information Bottleneck principle to the training of invertible neural networks (INNs) by disturbing the input to the invertible map with controlled noise. Several approximations to the IB objective are derived to obtain a practical training objective, and an asymptotic bound for the approximations is also provided. The experimental evaluation illustrates the usefulness of the proposed framework for generative classifiers, where the IB trade-off hyperparameter controls the interpolation between good uncertainty quantification and outlier detection on one side and classification accuracy on the other. Strengths There are several things to like about this paper: - A principled way of constructing an IB loss for INNs, with strong intuitions for each of the loss components. The derived loss generalises the standard INN loss. - An asymptotic analysis of the empirical loss in the large-sample and small-noise limit. - A demonstration of interesting applications to uncertainty estimates and OOD detection. - Extensive experiments with interesting results on the interpolation between classification accuracy and generative capabilities. Weaknesses However, I find that the paper lacks a proper explanation/investigation of the role of \sigma in the generative-discriminative interpolation. \sigma is important in this problem setting because with \sigma = 0, the entire proposed framework is undefined. Thus, I think it is worth discussing/studying the role of \sigma in controlling the generative-discriminative interpolation (besides the stability reason, namely that \sigma makes the mutual information well-defined in this setting). Does such interpolation ability remain for a very small or very large value of \sigma?
NIPS
Title Robustness of Community Detection to Random Geometric Perturbations Abstract We consider the stochastic block model where connection between vertices is perturbed by some latent (and unobserved) random geometric graph. The objective is to prove that spectral methods are robust to this type of noise, even if they are agnostic to the presence (or not) of the random graph. We provide explicit regimes where the second eigenvector of the adjacency matrix is highly correlated to the true community vector (and therefore when weak/exact recovery is possible). This is possible thanks to a detailed analysis of the spectrum of the latent random graph, of its own interest. N/A Introduction In a d-dimensional random geometric graph, N vertices are assigned random coordinates in Rd, and only points close enough to each other are connected by an edge. Random geometric graphs are used to model complex networks such as social networks, the world wide web and so on. We refer to [19] - and references therein - for a comprehensive introduction to random geometric graphs. On the other hand, in social networks, users are more likely to connect if they belong to some specific community (groups of friends, political party, etc.). This has motivated the introduction of the stochastic block models (see the recent survey [1] and the more recent breakthrough [5] for more details), where in the simplest case, each of the N vertices belongs to one (and only one) of the two communities that are present in the network. The two types of connections – geometric graph vs. block model – are conceptually quite different and co-exist independently. Two users might be connected because they are “endogenously similar” (their latent coordinates are close enough to each others) or because they are “exogenously similar” (they belong to the same community). For instance, to oversimplify a social network, we can consider that two different types of connections can occur between users: either they are childhood friends (with similar latent variables) or they have the same political views (right/left wing). We therefore model these simultaneous types of interaction in social networks as a simple stochastic block model (with 2 balanced communities) perturbed by a latent geometric graph. More precisely, we are going to assume that the probability of endogenous connections between vertices i and j, with respective latent variables Xi, Xj 2 Rd, is given by the Gaussian1 kernel exp( kXi Xjk2) where is the (inverse) width. On the other hand, exogenous connections are defined by the block model where half of the N vertices belong to some community, half of them to the other one. The probability of connection between two members of the same community is equal to p1 and between two members from different communities is equal to p2. We also consider an extra parameter 1We emphasize here that geometric interactions are defined through some kernel so that different recovery regimes can be identified with respect to a unique, simple width parameter . Similarly, the choice of the Gaussian kernel might seem a bit specific and arbitrary, but this purely for the sake of presentation: our approach can be generalized to other kernels (the “constants” will be different; they are defined w.r.t. the kernel chosen). 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 2 [0, 1] to represent the respective strengths of exogenous vs. endogenous connections (and we assume that +max{p1, p2} 1 for technical reason). 
Overall, the probability of connection between i and j, of latent variable Xi and Xj is P i ⇠ j Xi, Xj = e kXi Xjk 2 + ⇢ p1 if i, j are in the same community p2 otherwise In stochastic block models, the key idea is to recover the two communities from the observed set of edges (and only from those observations, i.e., the latent variables Xi are not observed). This recovery can have different variants that we enumerate now (from the strongest to the weakest). Let us denote by 2 { ±1p N } N the normalized community vector illustrating to which community each vertex belong ( i = 1p N if i belongs the the first community and i = 1p N otherwise). Given the graph-adjency matrix A 2 {0, 1}N 2 , the objective is to output a normalized vector x 2 RN (i.e., with kxk = 1) such that, for some " > 0, Exact recovery: with probability tending to 1, >x = 1, thus x 2 { ±1p N } N Weak recovery: with probability tending to 1, >x " and x 2 { ±1p N } N Soft recovery: with probability tending to 1, >x " We recall here that if x is chosen at random, independently from , then >x would be of the order of 1p N , thus tends to 0. On the other hand, weak recovery implies that the vector x has (up to a change of sign) at least N2 (1 + ") coordinates equal to those of . Moreover, we speak of soft recovery (as opposed to hard recovery) in the third case by analogy to soft vs. hard classifiers. Indeed, given any normalized vector x 2 Rd, let us construct the vector sign(x) = 21{Xi 0} 1p N 2 { ±1p N } N . Then sign(x) is a candidate for weak/exact recovery. Standard comparisons between Hamming and Euclidian distance (see, e.g., [16]) relates soft to weak recovery as > sign(x) 4 >x 3; In particular, weak-recovery is ensured as soon as soft recovery is attained above the threshold of " = 3/4 (and obviously exact recovery after the threshold 1 1/4N ). For simplicity, we are going to assume2 that Xi are i.i.d., drawn from the 2-dimensional Gaussian distribution N (0, I2). In particular, this implies that the law Ai,j (equal to 1 if there is an edge between i and j and 0 otherwise) is a Bernoulli random variable (integrated over Xi and Xj) Ber ⇣ p1+p2 2 + 1+4 ⌘ ; Notice that Ai,j and Ai0,j0 are identically distributed but not independent if i = i0 or j = j0. Recovering communities can be done efficiently (in some regime) using spectral methods and we will generalize them to this perturbed (or mis-specified) model. For this purpose, we will need a precise and detailed spectral analysis of the random geometric graphs considered (this has been initiated in [20], [10] and [4] for instance). There has been several extensions of the standard stochastic block models to incorporate latent variables or covariables in perturbed stochastic block models. We can mention cases where covariables are observed (and thus the algorithm can take their values into account to optimize the community recovery) [25, 23, 9, 14], when the degree of nodes are corrected [12] or the case of labeled edges [13, 24, 15, 16, 26]. However, these papers do not focus on the very simple question of the robustness of recovery algorithm to (slight) mis-specifications in the model, i.e., to some small perturbations of the original model and this is precisely our original motivations. Regarding this question, [21] consider the robustness of spectral methods for a SBM perturbed by adversarial perturbation in the sparse degree setup. 
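Before turning to the analysis, a small simulation of this perturbed model may help fix ideas. The sketch below is ours (not part of the paper); `alpha` and `beta` denote the strength and inverse width of the geometric kernel, and the example call uses the parameter values of the experiments reported later (N = 2000, p1 = 2.5%, p2 = 1%, alpha = 0.97, beta = 70).

```python
import numpy as np

def sample_perturbed_sbm(n, p1, p2, alpha, beta, seed=None):
    """Adjacency matrix of a two-community SBM perturbed by a latent Gaussian geometric graph.

    Vertices 0..n/2-1 form one community, the rest the other.
    P(i ~ j) = alpha * exp(-beta * ||X_i - X_j||^2) + (p1 if same community else p2).
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, 2))                                  # latent positions in R^2
    sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    labels = np.concatenate([np.ones(n // 2), -np.ones(n - n // 2)])
    same = np.equal.outer(labels, labels)
    q = alpha * np.exp(-beta * sq_dist) + np.where(same, p1, p2)
    A = np.triu((rng.random((n, n)) < q).astype(float), 1)           # one draw per pair, no loops
    return A + A.T, labels / np.sqrt(n)                              # adjacency and normalized sigma

A, sigma_vec = sample_perturbed_sbm(2000, p1=0.025, p2=0.01, alpha=0.97, beta=70, seed=0)
```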
Can we prove that a specific efficient algorithm (here, based on spectral methods) still exactly/weakly/softly recover communities even if it is agnostic to the presence, or not, of endogenous noise ? Of course, if that noise is too big, then recovery is impossible (consider for instance the case = 0 and 0). However, and this is our main contribution, we are able 2The fact that d = 2 does not change much compared to d > 3; it is merely for the sake of computations; any Gaussian distribution N (0, 2I2) can be recovered by dividing by 2. to pinpoint specific range of perturbations (i.e., values of and ) such that spectral methods – in short, output the normalized second highest eigenvector – still manage to perform some recovery of the communities. Our model is motivated to simplify the exposition but can be generalized to more complicated models (more than two communities of different sizes). To be more precise, we will prove that: - if 1/ is in the same order than p1 and p2 (assuming that p1 ⇠ p2 is a standard assumption in stochastic block model), then soft recovery is possible under a mild assumption (p1 p22 4 (1+")); - if (p1 p2) goes to infinity, then exact recovery happens. However, we mention here that we do not consider the “sparse” case (when pi ⇠ an ), in which regimes where partial recovery is possible or not (and efficiently) are now clearly understood [7, 17, 8, 18], as the geometric graphs perturbes too much the delicate arguments. Our main results are summarised in Theorem 8 (when the different parameters are given) and Theorem 10 (without knowing them, the most interesting case). It is a first step for the study of the robustness of spectral methods in the presence of endogenous noise regarding the question of community detection. As mentioned before, those results highly rely on a careful and detailed analysis of the spectrum of the random graph adjencency matrix. This is the purpose of the following Section 1, which has its own interest in random graphs. Then we investigate the robustness of spectral methods in a perturbed stochastic block model, which is the main focus of the paper, in Section 2. Finally, more detailed analysis, other statements and some proofs are given in the Appendix. 1 Spectral analysis for the adjacency matrix of the random grah Let us denote by P the conditional expectation matrix (w.r.t the Gaussian kernel), where Pij = Pji = e ||Xi Xj || 2 , for i < j 2 [1, .., N ], and Pii = 0 for all i = 1, .., N . We will denote by µ1 µ2 · · · µN its ordered eigenvalues (in Section 2, µk are the eigenvalues of P ). 1.1 The case where is bounded We study apart the case where lim sup N!1 < 1. The simplest case corresponds to the case where log(N) ! 0 as N ! 1 as with probability one, each Pi,j converges to one. And as a consequence, the spectrum of P has a nonzero eigenvalue which converges to N (with probability arbitrarily close to 1). In the case where is not negligible w.r.t. 1log(N) , arguments to understand the spectrum of P – or at least its spectral radius – are a bit more involved. Proposition 1. Assume that (N) is a sequence such that limN!1 (N) = 0 0. Then there exists a constant C1( 0) such that the largest eigenvalue of P satisfies µ1(P ) NC1( 0) ! 1 as N ! 1. 1.2 The spectral radius of P when ! 1, ⌧ N/ lnN We now investigate the special case where ! 1, but when ⌧ N/ lnN (as in this regime the spectral radius ⇢(P ) of P does not vanish). We will show that ⇢(P ) is in the order of N2 . 
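This order of magnitude is easy to check numerically. The short sketch below (ours) builds the kernel matrix P for a moderate N and compares its largest eigenvalue with N/(2β); for N = 2000 and β = 70 both are of the order of 14.

```python
import numpy as np

def kernel_spectral_radius(n=2000, beta=70, seed=0):
    """Largest eigenvalue of P_ij = exp(-beta * ||X_i - X_j||^2) with zero diagonal."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, 2))
    sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-beta * sq_dist)
    np.fill_diagonal(P, 0.0)
    return np.linalg.eigvalsh(P).max()

print(kernel_spectral_radius(), 2000 / (2 * 70))   # both should be of the order N / (2 * beta)
```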
We formally state this case under the following Assumption (H1) (implying that ln ⌧ N ). ! 1 and 1 N lnN ! 1. (H1) Proposition 2. If Assumption (H1) holds then, with probability tending to one, N 2 ⇢(P ) N 2 (1 + o(1)) . Proof. By the Perron Frobenius theorem, one has that min i=1,...,N NX l=1 Pil ⇢(P ) max i=1,...,N NX l=1 Pil. To obtain an estimate of the spectral radius of P , we show that, with probability tending to 1, maxi P N l=1 Pil cannot exceed N 2 and for “a large enough number" of indices i, their connectivity satisfies NX l=1 Pil = N 2 (1 + o(1)) . The proof is going to be decomposed into three parts (each corresponding to a different lemma, whose proofs are delayed to Appendix B.). 1. We first consider only vertices close to 0, i.e., such that |Xi|2 2 log( ) . For those vertices,P j Pi,j is of the order of N/2 with probability close to 1. See Lemma 3 2. For the other vertices, farther away from 0, it is easier to only provide an upper bound onP j Pi,j with a similar proof. See Lemma 4 3. Then we show that the spectral radius has to be of the order N/2 by considering the subset J of vertices "close to 0" (actually introduced in the first step) and by proving that their inner connectivity – restricted to J –, must be of the order N/2 . See Lemma 5. Combining the following three Lemmas 3, 4 and 5 will immediately give the result. Lemma 3. Assume that Assumption (H1) holds, then, as N grows to infinity, P n 9i N s.t. |Xi| 2 2 ln , NX j=1 Pij N 2 o ⇣ N 2 ⌘o ! 1. Lemma 3 states that the connectivities of vertices close to the origin converge to their expectation (conditionally to Xi). Its proof decomposes the set of vertices into those that are close to i (the main contribution in the connectivity, with some concentration argument), far from i but close to the origin (negligible numbers) and those far from i and the origin (negligible contribution to the connectivity). The second step of the proof of Proposition 2 considers indices i such that |Xi|2 2 ln . Lemma 4. For indices i such that |Xi|2 2 ln one has with probability tending to 1 that NX j=1 Pij N 2 (1 + o(1)) . The proof just uses the fact that for those vertices, Pij are typically negligible. To get a lower bound on the spectral radius of P , we show that if one selects the submatrix PJ := (Pij)i,j2J where J is the collection of indices J = n 1 i N, |Xi| 2 2 ln o , (1) the spectral radius of PJ is almost N2 . This will give the desired estimate on the spectral radius of P . Lemma 5. Let J be the subset defined in (1) and PJ the associated sub matrix. Let µ1(J) denote the largest eigenvalue of PJ . Then, with h.p., one has that µ1(J) N 2 (1 o(1)). The proof relies on the fact that vertices close to the origin get the most contribution to their connectivity from the other vertices close to the origin. The constant 1/2 that arises in the Proposition 2 is a direct consequence of the choice of the Gaussian kernel. Had we chosen a different kernel, this constant would have been different (once the width parameter normalized appropriately). The techniques we developed can be used to compute it; this is merely a matter of computations, left as exercices. 2 A stochastic block model perturbed by a geometric graph 2.1 The model We consider in this section the stochastic block model, with two communities (it can easily be extended to the coexistence of more communities), yet perturbed by a geometric graph. More precisely, we assume that each member i of the network (regardless of its community) is characterized by an i.i.d. 
Gaussian vector Xi in R2 with distribution N (0, I2). The perturbed stochastic block model is characterized by four parameters: the two probabilities of intra-inter connection of communities (denoted respectively by p1 and p2 > 0) and two connectivity parameters , , chosen so that max(p1, p2) + 1: -In the usual stochastic block model, vertices i and j are connected with probability ri,j where rij = ⇢ p1 if Xi, Xj belong to the same community p2 otherwise , where p1 and p2 are in the same order (the ratio p1/p2 is uniformly bounded). -The geometric perturbation of the stochastic block model we consider is defined as follows. Conditionally on the values of Xi, the entries of the adjacency matrix A = (Aij) are independent (up to symmetry) Bernoulli random variables with parameter qij = e |Xi Xj | 2 + rij . We remind that the motivation is independent to incorporate the fact that members from two different communities can actually be “closer" in the latent space than members of the same community. Thus in comparison with preceding model, the matrix P of the geometric graph is now replaced with Q := P + ✓ p1J p2J p2J p1J ◆ , where we assume, without loss of generality, that Xi, i N/2 (resp. i N/2 + 1) belong to the same community. The matrix P0 := ✓ p1J p2J p2J p1J ◆ has two non zero eigenvalues which are 1 = N(p1 + p2)/2 with associated normalized eigenvector v1 = 1p N (1, 1, . . . 1)> and 2 = N(p1 p2)/2 associated to v2 = = 1p N (1, . . . , 1, 1, . . . 1)>. Thus, in principle, communities can be detected from the eigenvectors of P0 by using the fact that two vertices i, j such that v2(i)v2(j) = 1 belong to the same community. Our method can be generalized (using sign vectors) to more complicated models where the two communities are of different size, as well as to the case of k communities (and thus the matrix P0 has k non zero eigenvalues). For the sake of notations, we write the adjacency matrix of the graph as : A = P0 + P1 +Ac, where P1 = P with P the N ⇥ N -random symmetric matrix with entries (Pij) – studied in the previous section – and Ac is, conditionnally on the Xi’s a random matrix with independent Bernoulli entries which are centered. 2.2 Separation of eigenvalues: the easy case We are going to use spectral methods to identify communities. We therefore study in this section a regime where the eigenvalues of A are well separated and the second eigenvector is approximately v2, i.e. the vector which identifies precisely the two communities. Proposition 6. Assume that N(p1 p2) p N + N . Then, with probability tending to 1, the two largest eigenvalues of A denoted by ⇢1 ⇢2 are given by ⇢i = i(1 + o(1)), i = 1, 2. Furthermore, with probability tending to 1, associated normalized eigenvectors (with non negative first coordinate) denoted by w1 and w2 satisfy hvi, wii = 1 o(1); i = 1, 2. Proposition 6 implies that, in the regime considered, the spectral analysis of the adjacency matrix can be directly used to detect communities, in the same way it is a standard technique for the classical stochastic block model (if |p1 p2| is big enough compared to p1 + p2, which is the case here). Finding the exact threshold C0 such that if N(p1 p2) = C0( p N + N ) then the conclusion of Proposition 6 is still an open question. 2.3 Partial reconstruction when N p N(p1 + p2) From Theorem 2.7 in [2], the spectral norm of Ac cannot exceed ⇢(Ac) s N + r N( p1 + p2 2 +O( 2 )) ! (1 + ✏), with probability tending to 1, since the maximal connectivity of a vertex does not exceed N p1+p2 2 + 2 (1 + o(1)). 
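In the regime of Proposition 6, the estimator is simply the sign of the eigenvector associated with the second largest eigenvalue of the adjacency matrix. A sketch (ours), reusing `sample_perturbed_sbm` from the earlier block, is given below.

```python
import numpy as np

def spectral_communities(A):
    """Community assignment from the sign of the second eigenvector of A (Proposition 6 regime)."""
    _, vecs = np.linalg.eigh(A)          # eigenvalues in ascending order
    w2 = vecs[:, -2]                     # eigenvector of the second largest eigenvalue
    return np.sign(w2 + 1e-12)           # +/- 1 labels, defined up to a global sign

def overlap(est, truth):
    """Fraction of correctly classified vertices, up to the global sign ambiguity."""
    agree = (est == np.sign(truth)).mean()
    return max(agree, 1.0 - agree)

# A, sigma_vec = sample_perturbed_sbm(2000, 0.025, 0.01, alpha=0.97, beta=110, seed=0)
# print(overlap(spectral_communities(A), sigma_vec))
```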
In the specific regime where N 2 ⌧ r N p1 + p2 2 , standard techniques [5] of communities detection would work, at the cost of additional perturbation arguments. As a consequence, we will concentrate on the reconstruction of communities when N 2 r N p1 + p2 2 . This essentially means that the spectrum of Ac is blurred into that of P1. More precisely, we are from now going to consider the case where the noise induced by the latent random graph is of the same order of magnitude as the signal (which is the interesting regime): 90 < c,C < 1 s.t. 12 N 2 2 [c, C], 2 1 2 [c, C] and 2 p 1. (H2) If (H2) holds, then the spectrum of P0 + P1 overwhelms that of Ac. As a consequence, the problem becomes that of community detection based on P0 + P1, which will be done using spectral methods. To analyze the spectrum of P0 + P1, we will use extensively the resolvent identity [3] : consider ✓ 2 C \R and set S = P0 + P1;RS(✓) = (S ✓I) 1, R1(✓) := (P1 ✓I) 1. One then has that RS(I + P0R1) = R1, (2) where the variable ✓ is omitted for clarity when they are no possible confusion. Since P0 is a rank two matrix, then P0 can be written as P0 = 1v1v⇤1 + 2v2v⇤2 where v1 and v2 are the eigenvectors introduced before. Eigenvalues of S that are not eigenvalues of P1 are roots of the rational equation det(I +P0R1) = 0: det(I + P0R1) = 1 + 1 2hR1v1, v1ihR1v2, v2i+ 1hR1v1, v1i + 2hR1v2, v2i 1 2hR1v1, v2i 2 . (3) Let µ1 µ2 · · ·µN be the ordered eigenvalues of P1 with associated normalized eigenvectors w1, w2, . . . , wN , then one has that R1(✓) = P N j=1 1 µj ✓wjw ⇤ j . Denote, for every j 2 {1, .., N}, rj = hv1, wji and sj = hv2, wji, so that Equation (3) rewrites into det(I + P0R1(✓)) =: f 1, 2(✓) =1 + NX j=1 1 µj ✓ ( 1r 2 j + 2s 2 j ) + 1 2/2 X j 6=k 1 (µj ✓)(µk ✓) (rjsk rksj) 2 . (4) As mentioned before, we aim at using spectral methods to reconstruct communities based on the second eigenvector of S. As a consequence, these techniques may work only if (at least) two eigenvalues of S, that are roots of det(I + P0R1(✓)) = 0 exit the support of the spectrum of P1, i.e., such that they are greater than µ1. So we will examine conditions under which there exist two real solutions to Equation (4), with the restriction that they must be greater than µ1. If two such solutions exist, by considering the singularities in (2), then two eigenvalues of S indeed lie outside the spectrum of P1. 2.3.1 Separation of Eigenvalues in the rank two case. We now prove that two eigenvalues of S exit the support of the spectrum of P1. Recall the definition of the function f 1, 2 given in Equation (4) (or equivalently Equation (3)). One has that lim✓!1 f 1, 2(✓) = 1 , f 1, 2(✓( 1)) < 0 and similarly f 1, 2(✓( 2)) < 0, where ✓(·) is the function introduced in the rank 1 case. Thus two eigenvalues exit the spectrum of P1 if lim ✓!µ+1 f 1, 2(✓) > 0. First, let us make the following claim (a consequence of (H1) and (H2), see Lemma 9). lim inf N!1 1r 2 1 > 0. (H3) Lemma 7. Assume (H1), (H2) and (H3) hold and that there exists ✏ > 0 such that 2 4µ1(1 + ✏) = 4 N 2 (1 + ✏). Then at least two eigenvalues of P0 + P1 separate from the spectrum of P1. Proof. Let us first assume that µ1 is isolated; there exists ⌘ > 0 such that for N large enough µ1 > µ2 + ⌘. In this case, we look at the leading terms in the expansion of g as ✓ approaches µ1. It holds that f 1, 2(✓) ⇠ 1 ✓ µ1 0 @ 1 2 X j 2 1 ✓ µj (r1sj rjs1) 2 1r 2 1 2s 2 1 1 A . 
Using that the spectral radius of P1 does not exceed µ1, we deduce that f 1, 2(✓) 1 ✓ µ1 0 @ 1 2 2✓ X j 2 (r1sj rjs1) 2 1r 2 1 2s 2 1 1 A 1 ✓ µ1 ✓ 1 2 2✓ (r21 + s 2 1) 1r 2 1 2s 2 1 ◆ 1 ✓ µ1 1(r 2 1 + s 2 1)✏, provided 2 2µ1(1 + ✏). Note that if µ1 is isolated, the bound on 2 is improved by a factor of 2. Now we examine the case where µ1 is not isolated. We then define I ⇤ := {i : lim sup N!1 µi µ1 = 0}, and we define ṽi = P j2I⇤hvi, wjiwj , i = 1, 2. Then mimicking the above computations, we get f 1, 2(✓) 1 + o(1) ✓ µ1 ✓ 1 2 4✓ (||ṽ21 ||+ ||ṽ 2 2 ||) 1||ṽ 2 1 || 2||ṽ 2 2 || ◆ (5) so that two eigenvalues separate from the rest of the spectrum as soon as 2 > 4µ1(1 + ✏). To get that statement we simply modify step by step the above arguments. This finishes the proof of Lemma 7 as soon as lim infN!1 1r21 > 0. The threshold exhibited for the critical value of 2 might not be the optimal one, however it is in the correct scale as we do not a priori expect a separation if 2 µ1. 2.3.2 Partial reconstruction when N p1+p22 is known In the specific case where N p1+p22 is known beforehand for some reason, it is possible to weakly recover communities using Davis-Kahan sin(✓)-theorem under the same condition than Lemma 7. We recall that this theorem states that if M = ↵xx> and fM = exex> is the best rank-1 approximation of M 0, where both x and ex are normalized to kxk = kexk = 1, then min kx exk, kx+ exk 2 p 2 max{|↵|, | |} kM M 0 k. Theorem 8. Assume that (H1) and (H2) hold and that there exists ✏ > 0 such that 2 4µ1(1 + ✏) () p1 p2 2 2 (1 + ✏), then weak recovery of the communities is possible. Proof. We are going to appeal to Davis-Kahan theorem with respect to M = P0 N p1 + p2 2 v1v > 1 = N p1 p2 2 v2v > 2 and M 0 = A N p1 + p2 2 v1v > 1 = P0 + P1 +Ac N p1 + p2 2 v1v > 1 = P1 +Ac +M As a consequence, let us denote by ex the first eigenvector of M 0 of norm 1 so that 1 N dH(v2, sign(ex)) kv2 exk2 8 2 2 kP1 +Ack 2 = 8 2 2 µ 2 1(1 + o(1)) . Weak reconstruction is possible if the l.h.s. is strictly smaller than 1/2, hence if 2 4µ1(1+"). It is quite interesting that weak recovery is possible in the same regime where two eigenvalues of P0+P1 separate from the spectrum of P1. Yet the above computations imply that in order to compute ex, it is necessary to know p1+p22 (at least up to some negligible terms). In the standard stochastic block model, when = 0, this quantity can be efficiently estimated since the N(N 1)2 edges are independently drawn with overall probability p1+p22 . As a consequence, the average number of edges is a good estimate of p1+p22 up to its standard deviation. The latter is indeed negligible compared to p1+p2 2 as it is in the order of 1 N q p1+p2 2 . On the other hand, when 6= 0, such trivial estimates are no longer available; indeed, we recall that the probability of having an edge between Xi and Xj is equal to p1+p22 + 1+4 , where all those terms are unknown (and moreover, activations of edges are no longer independent). We study in the following section, the case where p1 + p2 is not known. First, we will prove that Assumption (H3) is actually always satisfied (notice that it was actually not required for weak recovery). In a second step, we will prove that soft recovery is possible, where we recall that this means we can output a vector x 2 RN such that kxk = 1 and x>v2 does not converge to 0. Moreover, we also prove that weak (and exact) recovery is possible if the different parameters p1, p2 and 1 are sufficiently separated. 
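For completeness, the estimator used in the proof of Theorem 8, which assumes N(p1 + p2)/2 is known, can be sketched as follows (ours); we take the eigenvector associated with the eigenvalue of largest magnitude as the best rank-one approximation.

```python
import numpy as np

def weak_recovery_known_sum(A, p1_plus_p2):
    """Theorem 8 estimator: leading eigenvector of A - N (p1+p2)/2 v1 v1^T, then take signs."""
    n = A.shape[0]
    v1 = np.ones(n) / np.sqrt(n)
    M = A - n * p1_plus_p2 / 2.0 * np.outer(v1, v1)
    vals, vecs = np.linalg.eigh(M)
    x = vecs[:, np.argmax(np.abs(vals))]          # eigenvector of the largest-magnitude eigenvalue
    return np.sign(x + 1e-12) / np.sqrt(n)        # candidate for the community vector v2
```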
2.3.3 The case of unknown $p_1 + p_2$

We now proceed to show that Assumption (H3) holds in the regime considered.

Lemma 9. Under (H1) and (H2), one has that 1) for some constant $C > 0$, $\gamma r_1^2 \geq C$, and 2) for some $\varepsilon > 0$ small enough, $\lambda_1 r_1^2 \geq \varepsilon$.

The first point of Lemma 9 implies (H3) with an explicit rate if $\gamma \leq A N^{1/2}$ for some constant $A$. The second point proves this result in the general case.

Theorem 10. If (H1) and (H2) hold true and $\lambda_2 > (2 + 2\sqrt{2})\,\mu_1$, then the correlation $|w_2^\top v_2|$ is uniformly bounded away from 0, hence soft recovery is always possible. Moreover, if the ratio $\lambda_2/\mu_1$ goes to infinity, then $|w_2^\top v_2|$ tends to 1, which gives weak (and even exact, at the limit) recovery.

An (asymptotic) formula for the level of correlation is provided at the end of the proof.

3 Experiments

The results provided are theoretical: we proved that two eigenvalues separate from the bulk of the spectrum if the different parameters are big enough and sufficiently far from each other, and if they are too close to each other it is also quite clear that spectral methods will not work. We illustrate these statements in Figure 1. It shows the effect of the perturbation on the spectrum of the stochastic block model for the following specific values: $N = 2000$, $p_1 = 2.5\%$, $p_2 = 1\%$, $\delta = 0.97$ and $\gamma \in \{50, 70, 100, 110\}$. Notice that for those specific values we get $\lambda_1 = 35$, $\lambda_2 = 15$ and $\mu_1 \in \{20, 14.3, 10, 9.1\}$; in particular, two eigenvalues are well separated in the unperturbed stochastic block model. The spectrum of the classical stochastic block model is coloured in red, while the spectrum of the perturbed one is in blue (the spectrum of the conditional adjacency matrix, given the $X_i$'s, is in gray). As expected, for the value $\gamma = 50$ the highest eigenvalue of $P_1$ is bigger than $\lambda_2$ and the spectrum of the expected adjacency matrix (in red) has some "tail". This prevents the separation of eigenvalues in the perturbed stochastic block model. Separation of eigenvalues starts to happen, empirically and for this range of parameters, around $\gamma = 70$, for which $\mu_1 \approx \lambda_2$. We also show how the correlation between the second highest eigenvector and $v_2$, the normalized vector indicating to which community each vertex belongs, evolves with respect to $\gamma$ for this choice of parameters; see Figure 2.
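The experiment described above can be reproduced in a few lines. The sketch below (numpy only, random seed arbitrary, plots of Figures 1 and 2 omitted) uses the parameter values stated in the text and prints, for each $\gamma$, the correlation $|\langle w_2, v_2\rangle|$ between the second highest eigenvector of $A$ and the community vector.

```python
import numpy as np

# Sketch reproducing the experiment of Section 3 (without the plots of Figures 1-2):
# sample the perturbed SBM with the stated parameters and print the correlation
# between the second highest eigenvector of A and the community vector v2.
rng = np.random.default_rng(2)
N, p1, p2, delta = 2000, 0.025, 0.01, 0.97       # values given in the text

sigma = np.concatenate([np.ones(N // 2), -np.ones(N // 2)])
v2 = sigma / np.sqrt(N)
X = rng.normal(size=(N, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
R = np.where(np.equal.outer(sigma, sigma), p1, p2)

for gamma in (50, 70, 100, 110):
    Q = delta * np.exp(-gamma * D2) + R          # connection probabilities q_ij
    A = (rng.random((N, N)) < Q).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                  # symmetric adjacency, no self-loops
    w2 = np.linalg.eigh(A)[1][:, -2]             # second highest eigenvector of A
    print(f"gamma = {gamma:3d} :  |<w2, v2>| = {abs(w2 @ v2):.3f}")
```

As in Figure 2, the printed correlation should increase with $\gamma$, since $\mu_1 \approx \delta N/(2\gamma)$ shrinks while $\lambda_2 = 15$ stays fixed.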
Conclusion

The method exposed above can be generalized easily. In the case where there are $k \geq 2$ communities of different sizes, $P_0$ has rank $k$. If $k$ eigenvalues of $S$ exit the support of the spectrum of $P_1$, then communities may be reconstructed using a set of $k$ associated (sign) eigenvectors, whether the parameters are known or not. We have proved that spectral methods to recover communities are robust to slight mis-specifications of the model, i.e., to the presence of endogenous noise not assumed by the model (especially when $p_1 + p_2$ is not known in advance). Our results hold in the regime where $\frac{1}{\gamma} \gg \frac{\log N}{N}$ and with 2 communities (balancedness and the small dimension of the latent variables were only assumed for the sake of computations); those theoretical results are validated empirically by the simulations provided in the Appendix. Obtaining the same robustness results for more than 2 communities, for different types of perturbations, and especially in the sparse regime $\frac{1}{\gamma} \sim p_i \sim \frac{1}{N}$, seems quite challenging, as standard spectral techniques in this regime involve the non-backtracking matrix [5], whose concentration properties are quite challenging to establish.

Broader Impact

This paper deals with the theoretical detection of communities in networks. Even if an entity wants to use community detection with some mercantile objectives (for instance, in order to target some specific community), it would probably use spectral methods, regardless of whether existing theory guarantees that they will work. At worst, our paper provides a positive answer: the very specific assumptions of stochastic block models are not required for theoretical (and certainly practical) recovery. On the other hand, theoretical robustness results such as ours can lead to substantial follow-up research on finding the transition between regimes in complex (almost ill-posed) models. Theory papers like this one are therefore win-win.

Acknowledgments and Disclosure of Funding

This research was supported by the Institut Universitaire de France. It was also supported in part by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with the Gaspard Monge Program for optimization, operations research and their interactions with data sciences, and by the French Agence Nationale de la Recherche under grant number ANR19-CE23-0026-04. No other competing interests.
1. What is the focus of the paper in terms of graph partitioning? 2. What is the contribution of the paper regarding the combination of two models? 3. What are the strengths of the paper, particularly in its theoretical analysis? 4. What are the weaknesses of the paper, especially regarding its relevance to the NeurIPS community? 5. Is there any limitation to the approach proposed in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

The paper discusses graph partitioning of the 2-block stochastic block model (SBM) when the realized graph includes edges that are drawn not only from the SBM but also from a random geometric model. The combination of the two models makes sense in practice, since many large-scale social networks include edges that are explained by endogenous features (geometric model) as well as exogenous features (SBM) of the nodes. The authors are interested in recovering the clusters that are created due to exogenous features using the second eigenvector of the adjacency matrix. The authors show exact and weak recovery guarantees for the clusters defined by the 2-block SBM. To the best of my knowledge, the results are novel.

Strengths

* The paper is theoretically sound. The exact and weak recovery results that are presented are novel and are closer to what we observe in practice.

Weaknesses

* Although the paper is solid theoretical and interesting work, I am not certain whether it is relevant to the NeurIPS community. I think this paper is best suited for theoretical statistics venues.
* Only the 2-block case is considered, which makes the results less interesting.
NIPS
1. What is the focus and contribution of the paper regarding community detection in networks? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical soundness and significance? 3. What are the weaknesses of the paper, especially regarding its claims and experimental validation? 4. Do you have any concerns or suggestions regarding the paper's relevance and potential applications? 5. Are there any limitations or areas that could benefit from further exploration in future works related to this topic?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

The paper deals with community detection for networks generated from the stochastic block model (SBM) perturbed with errors from random geometric graphs (RGG). The paper considers that the probability of edge formation in the SBM is perturbed by an error, which is a radial function of the distance between the latent embeddings of the nodes in a two-dimensional Euclidean space. The new model has the variance parameter of the radial kernel as an extra parameter along with the parameters of the SBM. The main contributions of the paper are: (1) deriving a theoretical result for spectral properties of RGG with a radial kernel; (2) deriving theoretical results for SBM perturbed with RGG noise.

Strengths

Soundness of claim: The paper provides rigorous theoretical justification for the claims made in the paper. The proofs of the two main results, one on random geometric graphs and another on SBM perturbed by RGG, are given in the Appendix. However, some of the proof structure and intermediate results are presented in the paper in the form of Propositions and Lemmas.
Significance and novelty: The paper is significant and novel in terms of giving theoretical results on the spectral structure of RGG and on the spectral structure of SBM with errors in the form of RGG noise. The paper extends the current knowledge of community detection for SBM and RGG.
Relevance: The results presented in the paper are relevant to the networks community, as they give a basic framework based on which further theoretical and practical studies will be possible in the presence of covariates and latent variables within RGG.

Weaknesses

Soundness of claim: The paper provides the proof structure of the theoretical results in the main paper, but the simulation study is relegated to the Supplement.
Significance and novelty: The paper has significant results, but the main mathematical toolbox for the proofs already exists and is in frequent use in the current literature. It would be better if the paper had gone for regimes sparser than degree of order log(n); the regime 1/gamma > log(n)/n is interesting, but the regime 1/gamma < log(n)/n could be more significant and might require novel techniques.
Relevance: The paper is relevant, but some follow-up discussion, such as the use of covariates in place of latent-variable-based noise for the analysis, might help in follow-up works.
NIPS
Title Robustness of Community Detection to Random Geometric Perturbations Abstract We consider the stochastic block model where connection between vertices is perturbed by some latent (and unobserved) random geometric graph. The objective is to prove that spectral methods are robust to this type of noise, even if they are agnostic to the presence (or not) of the random graph. We provide explicit regimes where the second eigenvector of the adjacency matrix is highly correlated to the true community vector (and therefore when weak/exact recovery is possible). This is possible thanks to a detailed analysis of the spectrum of the latent random graph, of its own interest. N/A Introduction In a d-dimensional random geometric graph, N vertices are assigned random coordinates in Rd, and only points close enough to each other are connected by an edge. Random geometric graphs are used to model complex networks such as social networks, the world wide web and so on. We refer to [19] - and references therein - for a comprehensive introduction to random geometric graphs. On the other hand, in social networks, users are more likely to connect if they belong to some specific community (groups of friends, political party, etc.). This has motivated the introduction of the stochastic block models (see the recent survey [1] and the more recent breakthrough [5] for more details), where in the simplest case, each of the N vertices belongs to one (and only one) of the two communities that are present in the network. The two types of connections – geometric graph vs. block model – are conceptually quite different and co-exist independently. Two users might be connected because they are “endogenously similar” (their latent coordinates are close enough to each others) or because they are “exogenously similar” (they belong to the same community). For instance, to oversimplify a social network, we can consider that two different types of connections can occur between users: either they are childhood friends (with similar latent variables) or they have the same political views (right/left wing). We therefore model these simultaneous types of interaction in social networks as a simple stochastic block model (with 2 balanced communities) perturbed by a latent geometric graph. More precisely, we are going to assume that the probability of endogenous connections between vertices i and j, with respective latent variables Xi, Xj 2 Rd, is given by the Gaussian1 kernel exp( kXi Xjk2) where is the (inverse) width. On the other hand, exogenous connections are defined by the block model where half of the N vertices belong to some community, half of them to the other one. The probability of connection between two members of the same community is equal to p1 and between two members from different communities is equal to p2. We also consider an extra parameter 1We emphasize here that geometric interactions are defined through some kernel so that different recovery regimes can be identified with respect to a unique, simple width parameter . Similarly, the choice of the Gaussian kernel might seem a bit specific and arbitrary, but this purely for the sake of presentation: our approach can be generalized to other kernels (the “constants” will be different; they are defined w.r.t. the kernel chosen). 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 2 [0, 1] to represent the respective strengths of exogenous vs. endogenous connections (and we assume that +max{p1, p2} 1 for technical reason). 
Overall, the probability of connection between i and j, of latent variable Xi and Xj is P i ⇠ j Xi, Xj = e kXi Xjk 2 + ⇢ p1 if i, j are in the same community p2 otherwise In stochastic block models, the key idea is to recover the two communities from the observed set of edges (and only from those observations, i.e., the latent variables Xi are not observed). This recovery can have different variants that we enumerate now (from the strongest to the weakest). Let us denote by 2 { ±1p N } N the normalized community vector illustrating to which community each vertex belong ( i = 1p N if i belongs the the first community and i = 1p N otherwise). Given the graph-adjency matrix A 2 {0, 1}N 2 , the objective is to output a normalized vector x 2 RN (i.e., with kxk = 1) such that, for some " > 0, Exact recovery: with probability tending to 1, >x = 1, thus x 2 { ±1p N } N Weak recovery: with probability tending to 1, >x " and x 2 { ±1p N } N Soft recovery: with probability tending to 1, >x " We recall here that if x is chosen at random, independently from , then >x would be of the order of 1p N , thus tends to 0. On the other hand, weak recovery implies that the vector x has (up to a change of sign) at least N2 (1 + ") coordinates equal to those of . Moreover, we speak of soft recovery (as opposed to hard recovery) in the third case by analogy to soft vs. hard classifiers. Indeed, given any normalized vector x 2 Rd, let us construct the vector sign(x) = 21{Xi 0} 1p N 2 { ±1p N } N . Then sign(x) is a candidate for weak/exact recovery. Standard comparisons between Hamming and Euclidian distance (see, e.g., [16]) relates soft to weak recovery as > sign(x) 4 >x 3; In particular, weak-recovery is ensured as soon as soft recovery is attained above the threshold of " = 3/4 (and obviously exact recovery after the threshold 1 1/4N ). For simplicity, we are going to assume2 that Xi are i.i.d., drawn from the 2-dimensional Gaussian distribution N (0, I2). In particular, this implies that the law Ai,j (equal to 1 if there is an edge between i and j and 0 otherwise) is a Bernoulli random variable (integrated over Xi and Xj) Ber ⇣ p1+p2 2 + 1+4 ⌘ ; Notice that Ai,j and Ai0,j0 are identically distributed but not independent if i = i0 or j = j0. Recovering communities can be done efficiently (in some regime) using spectral methods and we will generalize them to this perturbed (or mis-specified) model. For this purpose, we will need a precise and detailed spectral analysis of the random geometric graphs considered (this has been initiated in [20], [10] and [4] for instance). There has been several extensions of the standard stochastic block models to incorporate latent variables or covariables in perturbed stochastic block models. We can mention cases where covariables are observed (and thus the algorithm can take their values into account to optimize the community recovery) [25, 23, 9, 14], when the degree of nodes are corrected [12] or the case of labeled edges [13, 24, 15, 16, 26]. However, these papers do not focus on the very simple question of the robustness of recovery algorithm to (slight) mis-specifications in the model, i.e., to some small perturbations of the original model and this is precisely our original motivations. Regarding this question, [21] consider the robustness of spectral methods for a SBM perturbed by adversarial perturbation in the sparse degree setup. 
Can we prove that a specific efficient algorithm (here, based on spectral methods) still exactly/weakly/softly recover communities even if it is agnostic to the presence, or not, of endogenous noise ? Of course, if that noise is too big, then recovery is impossible (consider for instance the case = 0 and 0). However, and this is our main contribution, we are able 2The fact that d = 2 does not change much compared to d > 3; it is merely for the sake of computations; any Gaussian distribution N (0, 2I2) can be recovered by dividing by 2. to pinpoint specific range of perturbations (i.e., values of and ) such that spectral methods – in short, output the normalized second highest eigenvector – still manage to perform some recovery of the communities. Our model is motivated to simplify the exposition but can be generalized to more complicated models (more than two communities of different sizes). To be more precise, we will prove that: - if 1/ is in the same order than p1 and p2 (assuming that p1 ⇠ p2 is a standard assumption in stochastic block model), then soft recovery is possible under a mild assumption (p1 p22 4 (1+")); - if (p1 p2) goes to infinity, then exact recovery happens. However, we mention here that we do not consider the “sparse” case (when pi ⇠ an ), in which regimes where partial recovery is possible or not (and efficiently) are now clearly understood [7, 17, 8, 18], as the geometric graphs perturbes too much the delicate arguments. Our main results are summarised in Theorem 8 (when the different parameters are given) and Theorem 10 (without knowing them, the most interesting case). It is a first step for the study of the robustness of spectral methods in the presence of endogenous noise regarding the question of community detection. As mentioned before, those results highly rely on a careful and detailed analysis of the spectrum of the random graph adjencency matrix. This is the purpose of the following Section 1, which has its own interest in random graphs. Then we investigate the robustness of spectral methods in a perturbed stochastic block model, which is the main focus of the paper, in Section 2. Finally, more detailed analysis, other statements and some proofs are given in the Appendix. 1 Spectral analysis for the adjacency matrix of the random grah Let us denote by P the conditional expectation matrix (w.r.t the Gaussian kernel), where Pij = Pji = e ||Xi Xj || 2 , for i < j 2 [1, .., N ], and Pii = 0 for all i = 1, .., N . We will denote by µ1 µ2 · · · µN its ordered eigenvalues (in Section 2, µk are the eigenvalues of P ). 1.1 The case where is bounded We study apart the case where lim sup N!1 < 1. The simplest case corresponds to the case where log(N) ! 0 as N ! 1 as with probability one, each Pi,j converges to one. And as a consequence, the spectrum of P has a nonzero eigenvalue which converges to N (with probability arbitrarily close to 1). In the case where is not negligible w.r.t. 1log(N) , arguments to understand the spectrum of P – or at least its spectral radius – are a bit more involved. Proposition 1. Assume that (N) is a sequence such that limN!1 (N) = 0 0. Then there exists a constant C1( 0) such that the largest eigenvalue of P satisfies µ1(P ) NC1( 0) ! 1 as N ! 1. 1.2 The spectral radius of P when ! 1, ⌧ N/ lnN We now investigate the special case where ! 1, but when ⌧ N/ lnN (as in this regime the spectral radius ⇢(P ) of P does not vanish). We will show that ⇢(P ) is in the order of N2 . 
We formally state this case under the following Assumption (H1) (implying that ln ⌧ N ). ! 1 and 1 N lnN ! 1. (H1) Proposition 2. If Assumption (H1) holds then, with probability tending to one, N 2 ⇢(P ) N 2 (1 + o(1)) . Proof. By the Perron Frobenius theorem, one has that min i=1,...,N NX l=1 Pil ⇢(P ) max i=1,...,N NX l=1 Pil. To obtain an estimate of the spectral radius of P , we show that, with probability tending to 1, maxi P N l=1 Pil cannot exceed N 2 and for “a large enough number" of indices i, their connectivity satisfies NX l=1 Pil = N 2 (1 + o(1)) . The proof is going to be decomposed into three parts (each corresponding to a different lemma, whose proofs are delayed to Appendix B.). 1. We first consider only vertices close to 0, i.e., such that |Xi|2 2 log( ) . For those vertices,P j Pi,j is of the order of N/2 with probability close to 1. See Lemma 3 2. For the other vertices, farther away from 0, it is easier to only provide an upper bound onP j Pi,j with a similar proof. See Lemma 4 3. Then we show that the spectral radius has to be of the order N/2 by considering the subset J of vertices "close to 0" (actually introduced in the first step) and by proving that their inner connectivity – restricted to J –, must be of the order N/2 . See Lemma 5. Combining the following three Lemmas 3, 4 and 5 will immediately give the result. Lemma 3. Assume that Assumption (H1) holds, then, as N grows to infinity, P n 9i N s.t. |Xi| 2 2 ln , NX j=1 Pij N 2 o ⇣ N 2 ⌘o ! 1. Lemma 3 states that the connectivities of vertices close to the origin converge to their expectation (conditionally to Xi). Its proof decomposes the set of vertices into those that are close to i (the main contribution in the connectivity, with some concentration argument), far from i but close to the origin (negligible numbers) and those far from i and the origin (negligible contribution to the connectivity). The second step of the proof of Proposition 2 considers indices i such that |Xi|2 2 ln . Lemma 4. For indices i such that |Xi|2 2 ln one has with probability tending to 1 that NX j=1 Pij N 2 (1 + o(1)) . The proof just uses the fact that for those vertices, Pij are typically negligible. To get a lower bound on the spectral radius of P , we show that if one selects the submatrix PJ := (Pij)i,j2J where J is the collection of indices J = n 1 i N, |Xi| 2 2 ln o , (1) the spectral radius of PJ is almost N2 . This will give the desired estimate on the spectral radius of P . Lemma 5. Let J be the subset defined in (1) and PJ the associated sub matrix. Let µ1(J) denote the largest eigenvalue of PJ . Then, with h.p., one has that µ1(J) N 2 (1 o(1)). The proof relies on the fact that vertices close to the origin get the most contribution to their connectivity from the other vertices close to the origin. The constant 1/2 that arises in the Proposition 2 is a direct consequence of the choice of the Gaussian kernel. Had we chosen a different kernel, this constant would have been different (once the width parameter normalized appropriately). The techniques we developed can be used to compute it; this is merely a matter of computations, left as exercices. 2 A stochastic block model perturbed by a geometric graph 2.1 The model We consider in this section the stochastic block model, with two communities (it can easily be extended to the coexistence of more communities), yet perturbed by a geometric graph. More precisely, we assume that each member i of the network (regardless of its community) is characterized by an i.i.d. 
Gaussian vector Xi in R2 with distribution N (0, I2). The perturbed stochastic block model is characterized by four parameters: the two probabilities of intra-inter connection of communities (denoted respectively by p1 and p2 > 0) and two connectivity parameters , , chosen so that max(p1, p2) + 1: -In the usual stochastic block model, vertices i and j are connected with probability ri,j where rij = ⇢ p1 if Xi, Xj belong to the same community p2 otherwise , where p1 and p2 are in the same order (the ratio p1/p2 is uniformly bounded). -The geometric perturbation of the stochastic block model we consider is defined as follows. Conditionally on the values of Xi, the entries of the adjacency matrix A = (Aij) are independent (up to symmetry) Bernoulli random variables with parameter qij = e |Xi Xj | 2 + rij . We remind that the motivation is independent to incorporate the fact that members from two different communities can actually be “closer" in the latent space than members of the same community. Thus in comparison with preceding model, the matrix P of the geometric graph is now replaced with Q := P + ✓ p1J p2J p2J p1J ◆ , where we assume, without loss of generality, that Xi, i N/2 (resp. i N/2 + 1) belong to the same community. The matrix P0 := ✓ p1J p2J p2J p1J ◆ has two non zero eigenvalues which are 1 = N(p1 + p2)/2 with associated normalized eigenvector v1 = 1p N (1, 1, . . . 1)> and 2 = N(p1 p2)/2 associated to v2 = = 1p N (1, . . . , 1, 1, . . . 1)>. Thus, in principle, communities can be detected from the eigenvectors of P0 by using the fact that two vertices i, j such that v2(i)v2(j) = 1 belong to the same community. Our method can be generalized (using sign vectors) to more complicated models where the two communities are of different size, as well as to the case of k communities (and thus the matrix P0 has k non zero eigenvalues). For the sake of notations, we write the adjacency matrix of the graph as : A = P0 + P1 +Ac, where P1 = P with P the N ⇥ N -random symmetric matrix with entries (Pij) – studied in the previous section – and Ac is, conditionnally on the Xi’s a random matrix with independent Bernoulli entries which are centered. 2.2 Separation of eigenvalues: the easy case We are going to use spectral methods to identify communities. We therefore study in this section a regime where the eigenvalues of A are well separated and the second eigenvector is approximately v2, i.e. the vector which identifies precisely the two communities. Proposition 6. Assume that N(p1 p2) p N + N . Then, with probability tending to 1, the two largest eigenvalues of A denoted by ⇢1 ⇢2 are given by ⇢i = i(1 + o(1)), i = 1, 2. Furthermore, with probability tending to 1, associated normalized eigenvectors (with non negative first coordinate) denoted by w1 and w2 satisfy hvi, wii = 1 o(1); i = 1, 2. Proposition 6 implies that, in the regime considered, the spectral analysis of the adjacency matrix can be directly used to detect communities, in the same way it is a standard technique for the classical stochastic block model (if |p1 p2| is big enough compared to p1 + p2, which is the case here). Finding the exact threshold C0 such that if N(p1 p2) = C0( p N + N ) then the conclusion of Proposition 6 is still an open question. 2.3 Partial reconstruction when N p N(p1 + p2) From Theorem 2.7 in [2], the spectral norm of Ac cannot exceed ⇢(Ac) s N + r N( p1 + p2 2 +O( 2 )) ! (1 + ✏), with probability tending to 1, since the maximal connectivity of a vertex does not exceed N p1+p2 2 + 2 (1 + o(1)). 
In the specific regime where N 2 ⌧ r N p1 + p2 2 , standard techniques [5] of communities detection would work, at the cost of additional perturbation arguments. As a consequence, we will concentrate on the reconstruction of communities when N 2 r N p1 + p2 2 . This essentially means that the spectrum of Ac is blurred into that of P1. More precisely, we are from now going to consider the case where the noise induced by the latent random graph is of the same order of magnitude as the signal (which is the interesting regime): 90 < c,C < 1 s.t. 12 N 2 2 [c, C], 2 1 2 [c, C] and 2 p 1. (H2) If (H2) holds, then the spectrum of P0 + P1 overwhelms that of Ac. As a consequence, the problem becomes that of community detection based on P0 + P1, which will be done using spectral methods. To analyze the spectrum of P0 + P1, we will use extensively the resolvent identity [3] : consider ✓ 2 C \R and set S = P0 + P1;RS(✓) = (S ✓I) 1, R1(✓) := (P1 ✓I) 1. One then has that RS(I + P0R1) = R1, (2) where the variable ✓ is omitted for clarity when they are no possible confusion. Since P0 is a rank two matrix, then P0 can be written as P0 = 1v1v⇤1 + 2v2v⇤2 where v1 and v2 are the eigenvectors introduced before. Eigenvalues of S that are not eigenvalues of P1 are roots of the rational equation det(I +P0R1) = 0: det(I + P0R1) = 1 + 1 2hR1v1, v1ihR1v2, v2i+ 1hR1v1, v1i + 2hR1v2, v2i 1 2hR1v1, v2i 2 . (3) Let µ1 µ2 · · ·µN be the ordered eigenvalues of P1 with associated normalized eigenvectors w1, w2, . . . , wN , then one has that R1(✓) = P N j=1 1 µj ✓wjw ⇤ j . Denote, for every j 2 {1, .., N}, rj = hv1, wji and sj = hv2, wji, so that Equation (3) rewrites into det(I + P0R1(✓)) =: f 1, 2(✓) =1 + NX j=1 1 µj ✓ ( 1r 2 j + 2s 2 j ) + 1 2/2 X j 6=k 1 (µj ✓)(µk ✓) (rjsk rksj) 2 . (4) As mentioned before, we aim at using spectral methods to reconstruct communities based on the second eigenvector of S. As a consequence, these techniques may work only if (at least) two eigenvalues of S, that are roots of det(I + P0R1(✓)) = 0 exit the support of the spectrum of P1, i.e., such that they are greater than µ1. So we will examine conditions under which there exist two real solutions to Equation (4), with the restriction that they must be greater than µ1. If two such solutions exist, by considering the singularities in (2), then two eigenvalues of S indeed lie outside the spectrum of P1. 2.3.1 Separation of Eigenvalues in the rank two case. We now prove that two eigenvalues of S exit the support of the spectrum of P1. Recall the definition of the function f 1, 2 given in Equation (4) (or equivalently Equation (3)). One has that lim✓!1 f 1, 2(✓) = 1 , f 1, 2(✓( 1)) < 0 and similarly f 1, 2(✓( 2)) < 0, where ✓(·) is the function introduced in the rank 1 case. Thus two eigenvalues exit the spectrum of P1 if lim ✓!µ+1 f 1, 2(✓) > 0. First, let us make the following claim (a consequence of (H1) and (H2), see Lemma 9). lim inf N!1 1r 2 1 > 0. (H3) Lemma 7. Assume (H1), (H2) and (H3) hold and that there exists ✏ > 0 such that 2 4µ1(1 + ✏) = 4 N 2 (1 + ✏). Then at least two eigenvalues of P0 + P1 separate from the spectrum of P1. Proof. Let us first assume that µ1 is isolated; there exists ⌘ > 0 such that for N large enough µ1 > µ2 + ⌘. In this case, we look at the leading terms in the expansion of g as ✓ approaches µ1. It holds that f 1, 2(✓) ⇠ 1 ✓ µ1 0 @ 1 2 X j 2 1 ✓ µj (r1sj rjs1) 2 1r 2 1 2s 2 1 1 A . 
Using that the spectral radius of P1 does not exceed µ1, we deduce that f 1, 2(✓) 1 ✓ µ1 0 @ 1 2 2✓ X j 2 (r1sj rjs1) 2 1r 2 1 2s 2 1 1 A 1 ✓ µ1 ✓ 1 2 2✓ (r21 + s 2 1) 1r 2 1 2s 2 1 ◆ 1 ✓ µ1 1(r 2 1 + s 2 1)✏, provided 2 2µ1(1 + ✏). Note that if µ1 is isolated, the bound on 2 is improved by a factor of 2. Now we examine the case where µ1 is not isolated. We then define I ⇤ := {i : lim sup N!1 µi µ1 = 0}, and we define ṽi = P j2I⇤hvi, wjiwj , i = 1, 2. Then mimicking the above computations, we get f 1, 2(✓) 1 + o(1) ✓ µ1 ✓ 1 2 4✓ (||ṽ21 ||+ ||ṽ 2 2 ||) 1||ṽ 2 1 || 2||ṽ 2 2 || ◆ (5) so that two eigenvalues separate from the rest of the spectrum as soon as 2 > 4µ1(1 + ✏). To get that statement we simply modify step by step the above arguments. This finishes the proof of Lemma 7 as soon as lim infN!1 1r21 > 0. The threshold exhibited for the critical value of 2 might not be the optimal one, however it is in the correct scale as we do not a priori expect a separation if 2 µ1. 2.3.2 Partial reconstruction when N p1+p22 is known In the specific case where N p1+p22 is known beforehand for some reason, it is possible to weakly recover communities using Davis-Kahan sin(✓)-theorem under the same condition than Lemma 7. We recall that this theorem states that if M = ↵xx> and fM = exex> is the best rank-1 approximation of M 0, where both x and ex are normalized to kxk = kexk = 1, then min kx exk, kx+ exk 2 p 2 max{|↵|, | |} kM M 0 k. Theorem 8. Assume that (H1) and (H2) hold and that there exists ✏ > 0 such that 2 4µ1(1 + ✏) () p1 p2 2 2 (1 + ✏), then weak recovery of the communities is possible. Proof. We are going to appeal to Davis-Kahan theorem with respect to M = P0 N p1 + p2 2 v1v > 1 = N p1 p2 2 v2v > 2 and M 0 = A N p1 + p2 2 v1v > 1 = P0 + P1 +Ac N p1 + p2 2 v1v > 1 = P1 +Ac +M As a consequence, let us denote by ex the first eigenvector of M 0 of norm 1 so that 1 N dH(v2, sign(ex)) kv2 exk2 8 2 2 kP1 +Ack 2 = 8 2 2 µ 2 1(1 + o(1)) . Weak reconstruction is possible if the l.h.s. is strictly smaller than 1/2, hence if 2 4µ1(1+"). It is quite interesting that weak recovery is possible in the same regime where two eigenvalues of P0+P1 separate from the spectrum of P1. Yet the above computations imply that in order to compute ex, it is necessary to know p1+p22 (at least up to some negligible terms). In the standard stochastic block model, when = 0, this quantity can be efficiently estimated since the N(N 1)2 edges are independently drawn with overall probability p1+p22 . As a consequence, the average number of edges is a good estimate of p1+p22 up to its standard deviation. The latter is indeed negligible compared to p1+p2 2 as it is in the order of 1 N q p1+p2 2 . On the other hand, when 6= 0, such trivial estimates are no longer available; indeed, we recall that the probability of having an edge between Xi and Xj is equal to p1+p22 + 1+4 , where all those terms are unknown (and moreover, activations of edges are no longer independent). We study in the following section, the case where p1 + p2 is not known. First, we will prove that Assumption (H3) is actually always satisfied (notice that it was actually not required for weak recovery). In a second step, we will prove that soft recovery is possible, where we recall that this means we can output a vector x 2 RN such that kxk = 1 and x>v2 does not converge to 0. Moreover, we also prove that weak (and exact) recovery is possible if the different parameters p1, p2 and 1 are sufficiently separated. 
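As a small illustration of the estimator used in the proof of Theorem 8, and continuing the simulation sketch given after the model definition (again, this is not the authors' code), when $N(p_1+p_2)/2$ is known one can subtract the rank-one "average degree" part of the adjacency matrix and read the communities off the leading eigenvector of what remains.

```python
# Assumes A, N, p1, p2 and labels from the earlier simulation sketch.
v1 = np.ones(N) / np.sqrt(N)
lam1 = N * (p1 + p2) / 2.0

# M' = A - lambda_1 v1 v1^T; its leading eigenvector (in absolute value) estimates v2
M_prime = A - lam1 * np.outer(v1, v1)
vals, vecs = np.linalg.eigh(M_prime)
x_tilde = vecs[:, np.argmax(np.abs(vals))]
weak_estimate = np.sign(x_tilde)
overlap = abs(weak_estimate @ labels) / N    # agreement with the true communities, up to a global sign
```

This is precisely the Davis-Kahan argument of the proof: the closer $M'$ is to the rank-one matrix $\lambda_2 v_2 v_2^\top$, the larger the overlap, and weak recovery holds once the overlap exceeds 1/2.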
2.3.3 The case of unknown p1 + p2 We now proceed to show that Assumption (H3) holds in the regime considered. Lemma 9. Under (H1) and (H2), one has that 1) for some constant C > 0, r21 C. and 2) for some ✏ > 0 small enough, 1r21 ✏. The first point of Lemma 9 implies (H3) with an explicit rate if AN 1 2 for some constant A. The second point proves this result in the general case. Theorem 10. If (H1) and (H2) hold true and 1 > 2+2 2 then the correlation |w > 2 v2| is uniformly bounded away from 0 hence soft recovery is always possible. Moreover, if the ratio 2/µ1 goes to infinity, then |w > 2 v2| tends to 1, which gives weak (and even exact at the limit) recovery. An (asymptotic) formula for the level of correlation is provided at the end of the proof. 3 Experiments The different results provided are theoretical and we proved that two eigenvalues separate from the bulk of the spectrum if the different parameters are big enough and sufficiently far from each other. And if they are too close to each other, it is also quite clear that spectral methods will not work. However, we highlight these statements in Figure 1. It illustrates the effect of perturbation on the spectrum of the stochastic block models for the following specific values: N = 2000, p1 = 2.5%, p2 = 1%, = 0.97 and 2 {50, 70, 100, 110}. Notice that for those specific values with get 1 = 35, 2 = 15 and µ1 2 {20, 14.3, 10, 9.1}; in particular, two eigenvalues are well separated in the unperturbed stochastic block model. The spectrum of the classical stochastic block model is coloured in red while the spectrum of the perturbed one is in blue ( the spectrum of the conditionnal adjacency matrix, given the Xi’s is in gray). As expected, for the value of = 50, the highest eigenvalue of P1 is bigger than 2 and the spectrum of the expected adjacency matrix (in red) as some "tail". This prevents the separation of eigenvalues in the perturbed stochastic block model. Separation of eigenvalues starts to happen, empirically and for those range of parameters, around = 70 for whichp 1 µ1 = 10 2. We also provide how the correlations between the second highest eigenvector and , the normalized vector indicating to which community vertices belong, evolve with respect to for this choice of parameters, see Figure 2. Conclusion The method exposed hereabove can be generalized easily. In the case where there are k 2 communities of different sizes, P0 has rank k. If k eigenvalues of S exit the support of the spectrum of P1, then communities may be reconstructed using a set of k associated (sign) eigenvectors, whether the parameters are known or not. We have proved that spectral methods to recover communities are robust to slight mis-specifications of the model, i.e., the presence of endogenous noise not assumed by the model (especially when p1 + p2 is not known in advance). Our results hold in the regime where 1 logN N and with 2 communities (balancedness and the small dimension of latent variables were just assumed for the sake of computations) - those theoretical results are validated empirically by some simulations provided in the Appendix. Obtaining the same robustness results for more than 2 communities, for different types of perturbations and especially in the sparse regime 1 ⇠ pi ⇠ 1 N seems quite challenging as standard spectral techniques in this regime involve the non-backtracking matrix [5], and its concentration properties are quite challenging to establish. Broader Impact This paper deals with theoretical detection of community in networks. 
Even if an entity wants to use community detection for mercantile purposes (for instance, in order to target a specific community), it would probably use spectral methods regardless of whether the existing theory guarantees that they will work. At worst, our paper provides a positive answer: the very specific assumptions of stochastic block models are not required for theoretical (and certainly practical) recovery. On the other hand, theoretical robustness results such as ours can lead to substantial follow-up research on locating the transitions between regimes in complex (almost ill-posed) models. Theory papers like this one are therefore win-win.

Acknowledgments and Disclosure of Funding
This research was supported by the Institut Universitaire de France. It was also supported in part by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with the Gaspard Monge Program for optimization, operations research and their interactions with data sciences, and by the French Agence Nationale de la Recherche under grant number ANR19-CE23-0026-04. No other competing interests.
1. What is the focus and contribution of the paper regarding community detection and stochastic block models?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its experimental section?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This work provides a theoretical analysis of spectral methods for community detection on graphs whose topology is generated by a stochastic block model perturbed by random geometric latent variables. The work also discusses how different regimes of the model parameters affect the algorithm.

Strengths
(1) It is novel and interesting to consider the community detection problem under this new model: SBM plus a perturbation from random geometric latent variables.
(2) The analysis reads as rigorous and the logic is clear. I enjoyed reading it.

Weaknesses
(1) The template is odd: there is no section number before the introduction.
(2) Although this is a theoretical work, omitting a full section for empirical evaluation is a drawback. At the least, the dependence of the obtained regimes on the parameters should be demonstrated via simulation.

--- Thanks to the authors for preparing the response. I think this work has solid theory and I lean toward accepting it. However, as previously argued, more intuition should be provided and detailed proofs can be postponed to the supplement. Moreover, more numerical evaluation is needed before I can increase my overall evaluation.
NIPS
1. What is the focus of the paper regarding random graphs and community detection?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and robustness?
3. What are the weaknesses of the paper, especially in terms of its limitations and applicability?
4. How does the reviewer assess the significance and impact of the paper's contributions?
5. Are there any concerns or suggestions for future research related to the paper's topic?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The authors consider random graphs with nodes partitioned into two equal-sized communities, where edges are placed randomly with a probability that is the sum of a community-dependent term and a geometry-dependent term. The geometric part is defined through a Gaussian kernel, with 2-dimensional node features that are themselves i.i.d. Gaussian. The authors ask whether spectral methods based on the adjacency matrix of such graphs can enable non-trivial reconstruction of the underlying partition into two communities. They provide positive answers in specific parameter ranges.

Strengths
Little work has addressed the robustness of spectral clustering methods to the types of perturbations considered in this paper.

Weaknesses
The model is very specific, so the results are of limited applicability.
NIPS
Title Memory-Efficient Learning of Stable Linear Dynamical Systems for Prediction and Control

Abstract
Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach—in contrast to current methods for learning stable LDSs—updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an orders-of-magnitude improvement in reconstruction error and superior results in terms of control performance. In addition, it is provably more memory efficient, with an O(n^2) space complexity compared to O(n^4) of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail. The code of the proposed algorithm and animations of the results can be found at https://github.com/giorgosmamakoukas/MemoryEfficientStableLDS.

1 Introduction
Linear dynamical systems arise in many areas of machine learning and time series modeling with active research applications in computer vision [2], robotics [28], and control [8, 19, 20]. Linear representations are often desirable because they admit closed-form solutions, simplify modeling, and are general enough to be useful in many applications (e.g. Kalman filters). Further, there are well-established tools for the analysis (e.g. investigating properties of a system, such as stability and dissipativity), prediction, estimation, and control of linear systems [16].
They are, in general, computationally more efficient than nonlinear systems and highly promising candidates for real-time applications or data-intensive tasks. Last but not least, linear dynamical models can also be used to capture nonlinear systems using Koopman operators, which linearly evolve nonlinear functions of the states [22, 4, 27, 15]. LDSs are models that are learned in a self-supervised manner and are therefore promising for data-driven applications. Consequently, with the availability of higher computational power and the wide applicability of data-driven modeling, there is renewed interest in learning LDSs from data. Examples include learning spatio-temporal data for dynamic texture classification [2, 10], 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. video action recognition [24, 37], robotic tactile sensing [25] and nonlinear control using Koopman operators [4, 3]. Although linear system identification is a well-studied subject [26, 29], algorithms that learn LDSs from data have often overlooked important properties, such as stability. Stability describes the long-term behavior of a system and is critical both for numerical computations to converge and to accurately represent the true properties of many physical systems. When stability is overlooked, the learned model may be unstable even when the underlying dynamics are stable [7], in which case the long-term prediction accuracy dramatically suffers. This is why there are increasing efforts to impose stability on data-driven models [2, 18, 21, 11, 1]. However, the available methods do not scale well or are not applicable for control. In this work, we present a novel method for learning stable LDSs for prediction and control. Using a recent characterization of matrix stability [14], we derive a gradient-descent algorithm that iteratively improves the reconstruction error of a projected stable model. Contrary to current top-performing methods that start from the least-squares (LS) solution and iteratively push the LDSs towards the stability region, our method enforces stability in each step. As a result, it returns a stable LDS even after one single iteration. This feature can become crucial in online applications and time-sensitive tasks where obtaining a stable state-transition matrix as early in the optimization process as possible becomes of central importance. Furthermore, whereas alternative methods terminate upon reaching stability, our method can iterate on already stable solutions to improve the reconstruction error. It can therefore be used to further improve the solutions of other methods. Our proposed method is provably more memory efficient, with an O(n2) space complexity—n being the state dimension—compared to O(n4) of the competing alternative schemes for stable LDS. For systems with inputs, we derive the gradient directions that update both state and control linear matrices. By doing so, we expand the space of possible solutions and enable the discovery of models achieving lower error metrics compared to searching only for a stable state matrix which, to the best of our knowledge, is what the current top-performing algorithms do. To demonstrate the superior performance of our method, we test it on the task of learning dynamic textures from videos (using benchmark datasets that have been used to assess models that learn stable LDSs), as well as learning and controlling (in simulation and experiment) the Franka Emika Panda robotic arm [12]. 
When compared to the current top-performing models, a constraint generation (CG) [2] and a weighted least squares (WLS) [18] approach, our method achieves an orders-ofmagnitude lower reconstruction error, robustness even in low-resource settings, and better control performance. Notably, our approach is the first that tests the control performance of stable LDS; CG has been formulated but not evaluated for control tasks and it is not straightforward that WLS can be implemented for such applications, as the results in this paper suggest. The paper is structured as follows. In Section II, we review linear systems and stability. In Section III, we introduce and derive the proposed algorithm for learning stable LDSs. In Section IV, we compare our method to competing alternative algorithms that learn stable LDSs in prediction and control. In Section V, we discuss our findings and point to areas for future research. 2 Linear Dynamical Systems We consider states x ∈ RN , controls u ∈ RM and discrete time LDSs modeled as yt ≡ xt+1 = Axt +But, (1) where A ∈ RN×N and B ∈ RN×M are the state and control matrices, respectively. For systems without inputs, one can simply set B = 0. We use SA,B = {(A,B) | xt+1 = Axt + But} to denote the solution space of the matrices A and B that describe a LDS of the form (1). Further, let {λi(A)}Ni=1 be the eigenvalues of an N ×N matrix A in decreasing order of magnitude, ρ(A) ≡ |λ1(A)| be the spectral radius of A, and S be the set of all stable matrices of size N ×N . 2.1 Learning Data-Driven LDSs Next, we provide an overview of data-driven learning of LDSs. First, we consider systems without control for which CG and WLS were developed. Later, in Section 3, we modify the learning objective to include control terms and learn stable representations for LDSs with inputs. Given p pairs of measurements (xt, yt), learning LDSs from data typically takes the form  = inf A 1 2 ‖Y −AX‖2F , (2) where Y = [y1 y2 . . . yp] ∈ RN×p, X = [x1 x2 . . . xp] ∈ RN×p, and || · ||F is the Frobenius norm. The LS solution is then computed as Als = Y X †. (3) where X† denotes the Moore-Penrose inverse of X . The optimization problem in (2) does not impose stability constraints on Â. To learn stable LDSs, the learning objective is typically formulated as  = inf A∈S 1 2 ‖Y −AX‖2F , (4) and is highly nonconvex. The current top-performing methods for computing stable LDSs are a constraint generation [2] and a weighted least squares [18] approach. CG formulates the optimization as a quadratic program without constraints, which is an approximation to the original problem. It then iterates on the solution to the approximate optimization by adding constraints and terminates when a stable solution is reached. WLS determines the components of the LS transition matrix that cause instability and uses a weight matrix to enforce stability, while minimizing the reconstruction error. Note that both methods consider an entire sequence of observations, sayD ∈ RN×p, such that X = D[0:p−1] and Y = D[1:p], thereby making the assumption that all measurements belong to a unique time-series dataset. In the case of the WLS method, this assumption is necessary and the method fails dramatically for datasets with disjoint windows of time, as we demonstrate later in Section 4.3. CG and our proposed method, on the other hand, do not require contiguous observations. 2.2 Subspace Methods For high-dimensional LDSs, as is the case with image reconstruction, it is computationally prohibitive to learn a state transition matrix. 
Even for small images of size 100× 100 pixels, the dimensionality of the state transition matrix A would be 1004. For such high-dimensional systems, models are obtained using subspace methods that reduce the dimensionality of the learning task. Subspace methods for learning LDSs typically apply singular value decomposition (SVD) on the original dataset [17] decomposing the observation matrix D ≈ UΣV T , where U ∈ RN×r, V ∈ Rp×r are orthonormal matrices, Σ = {σ1, . . . , σr} ∈ Rr×r contains the r largest singular values, and r < N is the subspace dimension. Then, the learning optimization is performed on the reduced observation matrix Dr = ΣV T , with Xr = Dr[0:p−1] and Yr = Dr[1:p]. U is used to project the solutions back to the original state space. For a more complete description of standard subspace methods, the reader can refer to [6, 30, 33, 36, 35]. 3 The Algorithm The optimization problem for finding stable LDSs has traditionally only considered solving for a stable matrix A that minimizes the reconstruction loss. In this work, we formulate the objective as [Â, B̂] = inf A∈S,B 1 2 ‖Y −AX −BU‖2F , (5) to expand the solution space and solve both for a stable state matrix A and a matrix B. We denote the least-square solution for the control system by [Als, Bls] = Y · [X;U ]†. 3.1 Optimization Objective and Gradient Descents The proposed algorithm uses a recent characterization of stable matrices [14]. Specifically, a matrix A is stable if and only if it can be written as A = S−1OCS, where S is invertible, O is orthogonal, and C is a positive semidefinite contraction (that is, C is a positive semidefinite matrix with norm less than or equal to 1). By constraining the norm of C, one bounds the eigenvalues of A and ensures stability. Using this property, we formulate the optimization problem as [Â, B̂] = inf S 0,O orthogonal,C 0,‖C‖≤1 1 2 ‖Y − S−1OCSX −BU‖2F , (6) where  ≡ S−1OCS. Then, for f(S,O,C,B) = 12‖Y − S −1OCSX − BU‖2F , we derive the gradient directions with respect to the four matrices S,O,C, and B as follows: ∇Sf(S,O,C,B) =S−TEXTSTCTOTS−T − CTOTS−TEXT (7) ∇Of(S,O,C,B) =− S−TEXTSTCT (8) ∇Cf(S,O,C,B) =−OTS−TEXTST (9) ∇Bf(S,O,C,B) =− EUT (10) where E = Y − S−1OCSX − BU . Due to space constraints, the derivation of the gradients is presented in the supplementary material. We then use the fast projected gradient descent optimization from [13] to reach a local minimum of the reconstruction cost. The algorithmic steps are presented in Algorithm 1. The proposed algorithm enforces stability in every iteration step by projecting the solution onto the feasible set. For more details, the reader can refer to [13] or the provided code. Henceforth, we refer to our proposed algorithm as SOC. Note that, contrary to CG and WLS that search stable LDSs in SA,Bls by iterating over only A, SOC updates both linear matrices A and B, thereby expanding the feasible solution space to SA,B , where SA,B ⊃ SA,Bls . Further, SOC does not assume time continuity of the training measurements, contrary to WLS. The novelty of SOC with respect to [14] is the derivation of new gradient directions that not only account for control inputs, but that are also calculated so as to best fit training measurements instead of finding the nearest stable solution to an initial unstable matrix. Algorithm 1 SOC Algorithm using Fast Gradient Method (FGM) with restart from [13] Input: X,Y, U . State and control measurements Output: A ∈ S, B . 
Stable LDS 1: Initialize Z , (S,O,C,B), kmax, γo, λ ∈ (0, 1), α1 ∈ (0, 1) 2: Ẑ = Z 3: while k < kmax do 4: Zk = P(Ẑ − γ∇f(Ẑ)); γ = γo . P is the projection to the feasible set 5: while f(Zk) > f(Z) and γ ≥ γmin do . Line search to find gradient step size 6: Zk = P(Ẑ − γ∇f(Ẑ)) 7: γ = λγ 8: end while 9: if γ < γmin then . If line search fails, FGM restarts 10: Ẑ = Z; ak = a1 11: else . If cost is decreased, the solution is stored 12: αk+1 = 1 2 ( √ α4k + 4α 2 k − α2k); βk = αk(1−αk) α2k+αk+1 13: Ẑ = Zk + βk(Zk − Z); Z = Zk 14: end if 15: end while 16: A = S−1OCS 17: return A ∈ S, B 4 Experiments We implement LS, CG, WLS, and the proposed SOC method for learning LDSs and compare their performance on dynamical systems with and without control inputs. We omit the seminal work of [23] in our comparisons as it has been outperformed in terms of error, scalability, and execution time by both CG and WLS. For systems without inputs, we focus on learning dynamic texture from frame sequences extracted from videos using standard benchmark datasets [32, 5, 31]. For systems with inputs, we use experimental data from the Franka Emika Panda robotic manipulator and illustrate the learning and control performance of all the methods considered. We split the results in three parts: memory requirements, reconstruction error performance, and control performance. For an impartial assessment, we perform all comparisons in MATLAB using the publicly available code of the CG and WLS algorithms1. All simulations are performed using MATLAB R2019b on a machine with a 14-core Intel E5-2680v4 2.4-GHz CPU with 20GB RAM. 4.1 Memory Usage First, we compare the three algorithms on their memory demands. For an objective comparison, we only measure the size of all MATLAB workspace variables created by the algorithms. That is, we consider a matrix with 4 double-precision cells to use 32 bytes. We compare the algorithms on a sequence of frames extracted from a coffee cup video downloaded from Youtube2. We use this video because it exhibits dynamical motion and has a sufficient number of frames to allow for relatively higher subspace dimensions (the SVD decomposition limits the subspace dimension to be no larger than the number of frames). The results are shown in Figure 1. SOC scales proportionately to r2, whereas both CG and WLS scale proportionately to r4. This is because CG and WLS both rely on solving a quadratic programming problem with a state dimension n2, which generates matrices of dimension n4, whereas SOC uses a gradient descent approach that employs only matrix inversion, transposition, multiplication and addition, all of which are operations of space complexity O(n2). At r = 150, SOC uses about 5.04 MB of memory; CG and WLS use about 3.78 GB of memory and fail to run at higher dimensions due to memory constraints. Though such high dimensions may perhaps seem out of scope for the image reconstruction examples demonstrated next, they can typically occur in the field of robotics. For example, a recent study [3] used a linear data-driven Koopman representation with dimensions r = 330 to identify and control a pneumatic soft robotic arm. For this dimension, WLS and CG would require about 88 GB of memory and SOC would need about 25 MB. As a result, only SOC would be able to successfully train a stable Koopman model on a standard personal laptop and, as we show in the control performance section, failing to impose stability on the learned model can lead to unsafe robot movements. 
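To make the update loop of Algorithm 1 concrete, here is a minimal Python/NumPy sketch of one projected gradient step on the stable parameterization A = S^{-1}OCS. It is not the authors' MATLAB implementation: the gradient expressions follow (7)-(10), but the projection choices below (SVD projection of O onto orthogonal matrices, eigenvalue clipping for C and S) are simplified assumptions rather than the exact procedure of [13], and all names, step sizes, and tolerances are illustrative.

```python
import numpy as np

def soc_gradients(S, O, C, B, X, Y, U):
    """Gradients of f = 0.5 * ||Y - S^{-1} O C S X - B U||_F^2 w.r.t. S, O, C, B (cf. (7)-(10))."""
    S_inv = np.linalg.inv(S)
    E = Y - S_inv @ O @ C @ S @ X - B @ U          # residual of the current stable model
    SiT = S_inv.T
    gS = SiT @ E @ X.T @ S.T @ C.T @ O.T @ SiT - C.T @ O.T @ SiT @ E @ X.T
    gO = -SiT @ E @ X.T @ S.T @ C.T
    gC = -O.T @ SiT @ E @ X.T @ S.T
    gB = -E @ U.T
    return gS, gO, gC, gB

def project(S, O, C, delta=1e-6):
    """Simplified projection onto the feasible set: O orthogonal, C a PSD contraction, S invertible."""
    u, _, vt = np.linalg.svd(O)
    O = u @ vt                                     # nearest orthogonal matrix
    w, V = np.linalg.eigh(0.5 * (C + C.T))
    C = V @ np.diag(np.clip(w, 0.0, 1.0)) @ V.T    # eigenvalues clipped to [0, 1]
    w, V = np.linalg.eigh(0.5 * (S + S.T))
    S = V @ np.diag(np.maximum(w, delta)) @ V.T    # keep S safely invertible
    return S, O, C

def soc_step(S, O, C, B, X, Y, U, gamma=1e-4):
    """One projected gradient step; the returned A satisfies rho(A) <= 1 by construction."""
    gS, gO, gC, gB = soc_gradients(S, O, C, B, X, Y, U)
    S, O, C = project(S - gamma * gS, O - gamma * gO, C - gamma * gC)
    B = B - gamma * gB
    A = np.linalg.inv(S) @ O @ C @ S
    return S, O, C, B, A
```

In the paper, such steps are wrapped in the fast gradient method with restarts of Algorithm 1, with a line search on the step size and momentum on the iterates, but the essential point is already visible here: every iterate is projected back to the feasible set, so the model is stable after any number of updates.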
4.2 Error Performance

To measure the predictive accuracy of the learned representations, we use three benchmark datasets: UCLA [32], UCSD [5], and DynTex [31]. The UCLA dataset consists of 200 gray-scale frame sequences that demonstrate 50 different categories of dynamic motion (e.g. flame flickering, wave motion, flowers in the wind), each captured from 4 different viewpoints. Every frame sequence contains 75 frames of size 48 × 48 pixels. The UCSD dataset consists of 254 frame sequences showing highway traffic in different environmental conditions. Each sequence contains between 42 and 52 frames of size 48 × 48 pixels. For the DynTex dataset, we use 99 sequences from 5 groups of dynamic texture (smoke and rotation from the Beta subset and foliage, escalator, and flags from the Gamma subset) that exhibit periodic motion. The frames are of size 352 × 288 pixels. We convert the frames to grayscale and use the bicubic interpolation algorithm implemented in the Python library pillow to scale the frames down, without ratio distortion, to 48 × 39 pixels. Each DynTex sequence contains between 250 and 1576 frames.

As explained in Section 2, the dimensionality of images can be prohibitively high and cause slow computations or memory failures: the transition matrix for an image of size as small as 48 × 48 pixels would require hundreds of TBs for CG and WLS to run. For this reason, we use subspace methods to reduce the problem dimensionality. For each dataset, we consider a set of subspace dimensions r ∈ {3, . . . , 30}. Then, for each dimension, we use the four methods (LS, CG, WLS, and SOC) to obtain an LDS for each of the frame sequences. To compare the performance of the four algorithms, we use the reconstruction error relative to the LS solution: ē(Â) = (e(Â) − e(A_ls)) / e(A_ls) × 100. We report the results in Figure 2 (one column per dataset: UCLA, UCSD, and DynTex) and focus on three metrics: best error frequency, average reconstruction error, and execution time. The best error graphs plot the percentage of frame sequences for a given dimension for which an algorithm computes the best relative error (that is, lower than or equal to the other two methods). This metric credits all schemes that achieve the lowest error and so curves may add up to more than 100%. The average error and time graphs show the average reconstruction error and average execution time of all frame sequences for each dimension, respectively.

Across the three datasets, SOC computes the best error for more frame sequences than the other methods across any dimension. In the UCLA and UCSD datasets, the SOC best error frequency reaches 100% for the majority of the dimensions, contrary to less than 80% (for UCLA) and 40% (for UCSD) attained by CG and WLS. This means that, for the aforementioned datasets, CG and WLS only rarely find a better solution than SOC. While for the DynTex dataset the differences are not as pronounced, SOC still computes the best error for most of the frame sequences for any dimension and about 20% more often than the other methods. Second, SOC has orders-of-magnitude lower average relative error across all dimensions and datasets. Last, in terms of the execution time, SOC is slower than CG and WLS for low dimensions (r < 20). However, it scales better than the other two methods, such that it becomes faster than CG for r > 20. For the UCSD dataset, SOC and WLS become comparable in terms of average execution time near r = 30.
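For reference, the two headline metrics of Figure 2 are straightforward to compute. The NumPy sketch below assumes an array rel_errs of relative errors with one row per method and one column per frame sequence; it is our own illustrative code, not the MATLAB evaluation scripts used for the paper.

import numpy as np

def relative_error(e_hat, e_ls):
    """Reconstruction error relative to the LS solution, in percent."""
    return (e_hat - e_ls) / e_ls * 100.0

def best_error_frequency(rel_errs):
    """rel_errs: (n_methods, n_sequences) array of relative errors.
    Returns, per method, the percentage of sequences on which it attains the
    lowest error; ties are credited to every tied method, so the values can
    add up to more than 100%."""
    best = rel_errs.min(axis=0, keepdims=True)
    return np.isclose(rel_errs, best).mean(axis=1) * 100.0

# Toy example with three methods (e.g. CG, WLS, SOC) on four sequences:
rel = np.array([[5.0, 2.0, 7.0, 4.0],
                [6.0, 2.0, 9.0, 8.0],
                [1.0, 2.0, 3.0, 4.0]])
print(best_error_frequency(rel))   # -> [ 50.  25. 100.]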
This observation is in line with the fact that CG and WLS are high space-complexity algorithms that may even fail to perform at high dimensions due to memory limitations.

Next, we compare the three methods on the steam sequence (composed of 120 × 170 pixel images) and the fountain sequence (composed of 150 × 90 pixel images) from the MIT temporal texture database [34], together with the coffee cup sequence used in Figure 1. Results are shown in Table 1. To show the effect on the predictive quality of the solutions, we plot the frames reconstructed from the learned LDS for each method in Figure 3. Note that the LS solution degrades over time and generates unrealistic frames.

4.3 Control

In this section, we demonstrate the superior performance of our approach in control systems. Using experimental data gathered from the robotic arm Franka Emika Panda, we illustrate the improvement in both the reconstruction error of the learned model and the control performance. To use CG and WLS to compute a stable Â, we use the LS solution for the control matrix and modify the objective to

Â = inf_{A ∈ S} (1/2) ‖Y′ − AX‖_F^2 ,    (11)

where Y′ = Y − B_ls U. The learning performance is then measured as the % error increase when compared to the LS solution (A_ls, B_ls). Note that this error depends both on Â and B̂; for WLS and CG, we use the LS solution for the control matrix (B = B_ls), whereas SOC computes both A and B.

We collected training data on the experimental platform at 50 Hz, using a controller to manually move the robotic arm. We gathered 400 measurements (8 seconds) in eight separate runs. The training data, along with the experimental and simulation environments used in this section, are shown in Figure 4. Table 2 compares the performance of the SOC, CG, and WLS algorithms on learning stable models for the Franka Emika Panda robotic manipulator using experimental data. The performance is compared for different numbers of measurements p. As the data show, SOC is the only algorithm that never fails to find stable solutions, regardless of the amount of training data. As more measurements are used, the LS solution itself becomes more stable and CG and WLS are both able to converge to stable solutions. Further, the quality of CG solutions improves with more training measurements; the performance of SOC remains robust throughout the testing cases.

In Figure 5, we plot the reconstruction error for the three methods for different training data sizes. In this setting, however, measurement sets (x_t, y_t, u_t) are randomly drawn from the training data such that the matrices Y and X have discontiguous measurements. Note how such a choice worsens the performance of WLS that assumes continuity in the observation matrices. On the other hand, CG and SOC are similar in learning performance.

With regard to controlling the system, we use LQR control computed using the models from each algorithm and simulate tracking a figure-8 pattern. The states are the x, y, z coordinates of the end effector, the 7 joint angles of the arm, and the 7 joint angular velocities; the applied control is the joint velocities. The trajectory is generated in the y–z plane for the end effector; the desired angle configurations of the robotic arm are solved offline using inverse kinematics; the desired angular joint velocities are set to 0. LQR control is generated using Q = diag([c_i]) ∈ R^{17×17}, where c_i = 1 for i ∈ {1, . . . , 10} and 0 elsewhere, and R = 0.1 × I_{7×7}. The LS model is unstable and fails at the task.
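To illustrate how the tracking controller in this section can be obtained from a learned pair (A, B), the sketch below computes an infinite-horizon discrete-time LQR gain with the Q and R weights given above (17 states, 7 joint-velocity inputs). It is a schematic Python reconstruction rather than the exact controller code used in the experiments, and the learned matrices A and B are assumed to come from SOC or one of the baselines.

import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete-time infinite-horizon LQR gain K for u_t = -K (x_t - x_ref_t)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

n_states, n_controls = 17, 7
Q = np.diag([1.0] * 10 + [0.0] * 7)   # penalize end-effector position and joint angles only
R = 0.1 * np.eye(n_controls)

# A (17 x 17) and B (17 x 7) are the learned state and control matrices, e.g. from SOC:
# K = lqr_gain(A, B, Q, R)
# u = -K @ (x - x_desired)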
Similarly, WLS—despite the stable model—performs poorly, highlighting the need for both stability and fidelity of the learned representation. On the other hand, CG and SOC are similar in performance. To measure robustness across the initial conditions, we run 50 trials, varying both the y and z initial positions with displacements sampled uniformly in U(−0.1, 0.1). Across all trials, LS has an average error of 7556, WLS scores 38.73, CG scores 0.0810 and SOC scores 0.0799. Then, we test LQR control computed on the LDS obtained from the SOC algorithm in an experiment to demonstrate that the simulation results are indicative of the performance in a physical experiment. Figure 6 shows the control performance of three trials tracking a figure-8 pattern. Due to COVID-19 limitations, we were unable to extend the experimental tests. However, these results serve primarily to experimentally validate our approach and illustrate that the simulation results are an accurate prediction of the experimental behavior as well. 5 Conclusion In this work, we introduce a novel algorithm for computing stable LDSs. Compared to the current top-performing alternatives, the proposed scheme is significantly more memory efficient and, as a result, scales better for high-dimensional systems often encountered in image processing and robotic applications. Further, the suggested method outperforms the alternatives in terms of error and control performance, as demonstrated on three benchmark datasets and the Franka Emika Panda robotic arm experiments. These features make it a promising tool for compression and data-driven system identification tasks. Coupled with the ongoing research around Koopman-operator-based nonlinear control, this algorithm can be a promising candidate for high-dimensional nonlinear control and other machine learning applications, as well. Indeed, recent work in [9] uses Koopman operators to optimize training of neural network methods; also work in [38] learns deep neural network models for Koopman operators of nonlinear dynamical systems. Imposing stability on Koopman operators represented using basis functions learned via deep learning will combine the benefits of linear representations with the predictive power of neural networks. Broader Impact Our methods can improve robotic tasks that are safety-critical, particularly those that include a human-in-the-loop (such as rehabilitation devices and prosthetics) where the human-robot interaction dynamics are not known ahead of time. For such tasks, a robotic platform prioritizes stability and safety during operation. Unstable data-driven models may lead to catastrophic robotic behavior, as we demonstrate in our simulations with the Franka Emika Panda robotic arm. Our work provides a mechanism for online learning of models that satisfy stability constraints, improving the safety and reliability of closed-loop control of those systems. Acknowledgments and Disclosure of Funding First and foremost, we thank Nicolas Gillis for the communication and useful discussions about the fast gradient method. We also thank Ian Abraham for his help with the experimental testing on the Franka Emika Panda robot and Wenbing Huang for very kindly providing us with the datasets and results used previously to test the WLS algorithm. We also thank the anonymous reviewers for their invaluable comments that helped improve the quality of this manuscript. 
Last, we gratefully acknowledge the Research Computing Center (RCC) of the University of Chicago for providing the computing resources to execute our experiments and simulations. This work is supported by the National Science Foundation (IIS-1717951). Any opinions, findings, and conclusions or recommendations expressed in this material are solely those of the author(s) and do not necessarily reflect the views of any of the funding agencies or organizations.
1. What is the focus and contribution of the paper on linear dynamical systems? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity, theoretical analysis, and practical implications? 3. What are the weaknesses of the paper, especially regarding its relevance to machine learning and the availability of the associated code? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper leverages a novel characterization of the space of stable matrices to build a novel algorithm for estimating the parameters of a linear dynamical system from data. The algorithm provides marked improvements to the estimation error and the memory complexity compared to standard approaches. Strengths This is a very clear paper, with a simple premise, and a derivation which explores both the theoretical and practical consequences of the new algorithm. The math is clearly stated and detailed. The multiple evaluation domains provide a strong degree of confidence in the value of the approach. The improvements appear material and relevant. Weaknesses Topically, this work is not directly about machine learning since it describes a pure optimization method. However, estimation of LDS is relevant to the overall problem of deriving control strategies from data, so this isn't necessarily an issue, though it may be more valued if published at a venue more centrally devoted to control and optimization. No mention is made as to whether the attached code will be open-sourced, which would greatly enhance the value of the paper.
NIPS
1. What is the focus and contribution of the paper on learning stable linear models for dynamic systems? 2. What are the strengths of the proposed approach, particularly in terms of its space complexity and reconstruction error improvements? 3. What are the weaknesses of the paper, especially regarding its limitations in handling nonlinear dynamic systems? 4. Do you have any concerns about the applicability of the method when combined with other techniques like Koopman operator or Differential Dynamic Programming?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposed a new method to learn stable linear models for dynamic systems. It utilized a recent characterization of stable matrices and constructed a gradient descent method to improve the reconstruction error within the stable linear model subspace. The proposed method has O(n^2) space complexity and achieved reconstruction errors that were orders of magnitude better than the baselines. Strengths The proposed algorithm seems to be a significant improvement over the state-of-the-art: The space complexity is O(n^2) instead of O(n^4). The reconstruction error is orders of magnitude better than the baseline. Weaknesses Although linear dynamical systems are widely used because they are mathematically convenient, the majority of the dynamic systems in our world are nonlinear. This paper argues that the method can be used with the Koopman operator for nonlinear systems. However, no example is shown. Experiments combining the Koopman operator or Differential Dynamic Programming with the learned stable linear model would make the paper much stronger.
NIPS
Title Memory-Efficient Learning of Stable Linear Dynamical Systems for Prediction and Control Abstract Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach—in contrast to current methods for learning stable LDSs—updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an orders-ofmagnitude improvement in reconstruction error and superior results in terms of control performance. In addition, it is provably more memory efficient, with an O(n) space complexity compared to O(n) of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail. The code of the proposed algorithm and animations of the results can be found at https: //github.com/giorgosmamakoukas/MemoryEfficientStableLDS. N/A Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach—in contrast to current methods for learning stable LDSs—updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an orders-ofmagnitude improvement in reconstruction error and superior results in terms of control performance. In addition, it is provably more memory efficient, with an O(n2) space complexity compared to O(n4) of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail. The code of the proposed algorithm and animations of the results can be found at https: //github.com/giorgosmamakoukas/MemoryEfficientStableLDS. 1 Introduction Linear dynamical systems arise in many areas of machine learning and time series modeling with active research applications in computer vision [2], robotics [28], and control [8, 19, 20]. Linear representations are often desirable because they admit closed-form solutions, simplify modeling, and are general enough to be useful in many applications (e.g. Kalman filters). Further, there are well-established tools for the analysis (e.g. investigating properties of a system, such as stability and dissipativity), prediction, estimation, and control of linear systems [16]. 
They are, in general, computationally more efficient than nonlinear systems and highly promising candidates for real-time applications or data-intensive tasks. Last but not least, linear dynamical models can also be used to capture nonlinear systems using Koopman operators, which linearly evolve nonlinear functions of the states [22, 4, 27, 15]. LDSs are models that are learned in a self-supervised manner and are therefore promising for data-driven applications. Consequently, with the availability of higher computational power and the wide applicability of data-driven modeling, there is renewed interest in learning LDSs from data. Examples include learning spatio-temporal data for dynamic texture classification [2, 10], 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. video action recognition [24, 37], robotic tactile sensing [25] and nonlinear control using Koopman operators [4, 3]. Although linear system identification is a well-studied subject [26, 29], algorithms that learn LDSs from data have often overlooked important properties, such as stability. Stability describes the long-term behavior of a system and is critical both for numerical computations to converge and to accurately represent the true properties of many physical systems. When stability is overlooked, the learned model may be unstable even when the underlying dynamics are stable [7], in which case the long-term prediction accuracy dramatically suffers. This is why there are increasing efforts to impose stability on data-driven models [2, 18, 21, 11, 1]. However, the available methods do not scale well or are not applicable for control. In this work, we present a novel method for learning stable LDSs for prediction and control. Using a recent characterization of matrix stability [14], we derive a gradient-descent algorithm that iteratively improves the reconstruction error of a projected stable model. Contrary to current top-performing methods that start from the least-squares (LS) solution and iteratively push the LDSs towards the stability region, our method enforces stability in each step. As a result, it returns a stable LDS even after one single iteration. This feature can become crucial in online applications and time-sensitive tasks where obtaining a stable state-transition matrix as early in the optimization process as possible becomes of central importance. Furthermore, whereas alternative methods terminate upon reaching stability, our method can iterate on already stable solutions to improve the reconstruction error. It can therefore be used to further improve the solutions of other methods. Our proposed method is provably more memory efficient, with an O(n2) space complexity—n being the state dimension—compared to O(n4) of the competing alternative schemes for stable LDS. For systems with inputs, we derive the gradient directions that update both state and control linear matrices. By doing so, we expand the space of possible solutions and enable the discovery of models achieving lower error metrics compared to searching only for a stable state matrix which, to the best of our knowledge, is what the current top-performing algorithms do. To demonstrate the superior performance of our method, we test it on the task of learning dynamic textures from videos (using benchmark datasets that have been used to assess models that learn stable LDSs), as well as learning and controlling (in simulation and experiment) the Franka Emika Panda robotic arm [12]. 
When compared to the current top-performing models, a constraint generation (CG) [2] and a weighted least squares (WLS) [18] approach, our method achieves an orders-ofmagnitude lower reconstruction error, robustness even in low-resource settings, and better control performance. Notably, our approach is the first that tests the control performance of stable LDS; CG has been formulated but not evaluated for control tasks and it is not straightforward that WLS can be implemented for such applications, as the results in this paper suggest. The paper is structured as follows. In Section II, we review linear systems and stability. In Section III, we introduce and derive the proposed algorithm for learning stable LDSs. In Section IV, we compare our method to competing alternative algorithms that learn stable LDSs in prediction and control. In Section V, we discuss our findings and point to areas for future research. 2 Linear Dynamical Systems We consider states x ∈ RN , controls u ∈ RM and discrete time LDSs modeled as yt ≡ xt+1 = Axt +But, (1) where A ∈ RN×N and B ∈ RN×M are the state and control matrices, respectively. For systems without inputs, one can simply set B = 0. We use SA,B = {(A,B) | xt+1 = Axt + But} to denote the solution space of the matrices A and B that describe a LDS of the form (1). Further, let {λi(A)}Ni=1 be the eigenvalues of an N ×N matrix A in decreasing order of magnitude, ρ(A) ≡ |λ1(A)| be the spectral radius of A, and S be the set of all stable matrices of size N ×N . 2.1 Learning Data-Driven LDSs Next, we provide an overview of data-driven learning of LDSs. First, we consider systems without control for which CG and WLS were developed. Later, in Section 3, we modify the learning objective to include control terms and learn stable representations for LDSs with inputs. Given p pairs of measurements (xt, yt), learning LDSs from data typically takes the form  = inf A 1 2 ‖Y −AX‖2F , (2) where Y = [y1 y2 . . . yp] ∈ RN×p, X = [x1 x2 . . . xp] ∈ RN×p, and || · ||F is the Frobenius norm. The LS solution is then computed as Als = Y X †. (3) where X† denotes the Moore-Penrose inverse of X . The optimization problem in (2) does not impose stability constraints on Â. To learn stable LDSs, the learning objective is typically formulated as  = inf A∈S 1 2 ‖Y −AX‖2F , (4) and is highly nonconvex. The current top-performing methods for computing stable LDSs are a constraint generation [2] and a weighted least squares [18] approach. CG formulates the optimization as a quadratic program without constraints, which is an approximation to the original problem. It then iterates on the solution to the approximate optimization by adding constraints and terminates when a stable solution is reached. WLS determines the components of the LS transition matrix that cause instability and uses a weight matrix to enforce stability, while minimizing the reconstruction error. Note that both methods consider an entire sequence of observations, sayD ∈ RN×p, such that X = D[0:p−1] and Y = D[1:p], thereby making the assumption that all measurements belong to a unique time-series dataset. In the case of the WLS method, this assumption is necessary and the method fails dramatically for datasets with disjoint windows of time, as we demonstrate later in Section 4.3. CG and our proposed method, on the other hand, do not require contiguous observations. 2.2 Subspace Methods For high-dimensional LDSs, as is the case with image reconstruction, it is computationally prohibitive to learn a state transition matrix. 
Even for small images of size 100× 100 pixels, the dimensionality of the state transition matrix A would be 1004. For such high-dimensional systems, models are obtained using subspace methods that reduce the dimensionality of the learning task. Subspace methods for learning LDSs typically apply singular value decomposition (SVD) on the original dataset [17] decomposing the observation matrix D ≈ UΣV T , where U ∈ RN×r, V ∈ Rp×r are orthonormal matrices, Σ = {σ1, . . . , σr} ∈ Rr×r contains the r largest singular values, and r < N is the subspace dimension. Then, the learning optimization is performed on the reduced observation matrix Dr = ΣV T , with Xr = Dr[0:p−1] and Yr = Dr[1:p]. U is used to project the solutions back to the original state space. For a more complete description of standard subspace methods, the reader can refer to [6, 30, 33, 36, 35]. 3 The Algorithm The optimization problem for finding stable LDSs has traditionally only considered solving for a stable matrix A that minimizes the reconstruction loss. In this work, we formulate the objective as [Â, B̂] = inf A∈S,B 1 2 ‖Y −AX −BU‖2F , (5) to expand the solution space and solve both for a stable state matrix A and a matrix B. We denote the least-square solution for the control system by [Als, Bls] = Y · [X;U ]†. 3.1 Optimization Objective and Gradient Descents The proposed algorithm uses a recent characterization of stable matrices [14]. Specifically, a matrix A is stable if and only if it can be written as A = S−1OCS, where S is invertible, O is orthogonal, and C is a positive semidefinite contraction (that is, C is a positive semidefinite matrix with norm less than or equal to 1). By constraining the norm of C, one bounds the eigenvalues of A and ensures stability. Using this property, we formulate the optimization problem as [Â, B̂] = inf S 0,O orthogonal,C 0,‖C‖≤1 1 2 ‖Y − S−1OCSX −BU‖2F , (6) where  ≡ S−1OCS. Then, for f(S,O,C,B) = 12‖Y − S −1OCSX − BU‖2F , we derive the gradient directions with respect to the four matrices S,O,C, and B as follows: ∇Sf(S,O,C,B) =S−TEXTSTCTOTS−T − CTOTS−TEXT (7) ∇Of(S,O,C,B) =− S−TEXTSTCT (8) ∇Cf(S,O,C,B) =−OTS−TEXTST (9) ∇Bf(S,O,C,B) =− EUT (10) where E = Y − S−1OCSX − BU . Due to space constraints, the derivation of the gradients is presented in the supplementary material. We then use the fast projected gradient descent optimization from [13] to reach a local minimum of the reconstruction cost. The algorithmic steps are presented in Algorithm 1. The proposed algorithm enforces stability in every iteration step by projecting the solution onto the feasible set. For more details, the reader can refer to [13] or the provided code. Henceforth, we refer to our proposed algorithm as SOC. Note that, contrary to CG and WLS that search stable LDSs in SA,Bls by iterating over only A, SOC updates both linear matrices A and B, thereby expanding the feasible solution space to SA,B , where SA,B ⊃ SA,Bls . Further, SOC does not assume time continuity of the training measurements, contrary to WLS. The novelty of SOC with respect to [14] is the derivation of new gradient directions that not only account for control inputs, but that are also calculated so as to best fit training measurements instead of finding the nearest stable solution to an initial unstable matrix. Algorithm 1 SOC Algorithm using Fast Gradient Method (FGM) with restart from [13] Input: X,Y, U . State and control measurements Output: A ∈ S, B . 
Stable LDS 1: Initialize Z , (S,O,C,B), kmax, γo, λ ∈ (0, 1), α1 ∈ (0, 1) 2: Ẑ = Z 3: while k < kmax do 4: Zk = P(Ẑ − γ∇f(Ẑ)); γ = γo . P is the projection to the feasible set 5: while f(Zk) > f(Z) and γ ≥ γmin do . Line search to find gradient step size 6: Zk = P(Ẑ − γ∇f(Ẑ)) 7: γ = λγ 8: end while 9: if γ < γmin then . If line search fails, FGM restarts 10: Ẑ = Z; ak = a1 11: else . If cost is decreased, the solution is stored 12: αk+1 = 1 2 ( √ α4k + 4α 2 k − α2k); βk = αk(1−αk) α2k+αk+1 13: Ẑ = Zk + βk(Zk − Z); Z = Zk 14: end if 15: end while 16: A = S−1OCS 17: return A ∈ S, B 4 Experiments We implement LS, CG, WLS, and the proposed SOC method for learning LDSs and compare their performance on dynamical systems with and without control inputs. We omit the seminal work of [23] in our comparisons as it has been outperformed in terms of error, scalability, and execution time by both CG and WLS. For systems without inputs, we focus on learning dynamic texture from frame sequences extracted from videos using standard benchmark datasets [32, 5, 31]. For systems with inputs, we use experimental data from the Franka Emika Panda robotic manipulator and illustrate the learning and control performance of all the methods considered. We split the results in three parts: memory requirements, reconstruction error performance, and control performance. For an impartial assessment, we perform all comparisons in MATLAB using the publicly available code of the CG and WLS algorithms1. All simulations are performed using MATLAB R2019b on a machine with a 14-core Intel E5-2680v4 2.4-GHz CPU with 20GB RAM. 4.1 Memory Usage First, we compare the three algorithms on their memory demands. For an objective comparison, we only measure the size of all MATLAB workspace variables created by the algorithms. That is, we consider a matrix with 4 double-precision cells to use 32 bytes. We compare the algorithms on a sequence of frames extracted from a coffee cup video downloaded from Youtube2. We use this video because it exhibits dynamical motion and has a sufficient number of frames to allow for relatively higher subspace dimensions (the SVD decomposition limits the subspace dimension to be no larger than the number of frames). The results are shown in Figure 1. SOC scales proportionately to r2, whereas both CG and WLS scale proportionately to r4. This is because CG and WLS both rely on solving a quadratic programming problem with a state dimension n2, which generates matrices of dimension n4, whereas SOC uses a gradient descent approach that employs only matrix inversion, transposition, multiplication and addition, all of which are operations of space complexity O(n2). At r = 150, SOC uses about 5.04 MB of memory; CG and WLS use about 3.78 GB of memory and fail to run at higher dimensions due to memory constraints. Though such high dimensions may perhaps seem out of scope for the image reconstruction examples demonstrated next, they can typically occur in the field of robotics. For example, a recent study [3] used a linear data-driven Koopman representation with dimensions r = 330 to identify and control a pneumatic soft robotic arm. For this dimension, WLS and CG would require about 88 GB of memory and SOC would need about 25 MB. As a result, only SOC would be able to successfully train a stable Koopman model on a standard personal laptop and, as we show in the control performance section, failing to impose stability on the learned model can lead to unsafe robot movements. 
4.2 Error Performance

To measure the predictive accuracy of the learned representations, we use three benchmark datasets: UCLA [32], UCSD [5], and DynTex [31]. The UCLA dataset consists of 200 gray-scale frame sequences that demonstrate 50 different categories of dynamic motion (e.g. flame flickering, wave motion, flowers in the wind), each captured from 4 different viewpoints. Every frame sequence contains 75 frames of size 48 × 48 pixels. The UCSD dataset consists of 254 frame sequences showing highway traffic in different environmental conditions. Each sequence contains between 42 and 52 frames of size 48 × 48 pixels. For the DynTex dataset, we use 99 sequences from 5 groups of dynamic texture (smoke and rotation from the Beta subset and foliage, escalator, and flags from the Gamma subset) that exhibit periodic motion. The frames are of size 352 × 288 pixels. We convert the frames to grayscale and use the bicubic interpolation algorithm implemented in the Python library pillow to scale down the frames without ratio distortion to 48 × 39 pixels. Each DynTex sequence contains between 250 and 1576 frames.

As explained in Section 2, the dimensionality of images can be prohibitively high and cause slow computations or memory failures: the transition matrix for an image of size as small as 48 × 48 pixels would require hundreds of TBs for CG and WLS to run. For this reason, we use subspace methods to reduce the problem dimensionality. For each dataset, we consider a set of subspace dimensions r ∈ {3, . . . , 30}. Then, for each dimension, we use the four methods (LS, CG, WLS, and SOC) to obtain an LDS for each of the frame sequences. To compare the performance of the four algorithms, we use the reconstruction error relative to the LS solution, (e(Â) − e(A_ls)) / e(A_ls) × 100%. We report the results in Figure 2 and focus on three metrics: best error frequency, average reconstruction error, and execution time. The best error graphs plot the percentage of frame sequences for a given dimension for which an algorithm computes the best relative error (that is, lower than or equal to the other two methods). This metric credits all schemes that achieve the lowest error, and so curves may add up to more than 100%. The average error and time graphs show the average reconstruction error and average execution time over all frame sequences for each dimension, respectively.

Across the three datasets, SOC computes the best error for more frame sequences than the other methods at every dimension. In the UCLA and UCSD datasets, the SOC best error frequency reaches 100% for the majority of the dimensions, compared to less than 80% (for UCLA) and 40% (for UCSD) attained by CG and WLS. This means that, for the aforementioned datasets, CG and WLS only rarely find a better solution than SOC. While for the DynTex dataset the differences are not as pronounced, SOC still computes the best error for most of the frame sequences at every dimension, and about 20% more often than the other methods. Second, SOC has orders-of-magnitude lower average relative error across all dimensions and datasets. Last, in terms of the execution time, SOC is slower than CG and WLS for low dimensions (r < 20). However, it scales better than the other two methods, such that it becomes faster than CG for r > 20. For the UCSD dataset, SOC and WLS become comparable in terms of average execution time near r = 30. This observation is in line with the fact that CG and WLS are high space-complexity algorithms that may even fail to perform at high dimensions due to memory limitations.
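The evaluation pipeline just described (SVD reduction to a subspace of dimension r, fitting in the subspace, and reporting error relative to the LS solution) can be sketched in a few lines of NumPy. The sketch below only produces the LS baseline; the stable estimates it is compared against would come from SOC, CG, or WLS. Function names are illustrative.

import numpy as np

def reduce_to_subspace(D, r):
    # SVD-based reduction of an observation matrix D (pixels x frames) to dimension r.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Ur = U[:, :r]                      # basis used to map solutions back to pixel space
    Dr = np.diag(s[:r]) @ Vt[:r, :]    # reduced observation matrix Sigma V^T
    return Ur, Dr

def ls_fit(Dr):
    # Least-squares transition matrix on the reduced data, as in (2)-(3).
    Xr, Yr = Dr[:, :-1], Dr[:, 1:]
    return Yr @ np.linalg.pinv(Xr), Xr, Yr

def reconstruction_error(A, Xr, Yr):
    return 0.5 * np.linalg.norm(Yr - A @ Xr, 'fro') ** 2

def relative_error_percent(A_hat, A_ls, Xr, Yr):
    # Reconstruction error of a stable estimate relative to the LS solution, in percent.
    e_hat = reconstruction_error(A_hat, Xr, Yr)
    e_ls = reconstruction_error(A_ls, Xr, Yr)
    return (e_hat - e_ls) / e_ls * 100.0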
Next, we compare the three methods on the steam sequence (composed of 120 × 170 pixel images) and the fountain sequence (composed of 150 × 90 pixel images) from the MIT temporal texture database [34], together with the coffee cup sequence used in Figure 1. Results are shown in Table 1. To show the effect on the predictive quality of the solutions, we plot the frames reconstructed from the learned LDS for each method in Figure 3 (frames f = 100, 500, and 1000). Note that the LS solution degrades over time and generates unrealistic frames.

4.3 Control

In this section, we demonstrate the superior performance of our approach in control systems. Using experimental data gathered from the Franka Emika Panda robotic arm, we illustrate the improvement in both the reconstruction error of the learned model and the control performance. To use CG and WLS to compute a stable Â, we use the LS solution for the control matrix and modify the objective to

Â = inf_{A ∈ S} (1/2) ‖Y′ − AX‖²_F ,   (11)

where Y′ = Y − B_ls U. The learning performance is then measured as the % error increase when compared to the LS solution (A_ls, B_ls). Note that this error depends on both Â and B̂; for WLS and CG, we use the LS solution for the control matrix (B = B_ls), whereas SOC computes both A and B.

We collected training data on the experimental platform at 50 Hz, using a controller to manually move the robotic arm. We gathered 400 measurements (8 seconds) in eight separate runs. The training data, along with the experimental and simulation environments used in this section, are shown in Figure 4. Table 2 compares the performance of the SOC, CG, and WLS algorithms on learning stable models for the Franka Emika Panda robotic manipulator using experimental data. The performance is compared for different numbers of measurements p. As the data show, SOC is the only algorithm that never fails to find stable solutions, regardless of the amount of training data. As more measurements are used, the LS solution itself becomes more stable and CG and WLS are both able to converge to stable solutions. Further, the quality of CG solutions improves with more training measurements; the performance of SOC remains robust throughout the testing cases.

In Figure 5, we plot the reconstruction error for the three methods for different training data sizes. In this setting, however, measurement sets (x_t, y_t, u_t) are randomly drawn from the training data, such that the matrices Y and X have discontiguous measurements. Note how such a choice worsens the performance of WLS, which assumes continuity in the observation matrices. On the other hand, CG and SOC are similar in learning performance.

With regard to controlling the system, we use LQR control computed using the models from each algorithm and simulate tracking a figure-8 pattern. The states are the x, y, z coordinates of the end effector, the 7 joint angles of the arm, and the 7 joint angular velocities; the applied control is the joint velocities. The trajectory is generated in the y − z plane for the end effector; the desired angle configurations of the robotic arm are solved offline using inverse kinematics; the desired angular joint velocities are set to 0. LQR control is generated using Q = diag([c_i]) ∈ R^{17×17}, where c_i = 1 for i ∈ {1, 10} and 0 elsewhere, and R = 0.1 × I_{7×7}.
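As a sketch of how a learned pair (A, B) is turned into the tracking controller described above, the snippet below solves the infinite-horizon discrete-time LQR problem with SciPy and rolls the closed loop out against a reference trajectory. The weight matrices follow the text (with a small regularization added to Q so the Riccati solve is well-posed), but the interpretation of the weighted index set, the reference trajectory, and all names are assumptions of this illustration rather than the authors' implementation.

import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr_gain(A, B, Q, R):
    # Infinite-horizon discrete-time LQR: u_t = -K (x_t - x_ref_t).
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def simulate_tracking(A, B, K, x0, x_ref_traj):
    # Roll out x_{t+1} = A x_t + B u_t under the LQR tracking law.
    x, states = x0.copy(), [x0.copy()]
    for x_ref in x_ref_traj:
        u = -K @ (x - x_ref)
        x = A @ x + B @ u
        states.append(x.copy())
    return np.array(states)

# 17 states (end-effector x, y, z; 7 joint angles; 7 joint velocities), 7 controls.
n, m = 17, 7
# One reading of the weights in the text: penalize the first ten states only;
# the small diagonal term is a numerical safeguard for this sketch.
Q = np.diag([1.0 if i < 10 else 0.0 for i in range(n)]) + 1e-8 * np.eye(n)
R = 0.1 * np.eye(m)
# With a learned (A, B) of matching dimensions: K = dlqr_gain(A, B, Q, R).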
The LS model is unstable and fails at the task. Similarly, WLS, despite the stable model, performs poorly, highlighting the need for both stability and fidelity of the learned representation. On the other hand, CG and SOC are similar in performance. To measure robustness across the initial conditions, we run 50 trials, varying both the y and z initial positions with displacements sampled uniformly in U(−0.1, 0.1). Across all trials, LS has an average error of 7556, WLS scores 38.73, CG scores 0.0810, and SOC scores 0.0799. Then, we test LQR control computed on the LDS obtained from the SOC algorithm in an experiment to demonstrate that the simulation results are indicative of the performance in a physical experiment. Figure 6 shows the control performance of three trials tracking a figure-8 pattern. Due to COVID-19 limitations, we were unable to extend the experimental tests. However, these results serve primarily to experimentally validate our approach and illustrate that the simulation results are an accurate prediction of the experimental behavior as well.

5 Conclusion

In this work, we introduce a novel algorithm for computing stable LDSs. Compared to the current top-performing alternatives, the proposed scheme is significantly more memory efficient and, as a result, scales better for high-dimensional systems often encountered in image processing and robotic applications. Further, the suggested method outperforms the alternatives in terms of error and control performance, as demonstrated on three benchmark datasets and the Franka Emika Panda robotic arm experiments. These features make it a promising tool for compression and data-driven system identification tasks. Coupled with the ongoing research around Koopman-operator-based nonlinear control, this algorithm can be a promising candidate for high-dimensional nonlinear control and other machine learning applications as well. Indeed, recent work in [9] uses Koopman operators to optimize training of neural network methods; also, work in [38] learns deep neural network models for Koopman operators of nonlinear dynamical systems. Imposing stability on Koopman operators represented using basis functions learned via deep learning will combine the benefits of linear representations with the predictive power of neural networks.

Broader Impact

Our methods can improve robotic tasks that are safety-critical, particularly those that include a human-in-the-loop (such as rehabilitation devices and prosthetics) where the human-robot interaction dynamics are not known ahead of time. For such tasks, a robotic platform prioritizes stability and safety during operation. Unstable data-driven models may lead to catastrophic robotic behavior, as we demonstrate in our simulations with the Franka Emika Panda robotic arm. Our work provides a mechanism for online learning of models that satisfy stability constraints, improving the safety and reliability of closed-loop control of those systems.

Acknowledgments and Disclosure of Funding

First and foremost, we thank Nicolas Gillis for the communication and useful discussions about the fast gradient method. We also thank Ian Abraham for his help with the experimental testing on the Franka Emika Panda robot and Wenbing Huang for very kindly providing us with the datasets and results used previously to test the WLS algorithm. We also thank the anonymous reviewers for their invaluable comments that helped improve the quality of this manuscript.
Last, we gratefully acknowledge the Research Computing Center (RCC) of the University of Chicago for providing the computing resources to execute our experiments and simulations. This work is supported by the National Science Foundation (IIS-1717951). Any opinions, findings, and conclusions or recommendations expressed in this material are solely those of the author(s) and do not necessarily reflect the views of any of the funding agencies or organizations.
1. What is the main contribution of the paper regarding learning stable linear dynamical systems? 2. What are the strengths of the proposed algorithm compared to prior methods, particularly in terms of computational efficiency and performance? 3. Do you have any concerns or suggestions regarding the novelty of the algorithm or its connection to previous work? 4. How does the reviewer assess the significance of stability in the context of linear dynamical systems, and how could the paper further support this aspect?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes a new algorithm for learning stable linear dynamical systems. The algorithm is based on a recent characterization of stable matrices which shows that a matrix is stable if and only if it can be written as a product of certain psd and orthogonal matrices. The proposed algorithm uses O(n^2) space in contrast to O(n^4) space used by previous methods, where n is the state dimension. The authors show that the new algorithm gets lower reconstruction error compared to prior art on benchmark datasets. The paper also extends their approach to the control setting, where they jointly search over the state and control matrices. By doing this, they achieve superior performance on control tasks compared to an algorithm which uses prior work to estimate a stable state matrix and then does Least Squares to estimate the control matrix. Strengths The experimental evaluation is very thorough and quite convincing. Since this is mainly an experimental paper, I think this is probably the most important criterion for evaluating it. Within the experimental evaluation, the algorithm seems to do well on multiple fronts: 1) it uses much less memory, 2) it is more scalable as the dimensionality increases, 3) it gets lower reconstruction error, and 4) it does better on control tasks. The authors also evaluate the control task on a real robotic manipulator which shows the synthetic setup is an accurate representation of the real world task. The provided code in the supplementary is also very well-organized, kudos to the authors for doing a great job on this. Weaknesses One could argue that the algorithm is not very novel, given the previous characterization of stable matrices. But I don't think this is too much of a concern as the characterization has not been exploited for this purpose before, and the major contribution here is probably the empirical evaluation. I found that the paper could emphasize the importance of stability more. For instance, it is nice to note that the LS solution which does not enforce stability does get worse over time in Fig. 3. Since the utility of the algorithm rests on the premise that it is important for the linear system to be stable, I think this premise should be supported a bit more.
NIPS
1. What is the focus and contribution of the paper on learning stable dynamical systems? 2. What are the strengths of the proposed approach, particularly in terms of its ability to preserve stability and improve reconstruction error? 3. What are the weaknesses of the paper, especially regarding motivation and novelty compared to prior works? 4. Do you have any concerns or questions about the proposed method's application to control tasks and its comparison to existing algorithms?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a method for learning stable dynamical systems by iterative optimisation. The method preserves stability at every iteration of the update scheme. The proposed approach is evaluated in simulation on a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, the proposed method achieves an orders-of-magnitude improvement in reconstruction error and superior results in terms of control performance. Strengths Claims are sound and the empirical evaluation is well-conducted. The paper is well written, and the notation and claims are clear. Extensive experimentation and results help to understand the contribution of the paper. The authors clearly state their contributions and the gains they observed from their experimental results, both in terms of performance scores and memory efficiency. The proposed method compares favourably with respect to existing algorithms on certain datasets in terms of average error. Weaknesses The paper is at times ill-motivated. It is unclear why the dynamical systems estimated from the datasets considered should be stable. This is particularly true for the control experiments, with manipulation tasks leading inherently to unstable dynamics. The novelty with respect to existing system identification methods (such as stability-preserving subspace identification methods and methods based on Riemannian optimisation) is unclear.
NIPS
Title Memory-Efficient Learning of Stable Linear Dynamical Systems for Prediction and Control Abstract Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach—in contrast to current methods for learning stable LDSs—updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an orders-ofmagnitude improvement in reconstruction error and superior results in terms of control performance. In addition, it is provably more memory efficient, with an O(n) space complexity compared to O(n) of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail. The code of the proposed algorithm and animations of the results can be found at https: //github.com/giorgosmamakoukas/MemoryEfficientStableLDS. N/A Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach—in contrast to current methods for learning stable LDSs—updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an orders-ofmagnitude improvement in reconstruction error and superior results in terms of control performance. In addition, it is provably more memory efficient, with an O(n2) space complexity compared to O(n4) of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail. The code of the proposed algorithm and animations of the results can be found at https: //github.com/giorgosmamakoukas/MemoryEfficientStableLDS. 1 Introduction Linear dynamical systems arise in many areas of machine learning and time series modeling with active research applications in computer vision [2], robotics [28], and control [8, 19, 20]. Linear representations are often desirable because they admit closed-form solutions, simplify modeling, and are general enough to be useful in many applications (e.g. Kalman filters). Further, there are well-established tools for the analysis (e.g. investigating properties of a system, such as stability and dissipativity), prediction, estimation, and control of linear systems [16]. 
They are, in general, computationally more efficient than nonlinear systems and highly promising candidates for real-time applications or data-intensive tasks. Last but not least, linear dynamical models can also be used to capture nonlinear systems using Koopman operators, which linearly evolve nonlinear functions of the states [22, 4, 27, 15]. LDSs are models that are learned in a self-supervised manner and are therefore promising for data-driven applications. Consequently, with the availability of higher computational power and the wide applicability of data-driven modeling, there is renewed interest in learning LDSs from data. Examples include learning spatio-temporal data for dynamic texture classification [2, 10], 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. video action recognition [24, 37], robotic tactile sensing [25] and nonlinear control using Koopman operators [4, 3]. Although linear system identification is a well-studied subject [26, 29], algorithms that learn LDSs from data have often overlooked important properties, such as stability. Stability describes the long-term behavior of a system and is critical both for numerical computations to converge and to accurately represent the true properties of many physical systems. When stability is overlooked, the learned model may be unstable even when the underlying dynamics are stable [7], in which case the long-term prediction accuracy dramatically suffers. This is why there are increasing efforts to impose stability on data-driven models [2, 18, 21, 11, 1]. However, the available methods do not scale well or are not applicable for control. In this work, we present a novel method for learning stable LDSs for prediction and control. Using a recent characterization of matrix stability [14], we derive a gradient-descent algorithm that iteratively improves the reconstruction error of a projected stable model. Contrary to current top-performing methods that start from the least-squares (LS) solution and iteratively push the LDSs towards the stability region, our method enforces stability in each step. As a result, it returns a stable LDS even after one single iteration. This feature can become crucial in online applications and time-sensitive tasks where obtaining a stable state-transition matrix as early in the optimization process as possible becomes of central importance. Furthermore, whereas alternative methods terminate upon reaching stability, our method can iterate on already stable solutions to improve the reconstruction error. It can therefore be used to further improve the solutions of other methods. Our proposed method is provably more memory efficient, with an O(n2) space complexity—n being the state dimension—compared to O(n4) of the competing alternative schemes for stable LDS. For systems with inputs, we derive the gradient directions that update both state and control linear matrices. By doing so, we expand the space of possible solutions and enable the discovery of models achieving lower error metrics compared to searching only for a stable state matrix which, to the best of our knowledge, is what the current top-performing algorithms do. To demonstrate the superior performance of our method, we test it on the task of learning dynamic textures from videos (using benchmark datasets that have been used to assess models that learn stable LDSs), as well as learning and controlling (in simulation and experiment) the Franka Emika Panda robotic arm [12]. 
When compared to the current top-performing models, a constraint generation (CG) [2] and a weighted least squares (WLS) [18] approach, our method achieves an orders-ofmagnitude lower reconstruction error, robustness even in low-resource settings, and better control performance. Notably, our approach is the first that tests the control performance of stable LDS; CG has been formulated but not evaluated for control tasks and it is not straightforward that WLS can be implemented for such applications, as the results in this paper suggest. The paper is structured as follows. In Section II, we review linear systems and stability. In Section III, we introduce and derive the proposed algorithm for learning stable LDSs. In Section IV, we compare our method to competing alternative algorithms that learn stable LDSs in prediction and control. In Section V, we discuss our findings and point to areas for future research. 2 Linear Dynamical Systems We consider states x ∈ RN , controls u ∈ RM and discrete time LDSs modeled as yt ≡ xt+1 = Axt +But, (1) where A ∈ RN×N and B ∈ RN×M are the state and control matrices, respectively. For systems without inputs, one can simply set B = 0. We use SA,B = {(A,B) | xt+1 = Axt + But} to denote the solution space of the matrices A and B that describe a LDS of the form (1). Further, let {λi(A)}Ni=1 be the eigenvalues of an N ×N matrix A in decreasing order of magnitude, ρ(A) ≡ |λ1(A)| be the spectral radius of A, and S be the set of all stable matrices of size N ×N . 2.1 Learning Data-Driven LDSs Next, we provide an overview of data-driven learning of LDSs. First, we consider systems without control for which CG and WLS were developed. Later, in Section 3, we modify the learning objective to include control terms and learn stable representations for LDSs with inputs. Given p pairs of measurements (xt, yt), learning LDSs from data typically takes the form  = inf A 1 2 ‖Y −AX‖2F , (2) where Y = [y1 y2 . . . yp] ∈ RN×p, X = [x1 x2 . . . xp] ∈ RN×p, and || · ||F is the Frobenius norm. The LS solution is then computed as Als = Y X †. (3) where X† denotes the Moore-Penrose inverse of X . The optimization problem in (2) does not impose stability constraints on Â. To learn stable LDSs, the learning objective is typically formulated as  = inf A∈S 1 2 ‖Y −AX‖2F , (4) and is highly nonconvex. The current top-performing methods for computing stable LDSs are a constraint generation [2] and a weighted least squares [18] approach. CG formulates the optimization as a quadratic program without constraints, which is an approximation to the original problem. It then iterates on the solution to the approximate optimization by adding constraints and terminates when a stable solution is reached. WLS determines the components of the LS transition matrix that cause instability and uses a weight matrix to enforce stability, while minimizing the reconstruction error. Note that both methods consider an entire sequence of observations, sayD ∈ RN×p, such that X = D[0:p−1] and Y = D[1:p], thereby making the assumption that all measurements belong to a unique time-series dataset. In the case of the WLS method, this assumption is necessary and the method fails dramatically for datasets with disjoint windows of time, as we demonstrate later in Section 4.3. CG and our proposed method, on the other hand, do not require contiguous observations. 2.2 Subspace Methods For high-dimensional LDSs, as is the case with image reconstruction, it is computationally prohibitive to learn a state transition matrix. 
Even for small images of size 100× 100 pixels, the dimensionality of the state transition matrix A would be 1004. For such high-dimensional systems, models are obtained using subspace methods that reduce the dimensionality of the learning task. Subspace methods for learning LDSs typically apply singular value decomposition (SVD) on the original dataset [17] decomposing the observation matrix D ≈ UΣV T , where U ∈ RN×r, V ∈ Rp×r are orthonormal matrices, Σ = {σ1, . . . , σr} ∈ Rr×r contains the r largest singular values, and r < N is the subspace dimension. Then, the learning optimization is performed on the reduced observation matrix Dr = ΣV T , with Xr = Dr[0:p−1] and Yr = Dr[1:p]. U is used to project the solutions back to the original state space. For a more complete description of standard subspace methods, the reader can refer to [6, 30, 33, 36, 35]. 3 The Algorithm The optimization problem for finding stable LDSs has traditionally only considered solving for a stable matrix A that minimizes the reconstruction loss. In this work, we formulate the objective as [Â, B̂] = inf A∈S,B 1 2 ‖Y −AX −BU‖2F , (5) to expand the solution space and solve both for a stable state matrix A and a matrix B. We denote the least-square solution for the control system by [Als, Bls] = Y · [X;U ]†. 3.1 Optimization Objective and Gradient Descents The proposed algorithm uses a recent characterization of stable matrices [14]. Specifically, a matrix A is stable if and only if it can be written as A = S−1OCS, where S is invertible, O is orthogonal, and C is a positive semidefinite contraction (that is, C is a positive semidefinite matrix with norm less than or equal to 1). By constraining the norm of C, one bounds the eigenvalues of A and ensures stability. Using this property, we formulate the optimization problem as [Â, B̂] = inf S 0,O orthogonal,C 0,‖C‖≤1 1 2 ‖Y − S−1OCSX −BU‖2F , (6) where  ≡ S−1OCS. Then, for f(S,O,C,B) = 12‖Y − S −1OCSX − BU‖2F , we derive the gradient directions with respect to the four matrices S,O,C, and B as follows: ∇Sf(S,O,C,B) =S−TEXTSTCTOTS−T − CTOTS−TEXT (7) ∇Of(S,O,C,B) =− S−TEXTSTCT (8) ∇Cf(S,O,C,B) =−OTS−TEXTST (9) ∇Bf(S,O,C,B) =− EUT (10) where E = Y − S−1OCSX − BU . Due to space constraints, the derivation of the gradients is presented in the supplementary material. We then use the fast projected gradient descent optimization from [13] to reach a local minimum of the reconstruction cost. The algorithmic steps are presented in Algorithm 1. The proposed algorithm enforces stability in every iteration step by projecting the solution onto the feasible set. For more details, the reader can refer to [13] or the provided code. Henceforth, we refer to our proposed algorithm as SOC. Note that, contrary to CG and WLS that search stable LDSs in SA,Bls by iterating over only A, SOC updates both linear matrices A and B, thereby expanding the feasible solution space to SA,B , where SA,B ⊃ SA,Bls . Further, SOC does not assume time continuity of the training measurements, contrary to WLS. The novelty of SOC with respect to [14] is the derivation of new gradient directions that not only account for control inputs, but that are also calculated so as to best fit training measurements instead of finding the nearest stable solution to an initial unstable matrix. Algorithm 1 SOC Algorithm using Fast Gradient Method (FGM) with restart from [13] Input: X,Y, U . State and control measurements Output: A ∈ S, B . 
Stable LDS 1: Initialize Z , (S,O,C,B), kmax, γo, λ ∈ (0, 1), α1 ∈ (0, 1) 2: Ẑ = Z 3: while k < kmax do 4: Zk = P(Ẑ − γ∇f(Ẑ)); γ = γo . P is the projection to the feasible set 5: while f(Zk) > f(Z) and γ ≥ γmin do . Line search to find gradient step size 6: Zk = P(Ẑ − γ∇f(Ẑ)) 7: γ = λγ 8: end while 9: if γ < γmin then . If line search fails, FGM restarts 10: Ẑ = Z; ak = a1 11: else . If cost is decreased, the solution is stored 12: αk+1 = 1 2 ( √ α4k + 4α 2 k − α2k); βk = αk(1−αk) α2k+αk+1 13: Ẑ = Zk + βk(Zk − Z); Z = Zk 14: end if 15: end while 16: A = S−1OCS 17: return A ∈ S, B 4 Experiments We implement LS, CG, WLS, and the proposed SOC method for learning LDSs and compare their performance on dynamical systems with and without control inputs. We omit the seminal work of [23] in our comparisons as it has been outperformed in terms of error, scalability, and execution time by both CG and WLS. For systems without inputs, we focus on learning dynamic texture from frame sequences extracted from videos using standard benchmark datasets [32, 5, 31]. For systems with inputs, we use experimental data from the Franka Emika Panda robotic manipulator and illustrate the learning and control performance of all the methods considered. We split the results in three parts: memory requirements, reconstruction error performance, and control performance. For an impartial assessment, we perform all comparisons in MATLAB using the publicly available code of the CG and WLS algorithms1. All simulations are performed using MATLAB R2019b on a machine with a 14-core Intel E5-2680v4 2.4-GHz CPU with 20GB RAM. 4.1 Memory Usage First, we compare the three algorithms on their memory demands. For an objective comparison, we only measure the size of all MATLAB workspace variables created by the algorithms. That is, we consider a matrix with 4 double-precision cells to use 32 bytes. We compare the algorithms on a sequence of frames extracted from a coffee cup video downloaded from Youtube2. We use this video because it exhibits dynamical motion and has a sufficient number of frames to allow for relatively higher subspace dimensions (the SVD decomposition limits the subspace dimension to be no larger than the number of frames). The results are shown in Figure 1. SOC scales proportionately to r2, whereas both CG and WLS scale proportionately to r4. This is because CG and WLS both rely on solving a quadratic programming problem with a state dimension n2, which generates matrices of dimension n4, whereas SOC uses a gradient descent approach that employs only matrix inversion, transposition, multiplication and addition, all of which are operations of space complexity O(n2). At r = 150, SOC uses about 5.04 MB of memory; CG and WLS use about 3.78 GB of memory and fail to run at higher dimensions due to memory constraints. Though such high dimensions may perhaps seem out of scope for the image reconstruction examples demonstrated next, they can typically occur in the field of robotics. For example, a recent study [3] used a linear data-driven Koopman representation with dimensions r = 330 to identify and control a pneumatic soft robotic arm. For this dimension, WLS and CG would require about 88 GB of memory and SOC would need about 25 MB. As a result, only SOC would be able to successfully train a stable Koopman model on a standard personal laptop and, as we show in the control performance section, failing to impose stability on the learned model can lead to unsafe robot movements. 
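To make the SOC update concrete, the following numpy sketch evaluates the residual and the gradient directions (7)-(10) for the parametrization A = S^{-1}OCS and applies one plain projected-gradient step. The projections shown here (symmetrize-and-clip for S and C, nearest orthogonal matrix for O) are our own simplifying choices, and the fast-gradient momentum and restart of Algorithm 1 are omitted; the actual projection operator P follows [13] and the released code.

import numpy as np

def soc_residual_and_grads(S, O, C, B, X, U, Y):
    """Residual E and gradients (7)-(10) of f(S,O,C,B) = 0.5*||Y - S^{-1} O C S X - B U||_F^2."""
    Sinv = np.linalg.inv(S)
    SiT = Sinv.T
    E = Y - Sinv @ O @ C @ S @ X - B @ U
    G_S = SiT @ E @ X.T @ S.T @ C.T @ O.T @ SiT - C.T @ O.T @ SiT @ E @ X.T   # Eq. (7)
    G_O = -SiT @ E @ X.T @ S.T @ C.T                                          # Eq. (8)
    G_C = -O.T @ SiT @ E @ X.T @ S.T                                          # Eq. (9)
    G_B = -E @ U.T                                                            # Eq. (10)
    return E, G_S, G_O, G_C, G_B

def project_factors(S, O, C, eps=1e-6):
    """Heuristic projections (our choice, not necessarily the operator P of [13]):
    S -> symmetric positive definite, O -> nearest orthogonal, C -> symmetric PSD with norm <= 1."""
    S = 0.5 * (S + S.T)
    w, V = np.linalg.eigh(S)
    S = V @ np.diag(np.maximum(w, eps)) @ V.T
    Uo, _, Vt = np.linalg.svd(O)
    O = Uo @ Vt
    C = 0.5 * (C + C.T)
    w, V = np.linalg.eigh(C)
    C = V @ np.diag(np.clip(w, 0.0, 1.0)) @ V.T
    return S, O, C

def soc_step(S, O, C, B, X, U, Y, gamma=1e-3):
    """One plain projected-gradient step (no FGM momentum or restart)."""
    _, G_S, G_O, G_C, G_B = soc_residual_and_grads(S, O, C, B, X, U, Y)
    S, O, C = project_factors(S - gamma * G_S, O - gamma * G_O, C - gamma * G_C)
    return S, O, C, B - gamma * G_B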
4.2 Error Performance To measure the predictive accuracy of the learned representations, we use three benchmark datasets: UCLA [32], UCSD [5], and DynTex [31]. The UCLA dataset consists of 200 gray-scale frame sequences that demonstrate 50 different categories of dynamic motion (e.g. flame flickering, wave motion, flowers in the wind), each captured from 4 different viewpoints. Every frame sequence contains 75 frames of size 48 × 48 pixels. The UCSD dataset consists of 254 frame sequences showing highway traffic in different environmental conditions. Each sequence contains between 42 and 52 frames of size 48× 48 pixels. For the DynTex dataset, we use 99 sequences from 5 groups of 1https://github.com/huangwb/LDS-toolbox 2https://www.youtube.com/watch?v=npkBC4GYodg UCLA UCSD DynTex dynamic texture (smoke and rotation from the Beta subset and foliage, escalator, and flags from the Gamma subset) that exhibit periodic motion. The frames are of size 352 × 288 pixels. We convert the frames to grayscale and use the bicubic interpolation algorithm implemented in the Python library pillow to scale down the frames without ratio distortion down to 48 × 39 pixels. Each DynTex sequence contains between 250 and 1576 frames. As explained in Section 2, the dimensionality of images can be prohibitively high and cause slow computations or memory failures: the transition matrix for an image of size as small as 48× 48 pixels would require hundreds of TBs for CG and WLS to run. For this reason, we use subspace methods to reduce the problem dimensionality. For each dataset, we consider a set of subspace dimensions r ∈ {3, 30}. Then, for each dimension, we use the four methods (LS, CG, WLS, and SOC) to obtain a LDS for each of the frame sequences. To compare the performance of the four algorithms, we use the reconstruction error relative to the LS solution: e(Â) = e(Â)−e(Als)e(Als) × 100. We report the results in Figure 2 and focus on three metrics: best error frequency, average reconstruction error, and execution time. The best error graphs plot the percentage of frame sequences for a given dimension for which an algorithm computes the best relative error (that is, lower than or equal to the other two methods). This metric credits all schemes that achieve the lowest error and so curves may add up to more than 100%. The average error and time graphs show the average reconstruction error and average execution time of all frame sequences for each dimension, respectively. Across the three datasets, SOC computes the best error for more frame sequences than the other methods across any dimension. In the UCLA and UCSD datasets, the SOC best error frequency reaches 100% for the majority of the dimensions contrary to less than 80% (for UCLA) and 40% (for UCSD) attained by CG and WLS. This means that, for the aforementioned datasets, CG and WLS only rarely find a better solution than SOC. While for the DynTex dataset the differences are not as pronounced, SOC still computes the best error for most of the frame sequences for any dimension f = 100 f = 500 f = 1000 Training Data LS and about 20% more often than the other methods. Second, SOC has orders-of-magnitude lower average relative error across all dimensions and datasets. Last, in terms of the execution time, SOC is slower than CG and WLS for low dimensions (r < 20). However, it scales better than the other two methods, such that it becomes faster than CG for r > 20. For the UCSD dataset, SOC and WLS become comparable in terms of average execution time near n = 30. 
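For concreteness, the subspace reduction of Section 2.2 and the relative-error metric reported above can be sketched as follows; this is a numpy illustration of the evaluation pipeline with names of our choosing, not the benchmark code, and the comparison of the methods continues below.

import numpy as np

def reduce_observations(D, r):
    """Truncated SVD D ~ U_r (Sigma_r V_r^T); returns U_r and the reduced matrices X_r, Y_r (Sec. 2.2)."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Ur = U[:, :r]
    Dr = np.diag(s[:r]) @ Vt[:r, :]
    return Ur, Dr[:, :-1], Dr[:, 1:]

def reconstruction_error(A, X, Y):
    """The objective of Eq. (2): 0.5 * ||Y - A X||_F^2."""
    return 0.5 * np.linalg.norm(Y - A @ X, 'fro') ** 2

def relative_error_percent(A_hat, A_ls, X, Y):
    """Percentage reconstruction error relative to the LS solution, as reported in Sec. 4.2."""
    e_hat = reconstruction_error(A_hat, X, Y)
    e_ls = reconstruction_error(A_ls, X, Y)
    return (e_hat - e_ls) / e_ls * 100.0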
This observation is in line with the fact that CG and WLS are high space-complexity algorithms that may even fail to perform at high dimensions due to memory limitations. Next, we compare the three methods on the steam sequence (composed of 120× 170 pixel images) and the fountain sequence (composed of 150× 90 pixel images) from the MIT temporal texture database [34], together with the coffee cup sequence used in Figure 1. Results are shown in Table 1. To show the effect on the predictive quality of the solutions, we plot the frames reconstructed from the learned LDS for each method in Figure 3. Note that the LS solution degrades over time and generates unrealistic frames. 4.3 Control In this section, we demonstrate the superior performance of our approach in control systems. Using experimental data gathered from the robotic arm Franka Emika Panda, we illustrate the improvement in both the reconstruction error of the learned model and the control performance. To use CG and WLS to compute a stable Â, we use the LS solution for the control matrix and modify the objective to  = inf A∈S 1 2 ‖Y ′ −AX‖2F , (11) where Y ′ = Y − BlsU . The learning performance is then measured as the % error increase when compared to the LS solution (Als, Bls). Note that this error depends both on  and B̂; for WLS and CG, we use the LS solution for the control matrix (B = Bls), whereas SOC computes both A and B. We collected training data on the experimental platform at 50 Hz, using a controller to manually move the robotic arm. We gathered 400 measurements (8 seconds) in eight separate runs. The training data, along with the experimental and simulation environments used in this section are shown in Figure 4. Table 2 compares the performance of the SOC, CG, and WLS algorithms on learning stable models for the Franka Emika Panda robotic manipulator using experimental data. The performance is compared for different numbers of measurements p. As the data show, SOC is the only algorithm that never fails to find stable solutions, regardless of the amount of training data. As more measurements are used, the LS solution itself becomes more stable and CG and WLS are both able to converge to stable solutions. Further, the quality of CG solutions improves with more training measurements; the performance of SOC remains robust throughout the testing cases. In Figure 5, we plot the reconstruction error for the three methods for different training data sizes. In this setting, however, measurement sets (xt, yt, ut) are randomly drawn from the training data such that the matrices Y and X have discontiguous measurements. Note how such a choice worsens the performance of WLS that assumes continuity in the observation matrices. On the other hand, CG and SOC are similar in learning performance. With regard to controlling the system, we use LQR control computed using the models from each algorithm and simulate tracking a figure-8 pattern. The states are the x, y, z coordinates of the end effector, the 7 joint angles of the arm, and the 7 joint angular velocities and the applied control is the joint velocities. The trajectory is generated in the y − z plane for the end effector; the desired angle configurations of the robotic arm are solved offline using inverse kinematics; the desired angular joint velocities are set to 0. LQR control is generated using Q = diag([ci]) ∈ R17×17, where ci = 1 for i ∈ {1, 10} and 0 elsewhere and R = 0.1× I7×7. The LS model is unstable and fails at the task. 
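The LQR synthesis used for these rollouts can be sketched as below. This is a minimal scipy/numpy illustration under our own assumptions (in particular, that the learned pair (A, B) is stabilizable), not the authors' controller code, and the weight matrices follow one reading of the description above: unit cost on the first ten states and none on the joint velocities.

import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Infinite-horizon discrete-time LQR gain K, to be used as u_t = -K (x_t - x_ref_t)."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K

# Illustrative weights matching the description in the text (17 states, 7 joint-velocity controls).
Q = np.diag([1.0] * 10 + [0.0] * 7)
R = 0.1 * np.eye(7)
# Usage: K = dlqr(A_learned, B_learned, Q, R)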
Similarly, WLS—despite the stable model—performs poorly, highlighting the need for both stability and fidelity of the learned representation. On the other hand, CG and SOC are similar in performance. To measure robustness across the initial conditions, we run 50 trials, varying both the y and z initial positions with displacements sampled uniformly in U(−0.1, 0.1). Across all trials, LS has an average error of 7556, WLS scores 38.73, CG scores 0.0810 and SOC scores 0.0799. Then, we test LQR control computed on the LDS obtained from the SOC algorithm in an experiment to demonstrate that the simulation results are indicative of the performance in a physical experiment. Figure 6 shows the control performance of three trials tracking a figure-8 pattern. Due to COVID-19 limitations, we were unable to extend the experimental tests. However, these results serve primarily to experimentally validate our approach and illustrate that the simulation results are an accurate prediction of the experimental behavior as well. 5 Conclusion In this work, we introduce a novel algorithm for computing stable LDSs. Compared to the current top-performing alternatives, the proposed scheme is significantly more memory efficient and, as a result, scales better for high-dimensional systems often encountered in image processing and robotic applications. Further, the suggested method outperforms the alternatives in terms of error and control performance, as demonstrated on three benchmark datasets and the Franka Emika Panda robotic arm experiments. These features make it a promising tool for compression and data-driven system identification tasks. Coupled with the ongoing research around Koopman-operator-based nonlinear control, this algorithm can be a promising candidate for high-dimensional nonlinear control and other machine learning applications, as well. Indeed, recent work in [9] uses Koopman operators to optimize training of neural network methods; also work in [38] learns deep neural network models for Koopman operators of nonlinear dynamical systems. Imposing stability on Koopman operators represented using basis functions learned via deep learning will combine the benefits of linear representations with the predictive power of neural networks. Broader Impact Our methods can improve robotic tasks that are safety-critical, particularly those that include a human-in-the-loop (such as rehabilitation devices and prosthetics) where the human-robot interaction dynamics are not known ahead of time. For such tasks, a robotic platform prioritizes stability and safety during operation. Unstable data-driven models may lead to catastrophic robotic behavior, as we demonstrate in our simulations with the Franka Emika Panda robotic arm. Our work provides a mechanism for online learning of models that satisfy stability constraints, improving the safety and reliability of closed-loop control of those systems. Acknowledgments and Disclosure of Funding First and foremost, we thank Nicolas Gillis for the communication and useful discussions about the fast gradient method. We also thank Ian Abraham for his help with the experimental testing on the Franka Emika Panda robot and Wenbing Huang for very kindly providing us with the datasets and results used previously to test the WLS algorithm. We also thank the anonymous reviewers for their invaluable comments that helped improve the quality of this manuscript. 
Last, we gratefully acknowledge the Research Computing Center (RCC) of the University of Chicago for providing the computing resources to execute our experiments and simulations. This work is supported by the National Science Foundation (IIS-1717951). Any opinions, findings, and conclusions or recommendations expressed in this material are solely those of the author(s) and do not necessarily reflect the views of any of the funding agencies or organizations.
1. What is the focus and contribution of the paper on learning stable linear dynamical systems?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper regarding the need for further explanation and discussion of assumptions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper presents a novel algorithm to learn stable linear dynamical systems from data. Based on the characterization of stable matrices proposed in [22], the authors derive a gradient-descent algorithm that optimizes the learned model to satisfy the stability constraints while minimizing the reconstruction error. Extensive experiments, including common benchmarks and robot arm control, demonstrate that the proposed SOC outperforms existing important baselines.

Strengths
1. I quite like the presentation of this paper, which is easy to understand. Their key idea, i.e. the introduction of the characterization of stable matrices into model learning, is demonstrated and evaluated well.
2. Learning stable LDSs is useful and relatively underexplored compared with other learning-based control areas. The proposed method is simple to implement and effective in ensuring stability of the learned LDSs.
3. SOC outperforms the competing baselines, CG and WLS, by a large margin. The robot arm experiment is interesting and reflects the effectiveness of the learned control model.
4. SOC also has the advantage of memory efficiency, which is important in many robot applications.
5. Their code is provided.
Overall, I think this is a good work that is useful in learning-based control.

Weaknesses
1. Can you briefly and intuitively explain in the main paper why the characterization of stable matrices ensures the stability of LDSs? Although this is not your main contribution, a short introduction of this technique would help readers better understand the idea and make the paper more convincing.
2. Can you discuss the assumptions behind the characterization of stable matrices and the gradient descent algorithm you derive for it? This will help clarify the applicability of your method.
NIPS
Title A Catalyst Framework for Minimax Optimization Abstract We introduce a generic two-loop scheme for smooth minimax optimization with strongly-convex-concave objectives. Our approach applies the accelerated proximal point framework (or Catalyst) to the associated dual problem and takes full advantage of existing gradient-based algorithms to solve a sequence of well-balanced strongly-convex-strongly-concave minimax problems. Despite its simplicity, this leads to a family of near-optimal algorithms with improved complexity over all existing methods designed for strongly-convex-concave minimax problems. Additionally, we obtain the first variance-reduced algorithms for this class of minimax problems with finite-sum structure and establish faster convergence rate than batch algorithms. Furthermore, when extended to the nonconvex-concave minimax optimization, our algorithm again achieves the state-of-the-art complexity for finding a stationary point. We carry out several numerical experiments showcasing the superiority of the Catalyst framework in practice. 1 Introduction Minimax optimization has been extensively studied in past decades in the communities of mathematics, economics, and operations research. Recent years have witnessed a surge of its applications in machine learning, including generative adversarial networks [16], adversarial training [47, 28], distributionally robust optimization [31, 1], reinforcement learning [8, 9], and many others. The problem of interest in such applications is often a smooth minimax optimization problem (also referred to as saddle point problems): min x∈X max y∈Y f(x, y), (1) where the function f : Rd1 × Rd2 → R is smooth (i.e., gradient Lipschitz), X is a convex set in Rm, and Y is a convex and compact set in Rn. In many machine learning applications, f has a finite sum structure, that is f(x, y) = 1n ∑n i=1 fi(x, y), where each component corresponds to a loss associated with single observation. A significant body of first-order algorithms for minimax optimization exists in the literature, ranging from the classical projection method [42], Korpelevich’s extragradient method [20], Nemirovski’s Mirror Prox algorithm [32], Nesterov’s dual extrapolation method [34], Tseng’s accelerated proximal gradient algorithm [46], to many recent hybrid or randomized algorithms, e.g., [30, 17, 38, 19, 6, 25], just to name a few. Most of these existing work and theoretical analyses are limited to the following settings (i) the strongly-convex-strongly-concave setting (e.g., [45, 29, 15]), (ii) the general convexconcave setting (e.g., [32, 34]), and (iii) the special bilinear convex-concave setting (e.g., [5, 48, 7]. The lower complexity bounds for these three settings established in [50], [33], [37], respectively, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. can be attained by some existing algorithms. For example, extragradient method (EG) achieves the optimal O(1/ ) complexity for smooth convex-concave minimax problems, and the optimal O(κ log(1/ )) complexity for well-balanced strongly-convex-strongly-concave minimax problems, where the x-component and y-component of the objective share the same condition number κ [50]. However, there are relatively few results outside of these settings. Of particular interests are the following two settings: f(x, ·) is concave but not strongly-concave for any x ∈ X , while f(·, y) could be strongly-convex or even nonconvex. 
Strongly-convex-concave minimax optimization covers broad applications in game theory, imaging, distributionally robust optimization, etc. While the special bilinear case of this setting has been studied extensively in the literature, the general case is less explored. In fact, strongly-convex-concave minimax optimization has also been routinely used as a building block for solving nonconvex-concave minimax problems [40, 44]. Hence, we mainly focus on the strongly-convex-concave setting. For strongly-convex-concave minimax problems, the lower complexity bound of first-order algorithms is Ω ( `/ √ µ ) for achieving an -duality-gap [37], where ` is the smoothness constant and µ is the strong convexity constant. Recently, [44] proposed the so-called dual implicit accelerated gradient algorithm (DIAG) that achieves the first-order oracle complexity of O ( `3/2/(µ √ ) log2(1/ ) ) . A similar complexity bound was obtained from the primal-dual smoothing method in [51]. More recently, [24] introduced the MINIMAX-APPA algorithm that further improves the complexity by shaving off a factor of O( √ `/µ), yielding a near-optimal convergence rate up to the logarithmic factor. However, these algorithms are fairly complicated as they stack several procedures including accelerated gradient descent on x, accelerated gradient ascent on y, and accelerated proximal point algorithm, in different manners, thus requiring at least three loops. In addition to the complicated procedure, the latter two algorithms require an additional layer of smoothing, and solve the surrogate problem minx∈X maxy∈Y f(x, y)+O( )‖y‖2. In practice, how to select a good smoothing parameter of order O( ) remains elusive. Meanwhile, it is unclear how these sophisticated algorithms can be integrated with variance-reduction techniques to solve strongly-convex-concave minimax problems with finite-sum structure efficiently. Most existing variance-reduced algorithms in minimax optimization focus on strongly-convexstrongly-concave setting, e.g., SVRG and SAGA [38], SPD1-VR [43], SVRE [6], Point-SAGA [26], primal-dual SVRG [11], variance reduced prox-method [4], etc. These algorithms typically preserve the linear convergence of batch algorithms, yet with cheaper per-iteration cost and improved complexity. Outside of this regime, few results are known [27, 49]. To the best of our knowledge, the design of efficient variance reduction methods for finite-sum structured minimax problems under the strongly-convex-concave or nonconvex-concave settings remains largely unexplored. This raises the question: can we simply leverage the rich off-the-shelf methods designed for stronglyconvex-strongly-concave minimax problems to these unexplored settings of interest? Inspired by the success of the Catalyst framework and accelerated APPA that use gradient-based algorithms originally designed for strongly convex minimization problems to minimize convex/nonconvex objectives [22, 21, 39, 13], we introduce a generic Catalyst framework for minimax optimization. Rooted in an inexact accelerated proximal point framework, the idea is to repeatedly solve the following auxiliary strongly-convex-strongly-concave problem using an existing methodM: minx∈X maxy∈Y f(x, y) + τx 2 ‖x− x̄t‖ 2 − τy2 ‖y − zt‖ 2. (2) While the algorithmic extension looks straightforward, selecting appropriate proximal parameters τx, τy, the prox centers x̄t, zt, and the methodM for solving the auxiliary problems, are critical and make a huge difference in the overall complexity. 
Our key insight is that when the condition numbers of the auxiliary problems are well balanced, they become relatively easy to solve and simply applying existing algorithms such as extragradient method asM would suffice. For instance, in the strongly-convex-concave setting, we set τx = 0, τy = µ. In sharp contrast, the MINIMAX-APPA algorithm [24] uses τx = 1` and τy = O( ), which results in extra complications (i.e., a two-loop algorithm) in solving the auxiliary problems. Based on the generic Catalyst framework, we establish a number of interesting results: (i) For strongly-convex-concave minimax optimization, we develop a family of two-loop algorithms with near-optimal complexity and reduced order of the logarithmic factor. In fact, simply combing Catalyst with extragradient method yields the complexity,O ( `/ √ µ log(1/ ) ) , which improves over all existing methods, as shown in Table 1. (ii) For nonconvex-concave minimax optimization, we provide a simple two-time-scale inexact proximal point algorithm for finding an -stationary point that matches the state-of-the-art complexity of Õ ( `2 −3 ) . (iii) For minimax problems with finite-sum structure, we provide a family of variance-reduced algorithms for the strongly-convex-concave setting, improving the Õ ( n¯̀/ √ µ ) complexity of the best batch algorithm to Õ ( ¯̀2/ √ µ3 ∨n 34 ¯̀12 / √ ) , and to Õ ( ¯̀/ √ µ ∨n 12 ¯̀12 / √ ) with additional assumption on cocoercive gradient. When extending to the nonconvex-concave setting, we improve the Õ ( n¯̀2 −3 ) complexity of the best batch algorithm to Õ ( n 3 4 ¯̀2 −3 ) , and to Õ ( n 1 2 ¯̀2 −3 ) with cocoercive gradient. Here ¯̀is the average of smoothness constants of the components. For the ease of notation, we refer to the strongly-convex-strongly-concave setting as SC-SC for short, or (µ1, µ2)-SC-SC if the strong convexity and strong concavity constants are given by µ1, µ2. Similarly, SC-C or µ-SC-C refers to the strongly-convex-concave setting, and NC-C to the nonconvexconcave setting. Throughout the paper, ‖ · ‖ stands for the standard `2-norm. 2 A Catalyst Framework for SC-C Minimax Optimization In this section, we focus on solving strongly-convex-concave minimax problems and introduce a general Catalyst scheme. We formally make the following assumptions. Assumption 1 (SC-C). f(·, y) is µ-strongly-convex for any y in Y , i.e., f(x1, y) ≥ f(x2, y) +∇xf(x2, y)T (x1 − x2) + µ 2 ‖x1 − x2‖2, ∀x1, x2 ∈ X . and f(x, ·) is concave for all x in X . X and Y are convex and closed sets, and Y is bounded with diameter DY = maxy,y′∈Y ‖y− y′‖. There exists at least one saddle point (x∗, y∗) ∈ X ×Y , which satisfies maxy∈Y f(x∗, y) ≤ f(x∗, y∗) ≤ minx∈X f(x, y∗) for all (x, y) ∈ X × Y . Assumption 2 (Lipschitz gradient). There exists a positive constant ` such that max{‖∇yf (x1, y1)−∇yf (x2, y2)‖ , ‖∇xf (x1, y1)−∇xf (x2, y2)‖} ≤ `[‖x1 − x2‖+‖y1 − y2‖], holds for all x1, x2 ∈ X , y1, y2 ∈ Y . The goal is to find an -saddle point (x̄, ȳ) such that gapf (x̄, ȳ) := maxy∈Y f(x̄, y) − minx∈X f(x, ȳ) ≤ . We call gapf (x̄, ȳ) the primal-dual gap, which implies both primal optimality gap and dual optimality gap. If = 0, then (x̄, ȳ) is a saddle point. We present a generic Catalyst scheme in Algorithm 1. Analogous to its prototype [22, 39], this scheme consists of several important components: an inexact accelerated proximal point step as the wrapper, a linearly-convergent first-order methodM as the workhorse, as well as carefully chosen parameters and stopping criteria. 
Algorithm 1 Catalyst for SC-C Minimax Optimization 1: Input: initial point (x0, y0), parameter τ > 0 2: Initialization: α1 = 1, v0 = y0 3: for all t = 1, 2, ..., T do 4: Set zt = αtvt−1 + (1− αt)yt−1. 5: Find an inexact solution (xt, yt) to the following problem with algorithmM min x∈X max y∈Y [ f̃t(x, y) := f(x, y)− τ 2 ‖y − zt‖2 ] (?) such that f(xt, yt)−minx∈X f(x, yt) ≤ (t) and ∇y f̃t(xt, yt)T (y − yt) ≤ (t),∀y ∈ Y (3) 6: vt = yt−1 + 1 αt (yt − yt−1); 7: Choose αt+1 ∈ [0, 1] such that 1−αt+1α2t+1 = 1 α2t . 8: end for 9: Output: (x̄T , yT ) with x̄T = ∑T t=1 1/αt∑T m=1 1/αm xt. Inexact accelerated proximal point step. The main idea is to repeatedly solve a series of regularized problems by adding a quadratic term in y to the original problem: min x∈X max y∈Y [ f̃t(x, y) := f(x, y)− τ 2 ‖y − zt‖2 ] , (?) where τ > 0 is a regularization parameter (to be specified later) and zt is the prox-center. The prox-centers {zt}t are built on extrapolation steps of Nesterov [35]. Noticeably, this step can also be viewed as applying the original Catalyst scheme [22] to the dual function h(y) := minx∈X f(x, y). The major distinction is that we do not have access to the closed-form dual function, which causes difficulty in measuring the inexactness of auxiliary problems and evaluating the solution performance in terms of the primal-dual gap, instead of dual optimality. Linearly-convergent algorithm M. By construction, the series of auxiliary problems (?) are (µ, τ)-SC-SC. Thus, they can be solved by a wide spectrum of first-order algorithms established in the literature, at a linear convergence rate, including gradient descent ascent (GDA), extra-gradient method (EG), optimistic gradient descent ascent (OGDA), SVRG, to name a few. Yet, the dependence on the condition number may vary across different algorithms. We assume that any deterministic algorithmM when solving the (µ, τ)-SC-SC minimax problem has a linear convergence rate such that ‖xk − x∗‖2 + ‖yk − y∗‖2 ≤ ( 1− 1∆M,τ )k [‖x0 − x∗‖2 + ‖y0 − y∗‖2], (4) and any stochastic algorithmM satisfies E[‖xk − x∗‖2 + ‖yk − y∗‖2] ≤ ( 1− 1∆M,τ )k [‖x0 − x∗‖2 + ‖y0 − y∗‖2], (5) where ∆M,τ depends on τ and algorithmM. For instance, when EG or OGDA is adopted, ∆M,τ = `+τ 4 min{µ,τ} [45, 15, 2]; when SVRG or SAGA is adopted, ∆M,τ ∝ n+ ( `+τ min{µ,τ} )2 , provided that the objective has the finite-sum structure and each component is `-smooth [38]. Stopping criteria. To guarantee the overall convergence in terms of primal-dual gap, it is necessary to approximately solve the auxiliary problem (?) to moderate accuracy and ensure the entire pair (x, y) converges properly. For the sake of generalization, we adopt the criterion specified in (3) in our generic scheme. The stopping criterion can be achieved by most existing minimax optimization algorithms after sufficient iterations. Yet, it could still be hard to check in practice because minx∈X f(x, yt) and maxy∈Y ∇y f̃t(xt, yt)T (y − yt) are not always computable. The following lemma shows that this issue can be alleviated, at the minor cost of a full gradient evaluation and a projection step. Lemma 2.1. Consider a function f̃(x, y) that is (µ1, µ2)-SC-SC and has ˜̀-Lipschitz gradient on X × Y . Let z∗ = (x∗, y∗) be the saddle point, i.e, the solution to the minimax optimization minx∈X maxy∈Y f̃(x, y). 
For any point z = (x, y) in X × Y , we define [z]β = ([x]β , [y]β) with β > 2˜̀ to be the point after one step of projected gradient descent ascent: [x]β = PX ( x− 1β∇xf̃(x, y) ) , [y]β = PY ( y + 1β∇y f̃(x, y) ) , then we have 1. gapf̃ ([z]β) ≤ A‖z − z∗‖2, ∇f̃([x]β , [y]β)T (ȳ − [y]β) ≤ A‖z − z∗‖2 + 2βDY‖z − z∗‖; 2. ‖z − z∗‖ ≤ β+˜̀µ̃ ‖z − [z]β‖, ‖z − [z]β‖ 2 ≤ 2 (1−˜̀/β)3 ‖z − z ∗‖2, where A = β + 2β ˜̀2 µ̃2 + 4β ˜̀2 µ̃2(1−˜̀/β)3 , µ̃ = min{µ1, µ2}. Based on this observation, we can therefore use the following easy-to-check criterion: ‖x− [x]β‖2 + ‖y − [y]β‖2 ≤ min { µ̃2 (t) 2A(β + ˜̀)2 , ( µ̃ (t) 4βDY(β + ˜̀) )2} . (6) Note that many algorithms such as EG or GDA, already compute ([x]β , [y]β) with β being the stepsize, so there is no additional computation cost to check criterion (6). Choice of regularization parameter. As we can see, the smaller τ is, the auxiliary problem is closer to the original problem. However, smaller τ will give rise to worse conditions of the auxiliary problems, making them harder to solve. We will discuss the dependence of the inner and outer loop complexities on τ and provide a guideline for choosing τ for differentM. As a final remark, we stress that the idea of using (accelerated) proximal point algorithm for minimax optimization is by no means new. Similar ideas have appeared in different contexts. However, they differ from our scheme in one way or the other. To list a few: [41, 30, 23, 38] considered the inexact PPA for C-C or NC-NC minimax problems by adding quadratic terms in both x and y; [40, 44] considered the inexact PPA for NC-C minimax problems, by adding a quadratic term in x; [24] considered the inexact accelerated PPA for SC-SC minimax problems by adding a quadratic term in x. On the other hand, a number of work, e.g., [19, 24, 51] also add a quadratic term in y to the minimax optimization, but in the form O( )‖y‖2, which is completely different from PPA. Besides these differences, the subroutines used to solve the auxiliary minimax problems and choices of regularization parameters in these work are quite distinct from ours. Lastly, we point out that the proposed framework is closely related to the inexact accelerated augmented Lagrangian method designed for linearly constrained optimization problems [18], which can be viewed as a special case by setting f(x, y) as the Lagrangian dual. In spite of this, approaches for solving the auxiliary problems are completely different, as is theoretical analysis. 3 Main Results 3.1 Convergence Analysis In order to derive the total complexity, we first establish the complexity of the outer loop and then combine it with the inner loop complexity from algorithmM. We then discuss the optimal choice of the regularization parameter τ for different settings. Theorem 3.1 (Outer-loop complexity). Suppose function f satisfies Assumptions 1 and 2. The output (x̄T , yT ) from Algorithm 1 satisfies gapf (x̄T , yT ) ≤ α2T [ τ 2D 2 Y + 2 ∑T t=1 1 α2t (t) ] , (7) where DY = maxy,y′∈Y ‖y − y′‖ is the diameter of Y . If we further choose, (t) = 3τDYα 2 t 2πt2 , then gapf (x̄T , yT ) ≤ α2T τD2Y . (8) Remark 1. The above result is true without requiring strong convexity in x; only convexity-concavity of f(x, y) is sufficient. In addition, the regularization parameter τ can be any positive value. Hence, Algorithm 1 is quite flexible. Because 2/(t+ 2)2 ≤ α2t ≤ 4/(t+ 1)2 [39], Theorem 3.1 implies that the algorithm finds a point with primal-dual gap within O( √ τ/ DY) outer-loop iterations. 
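A compact sketch of the outer loop of Algorithm 1 with the accuracy schedule of Theorem 3.1 is given below. The inner solver f_solver stands for an arbitrary linearly convergent method M applied to the (µ, τ)-SC-SC subproblem (⋆) and is an assumed black box; the snippet only illustrates the bookkeeping of the scheme (with constants in the schedule as we read Theorem 3.1), not a reference implementation.

import numpy as np

def catalyst_scc(f_solver, x0, y0, tau, D_Y, T):
    """Outer loop of Algorithm 1 (Catalyst for SC-C minimax).
    f_solver(z_t, eps_t, x_init, y_init) is an assumed black box returning a pair (x_t, y_t)
    that satisfies the stopping criterion (3)/(6) for subproblem (*) with prox-center z_t."""
    x_prev, y_prev, v = x0, y0, y0
    alpha = 1.0
    xs, weights = [], []
    for t in range(1, T + 1):
        z = alpha * v + (1.0 - alpha) * y_prev                                   # extrapolated prox-center
        eps_t = 3.0 * tau * D_Y**2 * alpha**2 / (2.0 * np.pi**2 * t**2)          # accuracy schedule of Thm. 3.1
        x_t, y_t = f_solver(z, eps_t, x_prev, y_prev)                            # warm-started inner solve
        v = y_prev + (y_t - y_prev) / alpha
        xs.append(x_t); weights.append(1.0 / alpha)
        # alpha_{t+1} solves (1 - a)/a^2 = 1/alpha_t^2
        alpha = 0.5 * (np.sqrt(alpha**4 + 4.0 * alpha**2) - alpha**2)
        x_prev, y_prev = x_t, y_t
    w = np.asarray(weights) / np.sum(weights)
    x_bar = sum(wi * xi for wi, xi in zip(w, xs))                                # weighted primal average
    return x_bar, y_prev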
Notice that the outer-loop complexity decreases as τ decreases. We now turn to the inner loop complexity. By construction, the auxiliary problem (?) is (µ, τ)-SC-SC and ˜̀smooth with ˜̀= `+ τ , which can be solved by many existing first-order algorithms at a linear convergence rate. Below we present the complexity of the inner loop with warm start. Proposition 3.1 (Inner-loop complexity). Suppose we apply a linearly convergent algorithmM described by (4) or (5) to solve the auxiliary problem (?) and set the initial point to be (xt−1, zt) at iteration t. Let K( (t)) denote the number of iterations (expected number of iterations ifM is stochastic) forM to find a point satisfying (6). Then K( (t)) is O ( ∆M,τ log ( ˜̀·DY min{1,µ,τ}· (t) )) . In practice, choosing a good initial point to warm start algorithmM can be helpful in accelerating the convergence. The above proposition shows that in theory, using a simple warm start strategy helps alleviate the logarithmic dependence on the distance from the initial point to the optimal point. Without the warm start strategy, one would require X to be bounded and K( (t)) = O ( ∆M,τ log( DX+DY (t) ) ) . Here we do not require boundedness on X . As we can see, the choice of τ plays a crucial role since it affects both inner-loop and outer-loop complexities. Combining the above two results immediately leads to the total complexity: Corollary 3.2 (Total complexity). Suppose Assumptions 1, 2 hold, and the subproblems are solved by a linearly convergent algorithmM to satisfy the stopping criterion (3) or (6) with accuracy (t) as specified in Theorem 3.1. For Algorithm 1 to find an -saddle point, the total number of gradient evaluations (expected number ifM is stochastic) is O ( ∆M,τ √ τ/ DY log ( ` · DY min{1, µ, τ} · )) . For any choice of linearly-convergent methodM and any regularization parameter τ , the oracle complexity is guaranteed to be O (DY/ √ log(DY/ )), which is optimal both in and DY up to a logarithmic factor [37]. The dependence on the condition number will solely be determined by the term ∆M,τ √ τ , which we analyze in detail below for specific algorithms. 3.2 Specific Algorithms and Complexities In order to minimize the total complexity, we should choose the regularization parameter τ that minτ>0 ∆M,τ √ τ . Below we derive the choice of optimal τ for different algorithmsM and present the corresponding total complexity. Table 2 summarizes this for several algorithms we consider. Deterministic first-order algorithms. If we adopt the simplest gradient descent ascent (GDA) as M for solving the subproblem, then ∆M,τ = ( `+τ 2 min{µ,τ} )2 [12]. IfM is extra-gradient method (EG) or optimistic gradient descent ascent (OGDA), then ∆M,τ = `+τ4 min{µ,τ} [45, 15, 2]. Minimizing ∆M,τ √ τ for both cases yields that the optimal choice for τ is µ. In particular, when using EG or OGDA, the total complexity becomes O ( ` · DY√ µ log ( ` · DY min{1, µ} · )) . Remark 2. This complexity matches the lower complexity bound for this class of problems [37] in , `, µ and DY , up to a logarithmic factor. In addition, it improves over the best-known result, which was recently established in [24], which has a cubic order on the logarithmic factor and requires boundedness of X . A key observation is that by setting τ = µ, the auxiliary problem (?) becomes (µ, µ)-SC-SC, and it is known that simple EG or OGDA achieves the optimal complexity for solving this class of well-balanced SC-SC problems [50]. 
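As an illustration of the workhorse M for the well-balanced subproblems discussed in Remark 2, a minimal projected extragradient sketch follows; the gradient oracles, the projections proj_X and proj_Y, and the suggested step size are assumptions supplied by the user rather than part of the paper.

import numpy as np

def extragradient(grad_x, grad_y, proj_X, proj_Y, x, y, eta, num_iters):
    """Projected extragradient on min_x max_y f_tilde(x, y); grad_* are gradient oracles of f_tilde."""
    for _ in range(num_iters):
        # extrapolation (half) step
        x_half = proj_X(x - eta * grad_x(x, y))
        y_half = proj_Y(y + eta * grad_y(x, y))
        # update step using gradients at the extrapolated point
        x = proj_X(x - eta * grad_x(x_half, y_half))
        y = proj_Y(y + eta * grad_y(x_half, y_half))
    return x, y

# For subproblem (*), grad_y should be the y-gradient of f(x, y) - (tau/2)*||y - z_t||^2,
# i.e. grad_y_f(x, y) - tau * (y - z_t); a step size eta on the order of 1/(4*ell) is a common heuristic.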
Unlike [44, 24] , their subproblems are harder to solve because of ill-balanced condition numbers, thus leading to an inferior complexity. Besides the complexity improvement, our algorithm is significantly simpler and easier to implement than the current state-of-the-arts. The DIAG algorithm in [44] applies Nesterov’s accelerated gradient ascent to the dual function and an additional two-loop algorithm to solve their subproblems. The MINIMAX-APPA algorithm in [24] adds a smoothing term in y and applies a triple-loop algorithm to solve the auxiliary SC-SC problem. In contrast, our algorithm only requires two loops, does not require to prefix accuracy , and has fewer tuning parameters. Results are summarized in Table 1. Stochastic variance-reduced algorithms. We now consider finite-sum-structure minimax problems, minx∈X maxy∈Y f(x, y) , 1n ∑n i=1 fi(x, y), where each component fi has `i-Lipschitz gradients. Denote ¯̀ = 1n ∑n i=1 `i as the average of smoothness constants. The resulting SC-SC subproblem (?) also has the finite-sum structure and can be solved by a number of linearly-convergent variance-reduced algorithms, such as SVRG, SAGA [38], and SVRE [6]. If using SVRG or SAGA asM, we have ∆M,τ ∝ n+ ( ¯̀+τ min{µ,τ} )2 [38]. When using SVRE asM, ∆M,τ ∝ n + ¯̀+τ min{µ,τ} , assuming that the gradients are also `i-cocoercive [6]. Particularly, when using SVRE, the optimal τ is µ if ¯̀/µ ≥ n and ¯̀/n otherwise. Therefore, the total complexity is Õ ( ¯̀ √ µ ) if ¯̀/µ ≥ n; and Õ ( n 1 2 ¯̀ 1 2 √ ) otherwise. Remark 3. In either case, our result improves over the complexity Õ ( n¯̀√ µ ) when using the batch extra-gradient method asM. To the best of our knowledge, this is the best complexity established so far for this class of SC-C minimax optimization problems. Results are summarized in Table 2. 4 Nonconvex-Concave Minimax Optimization We now turn to nonconvex-concave minimax problems (1), and formally make Assumption 3. Denote g(x) = maxy∈Y f(x, y) as the primal function, which is `-weakly-convex [44]. The goal is to find an -stationary point of g(x). For any x̄, consider the Moreau envelop of g: ψ1/τx(x̄) := minx∈X { gτx(x; x̄) := g(x) + τx 2 ‖x− x̄‖ 2 } . The norm of the gradient ‖∇ψ1/τx(x̄)‖ is commonly used to measure the quality of a solution x̄ [10]. We call x̄ -stationary point of g if ‖∇ψ1/τx(x̄)‖ ≤ . Assumption 3. f(x, ·) is concave for any x in X . X and Y are convex and closed sets, and Y is bounded with diameter DY = maxy,y′∈Y ‖y − y′‖. Our modified Catalyst framework is described in 2, which further applies the proximal point algorithm to the primal function g(x), by adding a quadratic term in x, in the same spirit as [40, 44, 24]. The main difference lies in that we use Algorithm 1 to solve subproblems in form of minx∈X gτx(x;xt). Now we use τy to denote the parameter in Algorithm 1 in order to distinguish from τx. Algorithm 2 can be considered as a two-time-scale inexact proximal point algorithm, which repeatedly solves the subproblem minx∈X maxy∈Y f(x, y) + τx 2 ‖x− x̄t‖ 2 + τy 2 ‖y − zt‖ 2. (9) We call it two-time-scale, not only because τx and τy differ, but also because the prox center of y comes from the extrapolation step of acceleration and is updated more frequently than the prox center of x. The subproblem (9) is (τx − `, τy)-SC-SC if τx > `, thus can be efficiently solved. 1 SVRE requires assuming each component has `i-cocoercive gradient, which is a stronger assumption than assuming `i-Lipschitz gradient. 
Algorithm 2 Catalyst for NC-C Minimax Optimization 1: Input: initial point (x0, y0), parameter τx > ` 2: for all t = 0, 1, ..., T − 1 do 3: use Algorithm 1 to find xt+1 such that gτx(xt+1;xt) ≤ min x∈X gτx(x;xt) + ̄ 4: end for 5: Output: x̂T which is uniformly sampled from x0, ..., xT−1. Theorem 4.1 (Outer-loop complexity). Suppose f satisfies Assumption 2 and 3. The output from Algorithm 2 satisfies E‖∇ψ1/τx(x̂T )‖ 2 ≤ 2τ 2 x τx − ` [ g(x0)− g∗ T + ̄ ] , where g∗ = minx∈X g(x). If T = 4τ2x(g(x0)−g ∗) (τx−`) 2 and ̄ = (τx−`) 2 2τ2x , then E‖∇ψ1/τx(x̂T )‖ ≤ . Theorem 4.1 implies that the outer-loop complexity is O( −2). In the following corollaries, we specify the choices of τx, τy , andM for solving subproblems and the total complexity. Corollary 4.2. Suppose f satisfies Assumption 2 and 3. If we choose τx = 2`, τy = ` and use EG/OGDA/GDA to solve subproblems, then Algorithm 2 finds an -stationary point with the total number of gradient evaluations of Õ ( `2 −3 ) . Corollary 4.3. Suppose f(x, y) = 1n ∑n i=1 fi(x, y) satisfies Assmption 3 and each component fi has `i-Lipschitz gradient with ¯̀= 1n ∑n i=1 `i. If we choose τx = 2¯̀, τy = ¯̀√ n and use SVRG/SAGA to solve subproblems, then Algorithm 2 finds an -stationary point with the total complexity Õ ( n 3 4 ¯̀2 −3 ) . If we further assume fi has `i-cocoercive gradient, choose τx = 2¯̀, τy = ¯̀ n and use SVRE to solve subproblems, then Algorithm 2 finds an -stationary point with the total complexity Õ ( n 1 2 ¯̀2 −3 ) . Corollary 4.2 shows that simply using Catalyst-EG/OGDA achieves the complexity Õ ( `2 −3 ) . This matches with the current state-of-the-art complexity for nonconvex-concave minimization [24, 44, 51, 36]. Note that our algorithm is much simpler than the existing algorithms, e.g., Prox-DIAG [44] requires a four-loop procedure, whereas MINIMAX-APPA [24] requires a smoothing step. For problems with finite-sum structure, as shown in Corollary 4.3, using Catalyst-SVRG attains the overall complexity Õ ( n 3 4 ¯̀2 −3 ) , improving over all existing results. For instance, PG-SVRG proposed in [40] gives Õ ( n −2 + −6 ) , which has a much worse dependence on and n. 5 Numerical Experiments We consider the wireless communication problem in [3]. Given n communications channels with signal power p ∈ Rn and noise power σ ∈ Rn, the capacity of channel i is proportional to log(1 + βipi/(σ 0 i + σi)), where βi > 0 and σ 0 i are known constants. We would like to maximize the channel capacity under the adversarially chosen noise [14]. This can be formulated as an SC-C minimax problem: min p max σ f(p, σ) := − n∑ i=1 log ( 1 + βipi σ0i + σi ) + λ 2 ‖p‖2, such that 1>σ = N, p ≥ 0, σ ≥ 0. We generate two datasets with (1) β = 1 and σ0 ∈ R1000 uniformly from [0, 100]1000, (2) β = 1 and σ0 ∈ R500 uniformly from [0, 10]500. In Figure 1, we apply the same stepsizes to EG and subroutine in Catalyst-EG, and we compare their convergence results with stepsizes from small to large. In Figure 2, we compare four algorithms: extragradient (EG), SVRG, Catalyst-EG, Catalyst-SVRG with besttuned stepsizes, and evaluate their errors based on (a) distance to the limit point: ‖pt−p∗‖+‖σt−σ∗‖; (b) norm of gradient mapping: ‖∇pf(pt, σt))‖ + ‖σt − PΣ(σt + β∇σf(pt, σt))‖/β. In Figure 3, we compare EG, Catalyst-EG and DIAG with best-tuned stepsizes. Although EG with average iterates has an optimal complexity of O(1/ ) for solving convex-concave minimax problems [32], its convergence behavior for SC-C minimax optimization remains unknown. 
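For reference, the objective of this experiment and the coordinate-wise gradients it induces can be sketched as follows (numpy); the function names and the simplex-projection helper are ours, and the snippet is only meant to make the test problem reproducible in spirit, not to reproduce the exact experimental code. The comparison of the methods continues below.

import numpy as np

def f_value(p, sigma, beta, sigma0, lam):
    """Objective of the wireless-communication experiment: minimized in p, maximized in sigma."""
    return -np.sum(np.log(1.0 + beta * p / (sigma0 + sigma))) + 0.5 * lam * np.sum(p**2)

def grad_p(p, sigma, beta, sigma0, lam):
    return -beta / (sigma0 + sigma + beta * p) + lam * p

def grad_sigma(p, sigma, beta, sigma0, lam):
    return 1.0 / (sigma0 + sigma) - 1.0 / (sigma0 + sigma + beta * p)

def project_sigma(sigma, budget):
    """Euclidean projection onto {sigma >= 0, 1^T sigma = budget} (scaled simplex); standard sort-based rule."""
    u = np.sort(sigma)[::-1]
    css = np.cumsum(u) - budget
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(sigma - theta, 0.0)

# p is kept feasible with np.maximum(p, 0.0); lam is the strong-convexity parameter of the regularizer.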
Both Catalyst-EG and DIAG are designed for SC-C minimax optimization: Catalyst EG has a complexity of Õ(`/√µ ) and DIAG has a complexity of Õ ( ` 3 2 /(µ √ ) ) . Here we use the same stepsize for primal and dual variables in EG and its counterpart with Catalyst. In Catalyst, we use ‖xt − PX (xt − β∇xf(xt, yt))‖/β + ‖yt − PY(yt + β∇yf(xt, yt))‖/β as stopping criterion for subproblem, which is discussed in Section 2. We control the subroutine accuracy (t) as max{c/t8, ̃}, where c is a constant and ̃ is a prefixed threshold. In contrast, DIAG does not provide a easy-toverify stopping criterion for subroutine. We stop the subroutine of DIAG based on the criterion: ‖xk − xk−1‖2 + ‖yk − yk−1‖2, where k indexes the subroutine iterations. We note that there is no theoretical convergence analysis for SVRG under SC-C setting. To form a fair comprison with SVRG, we report last iterate error in Catalyst-SVRG rather than averaged iterates. We observe that Catalyst-EG performs better than EG and DIAG. Under the same stepsize, Catalyst framework significantly speed up EG. SVRG, albeit without theoretical guarantee in the SC-C setting, converges much faster than batch algorithms. Catalyst-SVRG also greatly improves over SVRG and outperforms all other algorithms. Acknowledgments and Disclosure of Funding This work was supported in part by ONR grant W911NF-15-1-0479, NSF CCF-1704970, and NSF CMMI-1761699. Broader Impact Our work provides a family of simple and efficient algorithms for some classes of minimax optimization. We believe our theoretical results advance many applications in ML which requires minimax optimization. Of particular interests are deep learning and fair machine learning. Deep learning is used in many safety-critical environments, including self-driving car, biometric authentication, and so on. There is growing evidence that shows deep neural networks are vulnerable to adversarial attacks. Since adversarial attacks and defenses are often considered as two-player games, progress in minimax optimization will definitely empower both. Furthermore, minimax optimization problems provide insights and understanding into the balance and equilibrium between attacks and defenses. As a consequence, making good use of those techniques will boost the robustness of deep learning models and strengthen the security of its applications. Fairness in machine learning has attracted much attention, because it is directly relevant to policy design and social welfare. For example, courts use COMPAS for recidivism prediction. Researchers have shown that bias is introduced into many machine learning systems through skewed data, limited features, etc. One approach to mitigate this is adding constraints into the system, which naturally gives rise to minimax problems.
1. What is the focus and contribution of the paper on minimax optimization?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. Do you have any concerns or questions regarding the paper, especially regarding its weaknesses?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
****** After rebuttal ******
I am mostly satisfied by the answers of the authors, and wish to keep my score unchanged. In particular, were we to allow a clear round of revisions, I would insist on adding:
- at least a few numerical comparisons with smoothing techniques (which, imho, should be somehow presented, at least in the appendix);
- complete comparisons/discussions of the results in terms of dependence on the diameters;
- the cost of checking the stopping criterion for the stochastic case, for which implementation details matter (e.g., for SAGA). Therefore, the method is not totally generic and requires at least a bit of thought about this (as an extreme case: paying n gradient evaluations per iteration would clearly not be acceptable).
Beyond that, I wish to thank the authors for their answers!
****************************
The paper presents a "Catalyst" framework for strongly convex-concave minimax optimization, together with nearly optimal performance guarantees for this class of problems. In other words, the authors present an accelerated inexact proximal point method whose subproblems can be solved efficiently by any method that converges linearly on strongly convex-strongly concave problems. The main contribution is threefold:
- improved worst-case complexity bounds for the class of strongly convex-concave problems;
- a simplified ("two-loop") algorithm for achieving them;
- using the strategy to develop an algorithm for nonconvex-concave minimax optimization.

Strengths
As far as I can see, the contributions indeed seem new.
- The theory seems OK (up to the typos).
- The theory has the same flaws (e.g., "optimality up to the logarithmic factor") as similar works for optimization without saddle points, and therefore I believe (i) it can only slightly be improved, and (ii) it is more than reasonable to accept/trust. In addition, there is apparently no algorithm reaching the lower bound for this setting, making the results even nicer.
- I believe the current interest in saddle point problems in ML makes such papers relevant for the community.

Weaknesses
On the other hand:
- I believe more details should be provided for some parts of the proofs, at least in the appendix. I found a few typos, and therefore cannot exclude typos in the results themselves.
- The approach critically requires knowledge of some of the problem's parameters (strong convexity, smoothness); I believe this should be discussed.
- I believe some of the claims by the author(s) could be toned down (see detailed comments below).
- The results themselves are probably not very surprising, nor very challenging, given the background results; although the idea is nice!
NIPS
Title A Catalyst Framework for Minimax Optimization Abstract We introduce a generic two-loop scheme for smooth minimax optimization with strongly-convex-concave objectives. Our approach applies the accelerated proximal point framework (or Catalyst) to the associated dual problem and takes full advantage of existing gradient-based algorithms to solve a sequence of well-balanced strongly-convex-strongly-concave minimax problems. Despite its simplicity, this leads to a family of near-optimal algorithms with improved complexity over all existing methods designed for strongly-convex-concave minimax problems. Additionally, we obtain the first variance-reduced algorithms for this class of minimax problems with finite-sum structure and establish faster convergence rate than batch algorithms. Furthermore, when extended to the nonconvex-concave minimax optimization, our algorithm again achieves the state-of-the-art complexity for finding a stationary point. We carry out several numerical experiments showcasing the superiority of the Catalyst framework in practice. 1 Introduction Minimax optimization has been extensively studied in past decades in the communities of mathematics, economics, and operations research. Recent years have witnessed a surge of its applications in machine learning, including generative adversarial networks [16], adversarial training [47, 28], distributionally robust optimization [31, 1], reinforcement learning [8, 9], and many others. The problem of interest in such applications is often a smooth minimax optimization problem (also referred to as saddle point problems): min x∈X max y∈Y f(x, y), (1) where the function f : Rd1 × Rd2 → R is smooth (i.e., gradient Lipschitz), X is a convex set in Rm, and Y is a convex and compact set in Rn. In many machine learning applications, f has a finite sum structure, that is f(x, y) = 1n ∑n i=1 fi(x, y), where each component corresponds to a loss associated with single observation. A significant body of first-order algorithms for minimax optimization exists in the literature, ranging from the classical projection method [42], Korpelevich’s extragradient method [20], Nemirovski’s Mirror Prox algorithm [32], Nesterov’s dual extrapolation method [34], Tseng’s accelerated proximal gradient algorithm [46], to many recent hybrid or randomized algorithms, e.g., [30, 17, 38, 19, 6, 25], just to name a few. Most of these existing work and theoretical analyses are limited to the following settings (i) the strongly-convex-strongly-concave setting (e.g., [45, 29, 15]), (ii) the general convexconcave setting (e.g., [32, 34]), and (iii) the special bilinear convex-concave setting (e.g., [5, 48, 7]. The lower complexity bounds for these three settings established in [50], [33], [37], respectively, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. can be attained by some existing algorithms. For example, extragradient method (EG) achieves the optimal O(1/ ) complexity for smooth convex-concave minimax problems, and the optimal O(κ log(1/ )) complexity for well-balanced strongly-convex-strongly-concave minimax problems, where the x-component and y-component of the objective share the same condition number κ [50]. However, there are relatively few results outside of these settings. Of particular interests are the following two settings: f(x, ·) is concave but not strongly-concave for any x ∈ X , while f(·, y) could be strongly-convex or even nonconvex. 
Strongly-convex-concave minimax optimization covers broad applications in game theory, imaging, distributionally robust optimization, etc. While the special bilinear case of this setting has been studied extensively in the literature, the general case is less explored. In fact, strongly-convex-concave minimax optimization has also been routinely used as a building block for solving nonconvex-concave minimax problems [40, 44]. Hence, we mainly focus on the strongly-convex-concave setting. For strongly-convex-concave minimax problems, the lower complexity bound of first-order algorithms is Ω ( `/ √ µ ) for achieving an -duality-gap [37], where ` is the smoothness constant and µ is the strong convexity constant. Recently, [44] proposed the so-called dual implicit accelerated gradient algorithm (DIAG) that achieves the first-order oracle complexity of O ( `3/2/(µ √ ) log2(1/ ) ) . A similar complexity bound was obtained from the primal-dual smoothing method in [51]. More recently, [24] introduced the MINIMAX-APPA algorithm that further improves the complexity by shaving off a factor of O( √ `/µ), yielding a near-optimal convergence rate up to the logarithmic factor. However, these algorithms are fairly complicated as they stack several procedures including accelerated gradient descent on x, accelerated gradient ascent on y, and accelerated proximal point algorithm, in different manners, thus requiring at least three loops. In addition to the complicated procedure, the latter two algorithms require an additional layer of smoothing, and solve the surrogate problem minx∈X maxy∈Y f(x, y)+O( )‖y‖2. In practice, how to select a good smoothing parameter of order O( ) remains elusive. Meanwhile, it is unclear how these sophisticated algorithms can be integrated with variance-reduction techniques to solve strongly-convex-concave minimax problems with finite-sum structure efficiently. Most existing variance-reduced algorithms in minimax optimization focus on strongly-convexstrongly-concave setting, e.g., SVRG and SAGA [38], SPD1-VR [43], SVRE [6], Point-SAGA [26], primal-dual SVRG [11], variance reduced prox-method [4], etc. These algorithms typically preserve the linear convergence of batch algorithms, yet with cheaper per-iteration cost and improved complexity. Outside of this regime, few results are known [27, 49]. To the best of our knowledge, the design of efficient variance reduction methods for finite-sum structured minimax problems under the strongly-convex-concave or nonconvex-concave settings remains largely unexplored. This raises the question: can we simply leverage the rich off-the-shelf methods designed for stronglyconvex-strongly-concave minimax problems to these unexplored settings of interest? Inspired by the success of the Catalyst framework and accelerated APPA that use gradient-based algorithms originally designed for strongly convex minimization problems to minimize convex/nonconvex objectives [22, 21, 39, 13], we introduce a generic Catalyst framework for minimax optimization. Rooted in an inexact accelerated proximal point framework, the idea is to repeatedly solve the following auxiliary strongly-convex-strongly-concave problem using an existing methodM: minx∈X maxy∈Y f(x, y) + τx 2 ‖x− x̄t‖ 2 − τy2 ‖y − zt‖ 2. (2) While the algorithmic extension looks straightforward, selecting appropriate proximal parameters τx, τy, the prox centers x̄t, zt, and the methodM for solving the auxiliary problems, are critical and make a huge difference in the overall complexity. 
Our key insight is that when the condition numbers of the auxiliary problems are well balanced, they become relatively easy to solve, and simply applying existing algorithms such as the extragradient method as M would suffice. For instance, in the strongly-convex-concave setting, we set τ_x = 0, τ_y = µ. In sharp contrast, the MINIMAX-APPA algorithm [24] uses τ_x on the order of ℓ and τ_y = O(ε), which results in extra complications (i.e., a two-loop algorithm) in solving the auxiliary problems.

Based on the generic Catalyst framework, we establish a number of interesting results: (i) For strongly-convex-concave minimax optimization, we develop a family of two-loop algorithms with near-optimal complexity and reduced order of the logarithmic factor. In fact, simply combining Catalyst with the extragradient method yields the complexity O(ℓ/√(µε) · log(1/ε)), which improves over all existing methods, as shown in Table 1. (ii) For nonconvex-concave minimax optimization, we provide a simple two-time-scale inexact proximal point algorithm for finding an ε-stationary point that matches the state-of-the-art complexity of Õ(ℓ²ε⁻³). (iii) For minimax problems with finite-sum structure, we provide a family of variance-reduced algorithms for the strongly-convex-concave setting, improving the Õ(nℓ̄/√(µε)) complexity of the best batch algorithm to Õ(ℓ̄²/√(µ³ε) ∨ n^{3/4}ℓ̄^{1/2}/√ε), and to Õ(ℓ̄/√(µε) ∨ n^{1/2}ℓ̄^{1/2}/√ε) under an additional cocoercive-gradient assumption. When extending to the nonconvex-concave setting, we improve the Õ(nℓ̄²ε⁻³) complexity of the best batch algorithm to Õ(n^{3/4}ℓ̄²ε⁻³), and to Õ(n^{1/2}ℓ̄²ε⁻³) with cocoercive gradients. Here ℓ̄ is the average of the smoothness constants of the components.

For ease of notation, we refer to the strongly-convex-strongly-concave setting as SC-SC for short, or (µ₁, µ₂)-SC-SC if the strong convexity and strong concavity constants are given by µ₁, µ₂. Similarly, SC-C or µ-SC-C refers to the strongly-convex-concave setting, and NC-C to the nonconvex-concave setting. Throughout the paper, ‖·‖ stands for the standard ℓ₂-norm.

2 A Catalyst Framework for SC-C Minimax Optimization

In this section, we focus on solving strongly-convex-concave minimax problems and introduce a general Catalyst scheme. We formally make the following assumptions.

Assumption 1 (SC-C). f(·, y) is µ-strongly-convex for any y in Y, i.e.,

f(x₁, y) ≥ f(x₂, y) + ∇_x f(x₂, y)ᵀ(x₁ − x₂) + (µ/2)‖x₁ − x₂‖²,  ∀x₁, x₂ ∈ X,

and f(x, ·) is concave for all x in X. X and Y are convex and closed sets, and Y is bounded with diameter D_Y = max_{y,y′∈Y} ‖y − y′‖. There exists at least one saddle point (x*, y*) ∈ X × Y, which satisfies max_{y∈Y} f(x*, y) ≤ f(x*, y*) ≤ min_{x∈X} f(x, y*).

Assumption 2 (Lipschitz gradient). There exists a positive constant ℓ such that

max{‖∇_y f(x₁, y₁) − ∇_y f(x₂, y₂)‖, ‖∇_x f(x₁, y₁) − ∇_x f(x₂, y₂)‖} ≤ ℓ[‖x₁ − x₂‖ + ‖y₁ − y₂‖]

holds for all x₁, x₂ ∈ X, y₁, y₂ ∈ Y.

The goal is to find an ε-saddle point (x̄, ȳ) such that gap_f(x̄, ȳ) := max_{y∈Y} f(x̄, y) − min_{x∈X} f(x, ȳ) ≤ ε. We call gap_f(x̄, ȳ) the primal-dual gap, which bounds both the primal optimality gap and the dual optimality gap. If ε = 0, then (x̄, ȳ) is a saddle point.

We present a generic Catalyst scheme in Algorithm 1. Analogous to its prototype [22, 39], this scheme consists of several important components: an inexact accelerated proximal point step as the wrapper, a linearly-convergent first-order method M as the workhorse, as well as carefully chosen parameters and stopping criteria.
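Before turning to the scheme itself, here is a small numerical illustration (ours, not from the paper) of the primal-dual gap for a toy µ-SC-C instance where both inner problems have closed forms: f(x, y) = (µ/2)x² + xy with X = R and Y = [−1, 1], so that max_y f(x̄, y) = (µ/2)x̄² + |x̄| and min_x f(x, ȳ) = −ȳ²/(2µ).

def primal_dual_gap(x_bar, y_bar, mu=1.0):
    """gap_f(x_bar, y_bar) = max_y f(x_bar, y) - min_x f(x, y_bar) for the toy
    objective f(x, y) = mu/2 * x**2 + x*y, with X = R and Y = [-1, 1]."""
    best_response_max = 0.5 * mu * x_bar**2 + abs(x_bar)   # max over y in [-1, 1]
    best_response_min = -y_bar**2 / (2.0 * mu)             # min over x in R
    return best_response_max - best_response_min

print(primal_dual_gap(0.0, 0.0))   # 0.0: (0, 0) is the saddle point
print(primal_dual_gap(0.1, 0.2))   # small positive gap away from the saddle

The gap is zero exactly at the saddle point and positive elsewhere, which is the sense in which an ε-saddle point approximates a solution.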
Algorithm 1 Catalyst for SC-C Minimax Optimization
1: Input: initial point (x₀, y₀), parameter τ > 0
2: Initialization: α₁ = 1, v₀ = y₀
3: for t = 1, 2, ..., T do
4:   Set z_t = α_t v_{t−1} + (1 − α_t) y_{t−1}.
5:   Find an inexact solution (x_t, y_t) to the following problem with algorithm M
       min_{x∈X} max_{y∈Y} [ f̃_t(x, y) := f(x, y) − (τ/2)‖y − z_t‖² ]   (⋆)
     such that
       f(x_t, y_t) − min_{x∈X} f(x, y_t) ≤ ε^{(t)}  and  ∇_y f̃_t(x_t, y_t)ᵀ(y − y_t) ≤ ε^{(t)}, ∀y ∈ Y.   (3)
6:   v_t = y_{t−1} + (1/α_t)(y_t − y_{t−1})
7:   Choose α_{t+1} ∈ [0, 1] such that (1 − α_{t+1})/α_{t+1}² = 1/α_t².
8: end for
9: Output: (x̄_T, y_T) with x̄_T = Σ_{t=1}^T (1/α_t) x_t / Σ_{m=1}^T (1/α_m).

Inexact accelerated proximal point step. The main idea is to repeatedly solve a series of regularized problems obtained by adding a quadratic term in y to the original problem:

min_{x∈X} max_{y∈Y} [ f̃_t(x, y) := f(x, y) − (τ/2)‖y − z_t‖² ],   (⋆)

where τ > 0 is a regularization parameter (to be specified later) and z_t is the prox-center. The prox-centers {z_t}_t are built on extrapolation steps of Nesterov [35]. Notably, this step can also be viewed as applying the original Catalyst scheme [22] to the dual function h(y) := min_{x∈X} f(x, y). The major distinction is that we do not have access to the closed-form dual function, which makes it difficult to measure the inexactness of the auxiliary problems and to evaluate solution quality in terms of the primal-dual gap rather than dual optimality.

Linearly-convergent algorithm M. By construction, the series of auxiliary problems (⋆) are (µ, τ)-SC-SC. Thus, they can be solved by a wide spectrum of first-order algorithms established in the literature at a linear convergence rate, including gradient descent ascent (GDA), the extragradient method (EG), optimistic gradient descent ascent (OGDA), SVRG, to name a few. Yet the dependence on the condition number may vary across different algorithms. We assume that any deterministic algorithm M, when solving the (µ, τ)-SC-SC minimax problem, has a linear convergence rate such that

‖x_k − x*‖² + ‖y_k − y*‖² ≤ (1 − 1/∆_{M,τ})^k [‖x₀ − x*‖² + ‖y₀ − y*‖²],   (4)

and any stochastic algorithm M satisfies

E[‖x_k − x*‖² + ‖y_k − y*‖²] ≤ (1 − 1/∆_{M,τ})^k [‖x₀ − x*‖² + ‖y₀ − y*‖²],   (5)

where ∆_{M,τ} depends on τ and the algorithm M. For instance, when EG or OGDA is adopted, ∆_{M,τ} = (ℓ + τ)/(4 min{µ, τ}) [45, 15, 2]; when SVRG or SAGA is adopted, ∆_{M,τ} ∝ n + ((ℓ + τ)/min{µ, τ})², provided that the objective has the finite-sum structure and each component is ℓ-smooth [38].

Stopping criteria. To guarantee the overall convergence in terms of the primal-dual gap, it is necessary to solve the auxiliary problem (⋆) approximately to moderate accuracy and to ensure that the entire pair (x, y) converges properly. For the sake of generality, we adopt the criterion specified in (3) in our generic scheme. This stopping criterion can be achieved by most existing minimax optimization algorithms after sufficiently many iterations. Yet it can still be hard to check in practice, because min_{x∈X} f(x, y_t) and max_{y∈Y} ∇_y f̃_t(x_t, y_t)ᵀ(y − y_t) are not always computable. The following lemma shows that this issue can be alleviated at the minor cost of one full gradient evaluation and one projection step.

Lemma 2.1. Consider a function f̃(x, y) that is (µ₁, µ₂)-SC-SC and has ℓ̃-Lipschitz gradient on X × Y. Let z* = (x*, y*) be the saddle point, i.e., the solution to the minimax optimization min_{x∈X} max_{y∈Y} f̃(x, y).
For any point z = (x, y) in X × Y, we define [z]_β = ([x]_β, [y]_β) with β > 2ℓ̃ to be the point after one step of projected gradient descent ascent:

[x]_β = P_X( x − (1/β)∇_x f̃(x, y) ),  [y]_β = P_Y( y + (1/β)∇_y f̃(x, y) ).

Then we have

1. gap_{f̃}([z]_β) ≤ A‖z − z*‖², and ∇_y f̃([x]_β, [y]_β)ᵀ(y − [y]_β) ≤ A‖z − z*‖² + 2βD_Y‖z − z*‖ for all y ∈ Y;
2. ‖z − z*‖ ≤ ((β + ℓ̃)/µ̃)‖z − [z]_β‖, and ‖z − [z]_β‖² ≤ (2/(1 − ℓ̃/β)³)‖z − z*‖²,

where A = β + 2βℓ̃²/µ̃² + 4βℓ̃²/(µ̃²(1 − ℓ̃/β)³) and µ̃ = min{µ₁, µ₂}.

Based on this observation, we can therefore use the following easy-to-check criterion:

‖x − [x]_β‖² + ‖y − [y]_β‖² ≤ min{ µ̃²ε^{(t)}/(2A(β + ℓ̃)²), ( µ̃ε^{(t)}/(4βD_Y(β + ℓ̃)) )² }.   (6)

Note that many algorithms, such as EG or GDA, already compute ([x]_β, [y]_β) with stepsize 1/β, so there is no additional computational cost in checking criterion (6).

Choice of regularization parameter. As we can see, the smaller τ is, the closer the auxiliary problem is to the original problem. However, a smaller τ also worsens the conditioning of the auxiliary problems, making them harder to solve. We will discuss the dependence of the inner- and outer-loop complexities on τ and provide a guideline for choosing τ for different M.

As a final remark, we stress that the idea of using the (accelerated) proximal point algorithm for minimax optimization is by no means new. Similar ideas have appeared in different contexts; however, they differ from our scheme in one way or another. To list a few: [41, 30, 23, 38] considered the inexact PPA for C-C or NC-NC minimax problems by adding quadratic terms in both x and y; [40, 44] considered the inexact PPA for NC-C minimax problems by adding a quadratic term in x; [24] considered the inexact accelerated PPA for SC-SC minimax problems by adding a quadratic term in x. On the other hand, a number of works, e.g., [19, 24, 51], also add a quadratic term in y to the minimax problem, but in the form O(ε)‖y‖², which is completely different from PPA. Besides these differences, the subroutines used to solve the auxiliary minimax problems and the choices of regularization parameters in these works are quite distinct from ours. Lastly, we point out that the proposed framework is closely related to the inexact accelerated augmented Lagrangian method designed for linearly constrained optimization problems [18], which can be viewed as a special case by setting f(x, y) as the Lagrangian dual. In spite of this, the approaches for solving the auxiliary problems are completely different, as is the theoretical analysis.

3 Main Results

3.1 Convergence Analysis

In order to derive the total complexity, we first establish the complexity of the outer loop and then combine it with the inner-loop complexity of algorithm M. We then discuss the optimal choice of the regularization parameter τ for different settings.

Theorem 3.1 (Outer-loop complexity). Suppose the function f satisfies Assumptions 1 and 2. The output (x̄_T, y_T) from Algorithm 1 satisfies

gap_f(x̄_T, y_T) ≤ α_T² [ (τ/2)D_Y² + 2 Σ_{t=1}^T (1/α_t²) ε^{(t)} ],   (7)

where D_Y = max_{y,y′∈Y} ‖y − y′‖ is the diameter of Y. If we further choose ε^{(t)} = 3τD_Y²α_t²/(2π²t²), then

gap_f(x̄_T, y_T) ≤ α_T² τ D_Y².   (8)

Remark 1. The above result holds without requiring strong convexity in x; convexity-concavity of f(x, y) is sufficient. In addition, the regularization parameter τ can be any positive value. Hence, Algorithm 1 is quite flexible. Because 2/(t + 2)² ≤ α_t² ≤ 4/(t + 1)² [39], Theorem 3.1 implies that the algorithm finds a point with primal-dual gap at most ε within O(√(τ/ε) · D_Y) outer-loop iterations.
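To make Algorithm 1 concrete before turning to the inner loop, here is a compact Python sketch (ours, not the authors' code). It assumes X = R^d, takes a Euclidean projection onto Y as input, uses plain extragradient on the regularized subproblem (⋆) warm-started at (x_{t−1}, z_t), and stops the inner solver with the displacement of one projected gradient descent-ascent step, in the spirit of criterion (6) but with a fixed tolerance instead of the shrinking schedule ε^{(t)} of Theorem 3.1.

import numpy as np

def eg_solve(gx, gy, x0, y0, step, max_iters, tol, proj_y=lambda y: y):
    """Extragradient for an SC-SC subproblem given by partial-gradient oracles gx, gy.
    The extrapolation point doubles as the projected-GDA point used in criterion (6)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(max_iters):
        x_half = x - step * gx(x, y)                      # [x]_beta with beta = 1/step
        y_half = proj_y(y + step * gy(x, y))              # [y]_beta
        if np.sum((x - x_half)**2) + np.sum((y - y_half)**2) <= tol:
            break
        x = x - step * gx(x_half, y_half)
        y = proj_y(y + step * gy(x_half, y_half))
    return x, y

def catalyst_scc(gx, gy, x0, y0, tau, T, step, inner_iters, tol, proj_y=lambda y: y):
    """Algorithm 1: Catalyst for SC-C minimax with an extragradient subroutine."""
    alpha, v = 1.0, y0.copy()
    x_prev, y_prev = x0.copy(), y0.copy()
    x_sum, w_sum = np.zeros_like(x0, dtype=float), 0.0
    for t in range(1, T + 1):
        z = alpha * v + (1.0 - alpha) * y_prev
        # subproblem (*): f(x, y) - tau/2 ||y - z||^2, so only the y-gradient changes
        gy_reg = lambda x, y, z=z: gy(x, y) - tau * (y - z)
        x_t, y_t = eg_solve(gx, gy_reg, x_prev, z, step, inner_iters, tol, proj_y)
        v = y_prev + (y_t - y_prev) / alpha
        x_sum += x_t / alpha
        w_sum += 1.0 / alpha
        alpha = 0.5 * alpha * (np.sqrt(alpha**2 + 4.0) - alpha)  # (1 - a)/a^2 = 1/alpha^2
        x_prev, y_prev = x_t, y_t
    return x_sum / w_sum, y_prev      # (x_bar_T, y_T)

With τ = µ, each subproblem is (µ, µ)-SC-SC, i.e., exactly the well-balanced regime in which plain EG is efficient.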
Notice that the outer-loop complexity decreases as τ decreases. We now turn to the inner-loop complexity. By construction, the auxiliary problem (⋆) is (µ, τ)-SC-SC and ℓ̃-smooth with ℓ̃ = ℓ + τ, so it can be solved by many existing first-order algorithms at a linear convergence rate. Below we present the complexity of the inner loop with warm starts.

Proposition 3.1 (Inner-loop complexity). Suppose we apply a linearly convergent algorithm M described by (4) or (5) to solve the auxiliary problem (⋆), and set the initial point to (x_{t−1}, z_t) at iteration t. Let K(ε^{(t)}) denote the number of iterations (expected number of iterations if M is stochastic) for M to find a point satisfying (6). Then K(ε^{(t)}) is O( ∆_{M,τ} log( ℓ̃·D_Y / (min{1, µ, τ}·ε^{(t)}) ) ).

In practice, choosing a good initial point to warm-start algorithm M can help accelerate convergence. The above proposition shows that, in theory, this simple warm-start strategy helps alleviate the logarithmic dependence on the distance from the initial point to the optimal point. Without the warm-start strategy, one would require X to be bounded and K(ε^{(t)}) = O( ∆_{M,τ} log((D_X + D_Y)/ε^{(t)}) ). Here we do not require boundedness of X.

As we can see, the choice of τ plays a crucial role, since it affects both the inner-loop and outer-loop complexities. Combining the above two results immediately leads to the total complexity.

Corollary 3.2 (Total complexity). Suppose Assumptions 1 and 2 hold, and the subproblems are solved by a linearly convergent algorithm M to satisfy the stopping criterion (3) or (6) with accuracy ε^{(t)} as specified in Theorem 3.1. For Algorithm 1 to find an ε-saddle point, the total number of gradient evaluations (expected number if M is stochastic) is

O( ∆_{M,τ} √(τ/ε) · D_Y · log( ℓ·D_Y / (min{1, µ, τ}·ε) ) ).

For any choice of linearly-convergent method M and any regularization parameter τ, the oracle complexity is guaranteed to be O( (D_Y/√ε) log(D_Y/ε) ), which is optimal both in ε and in D_Y up to a logarithmic factor [37]. The dependence on the condition number is determined solely by the term ∆_{M,τ}√τ, which we analyze in detail below for specific algorithms.

3.2 Specific Algorithms and Complexities

In order to minimize the total complexity, we should choose the regularization parameter τ that minimizes ∆_{M,τ}√τ over τ > 0. Below we derive the optimal choice of τ for different algorithms M and present the corresponding total complexity. Table 2 summarizes this for several algorithms we consider.

Deterministic first-order algorithms. If we adopt the simplest gradient descent ascent (GDA) as M for solving the subproblem, then ∆_{M,τ} = ((ℓ + τ)/(2 min{µ, τ}))² [12]. If M is the extragradient method (EG) or optimistic gradient descent ascent (OGDA), then ∆_{M,τ} = (ℓ + τ)/(4 min{µ, τ}) [45, 15, 2]. Minimizing ∆_{M,τ}√τ in both cases yields that the optimal choice of τ is µ. In particular, when using EG or OGDA, the total complexity becomes

O( (ℓ·D_Y/√(µε)) · log( ℓ·D_Y / (min{1, µ}·ε) ) ).

Remark 2. This complexity matches the lower complexity bound for this class of problems [37] in ε, ℓ, µ, and D_Y, up to a logarithmic factor. In addition, it improves over the best-known result, recently established in [24], which carries a cubic order of the logarithmic factor and requires boundedness of X. A key observation is that by setting τ = µ, the auxiliary problem (⋆) becomes (µ, µ)-SC-SC, and it is known that simple EG or OGDA achieves the optimal complexity for solving this class of well-balanced SC-SC problems [50].
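The choice τ = µ can also be checked numerically. The snippet below (ours) evaluates the complexity proxy ∆_{M,τ}√τ, with the EG/OGDA dependence taken proportional to (ℓ + τ)/min{µ, τ}, over a grid of τ and reports the minimizer, which sits at (the grid point closest to) µ.

import numpy as np

def eg_cost(tau, ell, mu):
    """Proxy for the total complexity: Delta_{M,tau} * sqrt(tau), with
    Delta_{M,tau} proportional to (ell + tau) / min(mu, tau) as for EG/OGDA."""
    return (ell + tau) / np.minimum(mu, tau) * np.sqrt(tau)

ell, mu = 100.0, 0.5
taus = np.logspace(-4, 4, 2001)
print(taus[np.argmin(eg_cost(taus, ell, mu))])   # close to mu = 0.5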
In [44, 24], by contrast, the subproblems are harder to solve because of ill-balanced condition numbers, which leads to an inferior complexity. Besides the complexity improvement, our algorithm is significantly simpler and easier to implement than the current state of the art. The DIAG algorithm in [44] applies Nesterov's accelerated gradient ascent to the dual function and an additional two-loop algorithm to solve its subproblems. The MINIMAX-APPA algorithm in [24] adds a smoothing term in y and applies a triple-loop algorithm to solve the auxiliary SC-SC problem. In contrast, our algorithm only requires two loops, does not require the target accuracy ε to be fixed in advance, and has fewer tuning parameters. The results are summarized in Table 1.

Stochastic variance-reduced algorithms. We now consider minimax problems with finite-sum structure, min_{x∈X} max_{y∈Y} f(x, y) := (1/n) Σ_{i=1}^n f_i(x, y), where each component f_i has ℓ_i-Lipschitz gradients. Denote by ℓ̄ = (1/n) Σ_{i=1}^n ℓ_i the average of the smoothness constants. The resulting SC-SC subproblem (⋆) also has the finite-sum structure and can be solved by a number of linearly-convergent variance-reduced algorithms, such as SVRG, SAGA [38], and SVRE [6]. If using SVRG or SAGA as M, we have ∆_{M,τ} ∝ n + ((ℓ̄ + τ)/min{µ, τ})² [38]. When using SVRE as M, ∆_{M,τ} ∝ n + (ℓ̄ + τ)/min{µ, τ}, assuming that the gradients are also ℓ_i-cocoercive [6].¹ In particular, when using SVRE, the optimal τ is µ if ℓ̄/µ ≥ n and ℓ̄/n otherwise. Therefore, the total complexity is Õ(ℓ̄/√(µε)) if ℓ̄/µ ≥ n, and Õ(n^{1/2}ℓ̄^{1/2}/√ε) otherwise.

Remark 3. In either case, our result improves over the Õ(nℓ̄/√(µε)) complexity obtained when using the batch extragradient method as M. To the best of our knowledge, this is the best complexity established so far for this class of SC-C minimax optimization problems. The results are summarized in Table 2.

4 Nonconvex-Concave Minimax Optimization

We now turn to nonconvex-concave minimax problems (1) and formally make Assumption 3. Denote by g(x) = max_{y∈Y} f(x, y) the primal function, which is ℓ-weakly-convex [44]. The goal is to find an ε-stationary point of g(x). For any x̄, consider the Moreau envelope of g:

ψ_{1/τ_x}(x̄) := min_{x∈X} { g_{τ_x}(x; x̄) := g(x) + (τ_x/2)‖x − x̄‖² }.

The norm of the gradient ‖∇ψ_{1/τ_x}(x̄)‖ is commonly used to measure the quality of a solution x̄ [10]. We call x̄ an ε-stationary point of g if ‖∇ψ_{1/τ_x}(x̄)‖ ≤ ε.

Assumption 3. f(x, ·) is concave for any x in X. X and Y are convex and closed sets, and Y is bounded with diameter D_Y = max_{y,y′∈Y} ‖y − y′‖.

Our modified Catalyst framework is described in Algorithm 2; it further applies the proximal point algorithm to the primal function g(x) by adding a quadratic term in x, in the same spirit as [40, 44, 24]. The main difference lies in that we use Algorithm 1 to solve subproblems of the form min_{x∈X} g_{τ_x}(x; x_t). We now use τ_y to denote the parameter in Algorithm 1 in order to distinguish it from τ_x. Algorithm 2 can be viewed as a two-time-scale inexact proximal point algorithm, which repeatedly solves the subproblem

min_{x∈X} max_{y∈Y} f(x, y) + (τ_x/2)‖x − x̄_t‖² − (τ_y/2)‖y − z_t‖².   (9)

We call it two-time-scale not only because τ_x and τ_y differ, but also because the prox center of y comes from the extrapolation step of the acceleration and is updated more frequently than the prox center of x. The subproblem (9) is (τ_x − ℓ, τ_y)-SC-SC if τ_x > ℓ, and thus can be solved efficiently.

¹ SVRE requires assuming that each component has ℓ_i-cocoercive gradient, which is a stronger assumption than ℓ_i-Lipschitz gradient.
Algorithm 2 Catalyst for NC-C Minimax Optimization
1: Input: initial point (x₀, y₀), parameter τ_x > ℓ
2: for t = 0, 1, ..., T − 1 do
3:   use Algorithm 1 to find x_{t+1} such that g_{τ_x}(x_{t+1}; x_t) ≤ min_{x∈X} g_{τ_x}(x; x_t) + ε̄
4: end for
5: Output: x̂_T, sampled uniformly from x₀, ..., x_{T−1}.

Theorem 4.1 (Outer-loop complexity). Suppose f satisfies Assumptions 2 and 3. The output from Algorithm 2 satisfies

E‖∇ψ_{1/τ_x}(x̂_T)‖² ≤ (2τ_x²/(τ_x − ℓ)) [ (g(x₀) − g*)/T + ε̄ ],

where g* = min_{x∈X} g(x). If T = 4τ_x²(g(x₀) − g*)/((τ_x − ℓ)ε²) and ε̄ = (τ_x − ℓ)ε²/(2τ_x²), then E‖∇ψ_{1/τ_x}(x̂_T)‖ ≤ ε.

Theorem 4.1 implies that the outer-loop complexity is O(ε⁻²). In the following corollaries, we specify the choices of τ_x, τ_y, and M for solving the subproblems, together with the total complexity.

Corollary 4.2. Suppose f satisfies Assumptions 2 and 3. If we choose τ_x = 2ℓ, τ_y = ℓ and use EG/OGDA/GDA to solve the subproblems, then Algorithm 2 finds an ε-stationary point with a total number of gradient evaluations of Õ(ℓ²ε⁻³).

Corollary 4.3. Suppose f(x, y) = (1/n) Σ_{i=1}^n f_i(x, y) satisfies Assumption 3 and each component f_i has ℓ_i-Lipschitz gradient with ℓ̄ = (1/n) Σ_{i=1}^n ℓ_i. If we choose τ_x = 2ℓ̄, τ_y = ℓ̄/√n and use SVRG/SAGA to solve the subproblems, then Algorithm 2 finds an ε-stationary point with total complexity Õ(n^{3/4}ℓ̄²ε⁻³). If we further assume each f_i has ℓ_i-cocoercive gradient, choose τ_x = 2ℓ̄, τ_y = ℓ̄/n and use SVRE to solve the subproblems, then Algorithm 2 finds an ε-stationary point with total complexity Õ(n^{1/2}ℓ̄²ε⁻³).

Corollary 4.2 shows that simply using Catalyst-EG/OGDA achieves the complexity Õ(ℓ²ε⁻³). This matches the current state-of-the-art complexity for nonconvex-concave minimax optimization [24, 44, 51, 36]. Note that our algorithm is much simpler than the existing algorithms: e.g., Prox-DIAG [44] requires a four-loop procedure, whereas MINIMAX-APPA [24] requires a smoothing step. For problems with finite-sum structure, as shown in Corollary 4.3, using Catalyst-SVRG attains the overall complexity Õ(n^{3/4}ℓ̄²ε⁻³), improving over all existing results. For instance, PG-SVRG proposed in [40] gives Õ(nε⁻² + ε⁻⁶), which has a much worse dependence on ε and n.

5 Numerical Experiments

We consider the wireless communication problem in [3]. Given n communication channels with signal power p ∈ Rⁿ and noise power σ ∈ Rⁿ, the capacity of channel i is proportional to log(1 + β_i p_i/(σ_i⁰ + σ_i)), where β_i > 0 and σ_i⁰ are known constants. We would like to maximize the channel capacity under adversarially chosen noise [14]. This can be formulated as an SC-C minimax problem:

min_p max_σ f(p, σ) := − Σ_{i=1}^n log( 1 + β_i p_i/(σ_i⁰ + σ_i) ) + (λ/2)‖p‖²,  subject to 1ᵀσ = N, p ≥ 0, σ ≥ 0.

We generate two datasets with (1) β = 1 and σ⁰ ∈ R^{1000} drawn uniformly from [0, 100]^{1000}, and (2) β = 1 and σ⁰ ∈ R^{500} drawn uniformly from [0, 10]^{500}. In Figure 1, we apply the same stepsizes to EG and to the subroutine in Catalyst-EG, and we compare their convergence with stepsizes ranging from small to large. In Figure 2, we compare four algorithms — extragradient (EG), SVRG, Catalyst-EG, and Catalyst-SVRG — with best-tuned stepsizes, and evaluate their errors based on (a) the distance to the limit point, ‖p_t − p*‖ + ‖σ_t − σ*‖, and (b) the norm of the gradient mapping, ‖∇_p f(p_t, σ_t)‖ + ‖σ_t − P_Σ(σ_t + β∇_σ f(p_t, σ_t))‖/β. In Figure 3, we compare EG, Catalyst-EG, and DIAG with best-tuned stepsizes. Although EG with averaged iterates has an optimal complexity of O(1/ε) for solving convex-concave minimax problems [32], its convergence behavior for SC-C minimax optimization remains unknown.
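For reference, here is a sketch (ours, not the authors' code) of the experimental objective, its partial gradients, and the Euclidean projection P_Σ onto the noise constraint set Σ = {σ ≥ 0, 1ᵀσ = N} (a scaled simplex), together with the gradient-mapping error (b) described above; the sort-based simplex projection is the standard routine, and the stepsize argument here plays the role of β in (b).

import numpy as np

def grads(p, sigma, beta, sigma0, lam):
    """Partial gradients of f(p, sigma) = -sum_i log(1 + beta_i p_i/(sigma0_i + sigma_i))
    + lam/2 * ||p||^2."""
    denom = sigma0 + sigma + beta * p
    gp = -beta / denom + lam * p
    gsigma = beta * p / ((sigma0 + sigma) * denom)
    return gp, gsigma

def project_simplex(v, total):
    """Euclidean projection of v onto {s >= 0, sum(s) = total} (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - total
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def grad_map_error(p, sigma, beta, sigma0, lam, N, step):
    """Error (b): ||grad_p f|| + ||sigma - P_Sigma(sigma + step * grad_sigma f)|| / step."""
    gp, gs = grads(p, sigma, beta, sigma0, lam)
    return np.linalg.norm(gp) + np.linalg.norm(sigma - project_simplex(sigma + step * gs, N)) / step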
Both Catalyst-EG and DIAG are designed for SC-C minimax optimization: Catalyst-EG has a complexity of Õ(ℓ/√(µε)) and DIAG has a complexity of Õ(ℓ^{3/2}/(µ√ε)). Here we use the same stepsize for the primal and dual variables in EG and in its Catalyst counterpart. In Catalyst, we use ‖x_t − P_X(x_t − β∇_x f(x_t, y_t))‖/β + ‖y_t − P_Y(y_t + β∇_y f(x_t, y_t))‖/β as the stopping criterion for the subproblems, as discussed in Section 2. We control the subroutine accuracy ε^{(t)} as max{c/t⁸, ε̃}, where c is a constant and ε̃ is a prefixed threshold. In contrast, DIAG does not provide an easy-to-verify stopping criterion for its subroutine. We stop the subroutine of DIAG based on the quantity ‖x_k − x_{k−1}‖² + ‖y_k − y_{k−1}‖², where k indexes the subroutine iterations. We note that there is no theoretical convergence analysis for SVRG in the SC-C setting. To form a fair comparison with SVRG, we report the last-iterate error for Catalyst-SVRG rather than the error of the averaged iterates.

We observe that Catalyst-EG performs better than EG and DIAG. Under the same stepsize, the Catalyst framework significantly speeds up EG. SVRG, albeit without theoretical guarantees in the SC-C setting, converges much faster than the batch algorithms. Catalyst-SVRG also greatly improves over SVRG and outperforms all other algorithms.

Acknowledgments and Disclosure of Funding

This work was supported in part by ONR grant W911NF-15-1-0479, NSF CCF-1704970, and NSF CMMI-1761699.

Broader Impact

Our work provides a family of simple and efficient algorithms for some classes of minimax optimization. We believe our theoretical results advance many applications in ML that require minimax optimization. Of particular interest are deep learning and fair machine learning. Deep learning is used in many safety-critical environments, including self-driving cars, biometric authentication, and so on. There is growing evidence that deep neural networks are vulnerable to adversarial attacks. Since adversarial attacks and defenses are often framed as two-player games, progress in minimax optimization will empower both. Furthermore, minimax optimization problems provide insight into the balance and equilibrium between attacks and defenses. As a consequence, making good use of these techniques will boost the robustness of deep learning models and strengthen the security of their applications. Fairness in machine learning has attracted much attention because it is directly relevant to policy design and social welfare. For example, courts use COMPAS for recidivism prediction. Researchers have shown that bias is introduced into many machine learning systems through skewed data, limited features, etc. One approach to mitigating this is adding constraints into the system, which naturally gives rise to minimax problems.
1. What is the main contribution of the paper regarding strongly convex-weakly concave minimax problems? 2. What are the strengths of the proposed framework, particularly in terms of flexibility and convergence behavior? 3. What are the weaknesses of the paper regarding its narrow niche application and treatment of related work? 4. How could the authors improve their method's comparison with other recent methods involving efficient Hessian-vector products? 5. Are there any concerns about the choice of hyperparameters and their impact on the experimental results?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The present work proposes to solve strongly convex-weakly concave minimax problems by solving a series of proximal problems that are regularized with a quadratic term in the concave problem, resulting in strongly convex and strongly concave proximal problems. ****************************************************************************************** I thank the authors for addressing my questions in their rebuttal. I will keep my score as-is, since I am not fully convinced how impactful their method will be in practice, but I believe that the authors have done all that can be expected in a NeurIPS paper to provide evidence for this claim. ****************************************************************************************** The inner-loop strongly convex-concave problems can be solved efficiently using methods such as the extragradient method and allow for the simple incorporation of off-the-shelf variance reduction methods.

Strengths
The authors propose a framework for using existing algorithms for strongly convex-concave minimax problems to solve problems that are only weakly concave. The availability of algorithms for these types of problems provides considerable flexibility, and both theoretical and empirical results support its superior convergence behavior.

Weaknesses
The first point of criticism is that the strongly convex-weakly concave setting seems to be a narrow niche. This impression is fortified by the fact that the motivation given in the introduction advertises generic minimax problems. The authors mention that "In fact, strongly-convex-concave minimax optimization has been routinely used as a building block for solving nonconvex-concave minimax problems [43, 47]." but I believe this point should be made stronger to adequately motivate the paper. The second point of criticism pertains to the treatment of related work. My understanding is that competing methods are analyzed as using the same step size for both the min and max players, whereas the proposed method has an additional hyperparameter (the regularization of the weakly concave maximizing player) that can induce behavior quite similar to the choice of separate learning rates for the min and max players. Therefore, it seems that a fairer comparison would allow the competing methods to pick separate step sizes for the two players. Related to the above, it is unclear how the hyperparameters were chosen in the experiments. Finally, it seems like a comparison to recently proposed methods involving efficient Hessian-vector products, such as https://arxiv.org/pdf/1905.04926.pdf https://arxiv.org/abs/1808.01531 https://arxiv.org/abs/1709.04326 https://arxiv.org/abs/1905.12103 https://arxiv.org/abs/1705.10461 , would be appropriate.
NIPS
1. What is the main contribution of the paper, and how does it extend the existing Catalyst framework? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and ease of understanding? 3. What are the weaknesses of the paper, especially regarding the choice of parameters and the need for prior knowledge of the accuracy? 4. How does the reviewer assess the novelty and practicality of the proposed method compared to other existing methods in the field? 5. Are there any suggestions or ideas for improving the proposed approach, such as removing the need to predefine the number of iterations or finding a way to automatically determine the parameters?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
After Rebuttal: I am satisfied with the response from the authors as well as with the other reviewers' comments. I leave my score unchanged. %%%%%%%%%%%%%%%%%%%%%%% The authors extend the Catalyst framework of [Lin et al] to smooth minimax optimization. Their contribution is threefold: (1). Strongly-convex-concave: By carefully choosing the parameter in the "proximal point" inner loop of Catalyst, the authors can improve all existing methods, including extragradient, yielding a complexity of O(ell/sqrt(mu epsilon) log(1/epsilon)). (2). Nonconvex-concave setting: the authors use Catalyst to accelerate existing methods to match the state-of-the-art complexity of O(ell^2 epsilon^(-3)). (3). Finite-sum structure: The authors use Catalyst to construct the first variance reduction algorithm for the strongly-convex-concave setting.
Strengths
The claims of the paper seem, to the best of my knowledge, accurate and theoretically grounded. Although I did not go through the details of the proofs, I am fairly confident that the results are true. Although using Catalyst is not new, the way the authors used Catalyst to solve minimax optimization problems is novel. Their approach is quite simple and easy to understand if one already knows Catalyst, and it gives an easy way to accelerate known algorithms in the field. I also like that the algorithm does not require prior knowledge of the accuracy.
Weaknesses
(1). One big weakness with Catalyst in general is how to choose the parameters. In order for Catalyst to work, knowledge of ell and mu is essential. Have the authors considered a way to find these parameters so that the algorithm is more practical? I realize this is also a problem in Catalyst. (2). It is very similar to Catalyst, and in that regard it isn't so surprising that this approach works. However, I do still like it. (3). One downside of the NC-C setting is that you have to predefine the number of iterations to run the inner loop. Is there a way to remove this requirement? This is very similar to predefining the accuracy and using this accuracy in the algorithm.
NIPS
Title A Catalyst Framework for Minimax Optimization Abstract We introduce a generic two-loop scheme for smooth minimax optimization with strongly-convex-concave objectives. Our approach applies the accelerated proximal point framework (or Catalyst) to the associated dual problem and takes full advantage of existing gradient-based algorithms to solve a sequence of well-balanced strongly-convex-strongly-concave minimax problems. Despite its simplicity, this leads to a family of near-optimal algorithms with improved complexity over all existing methods designed for strongly-convex-concave minimax problems. Additionally, we obtain the first variance-reduced algorithms for this class of minimax problems with finite-sum structure and establish faster convergence rate than batch algorithms. Furthermore, when extended to the nonconvex-concave minimax optimization, our algorithm again achieves the state-of-the-art complexity for finding a stationary point. We carry out several numerical experiments showcasing the superiority of the Catalyst framework in practice. 1 Introduction Minimax optimization has been extensively studied in past decades in the communities of mathematics, economics, and operations research. Recent years have witnessed a surge of its applications in machine learning, including generative adversarial networks [16], adversarial training [47, 28], distributionally robust optimization [31, 1], reinforcement learning [8, 9], and many others. The problem of interest in such applications is often a smooth minimax optimization problem (also referred to as saddle point problems): min x∈X max y∈Y f(x, y), (1) where the function f : Rd1 × Rd2 → R is smooth (i.e., gradient Lipschitz), X is a convex set in Rm, and Y is a convex and compact set in Rn. In many machine learning applications, f has a finite sum structure, that is f(x, y) = 1n ∑n i=1 fi(x, y), where each component corresponds to a loss associated with single observation. A significant body of first-order algorithms for minimax optimization exists in the literature, ranging from the classical projection method [42], Korpelevich’s extragradient method [20], Nemirovski’s Mirror Prox algorithm [32], Nesterov’s dual extrapolation method [34], Tseng’s accelerated proximal gradient algorithm [46], to many recent hybrid or randomized algorithms, e.g., [30, 17, 38, 19, 6, 25], just to name a few. Most of these existing work and theoretical analyses are limited to the following settings (i) the strongly-convex-strongly-concave setting (e.g., [45, 29, 15]), (ii) the general convexconcave setting (e.g., [32, 34]), and (iii) the special bilinear convex-concave setting (e.g., [5, 48, 7]. The lower complexity bounds for these three settings established in [50], [33], [37], respectively, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. can be attained by some existing algorithms. For example, extragradient method (EG) achieves the optimal O(1/ ) complexity for smooth convex-concave minimax problems, and the optimal O(κ log(1/ )) complexity for well-balanced strongly-convex-strongly-concave minimax problems, where the x-component and y-component of the objective share the same condition number κ [50]. However, there are relatively few results outside of these settings. Of particular interests are the following two settings: f(x, ·) is concave but not strongly-concave for any x ∈ X , while f(·, y) could be strongly-convex or even nonconvex. 
Strongly-convex-concave minimax optimization covers broad applications in game theory, imaging, distributionally robust optimization, etc. While the special bilinear case of this setting has been studied extensively in the literature, the general case is less explored. In fact, strongly-convex-concave minimax optimization has also been routinely used as a building block for solving nonconvex-concave minimax problems [40, 44]. Hence, we mainly focus on the strongly-convex-concave setting. For strongly-convex-concave minimax problems, the lower complexity bound of first-order algorithms is Ω ( `/ √ µ ) for achieving an -duality-gap [37], where ` is the smoothness constant and µ is the strong convexity constant. Recently, [44] proposed the so-called dual implicit accelerated gradient algorithm (DIAG) that achieves the first-order oracle complexity of O ( `3/2/(µ √ ) log2(1/ ) ) . A similar complexity bound was obtained from the primal-dual smoothing method in [51]. More recently, [24] introduced the MINIMAX-APPA algorithm that further improves the complexity by shaving off a factor of O( √ `/µ), yielding a near-optimal convergence rate up to the logarithmic factor. However, these algorithms are fairly complicated as they stack several procedures including accelerated gradient descent on x, accelerated gradient ascent on y, and accelerated proximal point algorithm, in different manners, thus requiring at least three loops. In addition to the complicated procedure, the latter two algorithms require an additional layer of smoothing, and solve the surrogate problem minx∈X maxy∈Y f(x, y)+O( )‖y‖2. In practice, how to select a good smoothing parameter of order O( ) remains elusive. Meanwhile, it is unclear how these sophisticated algorithms can be integrated with variance-reduction techniques to solve strongly-convex-concave minimax problems with finite-sum structure efficiently. Most existing variance-reduced algorithms in minimax optimization focus on strongly-convexstrongly-concave setting, e.g., SVRG and SAGA [38], SPD1-VR [43], SVRE [6], Point-SAGA [26], primal-dual SVRG [11], variance reduced prox-method [4], etc. These algorithms typically preserve the linear convergence of batch algorithms, yet with cheaper per-iteration cost and improved complexity. Outside of this regime, few results are known [27, 49]. To the best of our knowledge, the design of efficient variance reduction methods for finite-sum structured minimax problems under the strongly-convex-concave or nonconvex-concave settings remains largely unexplored. This raises the question: can we simply leverage the rich off-the-shelf methods designed for stronglyconvex-strongly-concave minimax problems to these unexplored settings of interest? Inspired by the success of the Catalyst framework and accelerated APPA that use gradient-based algorithms originally designed for strongly convex minimization problems to minimize convex/nonconvex objectives [22, 21, 39, 13], we introduce a generic Catalyst framework for minimax optimization. Rooted in an inexact accelerated proximal point framework, the idea is to repeatedly solve the following auxiliary strongly-convex-strongly-concave problem using an existing methodM: minx∈X maxy∈Y f(x, y) + τx 2 ‖x− x̄t‖ 2 − τy2 ‖y − zt‖ 2. (2) While the algorithmic extension looks straightforward, selecting appropriate proximal parameters τx, τy, the prox centers x̄t, zt, and the methodM for solving the auxiliary problems, are critical and make a huge difference in the overall complexity. 
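The regularization in (2) only shifts the gradients of f, so the auxiliary SC-SC problem can be exposed to any off-the-shelf solver through a thin wrapper. The minimal Python sketch below illustrates this; the gradient oracle f_grad and the solver it feeds are assumptions of the sketch, not components specified by the paper.

```python
def make_auxiliary_grad(f_grad, tau_x, tau_y, x_bar, z):
    """Gradient oracle of f(x, y) + tau_x/2 ||x - x_bar||^2 - tau_y/2 ||y - z||^2.

    f_grad(x, y) must return (grad_x f, grad_y f). The wrapper adds tau_x-strong
    convexity in x and tau_y-strong concavity in y, so the result can be handed
    to any SC-SC solver such as EG, OGDA, or GDA.
    """
    def grad(x, y):
        gx, gy = f_grad(x, y)
        return gx + tau_x * (x - x_bar), gy - tau_y * (y - z)
    return grad
```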
Our key insight is that when the condition numbers of the auxiliary problems are well balanced, they become relatively easy to solve and simply applying existing algorithms such as extragradient method asM would suffice. For instance, in the strongly-convex-concave setting, we set τx = 0, τy = µ. In sharp contrast, the MINIMAX-APPA algorithm [24] uses τx = 1` and τy = O( ), which results in extra complications (i.e., a two-loop algorithm) in solving the auxiliary problems. Based on the generic Catalyst framework, we establish a number of interesting results: (i) For strongly-convex-concave minimax optimization, we develop a family of two-loop algorithms with near-optimal complexity and reduced order of the logarithmic factor. In fact, simply combing Catalyst with extragradient method yields the complexity,O ( `/ √ µ log(1/ ) ) , which improves over all existing methods, as shown in Table 1. (ii) For nonconvex-concave minimax optimization, we provide a simple two-time-scale inexact proximal point algorithm for finding an -stationary point that matches the state-of-the-art complexity of Õ ( `2 −3 ) . (iii) For minimax problems with finite-sum structure, we provide a family of variance-reduced algorithms for the strongly-convex-concave setting, improving the Õ ( n¯̀/ √ µ ) complexity of the best batch algorithm to Õ ( ¯̀2/ √ µ3 ∨n 34 ¯̀12 / √ ) , and to Õ ( ¯̀/ √ µ ∨n 12 ¯̀12 / √ ) with additional assumption on cocoercive gradient. When extending to the nonconvex-concave setting, we improve the Õ ( n¯̀2 −3 ) complexity of the best batch algorithm to Õ ( n 3 4 ¯̀2 −3 ) , and to Õ ( n 1 2 ¯̀2 −3 ) with cocoercive gradient. Here ¯̀is the average of smoothness constants of the components. For the ease of notation, we refer to the strongly-convex-strongly-concave setting as SC-SC for short, or (µ1, µ2)-SC-SC if the strong convexity and strong concavity constants are given by µ1, µ2. Similarly, SC-C or µ-SC-C refers to the strongly-convex-concave setting, and NC-C to the nonconvexconcave setting. Throughout the paper, ‖ · ‖ stands for the standard `2-norm. 2 A Catalyst Framework for SC-C Minimax Optimization In this section, we focus on solving strongly-convex-concave minimax problems and introduce a general Catalyst scheme. We formally make the following assumptions. Assumption 1 (SC-C). f(·, y) is µ-strongly-convex for any y in Y , i.e., f(x1, y) ≥ f(x2, y) +∇xf(x2, y)T (x1 − x2) + µ 2 ‖x1 − x2‖2, ∀x1, x2 ∈ X . and f(x, ·) is concave for all x in X . X and Y are convex and closed sets, and Y is bounded with diameter DY = maxy,y′∈Y ‖y− y′‖. There exists at least one saddle point (x∗, y∗) ∈ X ×Y , which satisfies maxy∈Y f(x∗, y) ≤ f(x∗, y∗) ≤ minx∈X f(x, y∗) for all (x, y) ∈ X × Y . Assumption 2 (Lipschitz gradient). There exists a positive constant ` such that max{‖∇yf (x1, y1)−∇yf (x2, y2)‖ , ‖∇xf (x1, y1)−∇xf (x2, y2)‖} ≤ `[‖x1 − x2‖+‖y1 − y2‖], holds for all x1, x2 ∈ X , y1, y2 ∈ Y . The goal is to find an -saddle point (x̄, ȳ) such that gapf (x̄, ȳ) := maxy∈Y f(x̄, y) − minx∈X f(x, ȳ) ≤ . We call gapf (x̄, ȳ) the primal-dual gap, which implies both primal optimality gap and dual optimality gap. If = 0, then (x̄, ȳ) is a saddle point. We present a generic Catalyst scheme in Algorithm 1. Analogous to its prototype [22, 39], this scheme consists of several important components: an inexact accelerated proximal point step as the wrapper, a linearly-convergent first-order methodM as the workhorse, as well as carefully chosen parameters and stopping criteria. 
Algorithm 1 Catalyst for SC-C Minimax Optimization 1: Input: initial point (x0, y0), parameter τ > 0 2: Initialization: α1 = 1, v0 = y0 3: for all t = 1, 2, ..., T do 4: Set zt = αtvt−1 + (1− αt)yt−1. 5: Find an inexact solution (xt, yt) to the following problem with algorithmM min x∈X max y∈Y [ f̃t(x, y) := f(x, y)− τ 2 ‖y − zt‖2 ] (?) such that f(xt, yt)−minx∈X f(x, yt) ≤ (t) and ∇y f̃t(xt, yt)T (y − yt) ≤ (t),∀y ∈ Y (3) 6: vt = yt−1 + 1 αt (yt − yt−1); 7: Choose αt+1 ∈ [0, 1] such that 1−αt+1α2t+1 = 1 α2t . 8: end for 9: Output: (x̄T , yT ) with x̄T = ∑T t=1 1/αt∑T m=1 1/αm xt. Inexact accelerated proximal point step. The main idea is to repeatedly solve a series of regularized problems by adding a quadratic term in y to the original problem: min x∈X max y∈Y [ f̃t(x, y) := f(x, y)− τ 2 ‖y − zt‖2 ] , (?) where τ > 0 is a regularization parameter (to be specified later) and zt is the prox-center. The prox-centers {zt}t are built on extrapolation steps of Nesterov [35]. Noticeably, this step can also be viewed as applying the original Catalyst scheme [22] to the dual function h(y) := minx∈X f(x, y). The major distinction is that we do not have access to the closed-form dual function, which causes difficulty in measuring the inexactness of auxiliary problems and evaluating the solution performance in terms of the primal-dual gap, instead of dual optimality. Linearly-convergent algorithm M. By construction, the series of auxiliary problems (?) are (µ, τ)-SC-SC. Thus, they can be solved by a wide spectrum of first-order algorithms established in the literature, at a linear convergence rate, including gradient descent ascent (GDA), extra-gradient method (EG), optimistic gradient descent ascent (OGDA), SVRG, to name a few. Yet, the dependence on the condition number may vary across different algorithms. We assume that any deterministic algorithmM when solving the (µ, τ)-SC-SC minimax problem has a linear convergence rate such that ‖xk − x∗‖2 + ‖yk − y∗‖2 ≤ ( 1− 1∆M,τ )k [‖x0 − x∗‖2 + ‖y0 − y∗‖2], (4) and any stochastic algorithmM satisfies E[‖xk − x∗‖2 + ‖yk − y∗‖2] ≤ ( 1− 1∆M,τ )k [‖x0 − x∗‖2 + ‖y0 − y∗‖2], (5) where ∆M,τ depends on τ and algorithmM. For instance, when EG or OGDA is adopted, ∆M,τ = `+τ 4 min{µ,τ} [45, 15, 2]; when SVRG or SAGA is adopted, ∆M,τ ∝ n+ ( `+τ min{µ,τ} )2 , provided that the objective has the finite-sum structure and each component is `-smooth [38]. Stopping criteria. To guarantee the overall convergence in terms of primal-dual gap, it is necessary to approximately solve the auxiliary problem (?) to moderate accuracy and ensure the entire pair (x, y) converges properly. For the sake of generalization, we adopt the criterion specified in (3) in our generic scheme. The stopping criterion can be achieved by most existing minimax optimization algorithms after sufficient iterations. Yet, it could still be hard to check in practice because minx∈X f(x, yt) and maxy∈Y ∇y f̃t(xt, yt)T (y − yt) are not always computable. The following lemma shows that this issue can be alleviated, at the minor cost of a full gradient evaluation and a projection step. Lemma 2.1. Consider a function f̃(x, y) that is (µ1, µ2)-SC-SC and has ˜̀-Lipschitz gradient on X × Y . Let z∗ = (x∗, y∗) be the saddle point, i.e, the solution to the minimax optimization minx∈X maxy∈Y f̃(x, y). 
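A minimal NumPy-style sketch of Algorithm 1 follows. For concreteness the subroutine M is plain extragradient run for a fixed inner budget with a warm start at (x_{t-1}, z_t); this fixed budget replaces the inexactness test (3)/(6) purely for brevity, and the projections proj_x / proj_y, the gradient oracle, and the stepsizes are assumptions of the sketch rather than choices made in the paper.

```python
import numpy as np

def eg_subsolver(grad, x, y, proj_x, proj_y, eta, inner_iters):
    """Extragradient on the (mu, tau)-SC-SC auxiliary problem, warm-started at (x, y)."""
    for _ in range(inner_iters):
        gx, gy = grad(x, y)
        xh, yh = proj_x(x - eta * gx), proj_y(y + eta * gy)
        gx, gy = grad(xh, yh)
        x, y = proj_x(x - eta * gx), proj_y(y + eta * gy)
    return x, y

def catalyst_scc(f_grad, x0, y0, proj_x, proj_y, tau, eta, T, inner_iters):
    """Algorithm 1: inexact accelerated proximal point steps taken on the dual variable."""
    x, y, v, alpha = x0, y0, y0.copy(), 1.0
    xs, weights = [], []
    for _ in range(T):
        z = alpha * v + (1.0 - alpha) * y          # extrapolated prox-center z_t

        def aux_grad(xx, yy, z=z):                 # gradient of f(x, y) - tau/2 ||y - z||^2
            gx, gy = f_grad(xx, yy)
            return gx, gy - tau * (yy - z)

        y_prev = y
        x, y = eg_subsolver(aux_grad, x, z.copy(), proj_x, proj_y, eta, inner_iters)
        v = y_prev + (y - y_prev) / alpha          # momentum update on y
        xs.append(x)
        weights.append(1.0 / alpha)                # weight 1/alpha_t for the output average
        alpha = 0.5 * (np.sqrt(alpha**4 + 4 * alpha**2) - alpha**2)  # (1-a')/a'^2 = 1/a^2
    x_bar = sum(w * xi for w, xi in zip(weights, xs)) / sum(weights)
    return x_bar, y
```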
For any point z = (x, y) in X × Y , we define [z]β = ([x]β , [y]β) with β > 2˜̀ to be the point after one step of projected gradient descent ascent: [x]β = PX ( x− 1β∇xf̃(x, y) ) , [y]β = PY ( y + 1β∇y f̃(x, y) ) , then we have 1. gapf̃ ([z]β) ≤ A‖z − z∗‖2, ∇f̃([x]β , [y]β)T (ȳ − [y]β) ≤ A‖z − z∗‖2 + 2βDY‖z − z∗‖; 2. ‖z − z∗‖ ≤ β+˜̀µ̃ ‖z − [z]β‖, ‖z − [z]β‖ 2 ≤ 2 (1−˜̀/β)3 ‖z − z ∗‖2, where A = β + 2β ˜̀2 µ̃2 + 4β ˜̀2 µ̃2(1−˜̀/β)3 , µ̃ = min{µ1, µ2}. Based on this observation, we can therefore use the following easy-to-check criterion: ‖x− [x]β‖2 + ‖y − [y]β‖2 ≤ min { µ̃2 (t) 2A(β + ˜̀)2 , ( µ̃ (t) 4βDY(β + ˜̀) )2} . (6) Note that many algorithms such as EG or GDA, already compute ([x]β , [y]β) with β being the stepsize, so there is no additional computation cost to check criterion (6). Choice of regularization parameter. As we can see, the smaller τ is, the auxiliary problem is closer to the original problem. However, smaller τ will give rise to worse conditions of the auxiliary problems, making them harder to solve. We will discuss the dependence of the inner and outer loop complexities on τ and provide a guideline for choosing τ for differentM. As a final remark, we stress that the idea of using (accelerated) proximal point algorithm for minimax optimization is by no means new. Similar ideas have appeared in different contexts. However, they differ from our scheme in one way or the other. To list a few: [41, 30, 23, 38] considered the inexact PPA for C-C or NC-NC minimax problems by adding quadratic terms in both x and y; [40, 44] considered the inexact PPA for NC-C minimax problems, by adding a quadratic term in x; [24] considered the inexact accelerated PPA for SC-SC minimax problems by adding a quadratic term in x. On the other hand, a number of work, e.g., [19, 24, 51] also add a quadratic term in y to the minimax optimization, but in the form O( )‖y‖2, which is completely different from PPA. Besides these differences, the subroutines used to solve the auxiliary minimax problems and choices of regularization parameters in these work are quite distinct from ours. Lastly, we point out that the proposed framework is closely related to the inexact accelerated augmented Lagrangian method designed for linearly constrained optimization problems [18], which can be viewed as a special case by setting f(x, y) as the Lagrangian dual. In spite of this, approaches for solving the auxiliary problems are completely different, as is theoretical analysis. 3 Main Results 3.1 Convergence Analysis In order to derive the total complexity, we first establish the complexity of the outer loop and then combine it with the inner loop complexity from algorithmM. We then discuss the optimal choice of the regularization parameter τ for different settings. Theorem 3.1 (Outer-loop complexity). Suppose function f satisfies Assumptions 1 and 2. The output (x̄T , yT ) from Algorithm 1 satisfies gapf (x̄T , yT ) ≤ α2T [ τ 2D 2 Y + 2 ∑T t=1 1 α2t (t) ] , (7) where DY = maxy,y′∈Y ‖y − y′‖ is the diameter of Y . If we further choose, (t) = 3τDYα 2 t 2πt2 , then gapf (x̄T , yT ) ≤ α2T τD2Y . (8) Remark 1. The above result is true without requiring strong convexity in x; only convexity-concavity of f(x, y) is sufficient. In addition, the regularization parameter τ can be any positive value. Hence, Algorithm 1 is quite flexible. Because 2/(t+ 2)2 ≤ α2t ≤ 4/(t+ 1)2 [39], Theorem 3.1 implies that the algorithm finds a point with primal-dual gap within O( √ τ/ DY) outer-loop iterations. 
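For reference, the easy-to-check criterion (6) above amounts to a few lines of code once the projected GDA image ([x]_β, [y]_β) from Lemma 2.1 is at hand. The sketch below assumes the auxiliary gradient oracle and the projections as callables, and takes the Lemma 2.1 constants (µ̃, ℓ̃, β, D_Y, ε^(t)) as inputs.

```python
import numpy as np

def gda_image(grad, proj_x, proj_y, x, y, beta):
    """One projected GDA step: ([x]_beta, [y]_beta) from Lemma 2.1 (requires beta > 2 * ell_tilde)."""
    gx, gy = grad(x, y)
    return proj_x(x - gx / beta), proj_y(y + gy / beta)

def criterion_6_satisfied(grad, proj_x, proj_y, x, y, beta, mu_tilde, ell_tilde, D_Y, eps_t):
    """Easy-to-check inexactness test (6) for the auxiliary SC-SC subproblem."""
    xb, yb = gda_image(grad, proj_x, proj_y, x, y, beta)
    lhs = np.sum((x - xb) ** 2) + np.sum((y - yb) ** 2)
    A = (beta + 2 * beta * ell_tilde**2 / mu_tilde**2
         + 4 * beta * ell_tilde**2 / (mu_tilde**2 * (1 - ell_tilde / beta) ** 3))
    rhs = min(
        mu_tilde**2 * eps_t / (2 * A * (beta + ell_tilde) ** 2),
        (mu_tilde * eps_t / (4 * beta * D_Y * (beta + ell_tilde))) ** 2,
    )
    return lhs <= rhs
```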
Notice that the outer-loop complexity decreases as τ decreases. We now turn to the inner loop complexity. By construction, the auxiliary problem (?) is (µ, τ)-SC-SC and ˜̀smooth with ˜̀= `+ τ , which can be solved by many existing first-order algorithms at a linear convergence rate. Below we present the complexity of the inner loop with warm start. Proposition 3.1 (Inner-loop complexity). Suppose we apply a linearly convergent algorithmM described by (4) or (5) to solve the auxiliary problem (?) and set the initial point to be (xt−1, zt) at iteration t. Let K( (t)) denote the number of iterations (expected number of iterations ifM is stochastic) forM to find a point satisfying (6). Then K( (t)) is O ( ∆M,τ log ( ˜̀·DY min{1,µ,τ}· (t) )) . In practice, choosing a good initial point to warm start algorithmM can be helpful in accelerating the convergence. The above proposition shows that in theory, using a simple warm start strategy helps alleviate the logarithmic dependence on the distance from the initial point to the optimal point. Without the warm start strategy, one would require X to be bounded and K( (t)) = O ( ∆M,τ log( DX+DY (t) ) ) . Here we do not require boundedness on X . As we can see, the choice of τ plays a crucial role since it affects both inner-loop and outer-loop complexities. Combining the above two results immediately leads to the total complexity: Corollary 3.2 (Total complexity). Suppose Assumptions 1, 2 hold, and the subproblems are solved by a linearly convergent algorithmM to satisfy the stopping criterion (3) or (6) with accuracy (t) as specified in Theorem 3.1. For Algorithm 1 to find an -saddle point, the total number of gradient evaluations (expected number ifM is stochastic) is O ( ∆M,τ √ τ/ DY log ( ` · DY min{1, µ, τ} · )) . For any choice of linearly-convergent methodM and any regularization parameter τ , the oracle complexity is guaranteed to be O (DY/ √ log(DY/ )), which is optimal both in and DY up to a logarithmic factor [37]. The dependence on the condition number will solely be determined by the term ∆M,τ √ τ , which we analyze in detail below for specific algorithms. 3.2 Specific Algorithms and Complexities In order to minimize the total complexity, we should choose the regularization parameter τ that minτ>0 ∆M,τ √ τ . Below we derive the choice of optimal τ for different algorithmsM and present the corresponding total complexity. Table 2 summarizes this for several algorithms we consider. Deterministic first-order algorithms. If we adopt the simplest gradient descent ascent (GDA) as M for solving the subproblem, then ∆M,τ = ( `+τ 2 min{µ,τ} )2 [12]. IfM is extra-gradient method (EG) or optimistic gradient descent ascent (OGDA), then ∆M,τ = `+τ4 min{µ,τ} [45, 15, 2]. Minimizing ∆M,τ √ τ for both cases yields that the optimal choice for τ is µ. In particular, when using EG or OGDA, the total complexity becomes O ( ` · DY√ µ log ( ` · DY min{1, µ} · )) . Remark 2. This complexity matches the lower complexity bound for this class of problems [37] in , `, µ and DY , up to a logarithmic factor. In addition, it improves over the best-known result, which was recently established in [24], which has a cubic order on the logarithmic factor and requires boundedness of X . A key observation is that by setting τ = µ, the auxiliary problem (?) becomes (µ, µ)-SC-SC, and it is known that simple EG or OGDA achieves the optimal complexity for solving this class of well-balanced SC-SC problems [50]. 
Unlike [44, 24] , their subproblems are harder to solve because of ill-balanced condition numbers, thus leading to an inferior complexity. Besides the complexity improvement, our algorithm is significantly simpler and easier to implement than the current state-of-the-arts. The DIAG algorithm in [44] applies Nesterov’s accelerated gradient ascent to the dual function and an additional two-loop algorithm to solve their subproblems. The MINIMAX-APPA algorithm in [24] adds a smoothing term in y and applies a triple-loop algorithm to solve the auxiliary SC-SC problem. In contrast, our algorithm only requires two loops, does not require to prefix accuracy , and has fewer tuning parameters. Results are summarized in Table 1. Stochastic variance-reduced algorithms. We now consider finite-sum-structure minimax problems, minx∈X maxy∈Y f(x, y) , 1n ∑n i=1 fi(x, y), where each component fi has `i-Lipschitz gradients. Denote ¯̀ = 1n ∑n i=1 `i as the average of smoothness constants. The resulting SC-SC subproblem (?) also has the finite-sum structure and can be solved by a number of linearly-convergent variance-reduced algorithms, such as SVRG, SAGA [38], and SVRE [6]. If using SVRG or SAGA asM, we have ∆M,τ ∝ n+ ( ¯̀+τ min{µ,τ} )2 [38]. When using SVRE asM, ∆M,τ ∝ n + ¯̀+τ min{µ,τ} , assuming that the gradients are also `i-cocoercive [6]. Particularly, when using SVRE, the optimal τ is µ if ¯̀/µ ≥ n and ¯̀/n otherwise. Therefore, the total complexity is Õ ( ¯̀ √ µ ) if ¯̀/µ ≥ n; and Õ ( n 1 2 ¯̀ 1 2 √ ) otherwise. Remark 3. In either case, our result improves over the complexity Õ ( n¯̀√ µ ) when using the batch extra-gradient method asM. To the best of our knowledge, this is the best complexity established so far for this class of SC-C minimax optimization problems. Results are summarized in Table 2. 4 Nonconvex-Concave Minimax Optimization We now turn to nonconvex-concave minimax problems (1), and formally make Assumption 3. Denote g(x) = maxy∈Y f(x, y) as the primal function, which is `-weakly-convex [44]. The goal is to find an -stationary point of g(x). For any x̄, consider the Moreau envelop of g: ψ1/τx(x̄) := minx∈X { gτx(x; x̄) := g(x) + τx 2 ‖x− x̄‖ 2 } . The norm of the gradient ‖∇ψ1/τx(x̄)‖ is commonly used to measure the quality of a solution x̄ [10]. We call x̄ -stationary point of g if ‖∇ψ1/τx(x̄)‖ ≤ . Assumption 3. f(x, ·) is concave for any x in X . X and Y are convex and closed sets, and Y is bounded with diameter DY = maxy,y′∈Y ‖y − y′‖. Our modified Catalyst framework is described in 2, which further applies the proximal point algorithm to the primal function g(x), by adding a quadratic term in x, in the same spirit as [40, 44, 24]. The main difference lies in that we use Algorithm 1 to solve subproblems in form of minx∈X gτx(x;xt). Now we use τy to denote the parameter in Algorithm 1 in order to distinguish from τx. Algorithm 2 can be considered as a two-time-scale inexact proximal point algorithm, which repeatedly solves the subproblem minx∈X maxy∈Y f(x, y) + τx 2 ‖x− x̄t‖ 2 + τy 2 ‖y − zt‖ 2. (9) We call it two-time-scale, not only because τx and τy differ, but also because the prox center of y comes from the extrapolation step of acceleration and is updated more frequently than the prox center of x. The subproblem (9) is (τx − `, τy)-SC-SC if τx > `, thus can be efficiently solved. 1 SVRE requires assuming each component has `i-cocoercive gradient, which is a stronger assumption than assuming `i-Lipschitz gradient. 
Algorithm 2 Catalyst for NC-C Minimax Optimization
1: Input: initial point (x0, y0), parameter τx > ℓ
2: for t = 0, 1, ..., T − 1 do
3:   use Algorithm 1 to find x_{t+1} such that g_{τx}(x_{t+1}; x_t) ≤ min_{x∈X} g_{τx}(x; x_t) + ε̄
4: end for
5: Output: x̂_T, which is uniformly sampled from x_0, ..., x_{T−1}.
Theorem 4.1 (Outer-loop complexity). Suppose f satisfies Assumptions 2 and 3. The output from Algorithm 2 satisfies E‖∇ψ_{1/τx}(x̂_T)‖² ≤ (2τx²/(τx − ℓ)) [ (g(x_0) − g*)/T + ε̄ ], where g* = min_{x∈X} g(x). If T = 4τx²(g(x_0) − g*)/((τx − ℓ)ε²) and ε̄ = (τx − ℓ)ε²/(2τx²), then E‖∇ψ_{1/τx}(x̂_T)‖ ≤ ε.
Theorem 4.1 implies that the outer-loop complexity is O(ε⁻²). In the following corollaries, we specify the choices of τx, τy, and M for solving the subproblems, and the resulting total complexity.
Corollary 4.2. Suppose f satisfies Assumptions 2 and 3. If we choose τx = 2ℓ, τy = ℓ and use EG/OGDA/GDA to solve the subproblems, then Algorithm 2 finds an ε-stationary point with a total number of gradient evaluations of Õ(ℓ² ε⁻³).
Corollary 4.3. Suppose f(x, y) = (1/n) Σ_{i=1}^n f_i(x, y) satisfies Assumption 3 and each component f_i has ℓ_i-Lipschitz gradient with ℓ̄ = (1/n) Σ_{i=1}^n ℓ_i. If we choose τx = 2ℓ̄, τy = ℓ̄/√n and use SVRG/SAGA to solve the subproblems, then Algorithm 2 finds an ε-stationary point with total complexity Õ(n^{3/4} ℓ̄² ε⁻³). If we further assume each f_i has ℓ_i-cocoercive gradient, choose τx = 2ℓ̄, τy = ℓ̄/n and use SVRE to solve the subproblems, then Algorithm 2 finds an ε-stationary point with total complexity Õ(n^{1/2} ℓ̄² ε⁻³).
Corollary 4.2 shows that simply using Catalyst-EG/OGDA achieves the complexity Õ(ℓ² ε⁻³). This matches the current state-of-the-art complexity for nonconvex-concave minimization [24, 44, 51, 36]. Note that our algorithm is much simpler than the existing algorithms; e.g., Prox-DIAG [44] requires a four-loop procedure, whereas MINIMAX-APPA [24] requires a smoothing step. For problems with finite-sum structure, as shown in Corollary 4.3, using Catalyst-SVRG attains the overall complexity Õ(n^{3/4} ℓ̄² ε⁻³), improving over all existing results. For instance, PG-SVRG proposed in [40] gives Õ(n ε⁻² + ε⁻⁶), which has a much worse dependence on ε and n.
5 Numerical Experiments
We consider the wireless communication problem in [3]. Given n communication channels with signal power p ∈ Rⁿ and noise power σ ∈ Rⁿ, the capacity of channel i is proportional to log(1 + β_i p_i/(σ_i^0 + σ_i)), where β_i > 0 and σ_i^0 are known constants. We would like to maximize the channel capacity under adversarially chosen noise [14]. This can be formulated as an SC-C minimax problem:
min_p max_σ f(p, σ) := −Σ_{i=1}^n log(1 + β_i p_i/(σ_i^0 + σ_i)) + (λ/2)‖p‖², such that 1ᵀσ = N, p ≥ 0, σ ≥ 0.
We generate two datasets with (1) β = 1 and σ^0 ∈ R^1000 drawn uniformly from [0, 100]^1000, (2) β = 1 and σ^0 ∈ R^500 drawn uniformly from [0, 10]^500. In Figure 1, we apply the same stepsizes to EG and to the subroutine in Catalyst-EG, and we compare their convergence results with stepsizes ranging from small to large. In Figure 2, we compare four algorithms: extragradient (EG), SVRG, Catalyst-EG, and Catalyst-SVRG with best-tuned stepsizes, and evaluate their errors based on (a) distance to the limit point: ‖p_t − p*‖ + ‖σ_t − σ*‖; (b) norm of the gradient mapping: ‖∇_p f(p_t, σ_t)‖ + ‖σ_t − P_Σ(σ_t + β∇_σ f(p_t, σ_t))‖/β. In Figure 3, we compare EG, Catalyst-EG, and DIAG with best-tuned stepsizes. Although EG with averaged iterates has an optimal complexity of O(1/ε) for solving convex-concave minimax problems [32], its convergence behavior for SC-C minimax optimization remains unknown.
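Once Algorithm 1 is available as a subroutine, the NC-C wrapper above reduces to a short outer loop on x. The sketch below assumes an SC-C solver with the interface scc_solver(grad, x0, y0, proj_x, proj_y, tau), for example the earlier catalyst_scc sketch with its stepsize and budgets fixed via functools.partial; it uses the τx = 2ℓ, τy = ℓ choice from Corollary 4.2 and a fixed inner budget in place of the ε̄-accuracy test.

```python
import numpy as np

def catalyst_ncc(f_grad, x0, y0, proj_x, proj_y, ell, T, scc_solver, rng=None):
    """Algorithm 2: inexact proximal point on x, each subproblem solved by Algorithm 1."""
    rng = rng or np.random.default_rng()
    tau_x, tau_y = 2.0 * ell, ell                      # Corollary 4.2 choice for EG/OGDA/GDA
    x, y, iterates = x0, y0, [x0]
    for _ in range(T):
        x_center = x

        def shifted_grad(xx, yy, xc=x_center):         # grad of f + tau_x/2 ||x - x_center||^2
            gx, gy = f_grad(xx, yy)
            return gx + tau_x * (xx - xc), gy

        # Algorithm 1 on the (tau_x - ell, tau_y)-SC-SC subproblem, warm-started at (x, y).
        x, y = scc_solver(shifted_grad, x, y, proj_x, proj_y, tau=tau_y)
        iterates.append(x)
    return iterates[rng.integers(len(iterates) - 1)]   # uniform over x_0, ..., x_{T-1}
```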
Both Catalyst-EG and DIAG are designed for SC-C minimax optimization: Catalyst-EG has a complexity of Õ(ℓ/√(µε)) and DIAG has a complexity of Õ(ℓ^{3/2}/(µ√ε)). Here we use the same stepsize for the primal and dual variables in EG and in its Catalyst counterpart. In Catalyst, we use ‖x_t − P_X(x_t − β∇_x f(x_t, y_t))‖/β + ‖y_t − P_Y(y_t + β∇_y f(x_t, y_t))‖/β as the stopping criterion for the subproblem, as discussed in Section 2. We control the subroutine accuracy ε^(t) as max{c/t⁸, ε̃}, where c is a constant and ε̃ is a prefixed threshold. In contrast, DIAG does not provide an easy-to-verify stopping criterion for its subroutine; we stop the subroutine of DIAG based on the criterion ‖x_k − x_{k−1}‖² + ‖y_k − y_{k−1}‖², where k indexes the subroutine iterations. We note that there is no theoretical convergence analysis for SVRG in the SC-C setting. To form a fair comparison with SVRG, we report the last-iterate error of Catalyst-SVRG rather than that of the averaged iterates. We observe that Catalyst-EG performs better than EG and DIAG. Under the same stepsize, the Catalyst framework significantly speeds up EG. SVRG, albeit without theoretical guarantees in the SC-C setting, converges much faster than the batch algorithms. Catalyst-SVRG also greatly improves over SVRG and outperforms all other algorithms.
Acknowledgments and Disclosure of Funding
This work was supported in part by ONR grant W911NF-15-1-0479, NSF CCF-1704970, and NSF CMMI-1761699.
Broader Impact
Our work provides a family of simple and efficient algorithms for some classes of minimax optimization. We believe our theoretical results advance many applications in ML that require minimax optimization. Of particular interest are deep learning and fair machine learning. Deep learning is used in many safety-critical environments, including self-driving cars, biometric authentication, and so on. There is growing evidence that deep neural networks are vulnerable to adversarial attacks. Since adversarial attacks and defenses are often cast as two-player games, progress in minimax optimization will empower both. Furthermore, minimax optimization problems provide insight into the balance and equilibrium between attacks and defenses. As a consequence, making good use of these techniques will boost the robustness of deep learning models and strengthen the security of their applications. Fairness in machine learning has attracted much attention because it is directly relevant to policy design and social welfare. For example, courts use COMPAS for recidivism prediction. Researchers have shown that bias is introduced into many machine learning systems through skewed data, limited features, etc. One approach to mitigating this is adding constraints into the system, which naturally gives rise to minimax problems.
1. What is the focus and contribution of the paper on optimization methods? 2. What are the strengths of the proposed approach, particularly in terms of its extension to a minimax problem? 3. Are there any concerns or limitations regarding the practicality or applicability of the method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper extends the recently developed Catalyst framework for optimization. The original paper is Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice, where the authors introduced an additional extrapolation step to an existing first-order method to accelerate the asymptotic convergence behavior of existing optimization algorithms. The present paper extends this approach to minimax problems.
Strengths
The extension is trivial in practice (which is good), while the theoretical advancement is novel.
Weaknesses
The work is good, so I can't think of a weakness of this work.
NIPS
Title Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks Abstract Recently, neural network based approaches have achieved significant progress for solving large, complex, graph-structured problems. Nevertheless, the advantages of multi-scale information and deep architectures have not been su ciently exploited. In this paper, we first analyze key factors constraining the expressive power of existing Graph Convolutional Networks (GCNs), including the activation function and shallow learning mechanisms. Then, we generalize spectral graph convolution and deep GCN in block Krylov subspace forms, upon which we devise two architectures, both scalable in depth however making use of multi-scale information di↵erently. On several node classification tasks, the proposed architectures achieve state-of-the-art performance. 1 Introduction & Motivation Many real-world problems can be modeled as graphs [14, 18, 25, 12, 27, 7]. Inspired by the success of Convolutional Neural Networks (CNNs) [20] in computer vision [22], graph convolution defined on graph Fourier domain stands out as the key operator and one of the most powerful tools for using machine learning to solve graph problems. In this paper, we focus on spectrum-free Graph Convolutional Networks (GCNs) [2, 29], which have demonstrated state-of-the-art performance on many transductive and inductive learning tasks [7, 18, 25, 3, 4]. One major problem of the existing GCNs is the low expressive power limited by their shallow learning mechanisms [38, 36]. There are mainly two reasons why people have not yet achieved an architecture that is scalable in depth. First, this problem is di cult: considering graph convolution as a special form of Laplacian smoothing [21], networks with multiple convolutional layers will su↵er from an over-smoothing problem that makes the representation of even distant nodes indistinguishable [38]. Second, some people think it is unnecessary: for example, [2] states that it is not necessary for the label information to totally traverse the entire graph and one can operate on the multi-scale coarsened input graph and obtain the same flow of information as GCNs with more layers. Acknowledging the di culty, we hold on to the objective of deepening GCNs since the desired compositionality1 will yield easy articulation and consistent performance for problems with di↵erent scales. In this paper, we break the performance ceiling of the GCNs. First, we analyze the limits of the existing GCNs brought by the shallow learning mechanisms and the activation functions. Then, we show that any graph convolution with a well-defined analytic spectral filter can 1The expressive power of a sound deep NN architecture should be expected to grow with the increment of network depth [19, 16]. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. be written as a product of a block Krylov matrix and a learnable parameter matrix in a special form. Based on this, we propose two GCN architectures that leverage multi-scale information in di↵erent ways and are scalable in depth, with stronger expressive powers and abilities to extract richer representations of graph-structured data. We also show that the equivalence of the two architectures can be achieved under certain conditions. For empirical validation, we test di↵erent instances of the proposed architectures on multiple node classification tasks. 
The results show that even the simplest instance of the architectures achieves state-of-the-art performance, and the complex ones achieve surprisingly higher performance, with or without validation sets. 2 Why Deep GCN Does Not Work Well? 2.1 Foundations As in [11], we use bold font for vectors (e.g. v), block vectors (e.g. V) and matrix blocks (e.g. Vi). Suppose we have an undirected graph G = (V,E,A), where V is the node set with |V| = N, E is the edge set with |E| = E, A 2 RN⇥N is a symmetric adjacency matrix and D is a diagonal degree matrix, i.e. Dii = P j Aij. A di↵usion process [6, 5] on G can be defined by a di↵usion operator L, which is a symmetric matrix, e.g. graph Laplacian L = D A, normalized graph Laplacian L = I D 1/2AD 1/2 and a nity matrix L = A + I, etc.. In this paper, we use L for a general di↵usion operator, unless specified otherwise. The eigendecomposition of L gives us L = U⇤UT, where ⇤ is a diagonal matrix whose diagonal elements are eigenvalues and the columns of U are the orthonormal eigenvectors, named graph Fourier basis. We also have a feature matrix (graph signals) X 2 RN⇥F (which can be regarded as a block vector) defined onV and each node i has a feature vector Xi,:, which is the i-th row of X. Spectral graph convolution is defined in graph Fourier domain s.t. x ⇤G y = U((UTx) (UT y)), where x, y 2 RN and is the Hadamard product [7]. Following this definition, a graph signal x filtered by g✓ can be written as y = g✓(L)x = g✓(U⇤UT)x = Ug✓(⇤)UTx (1) where g✓ is any function which is analytic inside a closed contour which encircles (L), e.g. Chebyshev polynomial [7]. GCN generalizes this definition to signals with F input channels and O output channels and its network structure can be described as Y = softmax(L ReLU(LXW0) W1) (2) where L ⌘ D̃ 1/2ÃD̃ 1/2, Ã ⌘ A + I, D̃ ⌘ diag(P jÃ1 j, . . . , P j ÃN j) (3) This is called spectrum-free method [2] since it requires no explicit computation of eigendecomposition and operations on the frequency domain [38]. 2.2 Problems Suppose we deepen GCN in the same way as [18, 21], we have Y = softmax(L ReLU(· · · L ReLU(L ReLU(LXW0) W1) W2 · · · ) Wn) ⌘ softmax(Y0) (4) For this architecture, [21] gives an analysis on the e↵ect of L without considering the ReLU activation function. Our analyses on (4) can be summarized in the following theorems. Theorem 1. Suppose that G has k connected components and the di↵usion operator L is defined as that in (3). Let X 2 RN⇥F be any block vector and let Wj be any non-negative parameter matrix with kWjk2 1 for j = 0, 1, . . .. If G has no bipartite components, then in (4), as n!1, rank(Y0) k. Proof See Appendix A. ⇤ Conjecture 1. Theorem 1 still holds without the non-negative constraint on the parameter matrices. Theorem 2. Suppose the n-dimensional x and y are independently sampled from a continuous distribution and the activation function Tanh(z) = ez e zez+e z is applied to [x, y] pointwisely, then P(rank Tanh([x, y]) = rank([x, y])) = 1 Proof See Appendix A. ⇤ Theorem 1 shows that if we simply deepen GCN, the extracted features will degrade, i.e. Y 0 only contains the stationary information of the graph structure and loses all the local information in node for being smoothed. In addition, from the proof we see that the pointwise ReLU transformation is a conspirator. Theorem 2 tells us that Tanh is better at keeping linear independence among column features. 
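A small NumPy sketch of the renormalized operator in (3), the deep propagation (4), and a layer-wise rank check contrasting ReLU with Tanh (the phenomenon behind Theorems 1 and 2, and examined in the synthetic experiment described next) is given below. The random graph, the randomly drawn weights normalized to spectral norm one, and the depth are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def renormalized_adjacency(A):
    """L = D^{-1/2} (A + I) D^{-1/2}, with D the degree matrix of A + I, as in (3)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def deep_gcn_ranks(L, X, n_layers, width, act, seed=0):
    """Feed X through n_layers of H <- act(L H W) with random W and record rank(H)."""
    rng = np.random.default_rng(seed)
    H, ranks = X, []
    for _ in range(n_layers):
        W = rng.standard_normal((H.shape[1], width))
        W = W / np.linalg.norm(W, 2)          # keep ||W||_2 <= 1, as in Theorem 1
        H = act(L @ H @ W)
        ranks.append(np.linalg.matrix_rank(H))
    return ranks

# Illustrative run: sparse random graph, 500 nodes, 100 layers, 32 channels.
rng = np.random.default_rng(0)
n = 500
A = (rng.random((n, n)) < 0.02).astype(float)
A = np.triu(A, 1); A = A + A.T
L = renormalized_adjacency(A)
X = rng.standard_normal((n, 32))
relu = lambda Z: np.maximum(Z, 0.0)
print("ReLU ranks (last 5 layers):", deep_gcn_ranks(L, X, 100, 32, relu)[-5:])
print("Tanh ranks (last 5 layers):", deep_gcn_ranks(L, X, 100, 32, np.tanh)[-5:])
```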
We design a numerical experiment on synthetic data (see Appendix) to test, under a 100-layer GCN architecture, how activation functions a↵ect the rank of the output in each hidden layer during the feedforward process. As Figure 1(a) shows, the rank of hidden features decreases rapidly with ReLU, while having little fluctuation under Tanh, and even the identity function performs better than ReLU (see Appendix for more comparisons). So we propose to replace ReLU by Tanh. 3 Spectral Graph Convolution and Block Krylov Subspaces 3.1 Block Krylov Subspaces Let S be a vector subspace of RF⇥F containing the identity matrix IF that is closed under matrix multiplication and transposition. We define an inner product h·, ·iS in the block vector space RN⇥F as follows [11]: Definition 1 A mapping h·, ·iS from RN⇥F ⇥ RN⇥F to S is called a block inner product onto S if 8X,Y,Z 2 RN⇥F and 8C 2 S: 1. S-linearity: hX,YCiS = hX,YiSC and hX + Y,ZiS = hX,ZiS + hY,ZiS 2. symmetry: hX,YiS = hY,XiTS 3. definiteness: hX,XiS is positive definite if X has full rank, and hX,XiS = 0F i↵ X = 0. There are mainly three ways to define h·, ·iS [11]: 1) (Classical.) SCl = RF⇥F and hX,YiClS = XTY; 2) (Global.) SGl = cIF, c 2 R and hX,YiGlS = trace(XTY)IF; 3) (Loop-interchange.) SLi is the set of diagonal matrices and hX,YiLiS = diag(XTY). The three definitions are all useful yet we will use the classical one for our contribution. For further explanations, we give the definition of block vector subspace of RN⇥F. Definition 2 Given a set of block vectors {Xk}mk=1 ⇢ RN⇥F, the S-span of {Xk}mk=1 is defined as spanS{X1, . . . ,Xm} := { mP k=1 XkCk : Ck 2 S} Given the above definition, the order-m block Krylov subspace with respect to the matrix A 2 RN⇥N, the block vector B 2 RN⇥F and the vector space S can be defined asKSm(A,B) := spanS{B,AB, . . . ,Am 1B}. The corresponding block Krylov matrix is defined as Km(A,B) := [B,AB, . . . ,Am 1B]. 3.2 Spectral Graph Convolution in Block Krylov Subspace Form In this section, we show that any graph convolution with well-defined analytic spectral filter defined on L 2 RN⇥N can be written as the product of a block Krylov matrix with a learnable parameter matrix in a specific form. We take S = SCl = RF⇥F. For any real analytic scalar function g, its power series expansion around center 0 is g(x) = 1X n=0 anxn = 1X n=0 g(n)(0) n! xn, |x| < R where R is the radius of convergence. The function g can be used to define a filter. Let ⇢(L) denote the spectrum radius of L and suppose ⇢(L) < R. The spectral filter g(L) 2 RN⇥N can be defined as g(L) := 1X n=0 anLn = 1X n=0 g(n)(0) n! Ln, ⇢(L) < R According to the definition of spectral graph convolution in (1), graph signal X is filtered by g(L) as follows, g(L)X = 1X n=0 g(n)(0) n! LnX = h X,LX,L2X, · · · i " g(0)(0) 0! IF, g(1)(0) 1! IF, g(2)(0) 2! IF, · · · #T = A0B0 where A0 2 RN⇥1 and B0 2 R1⇥F. We can see that A0 is a block Krylov matrix and Range(A0B0) ✓ Range(A0). It is shown in [13, 11] that for S = RF⇥F there exists a smallest m such that spanS{X,LX,L2X, · · · } = spanS{X,LX,L2X, . . . ,Lm 1X} (5) where m depends on L and X and will be written as m(L,X) later. This means for any k m, LkX 2 KSm(L,X). From (5), the convolution can be written as g(L)X = 1X n=0 g(n)(0) n! LnX ⌘ h X,LX, . . . ,Lm 1X i h ( 0S)T, ( 1S)T, · · · , ( Sm 1)T iT ⌘ Km(L,X) S (6) where Si 2 RF⇥F for i = 1, . . . ,m 1 are parameter matrix blocks. Then, a graph convolutional layer can be be generally written as g(L)XW0 = Km(L,X) SW0 = Km(L,X)WS (7) where WS ⌘ SW0 2 RmF⇥O. 
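The practical content of (6) and (7) is that the filtered signal lives in the column space of the block Krylov matrix K_m(L, X), so a convolution layer reduces to building that matrix and multiplying by a single learnable block W_S ∈ R^{(mF)×O}. A NumPy sketch follows; m is treated as a hyperparameter here, as the paper itself ends up doing in Section 4.2.

```python
import numpy as np

def block_krylov(L, X, m):
    """K_m(L, X) = [X, LX, ..., L^{m-1}X], of shape (N, m*F)."""
    blocks, B = [X], X
    for _ in range(m - 1):
        B = L @ B
        blocks.append(B)
    return np.concatenate(blocks, axis=1)

def krylov_conv_layer(L, X, W_S):
    """g(L) X W_0  ~  K_m(L, X) W_S, with W_S in R^{(m F) x O} learnable (eq. (7))."""
    m = W_S.shape[0] // X.shape[1]
    return block_krylov(L, X, m) @ W_S
```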
The essential number of learnable parameters is mF ⇥O. 3.3 Deep GCN in the Block Krylov Subspace Form Since the spectral graph convolution can be simplified as (6)(7), we can build deep GCN in the following way. Suppose that we have a sequence of analytic spectral filters G = {g0, g1, . . . , gn} and a sequence of pointwise nonlinear activation functions H = {h0, h1, . . . , hn}. Then, a deep spectral graph convolution network can be written as Y = softmax n gn(L) hn 1 n · · · g2(L) h1 n g1(L) h0 n g0(L)XW00 o W01 o W02 · · · o W0n o (8) Define H0 = X, Hi+1 = hi{gi(L)HiWi}, i = 0, . . . ,n 1 Then, we have Y = softmax{Kmn (L,Hn)WSnn } From (7) and (8), we see we can write Hi+1 = hi{Kmi (L,Hi)WSii }, mi ⌘ m(L,Hi) It is easy to see that, when gi(L) = I, (8) is a fully connected network [21]; when n = 1, g0(L) = g1(L) = L, where L is defined in (3), it is just GCN [18]; when gi(L) is defined by the Chebyshev polynomial [15], W0i = I, (8) is ChebNet [7]. 3.4 Di culties & Inspirations In the last subsection, we gave a general form of deep GCN in the block Krylov form. Following this idea, we can leverage the existing block Lanczos algorithm [11, 10] to find mi and compute orthogonal basis ofKSmi (L,Hi) which makes the filter coe cients compact [25] and improve numerical stability. But there are some di culties in practice: 1. During the training phase, Hi changes every time when parameters are updated. This makes mi a variable and thus requires adaptive size for parameter matrices WSii . 2. For classical inner product, the QR factorization that is needed in block Lanczos algorithm [11] is di cult to be implemented in backpropagation framework. Despite implementation intractability, block Krylov form is still meaningful for constructing GCNs that are scalable in depth as we illustrate below. For each node v 2 {1, . . . ,N} in the graph, denote N(v) as the set of its neighbors and Nk(v) as the set of its k-hop neighbors. Then, LX(v, :) can be interpreted as a weighted mean of the feature vectors of v and N(v). If the network goes deep as (4), Y0(v, :) becomes the “weighted mean” of the feature vectors of v and N(n+1)(v) (not exactly weighted mean because we have ReLU in each layer). As the scope grows, the nodes in the same connected component tend to have the same (global) features, while losing their individual (local) features, which makes them indistinguishable. Such phenomenon is recognized as “oversmoothing” [21]. Though it is reasonable to assume that the nodes in the same cluster share many similar properties, it will be harmful to omit the individual di↵erences between each node. Therefore, the inspiration from the block Krylov form is that, to get a richer representation of each node, we need to concatenate the multi-scale information (local and global) together instead of merely doing smoothing in each hidden layer. If we have a smart way to stack multi-scale information, the network will be scalable in depth. To this end, we naturally come up with a densely connected architecture [17], which we call snowball network and a compact architecture, which we call the truncated Krylov network, in which the multi-scale information is used di↵erently. 4 Deep GCN Architectures 4.1 Snowball The block Krylov form inspires first an architecture that concatenates multi-scale features incrementally, resulting in a densely-connected graph network (Figure 2(a)) as follows: H0 = X, Hl+1 = f (L [H0,H1, . . . ,Hl] Wl) , l = 0, 1, . . . ,n 1 C = g ([H0,H1, . . . 
,Hn] Wn) (9) output = softmax (LpCWC) where Wl 2 R( Pl i=0 Fi)⇥Fl+1 ,Wn 2 R( Pn i=0 Fi)⇥FC and WC 2 RFC⇥FO are learnable parameter matrices, Fl+1 is the number of output channels in layer l; f and g are pointwise activation functions; Hl are extracted features; C is the output of a classifier of any kind, e.g., a fully connected neural network or even an identity layer, in which case C = [H0,H1, . . . ,Hn]; p 2 {0, 1}. When p = 0, Lp = I and when p = 1, LP = L, which means that we project C back onto graph Fourier basis, which is necessary when the graph structure encodes much information. Following this construction, we can stack all learned features as the input of the subsequent hidden layer, which is an e cient way to concatenate multi-scale information. The size of input will grow like a snowball and this construction is similar to DenseNet [17], which is designed for regular grids (images). Thus, some advantages of DenseNet are naturally inherited, e.g., alleviate the vanishing-gradient problem, encourage feature reuse, increase the variation of input for each hidden layer, reduce the number of parameters, strengthen feature propagation and improve model compactness. 4.2 Truncated Krylov The block Krylov form inspires then an architecture that concatenates multi-scale features directly together in each layer. However, as stated in Section 3.4, the fact that mi is a variable makes GCN di cult to be merged into the block Krylov framework. Thus we compromise and set mi as a hyperparameter and get a truncated block Krylov network (Figure 2(b)) as shown below: H0 = X, Hl+1 = f ⇣h Hl,LHl . . . ,Lml 1Hl i Wl ⌘ , l = 0, 1, . . . ,n 1 C = g (HnWn) (10) output = softmax (LpCWC) where Wl 2 R(mlFl)⇥Fl+1 ,Wn 2 RFn⇥FC and WC 2 RFC⇥FO are learnable parameter matrices; f and g are activation functions; C is the output of a classifier of any kind; p 2 {0, 1}. In the truncated Krylov network, the local information will not be diluted in each layer because in each layer l, we start the concatenation from L0Hl so that the extracted local information can be kept. There are works on the analysis of error bounds of doing truncation in block Krylov methods [11]. But the results need many assumptions either on X, e.g., X is a standard Gaussian matrix [34], or on L, e.g., some conditions on the smallest and largest eigenvalues of L have to be satisfied [28]. Instead of doing truncation for a specific function or a fixed X, we are dealing with variable X during training. So we cannot get a practical error bound since we cannot put any restriction on X and its relation to L. The Krylov subspace methods are often associated with low-rank approximation methods for large sparse matrices. Here we would like to mention [25] does low-rank approximation of L by the Lanczos algorithm. It su↵ers from the tradeo↵ between accuracy and e ciency: the information in L will be lost if L is not low-rank, while keeping more information via increasing the Lanczos steps will hurt the e ciency. Since most of the graphs we are dealing with have sparse connectivity structures, they are actually not low-rank, e.g., the Erdős-Rényi graph G(n, p) with p = !( 1n ) [32] and examples in Appendix IV. Thus, we do not propose to do low-rank approximation in our architecture. 4.3 Equivalence of Linear Snowball GCN and Truncated Block Krylov Network In this part, we will show that the two proposed architectures are inherently connected. 
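A compact PyTorch sketch of the two architectures, (9) and (10), is given below. It uses a dense operator L, Tanh activations, and a single linear map as the classifier C; dropout is omitted, and the widths, depths, and block size m are placeholders rather than the paper's tuned hyperparameters.

```python
import torch
import torch.nn as nn

class Snowball(nn.Module):
    """Sketch of the snowball architecture (9): each layer sees all previous features."""
    def __init__(self, n_feat, n_hidden, n_layers, n_class, p=1):
        super().__init__()
        self.p = p
        dim, layers = n_feat, []
        for _ in range(n_layers):
            layers.append(nn.Linear(dim, n_hidden, bias=False))
            dim += n_hidden
        self.layers = nn.ModuleList(layers)
        self.classifier = nn.Linear(dim, n_class, bias=False)   # C as one linear map

    def forward(self, X, L, act=torch.tanh):
        feats = [X]
        for lin in self.layers:
            feats.append(act(L @ lin(torch.cat(feats, dim=1))))  # H_{l+1} = f(L [H_0..H_l] W_l)
        C = self.classifier(torch.cat(feats, dim=1))
        return torch.log_softmax(L @ C if self.p == 1 else C, dim=1)

class TruncatedKrylov(nn.Module):
    """Sketch of the truncated block Krylov architecture (10) with a fixed block size m."""
    def __init__(self, n_feat, n_hidden, n_layers, n_class, m, p=0):
        super().__init__()
        self.m, self.p = m, p
        dims = [n_feat] + [n_hidden] * n_layers
        self.layers = nn.ModuleList(
            [nn.Linear(m * dims[l], dims[l + 1], bias=False) for l in range(n_layers)])
        self.classifier = nn.Linear(n_hidden, n_class, bias=False)

    def forward(self, X, L, act=torch.tanh):
        H = X
        for lin in self.layers:
            blocks, B = [H], H
            for _ in range(self.m - 1):                          # [H, LH, ..., L^{m-1}H]
                B = L @ B
                blocks.append(B)
            H = act(lin(torch.cat(blocks, dim=1)))
        C = self.classifier(H)
        return torch.log_softmax(L @ C if self.p == 1 else C, dim=1)
```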
In fact their equivalence can be established when using identify functions as f , identity layer as C and constraining the parameter matrix of truncated Krylov to be in a special form. In linear snowball GCN, we can split the parameter matrix Wi into i + 1 blocks and write it as Wi = h (W (1) i )T, · · · , (W (i+1) i )T iT and then following (9) we have H0 = X, H1 = LXW0, H2 = L[X,H1]W1 = LXW (1)1 +L 2 XW (1) 0 W (2) 1 = L[X,LX] " I 0 0 W (1) 0 # 2666664 W (1) 1 W (2) 1 3 777775, . . . As in (9), we have CWC = L[H0,H1, . . . ,Hn]WC. Thus we can write [H0,H1 · · · ,Hn]WC = [X,LX, · · · ,LnX] 2 666666666666666664 I 0 · · · 0 0 I · · · 0 ... ... . . . ... 0 0 · · · W (1) 0 3 777777777777777775 2 666666666666666664 I 0 · · · 0 0 I · · · 0 ... ... . . . ... 0 0 · · · W (1) 1 3 777777777777777775 · · · 2 666666666666666664 I 0 · · · 0 0 W (n) n 1 · · · 0 ... ... . . . ... 0 0 · · · W (1) n 1 3 777777777777777775 2 666666666666666664 W (1) C W (2) C ... W (n) C 3 777777777777777775 which is in the form of (7), where the parameter matrix is the multiplication of a sequence of block diagonal matrices whose entries consist of identity blocks and blocks from other parameter matrices. Though the two proposed architectures stack multi-scale information in di↵erent ways, i.e. incremental and direct respectively, the equivalence reveals that the truncated block Krylov network can be constrained to leverage multi-scale information in a way similar to the snowball architecture. While it is worth noting that when there are no constraints, truncated Krylov is capable of achieving more than what snowball does. 4.4 Relation to Message Passing Framework We denote the concatenation operator as k. If we consider L as a general aggregation operator which aggregates node features with its neighborhood features, we see that the two proposed architectures both have close relationships with message passing framework [12], which are illustrated in the following table, where N0(v) = {v}, Mt is a message function, Ut is a vertex update function, m(t+1)v ,h(t+1)v are messages and hidden states at each node respectively, m(t+1) = [m(t+1) 1 , · · · ,m(t+1) N ]T, h(t+1) = [h(t+1) 1 , · · · ,h(t+1) N ]T and is a nonlinear activation function. Compared to our proposed architectures, we can see that the message passing paradigm cannot avoid oversmoothing problem because it does not leverage multi-scale information in each layer and will finally lose local information. An alternate solution to address the oversmoothing problem could be to modify the readout function to ŷ = R({h(0)v ,h(1)v , . . . ,h(T)v |v 2V}). 5 Experiments On node classification tasks, we test 2 instances of the snowball GCN and 1 instance of the truncated Krylov GCN, which include linear snowball GCN ( f = g = identity, p = 1), snowball GCN ( f = Tanh, g = identity, p = 1) and truncated Krylov ( f = g = Tanh, p = 0). The test cases include on public splits [37, 25] of Cora, Citeseer and PubMed2, as well as 2Source code to be found at https://github.com/PwnerHarry/Stronger_GCN Table 1: Algorithms in Matrix and Nodewise Forms Forms Algorithms Matrix Nodewise Message Passing m (t+1) =Mt(A,h(t)) m(t+1)v = P w2N(v) Mt(h(t)v ,h (t) w , evw) h (t+1) = Ut(h(t),m(t+1)) h(t+1)v = Ut(h (t) v ,m (t+1) v ) GraphSAGE-GCN m (t+1) = Lh(t) m(t+1)v = mean({h(t)v } [ {h(t)N(v)}) h (t+1) = (m(t+1)Wt) h(t+1)v = (WTt m (t+1) v ) Snowball m (t+1) = L[h(0)k . . . 
kh(t)] m(t+1)v = kti=0mean({h (i) v } [ {h(i)N(v)}) h (t+1) v = (m(t+1)Wt) h (t+1) v = (WTt m (t+1) v ) Truncated Krylov m (t+1) = h(t)k . . . kLmt 1h(t) m(t+1)v = kmt 1i=0 mean([ik=0{h (t) Nk(v)}) h (t+1) = (m(t+1)Wt) h(t+1)v = (WTt m (t+1) v ) the crafted smaller splits that are more di cult [25, 21, 31]. We compare the instances against several methods under 2 experimental settings, with or without validations sets. The compared methods with validation sets include graph convolutional networks for fingerprint (GCN-FP) [8], gated graph neural networks (GGNN) [23], di↵usion convolutional neural networks (DCNN) [1], Chebyshev networks (Cheby) [7], graph convolutional networks (GCN) [18], message passing neural networks (MPNN) [12], graph sample and aggregate (GraphSAGE) [14], graph partition neural networks (GPNN) [24], graph attention networks (GAT) [33], LanczosNet (LNet) [25] and AdaLanczosNet (AdaLNet) [25]. The copmared methods without validation sets include label propagation using ParWalks (LP) [35], Cotraining [21], Self-training [21], Union [21], Intersection [21], GCN without validation [21], Multi-stage training [31], Multi-stage self-supervised (M3S) training [31], GCN with sparse virtual adversarial training (GCN-SVAT) [30] and GCN with dense virtual adversarial training (GCN-DVAT) [30]. In Table 2 and 3, for each test case, we report the accuracy averaged from 10 independent runs using the best searched hyperparameters. These hyperparameters are reported in the appendix, which include learning rate and weight decay for the optimizers RMSprop or Adam for cases with validation or without validation, respectively, taking values in the intervals [10 6, 5 ⇥ 10 3] and [10 5, 10 2], respectively, width of hidden layers taking value in the set {100, 200, · · · , 5000}, number of hidden layers in the set {1, 2, . . . , 50}, dropout in (0, 0.99], and the number of Krylov blocks taking value in {1, 2, . . . , 100}. An early stopping trick is also used to achieve better training. Specifically we terminate the training after 100 update steps of not improving the training loss. We see that the instances of the proposed architectures achieve overwhelming performance in all test cases. We visualize a representative case using t-SNE [26] in Figure 3. From these visualization, we can see the instances can extract good features with small training data, especially for the truncated block Krylov network. Particularly, when the training splits are small, they perform astonishingly better than the existing methods. This may be explained by the fact that when there is less labeled data, larger scope of vision field is needed to make recognition of each node or to let the label signals propagate. We would also highlight that the linear snowball GCN can achieve state-of-the-art performance with much less computational cost. If G has no bipartite components, then in (4), as n ! 1, rank(Y0) k almost surely. 6 Future Works Future research of this like includes: 1) Investigating how the pointwise nonlinear activation functions influence block vectors, e.g., the feature block vector X and hidden feature block vectors Hi, so that we can find possible activation functions better than Tanh; 2) Finding a better way to leverage the block Krylov algorithms instead of conducting simple truncation. 
Acknowledgements The authors wish to express sincere gratitude for the computational resources of Compute Canada provided by Mila, as well as for the proofreading done by Sitao and Mingde’s good friend & coworker Ian P. Porada.
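To complement the experimental protocol described in Section 5 (full-batch optimisation with RMSprop or Adam and early stopping after 100 update steps without improvement of the training loss), here is a minimal sketch of such a training loop for the classes sketched above. The tensors L, X, labels and train_mask, as well as the default hyperparameter values, are illustrative assumptions rather than the authors' exact setup.

```python
import torch
import torch.nn.functional as F


def train(model, L, X, labels, train_mask, lr=5e-4, weight_decay=1e-3,
          max_steps=5000, patience=100):
    """Full-batch training; stop after `patience` steps without training-loss improvement."""
    opt = torch.optim.RMSprop(model.parameters(), lr=lr, weight_decay=weight_decay)
    best_loss, bad_steps = float("inf"), 0
    for step in range(max_steps):
        opt.zero_grad()
        logits = model(L, X)                     # e.g. the SnowballGCN sketched above
        loss = F.cross_entropy(logits[train_mask], labels[train_mask])
        loss.backward()
        opt.step()
        if loss.item() < best_loss:              # early-stopping trick from Section 5
            best_loss, bad_steps = loss.item(), 0
        else:
            bad_steps += 1
            if bad_steps >= patience:
                break
    return model
```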
1. What are the contributions of the paper to the field of Graph Convolutional Networks (GCNs)? 2. What are the weaknesses of the paper in terms of writing and organization? 3. How convincing are the experimental results in demonstrating the effectiveness of the proposed architectures? 4. What additional experiments or comparisons would strengthen the paper's claims and broaden its scope? 5. What factors does the paper suggest are important for the performance of the proposed architectures, and how do they impact accuracy and complexity?
Review
Review As described, the paper makes some strong contributions to the knowledge of GCNs. However, some parts of the paper are not clearly written or organized. As an example, the discussion and numerical results around Figure 1 are difficult to understand. Additionally, some figure and table captions and some mathematical derivations are not clearly expressed and should be revised. The proposed architectures are evaluated and compared on 3 common datasets (all for classification of scientific publications into one of several classes), and the results on these problems are fairly convincing. However, it would be more convincing to expand the experiments and interpretation to different families of problems (with diverse data structures), as well as to compare the time and memory complexity associated with these approaches. Apart from the number of layers, it is not clear from the paper and its experiments what factors are critical for the performance of the proposed architectures. What key factors can affect the accuracy and complexity of the proposed architectures?
NIPS
1. What is the contribution of the paper regarding the scalability of Graph Convolutional Networks (GCN)? 2. What are the strengths and weaknesses of the proposed architectures, snowball GCN and truncated Krylov block network? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. Are there any concerns or questions regarding the experimental results and comparisons with other works? 5. How does the reviewer evaluate the significance and impact of the paper's findings in the field of GCN research?
Review
Review
# originality
The theoretical analyses on the scalability of GCN have great originality and are important in practice. The idea of using Krylov subspace methods to learn the spectral filter has been a fairly common one recently, e.g., LanczosNet. The proposed architectures, namely the snowball GCN and the truncated block Krylov network, have a certain novelty.
# quality
- I would like to see an ablation test in order to investigate whether it is Tanh or the snowball/truncated Krylov structure that works. This is very important for clarifying the source of the difficulty of training GCNs, given the good end-to-end performance.
- The experiments mostly follow the procedures in existing work, which makes for fair comparisons. If there were fancier applications or datasets, that would be more interesting. I guess Tables 1 and 2 do not show precision but rather classification accuracy (I also checked the submitted code, which also reports classification accuracy). At least LanczosNet and Multi-stage training report the same numbers in those tables as classification accuracy in their original papers. Ditto for Tables 3 and 4 in the appendix. By chance, I found that the number for LNet + Cora + 3% should be 76.3 if you refer directly to the number in the original paper.
- It is interesting to look at two different architectures, but I would like to know why you need to propose two, especially given the equivalence analysis in Section 5.3. Can I say: use snowball if you do not have much computational resource, and truncated Krylov otherwise?
# clarity
- The introduction to GCN and the Krylov subspace methods is very good and accessible to outsiders. I just suggest a few points that could be improved.
  * In Section 4.1, the block vector subspace first appears in l.92 but is only defined later in l.104. Could you give the definition first?
  * About 5 lines after l.115 (between l.115 and l.116), the spectrum radius is undefined. Could you define it?
- The explanation of the motivation for the proposed architectures might be a bit misleading. I might have misunderstood, but let me confirm so as not to get something wrong. You claim that the proposed architectures are motivated by the theoretical analysis (Theorems 1 and 2) in l.39; this is partially the case since you choose to use the Tanh activation instead of ReLU, which would make GCN more scalable and retain richer information even in deeper networks. However, the main ideas in your Snowball GCN and Truncated Block Krylov Network seem motivated by how to alleviate the difficulty in the Krylov subspace methods (in Section 4.4). Could you clarify this part? Or you may emphasize this difficulty somehow in the introduction.
- Please explain the percentages under the dataset names in Tables 1 and 2 (the train-validation split ratio, right?). Ditto for Tables 3 and 4 in the appendix.
# significance
This work gives the important insight that the commonly used ReLU activation might not be suitable in the case of GCNs. This fact is also confirmed through simple empirical studies. Follow-up work can investigate this line as one of the open problems in the GCN field.
=========== After feedback ===========
Thank you for providing answers. I will leave my score the same. I still feel it would be nice to have a message saying which architecture we should use, based on some trade-off relationship if there is one.
NIPS
1. What are the limitations of the paper's approach to graph convolutional networks? 2. How does the method compare to other approaches based on the message passing paradigm? 3. Why is the graph defined using both edges and adjacency? 4. Is the method truly spectrum-free, or does it still rely on spectral methods? 5. What could be improved in the experiments section for better clarity? 6. What are some potential future directions for research in this area?
Review
Review The paper is clear, but some parts could be improved; for example, the authors refer to scalability issues for GCN in the sense of stacking multiple layers, whereas the term usually refers to scalability with respect to the size of the input. The authors focus on a very specific instantiation of graph convolutional networks, namely GCN, and on spectral methods. How does the method compare to approaches based on the more general message passing paradigm that can implement both local and global computation? Laplacian smoothing is not necessarily an issue there. Why is the graph defined using both edges and adjacency? Isn’t it enough to have either one? Please explain. At the end of Section 2 it is said that the Chebyshev polynomial constitutes a spectrum-free method. The method does not require the computation of the eigendecomposition; however, the resulting method still behaves as a spectrum-based one. The experiments are very thorough and show that the proposed method achieves good performance in all the proposed tasks. However, the section is quite short and should be expanded for better clarity. For example, it is not specified which column corresponds to the usual data regime (the non-decimated setting) used for each experiment. The Future Works section in its current form is not very useful. I’d consider some rewriting of Sections 4 and 5 to make space for better motivation and explanation of the results.
NIPS
Title Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability Abstract Recent findings demonstrate that modern neural networks trained by full-batch gradient descent typically enter a regime called Edge of Stability (EOS). In this regime, the sharpness, i.e., the maximum Hessian eigenvalue, first increases to the value 2/(step size) (the progressive sharpening phase) and then oscillates around this value (the EOS phase). This paper aims to analyze the GD dynamics and the sharpness along the optimization trajectory. Our analysis naturally divides the GD trajectory into four phases depending on the change in the sharpness value. We empirically identify the norm of the output layer weight as an interesting indicator of the sharpness dynamics. Based on this empirical observation, we attempt to theoretically and empirically explain the dynamics of various key quantities that lead to the change of the sharpness in each phase of EOS. Moreover, based on certain assumptions, we provide a theoretical proof of the sharpness behavior in the EOS regime in two-layer fully-connected linear neural networks. We also discuss some other empirical findings and the limitations of our theoretical results. 1 Introduction Deep learning has achieved great success in a variety of machine learning applications, and gradient-based algorithms are the prevailing optimization methods for training deep neural networks. However, mathematically understanding the behavior of the optimization methods for deep learning is highly challenging, due to non-convexity, over-parameterization, and complicated architectures. In particular, some recent empirical findings in deep networks contradict the traditional understanding of gradient methods. For example, Wu et al. [30] observed that the solution found by gradient descent has sharpness approximately equal to 2/η instead of just being smaller than 2/η. Also, Jastrzebski et al. [14] observed that there is a break-even point in the SGD trajectory, and after this point, there is a regularization effect on the loss curvature. One recent well-known example is the phenomenon called “Edge of Stability” (EOS) (Cohen et al. [6]). According to classical optimization theory, the learning rate η of a gradient-based method should be smaller than 2/λ so that the loss can decrease, where λ is the largest eigenvalue of the Hessian of the objective, also called “sharpness” in the literature. Otherwise, the loss diverges (even for simple quadratic functions). However, the empirical findings in Cohen et al. [6] show that under various network settings, the EOS phenomenon typically occurs along the gradient descent trajectory: (1) the sharpness first increases until it reaches 2/η (called “progressive sharpening”); (2) the sharpness starts hovering around 2/η (the EOS regime); and (3) the loss non-monotonically decreases without diverging. Although (1) seems to be consistent with the traditional beliefs about optimization, a rigorous mathematical explanation for it is still open. ∗Contributed equally, listed in alphabetical order. †The authors are supported in part by the National Natural Science Foundation of China Grant 62161146004, Turing AI Institute of Nanjing and Xi’an Institute for Interdisciplinary Information Core Technology. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Moreover, phenomena (2) and (3) are more mysterious because they violate the η < 2/λ “rule” in traditional optimization theory, yet the training loss does not completely diverge. Instead, the loss may oscillate but still decrease in the long run, while the sharpness seems to be restrained from further increasing. In this paper, we aim to provide a theoretical and empirical explanation for the mystery of EOS. Towards this goal, we focus on the dynamics of these key quantities when EOS happens and attempt to find out the main driving force behind these phenomena along the gradient descent trajectory, from both theoretical and empirical perspectives. 1.1 Our Contributions Our contributions can be summarized as follows. (Section 3.1) We analyze the typical sharpness behavior along the gradient descent trajectory when EOS happens, and propose a four-phase division of the GD trajectory, based on the dynamics of some key quantities such as the loss and the sharpness, for further understanding this phenomenon. (Section 3.2) We empirically identify the weight norm of the output layer as an effective indicator of the sharpness dynamics. We show that analyzing the dynamics of this surrogate can qualitatively explain the dynamics of the sharpness. By assuming this relation, together with some additional simplifying assumptions and approximations, we can explain the dynamics of the sharpness, the loss, and the output layer norm in each phase of EOS (Section 3.3). In this context, we also offer an interesting explanation for the non-monotonic loss decrement (also observed in Cohen et al. [6], Xing et al. [32]) (Section 3.4). (Section 4) Following similar ideas, we provide a more rigorous proof for the progressive sharpening and EOS phenomena in a two-layer fully-connected linear neural network setting, based on certain assumptions. The assumptions made here are either weaker or arguably less restrictive. 1.2 Related work The structure of the Hessian The Hessian matrix carries the second-order information of the loss landscape. Several prior works have empirically found that the spectrum of the Hessian has several “outliers” and a continuous “bulk” (Sagun et al. [28, 29], Papyan [25, 26]). Typically, each outlier corresponds to one class in multi-class classification. As we consider the binary classification setting, there is typically one outlier (i.e., the largest eigenvalue) that is much larger than the other eigenvalues, which is consistent with our Assumption 4.1. The Gauss-Newton decomposition of the Hessian was used in several prior works (Martens [23], Bottou et al. [4], Papyan [25, 26]). Papyan [25] empirically showed that the outliers of the Hessian can be attributed to a “G component”, which is also known as the Fisher Information Matrix (FIM) in Karakida et al. [15, 16]. Also, Wu et al. [31] analyzed the leading Hessian eigenspace by approximating the Hessian with a Kronecker factorization and theoretically proved the outlier structure under certain random-setting assumptions. Neural Tangent Kernel A recent line of work studied the learning of over-parameterized neural networks in the so-called “neural tangent kernel” (NTK) regime, or lazy training regime (Jacot et al. [13], Lee et al. [18], Du et al. [8, 7], Arora et al. [2], Chizat et al. [5]). A main result in this regime is that if the neural network is wide enough, gradient flow can find a globally optimal empirical minimizer very close to the initialization. Moreover, the Hessian does not change much in the NTK regime.
Our findings go beyond the NTK setting to analyze the change of the sharpness. Edge of Stability regime The Edge of Stability phenomenon was first formalized by Cohen et al. [6]. Similar phenomena were also identified in Jastrzebski et al. [14] as the existence of the “break-even” point on the SGD trajectory, after which the loss curvature gets regularized. Xing et al. [32] observed that gradient descent eventually enters a regime where the iterates oscillate on the leading curvature direction and the loss drops non-monotonically. Recently, Ahn et al. [1] studied the non-monotonic decreasing behavior of GD, which they called unstable convergence, and discussed the possible causes of this phenomenon. Ma et al. [22] proposed a special subquadratic landscape property and proved that EOS occurs based on this assumption. Arora et al. [3] studied the implicit bias on the sharpness of deterministic gradient descent in the EOS regime. They proved that in some specific settings, with a varying learning rate (called normalized GD) or with a modified loss √L, gradient descent enters EOS and further reduces the sharpness. They mainly focus on the analysis near the manifold of minimum loss, but our analysis also applies to the early stage of training when the loss is not close to the minimum. In particular, our analysis provides an explanation of the non-monotonic loss decrease that cannot be explained by their theory. Another difference is that they consider √L (for a constant learning rate) where L is a fairly general MSE loss independent of any neural network structure, while our analysis is strongly tied to the MSE loss of a neural network. Very recently, Lyu et al. [21] explained how GD enters EOS for a normalized loss (e.g., neural networks with normalization layers), and analyzed the sharpness reduction effect along the training trajectory. The notion of sharpness in their work is somewhat different due to normalization. In particular, they consider the so-called spherical sharpness, that is, the sharpness of the normalized weight vector. They also mainly studied the regime where the parameter is close to the manifold of minimum loss, as in [3], and proved that GD approximately tracks a continuous sharpness-reduction flow. Lewkowycz et al. [19] proposed a similar regime called the “catapult phase”, where the loss does not diverge even if the largest Hessian eigenvalue is larger than 2/η. Our work mainly considers training in this regime and assumes that the training is not in the “divergent phase” of Lewkowycz et al. [19]. Compared with Lewkowycz et al. [19], we provide a more detailed analysis in more general settings along the gradient descent trajectory. 2 Preliminaries Notations: We denote the training dataset as {x_i, y_i}_{i=1}^n ⊂ R^d × {1, −1} and the neural network as f : R^d × R^p → R. The network f(θ, x) maps the input x ∈ R^d and parameter θ ∈ R^p to an output in R. In this paper, we mainly consider the case of binary classification with the mean square error (MSE) loss ℓ(z, y) = (z − y)^2. Denote the input matrix as X = (x_1, x_2, ..., x_n) ∈ R^{d×n} and the label vector as Y = (y_1, y_2, ..., y_n) ∈ R^n. We let F(t) = (f(θ(t), x_1), f(θ(t), x_2), ..., f(θ(t), x_n)) ∈ R^n and D(t) = F(t) − Y be the (output) prediction vector and the residual vector, respectively, at time t. The training objective is: L(f(θ)) = (1/n) ∑_{i=1}^n ℓ(f(θ, x_i), y_i) = (1/n) ∑_{i=1}^n (f(θ, x_i) − y_i)^2. Hessian, Fisher information matrix and NTK: In this part, we apply previous works to show that the largest eigenvalue of the Hessian is almost the same as the largest eigenvalue of the NTK.
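As a concrete reference for this notation, the following minimal numpy sketch instantiates X, Y, F(t), D(t) and the training objective; the tiny random dataset and the two-layer tanh network are hypothetical stand-ins, not the architectures used in the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 5, 16                       # samples, input dimension, width
X = rng.standard_normal((d, n))          # input matrix, columns are the x_i
Y = rng.choice([-1.0, 1.0], size=n)      # binary labels in {1, -1}

# A hypothetical scalar-output network f(theta, x) = A^T tanh(W x).
W = rng.standard_normal((m, d)) / np.sqrt(d)
A = rng.standard_normal(m) / np.sqrt(m)

F = A @ np.tanh(W @ X)                   # prediction vector F(t) in R^n
D = F - Y                                # residual vector D(t) = F(t) - Y
loss = np.mean(D ** 2)                   # L = (1/n) * sum_i (f(theta, x_i) - y_i)^2
print(loss)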
We use the latter as the definition of the sharpness in this paper. Further details can be found in Appendix F. As shown in Papyan [26], Martens [23], Bottou et al. [4], the Hessian can be decomposed into two components, where the term known as “Gauss-Newton matrix”, G-term or Fisher information matrix (FIM), dominates the second term in terms of the largest eigenvalue. Meanwhile, Karakida et al. [16] pointed out the duality between the FIM and a Gram matrix M , defined as M = 2n ∂F (θ) ∂θ ∂F (θ) ∂θ ⊤ . It is also known as the neural tangent kernel NTK (Karakida et al. [16, 15]), which has been studied extensively in recent years (see e.g., [13],[8],[2],[5]). Note that in this paper, we do not assume the training is in NTK regime, in which the Hessian does not change much during training. It is not hard to see that M and FIM share the same non-zero eigenvalues: if Gu = λu for some eigenvector u ∈ Rp, M ∂F (θ)∂θ u = ∂F (θ) ∂θ Gu = λ ∂F (θ) ∂θ u, i.e., λ is also an eigenvalue of M . In this paper, we use θ(t) to denote the parameter at iteration t (or time t) and the sharpness at time t as Λ(t) = Λ(θ(t)). We similarly define M(t),F (t),D(t),L(t). Here we show the gradient flow dynamics of the residual vector D(t): dD(t) dt = ∂D(t) ∂θ dθ(t) dt = −∂F (t) ∂θ ∂L(t) ∂θ = − 2 n ∂F (t) ∂θ ∂F (t) ∂θ ⊤ D(t) = −M(t)D(t) (1) 3 A Four-phase Analysis of GD Dynamics In this section, we study the dynamics of gradient descent and the change of sharpness along the optimization trajectory. We divide the whole training process into four phases, occurring repeatedly in the EOS regime. In Section 3.1, we introduce the four phases. In Section 3.2, we show empirically that the change of the norm of the output layer weight vector almost coincides with the change of the sharpness. In Section 3.3, using this observation, we attempt to explain the dynamics of each phase and provide a mathematical explanation for the changes in the sharpness. In Section 3.4, we explain why the loss decreases but non-monotonically. We admit that a completely rigorous theoretical explanation is still beyond our reach and much of our argument is based on various simplifying assumptions and is somewhat heuristic at some points. Due to space limits, we defer all the proofs in this section to Appendix E.1. 3.1 A Four-phase Division To further understand the properties along the trajectory when EOS happens, we study the behaviors of the loss and the sharpness during the training process. As illustrated in Figure 1, we train a shallow neural network by gradient descent on a subset of 1,000 samples from CIFAR-10 (Krizhevsky et al. [17]), using the MSE loss as the objective. Notice that the sharpness keeps increasing while the loss decreases until the sharpness reaches 2/η. Then the sharpness begins to oscillate around 2/η while the loss decreases non-monotonically. This is a typical sharpness behavior in the EOS regime, and consistent with the experiments in [6]. We divide the training process into four phases according to the evolution of the loss, the sharpness, and their correlation, as shown in Figure 1. The four phases happen cyclically along the training trajectory. We first briefly describe the properties of each phase and explain the dynamics in more detail in Section 3.3. Phase I: Sharpness Λ < 2/η. In this stage, all the eigenvalues of Gram matrix M are below the threshold 2/η. 
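A minimal numpy sketch of the Gram matrix M = (2/n) (∂F/∂θ)(∂F/∂θ)^⊤, of the sharpness as its largest eigenvalue, and of one Euler step of the residual dynamics (1); the small tanh network and data are hypothetical, and the Jacobian is written out by hand only for this toy model.

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 5, 16
X = rng.standard_normal((d, n))
Y = rng.choice([-1.0, 1.0], size=n)
W = rng.standard_normal((m, d)) / np.sqrt(d)
A = rng.standard_normal(m) / np.sqrt(m)

def jacobian(W, A, X):
    # Rows are dF_i/dtheta for f(theta, x) = A^T tanh(W x), theta = (A, vec(W)).
    H = np.tanh(W @ X)                               # (m, n) hidden features
    J_A = H.T                                        # dF_i/dA -> (n, m)
    G = A[:, None] * (1.0 - H ** 2)                  # A_k * tanh'(w_k . x_i), (m, n)
    J_W = np.einsum('ki,ji->ikj', G, X).reshape(n, m * d)   # dF_i/dvec(W)
    return np.concatenate([J_A, J_W], axis=1)        # (n, p)

J = jacobian(W, A, X)
M = (2.0 / n) * J @ J.T                              # Gram / NTK matrix, (n, n)
sharpness = np.linalg.eigvalsh(M)[-1]                # Lambda(t): largest eigenvalue of M

D = A @ np.tanh(W @ X) - Y
dt = 1e-2
D_next = D - dt * M @ D                              # Euler step of dD/dt = -M(t) D(t)
print(sharpness, np.linalg.norm(D), np.linalg.norm(D_next))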
In particular, using standard initialization, the training typically starts from this phase, and during this phase the loss keeps decreasing and the sharpness keeps growing along the trajectory. This initial phase is called progressive sharpening (PS) in prior work Cohen et al. [6]. Empirically, the behavior of GD trajectory (as well as the loss and the sharpness) is very similar to that of gradient flow, until the sharpness reaches 2/η (this phenomena is also observed in Cohen et al. [6]. See Figure 5 or Appendix J.1 in their paper). We note that GD may come back to this phase from Phase IV later. Phase II: Sharpness Λ > 2/η. In this phase, the sharpness exceeds 2/η and may keep increasing. We will show shortly that the fact that Λ > 2/η causes |D⊤v1| (where v1 the first eigenvector of M ) to increase exponentially (Lemma 3.2). This would quickly lead ∥D∥ to exceed ∥Y ∥ in a few iterations, which leads the sharpness to start decreasing by Proposition 3.1, hence the training process enters Phase III. Phase III: Sharpness Λ > 2/η yet begins to gradually drop. Before Λ drops below 2/η, Lemma 3.2 still holds, so |D⊤v1| keeps increasing. Proposition 3.1 still holds and thus the sharpness keeps decreasing until it is below 2/η, at which point we enter Phase IV. A distinctive feature of this phase is that the loss may increase due to the exponential increase of |D⊤v1|. Phase IV: Sharpness Λ < 2/η. When the sharpness is below 2/η, |D⊤v1| begins to decrease quickly, leading the loss to decrease quickly. At the same time, the sharpness keeps oscillating and gradually decreasing for some iterations. This lasts until the loss decrease to a level that is around its value right before Phase III. The sharpness is still below 2/η and our training process gets back to Phase I. 3.2 The Norm of the Output Layer Weight It is difficult to rigorously analyze the dynamics of the sharpness Λ(t). In this subsection, we make an interesting observation, that the change of the norm of the output layer of the network (usually a fully-connected linear layer) is consistent with the change of the sharpness most of the time. In particular, for a general neural network f(x) = A⊤h(W ,x), where A ∈ Rm is the output layer weight and the feature extractor h : Rp × Rd → Rm outputs a m-dimensional feature vector (h corresponds to all but the last layers). W ∈ Rp is the parameter vector of the extractor h. Note that M = (∂F∂θ ) ⊤(∂F∂θ ) can be decomposed as follows: M = ( ∂F ∂θ )( ∂F ∂θ )⊤ = ( ∂F ∂A )( ∂F ∂A )⊤ + ( ∂F ∂W )( ∂F ∂W )⊤ := MA +MW . where the (i, j)−entry of MW is (MW )ij = 〈 ∂f(xi) ∂W , ∂f(xj) ∂W 〉 = A⊤ ∂h(W ,xi)∂W ∂h(W ,xj) ∂W ⊤ A. In this expression, intuitively ∥A∥ should be positively related to ∥MW ∥. We empirically observe that the part MA = ( ∂F∂A )( ∂F ∂A ) ⊤ has a much smaller spectral norm compared to the whole Gram matrix M (see Figure 3(a) and Appendix D), which means ∥MW ∥ dominates ∥MA∥. Therefore, ∥A∥ should be positively correlated with ∥M∥. The benefit of analyzing ∥A∥2 is that the gradient flow of ∥A∥2 enjoys the following clean formula: d∥A∥2 dt = −2 ( ∂L ∂A )⊤ A = − 4 n D⊤ ( ∂F ∂A ) A = − 4 n D⊤F . (2) In this work, we do experiments on two-layer linear networks, fully connected deep neural networks, and Resnet18, and all of them have such output layer structures. From Figure 3(a), we can observe that the output layer norm ∥A∥2 and the sharpness Λ change in the same direction most of the time along the gradient descent trajectory, i.e., they both increase or decrease at the same time. 
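The decomposition M = M_A + M_W and the gradient-flow identity (2) can be checked numerically; below is a minimal sketch on a hypothetical two-layer tanh network with random toy data (so the relative sizes of the two spectral norms need not match the paper's trained networks). The last lines compare the change of ||A||^2 after one tiny gradient step with the first-order prediction -(4η/n) D^⊤F.

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 5, 16
X = rng.standard_normal((d, n))
Y = rng.choice([-1.0, 1.0], size=n)
W = rng.standard_normal((m, d)) / np.sqrt(d)
A = rng.standard_normal(m) / np.sqrt(m)

H = np.tanh(W @ X)                        # features h(W, x_i), shape (m, n)
F = A @ H                                 # predictions
D = F - Y                                 # residuals

J_A = H.T                                 # dF_i/dA
G = A[:, None] * (1.0 - H ** 2)
J_W = np.einsum('ki,ji->ikj', G, X).reshape(n, m * d)   # dF_i/dvec(W)
M_A = J_A @ J_A.T                         # last-layer block of the Gram matrix
M_W = J_W @ J_W.T                         # inner-layer block (carries the A-dependence)
print(np.linalg.norm(M_A, 2), np.linalg.norm(M_W, 2))

# Check identity (2): d||A||^2/dt = -(4/n) D^T F, via one small gradient step on A.
eta = 1e-5
grad_A = (2.0 / n) * H @ D                # dL/dA for the MSE objective
A_new = A - eta * grad_A
print(A_new @ A_new - A @ A)              # observed change of ||A||^2
print(-(4.0 * eta / n) * D @ F)           # first-order prediction from (2)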
We note that they may change in different directions very occasionally around the time when ∥A(t+ 1)∥2 − ∥A(t)∥2 changes its sign (see the experiments in Figure 2). 3.3 Detailed Analysis of Each Phase In this section, we explain the dynamics of each phase in more detail. For clarity, we first list the assumptions we need in this section. For different phases, we may need some different assumptions to simplify the arguments. Most of the assumptions are consistent with the experiments or the findings in the literature. Some of them are somewhat stronger, and we also discuss how to relax them. 3.3.1 Assumptions Used in Section 3.3 Assumption 3.1. (A-norm and sharpness) Along the gradient descent training trajectory, for all time t, the norm ∥A(t)∥ of the output layer and the sharpness Λ(t) moves in the same directions, i.e., sign(Λ(t+ 1)− Λ(t)) = sign(∥A(t+ 1)∥ − ∥A(t)∥). It is the key observation that we have discussed in Section 3.2. The following are two assumptions about the gradient descent trajectory. The first one assumes that D(t) and ∥A∥2 are updated according their first order approximations. Empirical justification of this approximation can be found in Appendix D.1.3. Assumption 3.2. (First Order Approximation of GD) Along the gradient descent trajectory, the update rule is assumed as the first order approximation D(t+ 1)−D(t) = −ηM(t)D(t), ∥A(t+ 1)∥2 − ∥A(t)∥2 = −4η n D(t)⊤F (t) (3) Assumption 3.3. (Gradient flow for the PS phase) When Λ(t) < 2/η, D(t) follows the gradient flow trajectory: dD(t)dt = −M(t)D(t). Assumption 3.3 holds empirically, especially in the progressive sharpening phase (see Figure 5 or Appendix J.1 in Cohen et al. [6]) when the networks are continuously differentiable. We include these experimental details in Appendix D. See also (Theorems 4.3 and 4.5) in Arora et al. [3] for further theoretical justification. We need this assumption for the proof in the progressive sharpening phase. Then we state an assumption on the upper bound of the sharpness to restrict the regime we discuss: Assumption 3.4. (Sharpness upper bound) If the training does not diverge, there exists some constant BΛ, such that 0 < Λ(t) ≤ BΛη for all t. This assumption states that there is an upper bound of the sharpness throughout the optimization process. Actually, in Lewkowycz et al. [19], they proved that 4/η is an upper bound of the sharpness in a two-layer linear network with one datapoint, otherwise the training process (loss) would diverge. They empirically found that similar upper bounds exist also for nonlinear activations, albeit with somewhat larger constant BΛ. In the work, We focus on the case when the loss does not diverge and hence we make Assumption 3.4. The main set of assumptions we need is about the change of M ’s eigendirections. Assumption 3.5. Denote {vi}ni=1 to be the set of eigenvectors of M(t). We have three levels of assumptions on M ’s eigenspace. (i) (fixed eigendirections) the set {vi}ni=1 is fixed throughout the phase under consideration; (ii) (eigendirections move slowly) at all time t and for any i, F (t)⊤ dvi(t)dt < λi(t)D(t) ⊤vi(t); (iii) (principal directions moves slowly) at all time t, there is a small constant ϵ2 ≥ 0 such that ⟨v1(t),v1(t+ 1)⟩ ≥ 1− ϵ2. Clearly, these three assumptions are increasingly weaker from (i) to (iii). Assumption 3.5 (i) on the eigenvectors is somewhat strong, and the eigenvectors corresponding to small eigenvalues may change notably in our experiments. 
We use it to illustrate a basic proof idea of the progressive sharpening phase, but later we relax this assumption to Assumption 3.5 (ii). Moreover, for the proof in Phase II and III, Assumption 3.5 (iii), which only assume that the main direction changes slightly, is sufficient for our proof. Actually, we note that v1(t) (the eigenvector corresponding to the largest eigenvalue) changes slowly and the inner product of its initial direction and its direction at the end of the phase is also large (see Appendix D for the empirical verification). For the proof in Phase II, we need another small technical assumption: Assumption 3.6. Assume D(t)⊤v1(t) ≥ cϵ2∥D(t)∥ for some c > 1 for some t = t0 at the beginning of this phase. Here ϵ2 is defined in Assumption 3.5 (iii). Assumption 3.6 says that D(t) has a non-negligible component in the direction of v1. Since ϵ2 > 0 is a small constant, this is not a strong assumption as some small perturbation (due to discrete updates) would make the assumption hold for some c > 1. 3.3.2 Detailed Analysis In each phase, we attempt to explain the main driving force of the change of the sharpness and the loss. Phase I: In this phase, we show that D(t)⊤F (t) < 0 under certain assumptions (detailed shortly) on the spectral properties of M(t) (see Lemma 3.1 below). By Assumption 3.2, we have ∥A(t + 1)∥2 − ∥A(t)∥2 > 0, implying that the sharpness Λ(t) also increases based on Assumption 3.1. This phase stops if Λ(t) grows larger than 2/η. We assume the output vector F (t) is initialized to be small (this is true if we use very small initial weights). For simplicity, we assume F (0) = 0 in the following argument. Lemma 3.1. For all t in Phase I, under Assumption 3.5 (i) and 3.3, it holds that D(t)⊤F (t) < 0. From this lemma, ∥A∥ keeps increasing by Assumption 3.2; hence the sharpness keeps increasing by Assumption 3.1 until it reaches 2/η or the loss converges to 0. In the former case, the training process enters Phase II, while the latter case is also possible when η is very small (e.g., even the largest possible sharpness value is less than 2/η). We admit that Assumption 3.5 (i) is somewhat strong. In fact, the assumption can be relaxed significantly to Assumption 3.5 (ii). We show in Appendix E.2 that under Assumption 3.5 (ii) and Assumption 3.3, we can still guarantee D(t)⊤F (t) < 0. Moreover, we provide a dynamical system view of the dynamics of D(t)⊤F (t) in that Appendix. Phase II: When the training process just enters Phase II, the sharpness keeps increasing. We show shortly that D(t)⊤v1(t) starts to increase geometrically, and this causes the sharpness to stop increasing at some point, thus entering Phase III. In this phase, we adopt a weaker assumption on the sharpness direction v1: Assumption 3.5 (iii). This assumption holds in our experiments (See Figure 17). Also, Assumption 3.6 is necessary. Lemma 3.2. Suppose Assumption 3.5 (iii) and 3.6 hold during this phase (with constants ϵ2 > 0 and c > 1). If Λ(t) = (2 + τ)/η and τ > 11−ϵ2−1/c − 1, then D(t) ⊤v1(t) increases geometrically with factor (1 + τ)(1− ϵ2 − 1/c) > 1 for t ≥ t0 in this phase. Since D(t)⊤v1(t) increases geometrically, ∥D∥ ≥ D(t)⊤v1(t) will exceed ∥Y ∥ eventually. Next, the following proposition states that when this happens, D(t)⊤F (t) > 0. Consequently, ∥A∥ decreases by Assumption 3.2, leading to the decrement of the sharpness based on our Assumption 3.1. Proposition 3.1. If ∥D(t)∥ > ∥Y ∥, then D(t)⊤F (t) > 0. Phase III: The sharpness is still larger than 2/η, but it starts decreasing. 
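Before going further into Phase III, the driving force identified in Lemma 3.2 can be checked in isolation with a frozen Gram matrix: under the first-order update D(t+1) = (I − ηM)D(t), the component of D along the top eigenvector is multiplied by |1 − ηλ_1| > 1 once λ_1 > 2/η, while the orthogonal remainder keeps shrinking. The following minimal numpy sketch uses an arbitrary synthetic spectrum; the numbers are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                                  # so the stability threshold 2/eta is 200
# Synthetic symmetric "Gram matrix": one outlier eigenvalue above 2/eta,
# all remaining eigenvalues well below it (the outlier spectrum observed empirically).
eigvals = np.array([220.0, 60.0, 30.0, 10.0, 5.0])
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # random orthonormal eigenvectors
M = Q @ np.diag(eigvals) @ Q.T
v1 = Q[:, 0]

D = rng.standard_normal(5)
for t in range(8):
    top = D @ v1                              # component along the top eigendirection
    rest = np.linalg.norm(D - top * v1)       # the orthogonal remainder
    print(t, abs(top), rest)
    D = D - eta * M @ D                       # frozen-M version of update (3)
# |D^T v1| grows by |1 - eta*220| = 1.2 per step, while the remainder contracts.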
Meanwhile, the loss continues to increase rapidly due to Lemma 3.2. Eventually, the sharpness will fall below 2/η and then the training process enters phase IV. By Lemma 3.2, if the sharpness stays above 2/η, then we can have an arbitrarily large loss. According to Proposition 3.1, if the loss is large enough, the sharpness keeps decreasing. Now we show that if the sharpness stays above 2/η, ∥A(t)∥2 will decrease by a significant amount. This partially explains that the sharpness should also decrease significantly until it drops below 2/η (instead of decreasingly converging to a value above 2/η without ever entering the next phase). Proposition 3.2. Under Assumption 3.2, if ∥D(t)∥ > ∥Y ∥, then ∥A(t + 1)∥2 − ∥A(t)∥2 < − 4ηn (∥D(t)∥ − ∥Y ∥) 2. From the above argument, we can see that if D(t)⊤v1(t) is larger than ∥Y ∥, then D(t)⊤v1(t) does not decrease in Phase III, and according to Proposition 3.2, ∥A(t)∥2 decreases significantly, implying the sharpness drops below 2/η eventually. Remark: The fact that the sharpness can provably drop below 2/η in this phase can be proved more rigorously in Section 4 for the two-layer linear setting. See Theorem 2. Phase IV: First, since the training process has just left phase III, D(t)⊤F (t) is still positive and large, hence ∥A(t)∥2 keeps decreasing and the sharpness decreases as well. Since the sharpness stays below 2/η, the loss decreases due to the following descent lemma (with u replaced by D(t)). Lemma 3.3. If Λ(t) < 2/η, then for any vector u ∈ Rn, ∥u⊤(I − ηM(t))∥ ≤ (1 − ηα)∥u∥, where α = min{2/η − Λ(t), λmin(M(t))}. In particular, replacing u with D(t), we can see ∥D(t+ 1)∥ ≤ (1− ηα)2∥D(t)∥. Next we argue that D(t)⊤F (t) will become negative eventually, which indicates that ∥A(t)∥2 and hence the sharpness will grow again. Since the sharpness is below 2/η, D(t)⊤v1(t) decreases geometrically due to Lemma 3.3 (replacing u with D(t)v1(t)v1(t)⊤). In fact, D can be decomposed into the v1-component v1v⊤1 D and the remaining part R defined as R(t) := (I − v1(t)v1(t)⊤)D(t). Then we have D(t)⊤F (t) = (v1(t)v1(t) ⊤D(t))⊤(v1(t)v1(t) ⊤D(t) + Y ) +R(t)⊤(R(t) + Y ). (4) As shown in the next subsection, R(t) almost follows a similar gradient descent trajectory R′(t) (Lemma 3.5). More precisely, R′(t) is defined as R′(t+1) = (I − ηM(t)(I −v1(t)v1(t)⊤))R′(t) (Lemma 3.4). While D’s dynamics is D(t+1) = (I−ηM(t))D(t), R′ follows a similar dynamics R(t + 1) = (I − ηM ′(t))R′(t), where M ′(t) = M(t)(I − v1(t)v1(t)⊤). Note that M ′(t) has eigenvalues smaller than 1/η for any time t (by Assumption 3.7), hence with an assumption similar to Assumption 3.5 (i) (or a similar version of our relaxed assumption in Appendix E.2 for M ′), we can prove that R′(t)⊤(R′(t) + Y ) < 0 for any time t (See Appendix E.1 for the rigorous proof). Since R(t) ≈ R′(t), the second term in the decomposition (4) is always negative and the first term (v1 direction term) is decreasing geometrically. Therefore, there are only two possible cases. The first possibility is that the first term decreases to a small value near 0 and the second term remains largely negative. Then their sum will be negative, which is D(t)⊤F (t) < 0, thus implying the training enters Phase I. The second possibility is that when the first term decreases to a small value near 0, the second term is also a small negative value. In this case both R(t) and D(t)⊤v1(t) are small, implying the loss is almost 0, which is indeed the end of the training. 
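To see the four phases on a concrete run, the following minimal numpy scaffold trains a small two-layer linear network (the f(x) = (1/√m) A^⊤Wx model of Section 4) with full-batch gradient descent and logs the quantities used above: the loss, the sharpness relative to 2/η, ||A||^2, and D^⊤F. The data, width, and step size here are hypothetical choices; depending on them the run may stay well below 2/η, enter an EOS-like cycle, or diverge, so the script is only a scaffold for inspecting these quantities.

import numpy as np

rng = np.random.default_rng(0)
n, d, m, eta, steps = 10, 20, 50, 0.05, 400
X = rng.standard_normal((d, n))
Y = rng.choice([-1.0, 1.0], size=n)
W = rng.standard_normal((m, d)) * 0.1        # small init so F(0) is near 0
A = rng.standard_normal(m) * 0.1

def log_quantities(W, A):
    F = (A @ W @ X) / np.sqrt(m)             # predictions of f(x) = (1/sqrt(m)) A^T W x
    D = F - Y
    J_A = (W @ X).T / np.sqrt(m)             # dF_i/dA
    J_W = np.einsum('k,ji->ikj', A, X).reshape(n, m * d) / np.sqrt(m)  # dF_i/dvec(W)
    J = np.concatenate([J_A, J_W], axis=1)
    sharpness = np.linalg.eigvalsh((2.0 / n) * J @ J.T)[-1]
    return F, D, sharpness

for t in range(steps):
    F, D, lam = log_quantities(W, A)
    if not np.isfinite(D @ D):
        print("diverged at step", t)
        break
    if t % 25 == 0:
        print(f"t={t:3d} loss={np.mean(D**2):9.4f} sharpness={lam:8.3f} "
              f"(2/eta={2.0/eta:.0f}) ||A||^2={A @ A:7.3f} D^T F={D @ F:9.3f}")
    # Full-batch gradient descent on L = (1/n) sum_i (f(x_i) - y_i)^2.
    grad_A = (2.0 / (n * np.sqrt(m))) * W @ (X @ D)
    grad_W = (2.0 / (n * np.sqrt(m))) * np.outer(A, X @ D)
    A = A - eta * grad_A
    W = W - eta * grad_W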
3.4 Explaining Non-monotonic Loss Decrement In this subsection, we attempt to explain the non-monotonic decrement of the loss during the entire GD trajectory. See Figure 3(b). As defined in the last section, we decompose D into the v1-component v1v ⊤ 1 D and the remaining part R. Below, we prove that R(t) is not affected much by the exponential growth of the loss (Proposition 3.5) in Phase II and III, and almost follows a converging trajectory (which is defined as R′(t) later in this section). The arguments in this subsection need Assumption 3.4 and Assumption 3.5 (iii), both very consistent with the experiments. We need an additional assumption on the spectrum of M . Assumption 3.7. All M(t)’s eigenvalues except Λ(t) = λmax(M(t)) are smaller than 1/η for all t. Recall that the largest eigenvalue is at most BΛη by Assumption 3.4. Empirically, the largest eigenvalue is an outlier in the spectrum, i.e., it is much larger than the other eigenvalues. Hence, we make Assumption 3.7 which states that all other eigenvalues are at most 1/η, which is consistent with our experiments. See Figure 3(b). Similar fact is also mentioned in [28, 29]. First, we let BD be an upper bound of D(t), i.e., for all t, ∥D(t)∥ ≤ BD. In the two-layer linear network case, we can have an explicit form of BD. (see Lemma C.9 in Appendix C.) Recall that in Assumption 3.4, BΛ is the upper bound of ηΛ. Lemma 3.4. Suppose Assumption 3.5 (iii) holds. R(t) satisfies the following: R(t+ 1) = (I − ηM(t))R(t) + e1(t), where ∥e1(t)∥ ≤ 6 √ ϵ2∥D(t)∥(BΛ − 1) Lemma 3.5. Define an auxiliary sequence R′(t) by R′(0) = R(0), and R′(t+1) = (I−ηM(t)(I− v1(t)v1(t) ⊤))R′(t). If Assumption 3.4, Assumption 3.5 (iii), Assumption 3.7 hold, and for any time t there exists a quantity λr > 0, such that the smallest eigenvalue of M(t), i.e. λmin(M(t)) > λr, then there exists a constant cr > 0 such that ∥R(t)−R′(t)∥ ≤ cr BD(BΛ−1) √ ϵ2 ηλr . Now, in light of Lemma 3.2 and Lemma 3.5, we arrive at an interesting explanation of the phenomena of non-monotonic decrease of the loss. Basically, D can be decomposed into the v1-component v1v ⊤ 1 D and the remaining part R = (I − v1v⊤1 )D. The v1-component may increase geometrically during the EOS (Lemma 3.2), but the behavior of the remaining part R(t) is close to R′(t), which follows the simple updating rule R′(t+ 1) = (I − ηM(t))R′(t), so Lemma 3.3 implies that the R part almost keeps decreasing during the entire trajectory (here Lemma 3.3 applies with u replaced by R′(t), noticing that the eigenvalues except the first are well below 2/η). Hence, the non-monotonicity of the loss is mainly due to the v1-component of D, and the rest part R is optimized in the classical regime (step size well below 2/(the operator norm)) and hence steadily decreases. See Figure 3(b). 4 A Theoretical Analysis for 2-Layer Linear NN In this section, we aim to provide a more rigorous explanation of the EOS phenomenon in two-layer linear networks. The proof ideas follow similar high-level intuition as the proofs in Section 3.3. In particular, we can remove or replace the assumptions in Section 3.3 with arguably weaker assumptions. Due to space limit, we state our main theoretical results and elaborate their relation with the proofs in Section 3.3. The detailed settings and proof are more tedious and can be found in Appendix C. 4.1 Setting and basic notations Model: In this section, we study a two-layer neural network with linear activation, i.e. 
f(x) =∑m q=1 1√ m aqwqx = 1√ m A⊤Wx where W = [w1, ...,wm]⊤ ∈ Rm×d, A = [a1, ..., am] ∈ Rm. Dataset: For simplicity, we assume yi = ±1 for all i ∈ [n], and ∥X⊤X∥2 = Θ(n). We assume X⊤X has rank r, and we decompose X⊤X and Y according to the orthonormal basis {vi}, the eigenvectors of X⊤X: X⊤X = ∑r i=1 λiviv ⊤ i , Y = ∑r i=1(Y ⊤vi)vi := ∑r i=1 zivi where vi is the eigenvector corresponding to the i-th largest eigenvalue λi of X⊤X. zi = Y ⊤vi is the projection of Y onto the direction vi. Here we suppose n ≫ r and the global minimum (A∗,W ∗) exists. Update rule: We write explicitly the GD dynamics of D(t): D(t+1) = (I−ηM∗(t))D(t), where M∗(t) = 2mn (∥A(t)∥ 2X⊤X+X⊤W⊤(t)W (t)X)− 4ηn2m (D(t) ⊤F (t))X⊤X is the Gram matrix combined with second order terms. 4.2 Main Theorem and The Proof Sketch Phase I and Progressive Sharpening: Assumption 4.1. There exists some constant χ > 1, s.t. for all i ∈ [r − 1], λi(X⊤X) ≤ χλi+1(X ⊤X). Moreover, λ1(X⊤X) ≥ 2λ2(X⊤X). Assumption 4.2. There exists κ = Ω(r−1) such that mini∈[r]{zi/ √ n} ≥ κ. The first assumption is about the eigenvalue spectrum of X⊤X. 3 The second assumes that all component zi = Y ⊤vi are not too small. Theorem 1 (Informal). Suppose Assumption 4.1, Assumption 4.2 hold, the smallest nonzero eigenvalue λr = λr(X⊤X) > 0 and λ1 = λmax(X⊤X) = c1n. Then for any ϵ > 0, if m = Ω( c1n 2 λ2r ), and n = Ω( λ2r κ4ϵ2 ), we have the progressive sharpening property: Λ(t+1)−Λ(t) > 0 for t = 1, 2, ..., t0−1 where t0 is the time when ∥D(t)∥2 ≤ O(ϵ2) or λmax(M∗(t)) > 1/η for the first time. In the proof of this theorem, we show that the Gram matrix M(t) ≈ 2mn (∥A(t)∥ 2 + md )X ⊤X, which serves as a justification of Assumption 3.5 we made in Section 3.3. That shows all M(t) 3It guarantees the gap between two adjacent eigenvalues is not very large, and there is a gap between the largest and the second largest eigenvalue. Note the second part of the assumption is a relaxed version of Assumption 3.7. In our CIFAR-10 1k-subset with samples’ mean subtracted, λ1/λ2 = χ ≈ 3 (See Figure 19). approximately share the same set of eigenvectors as X⊤X. In our proof, we also prove more rigorously that ∥A(t)∥2 is an indicator of the sharpness in this simpler setting. Edge of Stability (Phase II - IV): Assumption 4.3. There exists some constant c2 > 0, such that ∥Γ(t)∥ ≤ c2m . This assumption is based on Theorem 1. In Theorem 1, we state that in the progressive sharpening phase, ∥Γ(t)∥ has an upper bound of O(1/m). Now in the EOS phase, we assume that ∥Γ(t)∥ grows larger by at most a constant factor. Further discussions refer to Appendix D.2.2. Assumption 4.4. There exists some constant β > 0, such that Λ ≤ 4η (1− β). This assumption is consistent with Assumption 3.4, which assumes an upper bound of the sharpness. Assumption 4.5. There exist some constant c3 such that |D(t)⊤v1| > c3 √ n/m for some t = t0 at the beginning of phase II. This assumption is in the same spirit of Assumption 3.6 with the only change of the bound in terms of m and n. Now, we are ready to state our theorem in this stage. Theorem 2. Denote the smallest nonzero eigenvalue as λr ≜ λr(X⊤X) > 0 and the largest eigenvalue as λ1 ≜ λ1(X⊤X). Under Assumption 4.3, 4.4, 4.5, and λ1(X⊤X) ≥ 2λ2(X⊤X) in Assumption 4.1, there exist constants c4, c5, c6 such that if n > c6λrη,m > max{ c4d 2n2 λ2r , c5η}, then • There exists ρ = O(1) which depends on c3 such that if Λ(t0) > 2η (1+ρ) for some t0, there must exist some t1 > t0 such that Λ(t1) < 2η (1 + ρ). 
• If Λ(t),Λ(t+ 1) > 2η (1 + ρ), then there is a constant c7 > 0 (depending on c3) such that |D(t+ 1)⊤v1| > |D(t)⊤v1|(1 + c7). • Define R(t) := (I − v1v⊤1 )D(t), and R′(t) := (I − ηM∗(t)(I − v1v⊤1 ))R′(t − 1). It holds that ∥R(t)−R′(t)∥ = O( √ n3d λr √ m ). We can conclude the following from Theorem 2: (1) The first statement of the theorem states that if the progressive sharpening phase causes the sharpness to grow over 2/η, then the sharpness eventually goes below 2/η. This illustrates the regularization effect of gradient descent on the sharpness (this is consistent with the analysis of Phase III in Section 3.3). (2) The second states that |D(t)⊤v1| geometrically increases in Phase II and III. Note that we proved a similar Lemma 3.2 for Phase II in the more general setting in Section 3.3. (3) The third conclusion gives an upper bound for the distance between R(t)’s trajectory and R′(t)’s. This bound helps illustrate why R(t)’s trajectory is similar with R′(t) in Phase IV of Section 3.3. 5 Discussions and Open Problems In this section, we discuss the limitation of our theory and some related findings. First, our argument crucially relies on the assumption that ∥A∥ changes in the same direction as Λ does most of the time. Here, we elaborate more on this point. Seeing from a longer time scale, ∥A∥2 and the sharpness may have very different overall trends (See Figure (c) in 2), i.e., the sharpness oscillates around 2/η but ∥A∥2 increases. Moreover, the sharpness may oscillate more frequently than ∥A∥2, while the low-frequency trends seem to match well (See the late training phases in Figure (b) in 2). Currently, our theory cannot explain the high-frequency oscillation of the sharpness in Figure (b). While we still believe the change of ∥A∥ is a major driving force of the change of the sharpness, other factors (such as other layers) must be taken into consideration for a complete understanding and explanation of the sharpness dynamics. We also carry out some experiments that reveal some interesting relation between the inner layers and the sharpness, which is not yet reflected in our theory. Due to space limit, we defer it to Appendix D.3. We conclude with some open problems. It would be very interesting to remove some of our assumptions or replace them (especially those related to the spectrum of M ) by weaker or more natural assumptions on the data or architectures, or make some of the heuristic argument more rigorous (e.g., first order approximation of the dynamics (3)). Extending our results in Section 4 to deeper neural networks with nonlinear activation function is an intriguing and challenging open problem.
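For readers who want to reproduce the Section 4.1 quantities on their own data, here is a minimal numpy sketch of the eigendecomposition of X^⊤X, the projections z_i = Y^⊤v_i, and the two ratios appearing in Assumptions 4.1 and 4.2. The random data below is hypothetical and, unlike the centered CIFAR-10 subset used in the paper, need not satisfy λ_1 ≥ 2λ_2.

import numpy as np

rng = np.random.default_rng(0)
n, d = 30, 8                                   # so X^T X has rank r = d (generically)
X = rng.standard_normal((d, n))
Y = rng.choice([-1.0, 1.0], size=n)

K = X.T @ X                                    # n x n Gram matrix of the inputs
eigvals, V = np.linalg.eigh(K)
eigvals, V = eigvals[::-1], V[:, ::-1]         # sort eigenvalues in descending order
r = d
lam = eigvals[:r]                              # nonzero eigenvalues lambda_1 >= ... >= lambda_r
z = V[:, :r].T @ Y                             # z_i = Y^T v_i (Assumption 4.2)

print("lambda_1 / lambda_2 =", lam[0] / lam[1])                    # Assumption 4.1 asks for >= 2
print("min_i |z_i| / sqrt(n) =", np.min(np.abs(z)) / np.sqrt(n))   # kappa in Assumption 4.2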
1. What is the main contribution of the paper regarding edge of stability and progressive sharpening? 2. What are the strengths of the proposed approach, particularly in terms of its analysis and observations? 3. Do you have any concerns or questions about the methodology used in the study? 4. How does the reviewer assess the clarity and convincing nature of the analysis presented in the paper? 5. Are there any limitations to the study that should be acknowledged?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The work aims to understand the phenomenon of edge of stability together with progressive sharpening (the rise of the leading eigenvalues of the Hessian). As a foundation, it provides an interesting observation regarding the correlation between the sharpness and the output layer's norm. It then divides the analysis into four stages in which the behavior of the leading eigenvalue is studied. Moreover, it proves the theorems in the setting of two-layer linear neural networks. Some assumptions are verified with numerical experiments. Strengths And Weaknesses Pros: The analysis is generally clear and convincing, with a detailed investigation into the four stages. The observation of the correlation between the sharpness and the output layer's norm is quite interesting, which makes it a good starting point for the following analysis. Concerns: How does the analysis handle several eigenvalues close to 2/η? This phenomenon is observed by Cohen et al., but Assumptions 3.6 and 4.1 rule it out. Generally speaking, I would like to recommend an acceptance score for the fairly complete analysis towards resolving the problem of EoS. Questions Please see the above concerns. Typo: should the fraction in Eq. (3) be 4/n? Limitations No. But it is a purely theoretical work in a standard optimization setting, so there is no significant need to do so.
NIPS
Title Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability Abstract Recent findings demonstrate that modern neural networks trained by full-batch gradient descent typically enter a regime called Edge of Stability (EOS). In this regime, the sharpness, i.e., the maximum Hessian eigenvalue, first increases to the value 2/(step size) (the progressive sharpening phase) and then oscillates around this value (the EOS phase). This paper aims to analyze the GD dynamics and the sharpness along the optimization trajectory. Our analysis naturally divides the GD trajectory into four phases depending on the change in the sharpness value. We empirically identify the norm of output layer weight as an interesting indicator of the sharpness dynamics. Based on this empirical observation, we attempt to theoretically and empirically explain the dynamics of various key quantities that lead to the change of the sharpness in each phase of EOS. Moreover, based on certain assumptions, we provide a theoretical proof of the sharpness behavior in the EOS regime in two-layer fully-connected linear neural networks. We also discuss some other empirical findings and the limitation of our theoretical results. 1 Introduction Deep learning has achieved great success in a variety of machine learning applications, and gradientbased algorithms are the prevailing optimization methods for training deep neural networks. However, mathematically understanding the behavior of the optimization methods for deep learning is highly challenging, due to non-convexity, over-parameterization, and complicated architectures. In particular, some recent empirical findings in deep networks contradict the traditional understandings of gradient methods. For example, Wu et al. [30] observed that the solution found by gradient descent has sharpness approximately equal to 2/η instead of just being smaller than 2/η. Also, Jastrzebski et al. [14] observed that there is a break-even point in the SGD trajectory, and after this point, there is a regularization effect on the loss curvature. One recent well-known example is the phenomenon called “Edge of Stability" (EOS) (Cohen et al. [6]). Based on the classical optimization theory, the learning rate η of gradient-based method should be smaller than 2/λ so that the loss can decrease, where λ is the largest eigenvalue of the Hessian of the objective, also called “sharpness” in the literature. Otherwise, the loss diverges (even for simple quadratic functions). However, the empirical findings in Cohen et al. [6] show that under various ∗Contributed equally, listed in alphabetical order. †The authors are supported in part by the National Natural Science Foundation of China Grant 62161146004, Turing AI Institute of Nanjing and Xi’an Institute for Interdisciplinary Information Core Technology. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). network settings, the EOS phenomena typically occurs along the gradient descent trajectory: (1) the sharpness first increases until it reaches 2/η (called “progressive sharpening”) (2) the sharpness starts hovering around 2/η (the EOS regime) and (3) the loss non-monotonically decreases without diverging. Although (1) seems to be consistent with the traditional beliefs about optimization, a rigorous mathematical explanation for it is still open. 
Moreover, phenomena (2) and (3) are more mysterious because they violate the η < 2/λ “rule” in traditional optimization theory, yet the training loss does not completely diverge. Instead, the loss may oscillate but still decrease in the long run, while the sharpness seems to be restrained from further increasing. In this paper, we aim to provide a theoretical and empirical explanation for the mystery of EOS. Towards the goal, we focus on the dynamics of these key quantities when EOS happens and attempt to find out the main driving force to explain these phenomena along the gradient descent trajectory from both theoretical and empirical perspectives. 1.1 Our Contributions Our contributions can be summarized as follows. (Section 3.1) We analyze the typical sharpness behavior along the gradient descent trajectory when EOS happens, and propose a four-phase division of GD trajectory, based on the dynamics of some key quantities such as the loss and the sharpness, for further understanding this phenomenon. (Section 3.2) We empirically identify the weight norm of the output layer as an effective indicator of the sharpness dynamics. We show that analyzing the dynamics of this surrogate can qualitatively explain the dynamics of sharpness. By assuming this relation, together with some additional simplifying assumptions and approximations, we can explain the dynamics of the sharpness, the loss, and the output layer norm in each phase of EOS (Section 3.3). In this context, we also offers an interesting explanation for the non-monotonic loss decrement (also observed in Cohen et al. [6], Xing et al. [32]) (Section 3.4). (Section 4) Following similar ideas, we provide a more rigorous proof for the progressive sharpening and EOS phenomena in a two-layer fully-connected linear neural network setting based on certain assumptions. The assumptions made here are either weaker or arguably less restrictive. 1.2 Related work The structure of Hessian The Hessian matrix carries the second order information of the loss landscape. Several prior works have empirically found that the spectrum of Hessian has several “outliers” and a continuous “bulk” (Sagun et al. [28, 29], Papyan [25, 26]). Typically, each outlier corresponds to one class in multi-class classification. As we consider the binary classification setting, there is typically one outlier (i.e., the largest eigenvalue) that is much larger than other eigenvalues. It is consistent with our Assumption 4.1. The Gauss-Newton decomposition of the Hessian was used in several prior works (Martens [23], Bottou et al. [4], Papyan [25, 26]). Papyan [25] empirically showed that the outliers of Hessian can be attributed to a “G component”, which is also known as Fisher Information Matrix (FIM) in Karakida et al. [15, 16]. Also, Wu et al. [31] analyzed the leading Hessian eigenspace by approximating the Hessian with Kronecker factorization and theoretically proved the outliers structure under some random setting assumption. Neural Tangent Kernel A recent line of work studied the learning of over-parameterized neural networks in the so-called. “neural tangent kernel (NTK) regime or the lazy training regime (Jacot et al. [13], Lee et al. [18], Du et al. [8, 7], Arora et al. [2], Chizat et al. [5]). A main result in this regime is that if the neural network is wide enough, gradient flow can find the global optimal empirical minimizer very close to the initialization. Moreover, the Hessian does not change much in the NTK regime. 
Our findings go beyond NTK setting to analyze the change of sharpness. Edge of Stability regime The Edge of Stability phenomena was first formalized by Cohen et al. [6]. Similar phenomena were also identified in Jastrzebski et al. [14] as the existence of the “break-even” point on SGD trajectory after which loss curvature gets regularized. Xing et al. [32] observed that gradient descent eventually enters a regime where the iterates oscillate on the leading curvature direction and the loss drops non-monotonically. Recently Ahn et al. [1] studied the non-monotonic decreasing behavior of GD which they called unstable convergence, and discussed the possible causes of this phenomenon. Ma et al. [22] proposed a special subquadratic landscape property and proved that EOS occurs based on this assumption. Arora et al. [3] studied the implicit bias on the sharpness of deterministic gradient descent in the EOS regime. They proved in some specific settings with a varying learning rate (called normalized GD) or with a modified loss √ L, gradient descent enters EOS and further reduces sharpness. They mainly focus on the analysis near the manifold of minimum loss, but our analysis also applies to the early stage of the training when the loss is not close to the minimum. In particular, our analysis provides an explanation of non-monotonic loss decrease that cannot be explained by their theory. Another difference is that they consider √ L (for constant learning rate) where L is a fairly general MSE loss independent of any neural network structure, while our analysis is strongly tied with the MSE loss of a neural network. Very recently, Lyu et al. [21] explained how GD enters EOS for normalized loss (e.g., neural networks with normalization layers), and analyzed the sharpness reduction effect along the training trajectory. The notion of sharpness in their work is somewhat different due to normalization. In particular, they consider the so-called spherical sharpness, that is the sharpness of the normalized weight vector. They also mainly studied the regime where the parameter is close to the manifold of minimum loss as in [3] and proved that GD approximately tracks a continuous sharpness-reduction flow. Lewkowycz et al. [19] proposed a similar regime called “catapult phase” where loss does not diverge even if the largest Hessian eigenvalue is larger than 2/η. Our work mainly considers training in this regime and assumes that the training is not in the “divergent phase” in Lewkowycz et al. [19]. Compared with Lewkowycz et al. [19], we provide a more detailed analysis in more general settings along gradient descent trajectory. 2 Preliminaries Notations: We denote the training dataset as {xi, yi}ni=1 ⊂ Rd × {1,−1} and the neural network as f : Rd × Rp → R. The network f(θ,x) maps the input x ∈ Rd and parameter θ ∈ Rp to an output in R. In this paper, we mainly consider the case of binary classification with mean square error (MSE) loss ℓ(z, y) = (z − y)2. Denote the input matrix as X = (x1,x2, ...,xn) ∈ Rd×n and the label vector as Y = (y1, y2, ..., yn) ∈ Rn. We let F (t) = (f(θ(t),x1), f(θ(t),x2), ..., f(θ(t),xn)) ∈ Rn and D(t) = F (t)− Y be the (output) prediction vector, and the residual vector respectively at time t. The training objective is: L(f(θ)) = 1n ∑n i=1 ℓ(f(θ,xi), yi) = 1 n ∑n i=1(f(θ,xi), yi) 2. Hessian, Fisher information matrix and NTK: In this part, we apply previous works to show that the largest eigenvalue of Hessian is almost the same as the largest eigenvalue of NTK. 
We use the latter as the definition of the sharpness in this paper. Further details can be found in Appendix F. As shown in Papyan [26], Martens [23], Bottou et al. [4], the Hessian can be decomposed into two components, where the term known as “Gauss-Newton matrix”, G-term or Fisher information matrix (FIM), dominates the second term in terms of the largest eigenvalue. Meanwhile, Karakida et al. [16] pointed out the duality between the FIM and a Gram matrix M , defined as M = 2n ∂F (θ) ∂θ ∂F (θ) ∂θ ⊤ . It is also known as the neural tangent kernel NTK (Karakida et al. [16, 15]), which has been studied extensively in recent years (see e.g., [13],[8],[2],[5]). Note that in this paper, we do not assume the training is in NTK regime, in which the Hessian does not change much during training. It is not hard to see that M and FIM share the same non-zero eigenvalues: if Gu = λu for some eigenvector u ∈ Rp, M ∂F (θ)∂θ u = ∂F (θ) ∂θ Gu = λ ∂F (θ) ∂θ u, i.e., λ is also an eigenvalue of M . In this paper, we use θ(t) to denote the parameter at iteration t (or time t) and the sharpness at time t as Λ(t) = Λ(θ(t)). We similarly define M(t),F (t),D(t),L(t). Here we show the gradient flow dynamics of the residual vector D(t): dD(t) dt = ∂D(t) ∂θ dθ(t) dt = −∂F (t) ∂θ ∂L(t) ∂θ = − 2 n ∂F (t) ∂θ ∂F (t) ∂θ ⊤ D(t) = −M(t)D(t) (1) 3 A Four-phase Analysis of GD Dynamics In this section, we study the dynamics of gradient descent and the change of sharpness along the optimization trajectory. We divide the whole training process into four phases, occurring repeatedly in the EOS regime. In Section 3.1, we introduce the four phases. In Section 3.2, we show empirically that the change of the norm of the output layer weight vector almost coincides with the change of the sharpness. In Section 3.3, using this observation, we attempt to explain the dynamics of each phase and provide a mathematical explanation for the changes in the sharpness. In Section 3.4, we explain why the loss decreases but non-monotonically. We admit that a completely rigorous theoretical explanation is still beyond our reach and much of our argument is based on various simplifying assumptions and is somewhat heuristic at some points. Due to space limits, we defer all the proofs in this section to Appendix E.1. 3.1 A Four-phase Division To further understand the properties along the trajectory when EOS happens, we study the behaviors of the loss and the sharpness during the training process. As illustrated in Figure 1, we train a shallow neural network by gradient descent on a subset of 1,000 samples from CIFAR-10 (Krizhevsky et al. [17]), using the MSE loss as the objective. Notice that the sharpness keeps increasing while the loss decreases until the sharpness reaches 2/η. Then the sharpness begins to oscillate around 2/η while the loss decreases non-monotonically. This is a typical sharpness behavior in the EOS regime, and consistent with the experiments in [6]. We divide the training process into four phases according to the evolution of the loss, the sharpness, and their correlation, as shown in Figure 1. The four phases happen cyclically along the training trajectory. We first briefly describe the properties of each phase and explain the dynamics in more detail in Section 3.3. Phase I: Sharpness Λ < 2/η. In this stage, all the eigenvalues of Gram matrix M are below the threshold 2/η. 
In particular, using standard initialization, the training typically starts from this phase, and during this phase the loss keeps decreasing and the sharpness keeps growing along the trajectory. This initial phase is called progressive sharpening (PS) in prior work Cohen et al. [6]. Empirically, the behavior of GD trajectory (as well as the loss and the sharpness) is very similar to that of gradient flow, until the sharpness reaches 2/η (this phenomena is also observed in Cohen et al. [6]. See Figure 5 or Appendix J.1 in their paper). We note that GD may come back to this phase from Phase IV later. Phase II: Sharpness Λ > 2/η. In this phase, the sharpness exceeds 2/η and may keep increasing. We will show shortly that the fact that Λ > 2/η causes |D⊤v1| (where v1 the first eigenvector of M ) to increase exponentially (Lemma 3.2). This would quickly lead ∥D∥ to exceed ∥Y ∥ in a few iterations, which leads the sharpness to start decreasing by Proposition 3.1, hence the training process enters Phase III. Phase III: Sharpness Λ > 2/η yet begins to gradually drop. Before Λ drops below 2/η, Lemma 3.2 still holds, so |D⊤v1| keeps increasing. Proposition 3.1 still holds and thus the sharpness keeps decreasing until it is below 2/η, at which point we enter Phase IV. A distinctive feature of this phase is that the loss may increase due to the exponential increase of |D⊤v1|. Phase IV: Sharpness Λ < 2/η. When the sharpness is below 2/η, |D⊤v1| begins to decrease quickly, leading the loss to decrease quickly. At the same time, the sharpness keeps oscillating and gradually decreasing for some iterations. This lasts until the loss decrease to a level that is around its value right before Phase III. The sharpness is still below 2/η and our training process gets back to Phase I. 3.2 The Norm of the Output Layer Weight It is difficult to rigorously analyze the dynamics of the sharpness Λ(t). In this subsection, we make an interesting observation, that the change of the norm of the output layer of the network (usually a fully-connected linear layer) is consistent with the change of the sharpness most of the time. In particular, for a general neural network f(x) = A⊤h(W ,x), where A ∈ Rm is the output layer weight and the feature extractor h : Rp × Rd → Rm outputs a m-dimensional feature vector (h corresponds to all but the last layers). W ∈ Rp is the parameter vector of the extractor h. Note that M = (∂F∂θ ) ⊤(∂F∂θ ) can be decomposed as follows: M = ( ∂F ∂θ )( ∂F ∂θ )⊤ = ( ∂F ∂A )( ∂F ∂A )⊤ + ( ∂F ∂W )( ∂F ∂W )⊤ := MA +MW . where the (i, j)−entry of MW is (MW )ij = 〈 ∂f(xi) ∂W , ∂f(xj) ∂W 〉 = A⊤ ∂h(W ,xi)∂W ∂h(W ,xj) ∂W ⊤ A. In this expression, intuitively ∥A∥ should be positively related to ∥MW ∥. We empirically observe that the part MA = ( ∂F∂A )( ∂F ∂A ) ⊤ has a much smaller spectral norm compared to the whole Gram matrix M (see Figure 3(a) and Appendix D), which means ∥MW ∥ dominates ∥MA∥. Therefore, ∥A∥ should be positively correlated with ∥M∥. The benefit of analyzing ∥A∥2 is that the gradient flow of ∥A∥2 enjoys the following clean formula: d∥A∥2 dt = −2 ( ∂L ∂A )⊤ A = − 4 n D⊤ ( ∂F ∂A ) A = − 4 n D⊤F . (2) In this work, we do experiments on two-layer linear networks, fully connected deep neural networks, and Resnet18, and all of them have such output layer structures. From Figure 3(a), we can observe that the output layer norm ∥A∥2 and the sharpness Λ change in the same direction most of the time along the gradient descent trajectory, i.e., they both increase or decrease at the same time. 
We note that they may change in different directions very occasionally around the time when ∥A(t+ 1)∥2 − ∥A(t)∥2 changes its sign (see the experiments in Figure 2). 3.3 Detailed Analysis of Each Phase In this section, we explain the dynamics of each phase in more detail. For clarity, we first list the assumptions we need in this section. For different phases, we may need some different assumptions to simplify the arguments. Most of the assumptions are consistent with the experiments or the findings in the literature. Some of them are somewhat stronger, and we also discuss how to relax them. 3.3.1 Assumptions Used in Section 3.3 Assumption 3.1. (A-norm and sharpness) Along the gradient descent training trajectory, for all time t, the norm ∥A(t)∥ of the output layer and the sharpness Λ(t) moves in the same directions, i.e., sign(Λ(t+ 1)− Λ(t)) = sign(∥A(t+ 1)∥ − ∥A(t)∥). It is the key observation that we have discussed in Section 3.2. The following are two assumptions about the gradient descent trajectory. The first one assumes that D(t) and ∥A∥2 are updated according their first order approximations. Empirical justification of this approximation can be found in Appendix D.1.3. Assumption 3.2. (First Order Approximation of GD) Along the gradient descent trajectory, the update rule is assumed as the first order approximation D(t+ 1)−D(t) = −ηM(t)D(t), ∥A(t+ 1)∥2 − ∥A(t)∥2 = −4η n D(t)⊤F (t) (3) Assumption 3.3. (Gradient flow for the PS phase) When Λ(t) < 2/η, D(t) follows the gradient flow trajectory: dD(t)dt = −M(t)D(t). Assumption 3.3 holds empirically, especially in the progressive sharpening phase (see Figure 5 or Appendix J.1 in Cohen et al. [6]) when the networks are continuously differentiable. We include these experimental details in Appendix D. See also (Theorems 4.3 and 4.5) in Arora et al. [3] for further theoretical justification. We need this assumption for the proof in the progressive sharpening phase. Then we state an assumption on the upper bound of the sharpness to restrict the regime we discuss: Assumption 3.4. (Sharpness upper bound) If the training does not diverge, there exists some constant BΛ, such that 0 < Λ(t) ≤ BΛη for all t. This assumption states that there is an upper bound of the sharpness throughout the optimization process. Actually, in Lewkowycz et al. [19], they proved that 4/η is an upper bound of the sharpness in a two-layer linear network with one datapoint, otherwise the training process (loss) would diverge. They empirically found that similar upper bounds exist also for nonlinear activations, albeit with somewhat larger constant BΛ. In the work, We focus on the case when the loss does not diverge and hence we make Assumption 3.4. The main set of assumptions we need is about the change of M ’s eigendirections. Assumption 3.5. Denote {vi}ni=1 to be the set of eigenvectors of M(t). We have three levels of assumptions on M ’s eigenspace. (i) (fixed eigendirections) the set {vi}ni=1 is fixed throughout the phase under consideration; (ii) (eigendirections move slowly) at all time t and for any i, F (t)⊤ dvi(t)dt < λi(t)D(t) ⊤vi(t); (iii) (principal directions moves slowly) at all time t, there is a small constant ϵ2 ≥ 0 such that ⟨v1(t),v1(t+ 1)⟩ ≥ 1− ϵ2. Clearly, these three assumptions are increasingly weaker from (i) to (iii). Assumption 3.5 (i) on the eigenvectors is somewhat strong, and the eigenvectors corresponding to small eigenvalues may change notably in our experiments. 
We use it to illustrate a basic proof idea of the progressive sharpening phase, but later we relax this assumption to Assumption 3.5 (ii). Moreover, for the proof in Phase II and III, Assumption 3.5 (iii), which only assume that the main direction changes slightly, is sufficient for our proof. Actually, we note that v1(t) (the eigenvector corresponding to the largest eigenvalue) changes slowly and the inner product of its initial direction and its direction at the end of the phase is also large (see Appendix D for the empirical verification). For the proof in Phase II, we need another small technical assumption: Assumption 3.6. Assume D(t)⊤v1(t) ≥ cϵ2∥D(t)∥ for some c > 1 for some t = t0 at the beginning of this phase. Here ϵ2 is defined in Assumption 3.5 (iii). Assumption 3.6 says that D(t) has a non-negligible component in the direction of v1. Since ϵ2 > 0 is a small constant, this is not a strong assumption as some small perturbation (due to discrete updates) would make the assumption hold for some c > 1. 3.3.2 Detailed Analysis In each phase, we attempt to explain the main driving force of the change of the sharpness and the loss. Phase I: In this phase, we show that D(t)⊤F (t) < 0 under certain assumptions (detailed shortly) on the spectral properties of M(t) (see Lemma 3.1 below). By Assumption 3.2, we have ∥A(t + 1)∥2 − ∥A(t)∥2 > 0, implying that the sharpness Λ(t) also increases based on Assumption 3.1. This phase stops if Λ(t) grows larger than 2/η. We assume the output vector F (t) is initialized to be small (this is true if we use very small initial weights). For simplicity, we assume F (0) = 0 in the following argument. Lemma 3.1. For all t in Phase I, under Assumption 3.5 (i) and 3.3, it holds that D(t)⊤F (t) < 0. From this lemma, ∥A∥ keeps increasing by Assumption 3.2; hence the sharpness keeps increasing by Assumption 3.1 until it reaches 2/η or the loss converges to 0. In the former case, the training process enters Phase II, while the latter case is also possible when η is very small (e.g., even the largest possible sharpness value is less than 2/η). We admit that Assumption 3.5 (i) is somewhat strong. In fact, the assumption can be relaxed significantly to Assumption 3.5 (ii). We show in Appendix E.2 that under Assumption 3.5 (ii) and Assumption 3.3, we can still guarantee D(t)⊤F (t) < 0. Moreover, we provide a dynamical system view of the dynamics of D(t)⊤F (t) in that Appendix. Phase II: When the training process just enters Phase II, the sharpness keeps increasing. We show shortly that D(t)⊤v1(t) starts to increase geometrically, and this causes the sharpness to stop increasing at some point, thus entering Phase III. In this phase, we adopt a weaker assumption on the sharpness direction v1: Assumption 3.5 (iii). This assumption holds in our experiments (See Figure 17). Also, Assumption 3.6 is necessary. Lemma 3.2. Suppose Assumption 3.5 (iii) and 3.6 hold during this phase (with constants ϵ2 > 0 and c > 1). If Λ(t) = (2 + τ)/η and τ > 11−ϵ2−1/c − 1, then D(t) ⊤v1(t) increases geometrically with factor (1 + τ)(1− ϵ2 − 1/c) > 1 for t ≥ t0 in this phase. Since D(t)⊤v1(t) increases geometrically, ∥D∥ ≥ D(t)⊤v1(t) will exceed ∥Y ∥ eventually. Next, the following proposition states that when this happens, D(t)⊤F (t) > 0. Consequently, ∥A∥ decreases by Assumption 3.2, leading to the decrement of the sharpness based on our Assumption 3.1. Proposition 3.1. If ∥D(t)∥ > ∥Y ∥, then D(t)⊤F (t) > 0. Phase III: The sharpness is still larger than 2/η, but it starts decreasing. 
Meanwhile, the loss continues to increase rapidly due to Lemma 3.2. Eventually, the sharpness falls below 2/η and the training process enters Phase IV. By Lemma 3.2, if the sharpness stays above 2/η, then the loss can become arbitrarily large. According to Proposition 3.1, if the loss is large enough, the sharpness keeps decreasing. Now we show that if the sharpness stays above 2/η, ∥A(t)∥² decreases by a significant amount. This partially explains why the sharpness should also decrease significantly until it drops below 2/η (instead of converging from above to a value larger than 2/η without ever entering the next phase).

Proposition 3.2. Under Assumption 3.2, if ∥D(t)∥ > ∥Y∥, then ∥A(t+1)∥² − ∥A(t)∥² < −(4η/n)(∥D(t)∥ − ∥Y∥)².

From the above argument, we can see that if D(t)⊤v1(t) is larger than ∥Y∥, then D(t)⊤v1(t) does not decrease in Phase III, and according to Proposition 3.2, ∥A(t)∥² decreases significantly, implying that the sharpness eventually drops below 2/η.

Remark: The fact that the sharpness provably drops below 2/η in this phase is established more rigorously in Section 4 for the two-layer linear setting; see Theorem 2.

Phase IV: First, since the training process has just left Phase III, D(t)⊤F(t) is still positive and large; hence ∥A(t)∥² keeps decreasing and the sharpness decreases as well. Since the sharpness stays below 2/η, the loss decreases due to the following descent lemma (with u replaced by D(t)).

Lemma 3.3. If Λ(t) < 2/η, then for any vector u ∈ Rn, ∥u⊤(I − ηM(t))∥ ≤ (1 − ηα)∥u∥, where α = min{2/η − Λ(t), λmin(M(t))}.

In particular, replacing u with D(t), we see that ∥D(t+1)∥ ≤ (1 − ηα)∥D(t)∥. Next we argue that D(t)⊤F(t) will eventually become negative, which indicates that ∥A(t)∥², and hence the sharpness, will grow again. Since the sharpness is below 2/η, D(t)⊤v1(t) decreases geometrically due to Lemma 3.3 (replacing u with v1(t)v1(t)⊤D(t)). In fact, D can be decomposed into the v1-component v1v1⊤D and the remaining part R, defined as R(t) := (I − v1(t)v1(t)⊤)D(t). Then we have
D(t)⊤F(t) = (v1(t)v1(t)⊤D(t))⊤(v1(t)v1(t)⊤D(t) + Y) + R(t)⊤(R(t) + Y).   (4)

As shown in the next subsection, R(t) almost follows a similar gradient descent trajectory R′(t) (Lemma 3.5). More precisely, R′(t) is defined by R′(t+1) = (I − ηM(t)(I − v1(t)v1(t)⊤))R′(t) (Lemma 3.4). While D’s dynamics is D(t+1) = (I − ηM(t))D(t), R′ follows the similar dynamics R′(t+1) = (I − ηM′(t))R′(t), where M′(t) = M(t)(I − v1(t)v1(t)⊤). Note that M′(t) has eigenvalues smaller than 1/η for any time t (by Assumption 3.7); hence with an assumption similar to Assumption 3.5 (i) (or a similar version of our relaxed assumption in Appendix E.2 for M′), we can prove that R′(t)⊤(R′(t) + Y) < 0 for any time t (see Appendix E.1 for the rigorous proof). Since R(t) ≈ R′(t), the second term in the decomposition (4) is always negative and the first term (the v1-direction term) decreases geometrically. Therefore, there are only two possible cases. The first possibility is that the first term decreases to a small value near 0 while the second term remains largely negative. Then their sum is negative, i.e., D(t)⊤F(t) < 0, implying that the training enters Phase I. The second possibility is that when the first term decreases to a small value near 0, the second term is also a small negative value. In this case both R(t) and D(t)⊤v1(t) are small, implying the loss is almost 0, which is indeed the end of training.
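The decomposition (4) and the auxiliary trajectory R′(t) are easy to check numerically. The sketch below is our own toy experiment, run in the idealized case of a fixed Gram matrix (Assumption 3.5 (i)); it verifies identity (4) at every step and shows that R(t) stays close to R′(t) even while the v1-component of D(t) grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eta, T = 20, 0.1, 30

# Fixed Gram matrix with top eigenvalue slightly above 2/eta.
eigvals = np.concatenate([[2.2 / eta], rng.uniform(0.5, 5.0, n - 1)])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = Q @ np.diag(eigvals) @ Q.T
v1 = Q[:, [0]]
P = v1 @ v1.T                                  # projector onto the v1 direction

Y = rng.standard_normal(n)
D = 0.1 * rng.standard_normal(n)
R_aux = (np.eye(n) - P) @ D                    # R'(0) = R(0)

for t in range(T):
    F = D + Y
    R = (np.eye(n) - P) @ D
    # Identity (4): D^T F = (P D)^T (P D + Y) + R^T (R + Y)
    assert np.isclose(D @ F, (P @ D) @ (P @ D + Y) + R @ (R + Y))
    # Residual dynamics vs. auxiliary dynamics R'(t+1) = (I - eta M (I - P)) R'(t)
    D = (np.eye(n) - eta * M) @ D
    R_aux = (np.eye(n) - eta * M @ (np.eye(n) - P)) @ R_aux

print("final ||R - R'|| =", np.linalg.norm((np.eye(n) - P) @ D - R_aux))
```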
3.4 Explaining Non-monotonic Loss Decrement

In this subsection, we attempt to explain the non-monotonic decrease of the loss over the entire GD trajectory; see Figure 3(b). As defined in the last section, we decompose D into the v1-component v1v1⊤D and the remaining part R. Below, we prove that R(t) is not affected much by the exponential growth of the loss (Proposition 3.5) in Phases II and III, and almost follows a converging trajectory (defined as R′(t) later in this section). The arguments in this subsection need Assumption 3.4 and Assumption 3.5 (iii), both of which are very consistent with the experiments. We also need an additional assumption on the spectrum of M.

Assumption 3.7. All of M(t)’s eigenvalues except Λ(t) = λmax(M(t)) are smaller than 1/η for all t.

Recall that the largest eigenvalue is at most BΛ/η by Assumption 3.4. Empirically, the largest eigenvalue is an outlier in the spectrum, i.e., it is much larger than the other eigenvalues. Hence, we make Assumption 3.7, which states that all other eigenvalues are at most 1/η; this is consistent with our experiments (see Figure 3(b)). A similar fact is also mentioned in [28, 29]. First, we let BD be an upper bound on D(t), i.e., for all t, ∥D(t)∥ ≤ BD. In the two-layer linear network case, we can give an explicit form of BD (see Lemma C.9 in Appendix C). Recall that in Assumption 3.4, BΛ is the upper bound of ηΛ.

Lemma 3.4. Suppose Assumption 3.5 (iii) holds. Then R(t) satisfies R(t+1) = (I − ηM(t))R(t) + e1(t), where ∥e1(t)∥ ≤ 6√ϵ2 ∥D(t)∥(BΛ − 1).

Lemma 3.5. Define an auxiliary sequence R′(t) by R′(0) = R(0) and R′(t+1) = (I − ηM(t)(I − v1(t)v1(t)⊤))R′(t). If Assumptions 3.4, 3.5 (iii) and 3.7 hold, and there exists a quantity λr > 0 such that for all times t the smallest eigenvalue satisfies λmin(M(t)) > λr, then there exists a constant cr > 0 such that ∥R(t) − R′(t)∥ ≤ cr BD(BΛ − 1)√ϵ2/(ηλr).

Now, in light of Lemma 3.2 and Lemma 3.5, we arrive at an interesting explanation of the phenomenon of the non-monotonic decrease of the loss. Basically, D can be decomposed into the v1-component v1v1⊤D and the remaining part R = (I − v1v1⊤)D. The v1-component may increase geometrically during EOS (Lemma 3.2), but the behavior of the remaining part R(t) is close to R′(t), which essentially follows the simple updating rule R′(t+1) = (I − ηM(t))R′(t), so Lemma 3.3 implies that the R part almost keeps decreasing during the entire trajectory (here Lemma 3.3 applies with u replaced by R′(t), noticing that the eigenvalues except the first are well below 2/η). Hence, the non-monotonicity of the loss is mainly due to the v1-component of D, and the remaining part R is optimized in the classical regime (step size well below 2/(operator norm)) and hence decreases steadily. See Figure 3(b).

4 A Theoretical Analysis for 2-Layer Linear NN

In this section, we aim to provide a more rigorous explanation of the EOS phenomenon in two-layer linear networks. The proof ideas follow the same high-level intuition as the arguments in Section 3.3. In particular, we can remove or replace the assumptions in Section 3.3 with arguably weaker assumptions. Due to space limits, we state our main theoretical results and elaborate on their relation to the arguments in Section 3.3. The detailed setting and proofs are more tedious and can be found in Appendix C.

4.1 Setting and basic notations

Model: In this section, we study a two-layer neural network with linear activation, i.e.
f(x) = (1/√m) ∑_{q=1}^m aq wq⊤x = (1/√m) A⊤Wx, where W = [w1, . . . , wm]⊤ ∈ Rm×d and A = [a1, . . . , am] ∈ Rm.

Dataset: For simplicity, we assume yi = ±1 for all i ∈ [n] and ∥X⊤X∥_2 = Θ(n). We assume X⊤X has rank r, and we decompose X⊤X and Y according to the orthonormal basis {vi} of eigenvectors of X⊤X: X⊤X = ∑_{i=1}^r λi vi vi⊤ and Y = ∑_{i=1}^r (Y⊤vi) vi := ∑_{i=1}^r zi vi, where vi is the eigenvector corresponding to the i-th largest eigenvalue λi of X⊤X and zi = Y⊤vi is the projection of Y onto the direction vi. Here we suppose n ≫ r and that the global minimum (A∗, W∗) exists.

Update rule: We write the GD dynamics of D(t) explicitly: D(t+1) = (I − ηM∗(t))D(t), where M∗(t) = (2/(mn))(∥A(t)∥²X⊤X + X⊤W(t)⊤W(t)X) − (4η/(n²m))(D(t)⊤F(t))X⊤X is the Gram matrix combined with second-order terms.

4.2 Main Theorem and The Proof Sketch

Phase I and Progressive Sharpening:

Assumption 4.1. There exists some constant χ > 1 such that for all i ∈ [r − 1], λi(X⊤X) ≤ χλi+1(X⊤X). Moreover, λ1(X⊤X) ≥ 2λ2(X⊤X).

Assumption 4.2. There exists κ = Ω(1/r) such that min_{i∈[r]} zi/√n ≥ κ.

The first assumption is about the eigenvalue spectrum of X⊤X: it guarantees that the gap between two adjacent eigenvalues is not very large, and that there is a gap between the largest and the second largest eigenvalue. Note that the second part of this assumption is a relaxed version of Assumption 3.7; in our CIFAR-10 1k-subset with the samples’ mean subtracted, λ1/λ2 = χ ≈ 3 (see Figure 19). The second assumption requires that all components zi = Y⊤vi are not too small.

Theorem 1 (Informal). Suppose Assumptions 4.1 and 4.2 hold, the smallest nonzero eigenvalue satisfies λr = λr(X⊤X) > 0, and λ1 = λmax(X⊤X) = c1n. Then for any ϵ > 0, if m = Ω(c1n²/λr²) and n = Ω(λr²/(κ⁴ϵ²)), we have the progressive sharpening property: Λ(t+1) − Λ(t) > 0 for t = 1, 2, . . . , t0 − 1, where t0 is the first time when ∥D(t)∥² ≤ O(ϵ²) or λmax(M∗(t)) > 1/η.

In the proof of this theorem, we show that the Gram matrix M(t) ≈ (2/(mn))(∥A(t)∥² + m/d)X⊤X, which serves as a justification of Assumption 3.5 made in Section 3.3: it shows that all M(t) approximately share the same set of eigenvectors as X⊤X. In our proof, we also prove more rigorously that ∥A(t)∥² is an indicator of the sharpness in this simpler setting.

Edge of Stability (Phases II–IV):

Assumption 4.3. There exists some constant c2 > 0 such that ∥Γ(t)∥ ≤ c2/m.

This assumption is based on Theorem 1. In Theorem 1, we state that in the progressive sharpening phase, ∥Γ(t)∥ has an upper bound of O(1/m). Now, in the EOS phase, we assume that ∥Γ(t)∥ grows larger by at most a constant factor. Further discussion can be found in Appendix D.2.2.

Assumption 4.4. There exists some constant β > 0 such that Λ ≤ (4/η)(1 − β).

This assumption is consistent with Assumption 3.4, which assumes an upper bound on the sharpness.

Assumption 4.5. There exists some constant c3 such that |D(t)⊤v1| > c3√n/m at some time t = t0 at the beginning of Phase II.

This assumption is in the same spirit as Assumption 3.6, with the only change being the form of the bound in terms of m and n. Now we are ready to state our theorem for this stage.

Theorem 2. Denote the smallest nonzero eigenvalue as λr ≜ λr(X⊤X) > 0 and the largest eigenvalue as λ1 ≜ λ1(X⊤X). Under Assumptions 4.3, 4.4, 4.5, and the condition λ1(X⊤X) ≥ 2λ2(X⊤X) from Assumption 4.1, there exist constants c4, c5, c6 such that if n > c6λrη and m > max{c4d²n²/λr², c5η}, then
• There exists ρ = O(1), depending on c3, such that if Λ(t0) > (2/η)(1 + ρ) for some t0, then there must exist some t1 > t0 such that Λ(t1) < (2/η)(1 + ρ).
• If Λ(t), Λ(t+1) > (2/η)(1 + ρ), then there is a constant c7 > 0 (depending on c3) such that |D(t+1)⊤v1| > |D(t)⊤v1|(1 + c7).
• Define R(t) := (I − v1v1⊤)D(t) and R′(t) := (I − ηM∗(t)(I − v1v1⊤))R′(t−1). It holds that ∥R(t) − R′(t)∥ = O(√(n³d)/(λr√m)).

We can conclude the following from Theorem 2. (1) The first statement of the theorem says that if the progressive sharpening phase causes the sharpness to grow over 2/η, then the sharpness eventually goes below 2/η. This illustrates the regularization effect of gradient descent on the sharpness (and is consistent with the analysis of Phase III in Section 3.3). (2) The second states that |D(t)⊤v1| increases geometrically in Phases II and III; note that we proved the similar Lemma 3.2 for Phase II in the more general setting of Section 3.3. (3) The third conclusion gives an upper bound on the distance between R(t)’s trajectory and R′(t)’s. This bound helps illustrate why R(t)’s trajectory is similar to R′(t)’s in Phase IV of Section 3.3.

5 Discussions and Open Problems

In this section, we discuss the limitations of our theory and some related findings. First, our argument crucially relies on the assumption that ∥A∥ changes in the same direction as Λ most of the time. Here, we elaborate more on this point. Viewed over a longer time scale, ∥A∥² and the sharpness may have very different overall trends (see Figure 2(c)), i.e., the sharpness oscillates around 2/η while ∥A∥² increases. Moreover, the sharpness may oscillate more frequently than ∥A∥², while the low-frequency trends seem to match well (see the late training phases in Figure 2(b)). Currently, our theory cannot explain the high-frequency oscillation of the sharpness in Figure 2(b). While we still believe the change of ∥A∥ is a major driving force of the change of the sharpness, other factors (such as other layers) must be taken into consideration for a complete understanding and explanation of the sharpness dynamics. We also carry out some experiments that reveal an interesting relation between the inner layers and the sharpness, which is not yet reflected in our theory; due to space limits, we defer them to Appendix D.3.

We conclude with some open problems. It would be very interesting to remove some of our assumptions or replace them (especially those related to the spectrum of M) with weaker or more natural assumptions on the data or architectures, or to make some of the heuristic arguments more rigorous (e.g., the first-order approximation of the dynamics (3)). Extending our results in Section 4 to deeper neural networks with nonlinear activation functions is an intriguing and challenging open problem.
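For readers who want to reproduce the qualitative picture, the following self-contained numpy sketch instantiates the two-layer linear setting of Section 4 and tracks the loss, the sharpness Λ(t) (the top eigenvalue of the Gram matrix M(t) = (2/(mn))(∥A∥²X⊤X + X⊤W⊤WX)), and ∥A(t)∥² along full-batch gradient descent. All hyperparameters are illustrative choices of ours, not the paper's; whether a particular run actually enters the EOS regime depends on the step size η, which may need tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, eta, T = 30, 5, 200, 40.0, 400   # eta chosen large so that 2/eta is reachable

X = rng.standard_normal((d, n)) / np.sqrt(d)   # columns are the inputs x_i
Y = rng.choice([-1.0, 1.0], size=n)            # labels y_i = +-1
W = 0.1 * rng.standard_normal((m, d))          # small initialisation, so F(0) is small
A = 0.1 * rng.standard_normal(m)

def gram(A, W):
    """Gram matrix M(t) of the two-layer linear model (second-order correction omitted)."""
    return (2.0 / (m * n)) * ((A @ A) * (X.T @ X) + X.T @ W.T @ W @ X)

for t in range(T):
    F = (A @ W @ X) / np.sqrt(m)               # predictions F(t)
    D = F - Y                                  # residual D(t)
    loss = np.mean(D ** 2)
    if not np.isfinite(loss):
        print("diverged; reduce eta")
        break
    if t % 50 == 0:
        sharpness = np.linalg.eigvalsh(gram(A, W))[-1]
        print(f"t={t:3d}  loss={loss:8.4f}  sharpness={sharpness:7.4f}  "
              f"2/eta={2 / eta:6.3f}  ||A||^2={A @ A:7.3f}")
    # Full-batch gradient descent on the MSE loss
    grad_A = (2.0 / (n * np.sqrt(m))) * (W @ (X @ D))
    grad_W = (2.0 / (n * np.sqrt(m))) * np.outer(A, X @ D)
    A = A - eta * grad_A
    W = W - eta * grad_W
```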
1. What is the focus of the paper regarding neural networks, and what are the claimed contributions? 2. What are the strengths of the proposed approach, particularly in its novelty and research direction? 3. What are the weaknesses of the paper, especially regarding its heuristic nature, assumptions, and empirical justification? 4. How would you respond to the reviewer's questions about the assumptions made in the paper, such as the last layer weight being a proxy for sharpness, and the boundedness of Γ(t)? 5. Would you provide additional explanations or modifications to address the concerns raised by the reviewer regarding the paper's clarity and limitations?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a theoretical explanation for the progressive sharpening and edge of stability phenomena observed in Cohen et al. (2021). Specifically, this paper claims that the gradient descent dynamics can be divided into four stages based on the value of the sharpness. For a general neural network, the paper gives a heuristic derivation for the four stages, where the proxy for sharpness is the last layer weight. It also gives a more rigorous derivation for a two-layer linear neural network.

Strengths And Weaknesses
Strengths: The edge of stability phenomenon contradicts much of classical optimization, and as of yet does not have a satisfactory theoretical explanation. Therefore I find this work to be an important research direction. Furthermore, the four stage division of dynamics proposed here appears to be novel.
Weaknesses: My central issue with this paper is that the derivations are far too heuristic. As a result, I do not find the claims to be theoretically sound or a convincing explanation of the edge of stability phenomenon. Some specific instances are the following (all page references are for the version in the supplementary material):
Section 3: Rather than tracking the sharpness, this paper tracks ∥A(t)∥², where A(t) is the last layer weight. I don’t believe that in general the last layer weight is an accurate proxy for sharpness. The justification given here is that the difference in norms is usually the same sign as the change in sharpness; however, the empirical justification in Figure 2 is unconvincing as this correlation is only strong for the two layer linear network. Furthermore, while the analysis in Section 3 shows that ∥A(t)∥² increases or decreases, this does not necessarily imply that the sharpness will decrease below 2/η, which is a central component of the empirical analysis in Cohen et al. (2021). This analysis does not track the full GD dynamics, but rather a first order Taylor expansion of the gradient. Appropriately dealing with the full dynamics seems far more challenging. This should also be stated explicitly as an assumption. Assumption 3.2 is very strong, and changing eigenvectors could have a nontrivial effect on the dynamics. I don’t find the argument for why the dynamics eventually return to stage I convincing. This argument is given at the bottom of page 8 + top of page 9 and is quite handwavy / unrigorous. Lemma 3.5 claims to explain the non-monotonic loss decrease, by saying R(t), the dynamics with the v1 direction projected out, approximately follows R′(t). However, it seems that the error term in this Lemma can be very large, since η is small and λr can be very small as well.
Section 4: I don’t find Assumption 4.3 to be realistic. First, the fact that Γ(t) is bounded during the progressive sharpening phase does not mean it will stay bounded during the EOS phase. In fact, Figure 18 clearly shows that ∥Γ(t)∥ spikes during the edge of stability. Another issue is that Assumption 4.3 amounts to the weights W(t) changing very little during training, which is essentially equivalent to saying the network is in the lazy training/NTK regime. It is known that neural networks in practice don’t follow lazy training and weights do move far from initialization, and since edge of stability is inherently a non-quadratic phenomenon, I thus find it unreasonable to assume that weights do not move far from initialization.
This assumption appears to be key to the claims in Section 4, and therefore I do not find the main claims of this section to be convincing. Furthermore, the supplementary material contains an alternate version of the paper where the main text is much longer (13 pages). I am not sure if this is allowed under the submission guidelines.

Questions
As mentioned in the weaknesses section, I find a number of the assumptions to be unrealistic, and don’t find the empirical justification sufficient. Can you please elaborate on why these assumptions are reasonable? In particular, why is the last layer norm a good proxy for sharpness, and why can one assume W(t) changes very little for the two-layer linear network? Furthermore, can you please explain why the argument in Section 3 guarantees a return to Phase I? From a clarity perspective, I find the assumptions and heuristic derivations being stated within the main theorems hard to follow. One suggestion is to have an explicit assumption section before the main results in Section 3.

Limitations
The paper does admit that the analysis is highly heuristic, which is the main limitation of this work.
NIPS
Title Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability

Abstract Recent findings demonstrate that modern neural networks trained by full-batch gradient descent typically enter a regime called Edge of Stability (EOS). In this regime, the sharpness, i.e., the maximum Hessian eigenvalue, first increases to the value 2/(step size) (the progressive sharpening phase) and then oscillates around this value (the EOS phase). This paper aims to analyze the GD dynamics and the sharpness along the optimization trajectory. Our analysis naturally divides the GD trajectory into four phases depending on the change in the sharpness value. We empirically identify the norm of the output layer weight as an interesting indicator of the sharpness dynamics. Based on this empirical observation, we attempt to theoretically and empirically explain the dynamics of various key quantities that lead to the change of the sharpness in each phase of EOS. Moreover, based on certain assumptions, we provide a theoretical proof of the sharpness behavior in the EOS regime in two-layer fully-connected linear neural networks. We also discuss some other empirical findings and the limitations of our theoretical results.

∗Contributed equally, listed in alphabetical order. †The authors are supported in part by the National Natural Science Foundation of China Grant 62161146004, Turing AI Institute of Nanjing and Xi’an Institute for Interdisciplinary Information Core Technology. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

1 Introduction

Deep learning has achieved great success in a variety of machine learning applications, and gradient-based algorithms are the prevailing optimization methods for training deep neural networks. However, mathematically understanding the behavior of the optimization methods for deep learning is highly challenging, due to non-convexity, over-parameterization, and complicated architectures. In particular, some recent empirical findings in deep networks contradict the traditional understandings of gradient methods. For example, Wu et al. [30] observed that the solution found by gradient descent has sharpness approximately equal to 2/η instead of just being smaller than 2/η. Also, Jastrzebski et al. [14] observed that there is a break-even point in the SGD trajectory, and after this point, there is a regularization effect on the loss curvature. One recent well-known example is the phenomenon called “Edge of Stability” (EOS) (Cohen et al. [6]). Based on classical optimization theory, the learning rate η of a gradient-based method should be smaller than 2/λ so that the loss can decrease, where λ is the largest eigenvalue of the Hessian of the objective, also called “sharpness” in the literature. Otherwise, the loss diverges (even for simple quadratic functions). However, the empirical findings in Cohen et al. [6] show that under various network settings, the EOS phenomenon typically occurs along the gradient descent trajectory: (1) the sharpness first increases until it reaches 2/η (called “progressive sharpening”), (2) the sharpness starts hovering around 2/η (the EOS regime), and (3) the loss decreases non-monotonically without diverging. Although (1) seems to be consistent with traditional beliefs about optimization, a rigorous mathematical explanation for it is still open.
Moreover, phenomena (2) and (3) are more mysterious because they violate the η < 2/λ “rule” of traditional optimization theory, yet the training loss does not completely diverge. Instead, the loss may oscillate but still decrease in the long run, while the sharpness seems to be restrained from further increasing. In this paper, we aim to provide a theoretical and empirical explanation for the mystery of EOS. Towards this goal, we focus on the dynamics of the key quantities when EOS happens and attempt to identify the main driving force behind these phenomena along the gradient descent trajectory, from both theoretical and empirical perspectives.

1.1 Our Contributions

Our contributions can be summarized as follows. (Section 3.1) We analyze the typical sharpness behavior along the gradient descent trajectory when EOS happens, and propose a four-phase division of the GD trajectory, based on the dynamics of some key quantities such as the loss and the sharpness, for further understanding this phenomenon. (Section 3.2) We empirically identify the weight norm of the output layer as an effective indicator of the sharpness dynamics. We show that analyzing the dynamics of this surrogate can qualitatively explain the dynamics of the sharpness. By assuming this relation, together with some additional simplifying assumptions and approximations, we can explain the dynamics of the sharpness, the loss, and the output layer norm in each phase of EOS (Section 3.3). In this context, we also offer an interesting explanation for the non-monotonic loss decrement (also observed in Cohen et al. [6], Xing et al. [32]) (Section 3.4). (Section 4) Following similar ideas, we provide a more rigorous proof for the progressive sharpening and EOS phenomena in a two-layer fully-connected linear neural network setting based on certain assumptions. The assumptions made here are either weaker or arguably less restrictive.

1.2 Related work

The structure of the Hessian. The Hessian matrix carries the second-order information of the loss landscape. Several prior works have empirically found that the spectrum of the Hessian has several “outliers” and a continuous “bulk” (Sagun et al. [28, 29], Papyan [25, 26]). Typically, each outlier corresponds to one class in multi-class classification. As we consider the binary classification setting, there is typically one outlier (i.e., the largest eigenvalue) that is much larger than the other eigenvalues; this is consistent with our Assumption 4.1. The Gauss–Newton decomposition of the Hessian was used in several prior works (Martens [23], Bottou et al. [4], Papyan [25, 26]). Papyan [25] empirically showed that the outliers of the Hessian can be attributed to a “G component”, which is also known as the Fisher information matrix (FIM) in Karakida et al. [15, 16]. Also, Wu et al. [31] analyzed the leading Hessian eigenspace by approximating the Hessian with a Kronecker factorization and theoretically proved the outlier structure under certain random-setting assumptions.

Neural Tangent Kernel. A recent line of work studied the learning of over-parameterized neural networks in the so-called “neural tangent kernel” (NTK) regime or lazy training regime (Jacot et al. [13], Lee et al. [18], Du et al. [8, 7], Arora et al. [2], Chizat et al. [5]). A main result in this regime is that if the neural network is wide enough, gradient flow can find a global minimizer of the empirical loss very close to the initialization. Moreover, the Hessian does not change much in the NTK regime.
Our findings go beyond the NTK setting to analyze the change of the sharpness.

Edge of Stability regime. The Edge of Stability phenomenon was first formalized by Cohen et al. [6]. Similar phenomena were also identified in Jastrzebski et al. [14] as the existence of a “break-even” point on the SGD trajectory, after which the loss curvature gets regularized. Xing et al. [32] observed that gradient descent eventually enters a regime where the iterates oscillate on the leading curvature direction and the loss drops non-monotonically. Recently, Ahn et al. [1] studied the non-monotonically decreasing behavior of GD, which they called unstable convergence, and discussed the possible causes of this phenomenon. Ma et al. [22] proposed a special subquadratic landscape property and proved that EOS occurs based on this assumption. Arora et al. [3] studied the implicit bias on the sharpness of deterministic gradient descent in the EOS regime. They proved that in some specific settings, with a varying learning rate (called normalized GD) or with a modified loss √L, gradient descent enters EOS and further reduces the sharpness. They mainly focus on the analysis near the manifold of minimum loss, whereas our analysis also applies to the early stage of training when the loss is not close to the minimum. In particular, our analysis provides an explanation of the non-monotonic loss decrease that cannot be explained by their theory. Another difference is that they consider √L (for a constant learning rate), where L is a fairly general MSE loss independent of any neural network structure, while our analysis is strongly tied to the MSE loss of a neural network. Very recently, Lyu et al. [21] explained how GD enters EOS for normalized losses (e.g., neural networks with normalization layers), and analyzed the sharpness-reduction effect along the training trajectory. The notion of sharpness in their work is somewhat different due to the normalization. In particular, they consider the so-called spherical sharpness, that is, the sharpness of the normalized weight vector. They also mainly studied the regime where the parameter is close to the manifold of minimum loss, as in [3], and proved that GD approximately tracks a continuous sharpness-reduction flow. Lewkowycz et al. [19] proposed a similar regime called the “catapult phase”, where the loss does not diverge even if the largest Hessian eigenvalue is larger than 2/η. Our work mainly considers training in this regime and assumes that the training is not in the “divergent phase” of Lewkowycz et al. [19]. Compared with Lewkowycz et al. [19], we provide a more detailed analysis in more general settings along the gradient descent trajectory.

2 Preliminaries

Notations: We denote the training dataset as {(xi, yi)}_{i=1}^n ⊂ Rd × {1, −1} and the neural network as f : Rd × Rp → R. The network f(θ, x) maps the input x ∈ Rd and parameter θ ∈ Rp to an output in R. In this paper, we mainly consider the case of binary classification with the mean square error (MSE) loss ℓ(z, y) = (z − y)². Denote the input matrix as X = (x1, x2, . . . , xn) ∈ Rd×n and the label vector as Y = (y1, y2, . . . , yn) ∈ Rn. We let F(t) = (f(θ(t), x1), f(θ(t), x2), . . . , f(θ(t), xn)) ∈ Rn and D(t) = F(t) − Y be the (output) prediction vector and the residual vector, respectively, at time t. The training objective is L(f(θ)) = (1/n) ∑_{i=1}^n ℓ(f(θ, xi), yi) = (1/n) ∑_{i=1}^n (f(θ, xi) − yi)².

Hessian, Fisher information matrix and NTK: In this part, we apply previous works to show that the largest eigenvalue of the Hessian is almost the same as the largest eigenvalue of the NTK.
We use the latter as the definition of the sharpness in this paper. Further details can be found in Appendix F. As shown in Papyan [26], Martens [23], Bottou et al. [4], the Hessian can be decomposed into two components, where the term known as the “Gauss–Newton matrix”, G-term, or Fisher information matrix (FIM) dominates the second term in terms of the largest eigenvalue. Meanwhile, Karakida et al. [16] pointed out the duality between the FIM and a Gram matrix M, defined as M = (2/n)(∂F(θ)/∂θ)(∂F(θ)/∂θ)⊤. It is also known as the neural tangent kernel (NTK) (Karakida et al. [16, 15]), which has been studied extensively in recent years (see, e.g., [13], [8], [2], [5]). Note that in this paper, we do not assume the training is in the NTK regime, in which the Hessian does not change much during training. It is not hard to see that M and the FIM share the same non-zero eigenvalues: if Gu = λu for some eigenvector u ∈ Rp, then M(∂F(θ)/∂θ)u = (∂F(θ)/∂θ)Gu = λ(∂F(θ)/∂θ)u, i.e., λ is also an eigenvalue of M. In this paper, we use θ(t) to denote the parameter at iteration t (or time t) and denote the sharpness at time t by Λ(t) = Λ(θ(t)). We similarly define M(t), F(t), D(t), L(t). Here we show the gradient flow dynamics of the residual vector D(t):
dD(t)/dt = (∂D(t)/∂θ)(dθ(t)/dt) = −(∂F(t)/∂θ)(∂L(t)/∂θ) = −(2/n)(∂F(t)/∂θ)(∂F(t)/∂θ)⊤ D(t) = −M(t)D(t).   (1)

3 A Four-phase Analysis of GD Dynamics

In this section, we study the dynamics of gradient descent and the change of the sharpness along the optimization trajectory. We divide the whole training process into four phases, occurring repeatedly in the EOS regime. In Section 3.1, we introduce the four phases. In Section 3.2, we show empirically that the change of the norm of the output layer weight vector almost coincides with the change of the sharpness. In Section 3.3, using this observation, we attempt to explain the dynamics of each phase and provide a mathematical explanation for the changes in the sharpness. In Section 3.4, we explain why the loss decreases, but non-monotonically. We admit that a completely rigorous theoretical explanation is still beyond our reach; much of our argument is based on various simplifying assumptions and is somewhat heuristic at some points. Due to space limits, we defer all the proofs in this section to Appendix E.1.

3.1 A Four-phase Division

To further understand the properties along the trajectory when EOS happens, we study the behaviors of the loss and the sharpness during the training process. As illustrated in Figure 1, we train a shallow neural network by gradient descent on a subset of 1,000 samples from CIFAR-10 (Krizhevsky et al. [17]), using the MSE loss as the objective. Notice that the sharpness keeps increasing while the loss decreases, until the sharpness reaches 2/η. Then the sharpness begins to oscillate around 2/η while the loss decreases non-monotonically. This is typical sharpness behavior in the EOS regime, and consistent with the experiments in [6]. We divide the training process into four phases according to the evolution of the loss, the sharpness, and their correlation, as shown in Figure 1. The four phases happen cyclically along the training trajectory. We first briefly describe the properties of each phase and explain the dynamics in more detail in Section 3.3.

Phase I: Sharpness Λ < 2/η. In this stage, all the eigenvalues of the Gram matrix M are below the threshold 2/η.
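The Gram matrix and its top eigenvalue (the sharpness) are straightforward to compute for small problems. The sketch below is our own illustration, not the authors' code: it estimates M = (2/n)(∂F/∂θ)(∂F/∂θ)⊤ by finite differences for a generic model function (automatic differentiation would be used in practice) and returns the largest eigenvalue as Λ.

```python
import numpy as np

def gram_matrix(model, theta, X, eps=1e-5):
    """Finite-difference estimate of M = (2/n) (dF/dtheta)(dF/dtheta)^T.

    `model(theta, X)` must return the prediction vector F in R^n.
    """
    n = model(theta, X).shape[0]
    p = theta.shape[0]
    J = np.zeros((n, p))                       # Jacobian dF/dtheta
    for k in range(p):
        e = np.zeros(p)
        e[k] = eps
        J[:, k] = (model(theta + e, X) - model(theta - e, X)) / (2 * eps)
    return (2.0 / n) * J @ J.T

def sharpness(model, theta, X):
    """Largest eigenvalue of the Gram matrix, i.e. the sharpness Lambda."""
    return np.linalg.eigvalsh(gram_matrix(model, theta, X))[-1]

# Minimal usage with a linear placeholder model f(theta, x) = theta^T x.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 10))               # columns are inputs x_i
theta = rng.standard_normal(3)
print(sharpness(lambda th, data: th @ data, theta, X))
```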
In particular, using standard initialization, the training typically starts from this phase, and during this phase the loss keeps decreasing and the sharpness keeps growing along the trajectory. This initial phase is called progressive sharpening (PS) in the prior work of Cohen et al. [6]. Empirically, the behavior of the GD trajectory (as well as the loss and the sharpness) is very similar to that of gradient flow, until the sharpness reaches 2/η (this phenomenon is also observed in Cohen et al. [6]; see Figure 5 or Appendix J.1 in their paper). We note that GD may come back to this phase from Phase IV later.

Phase II: Sharpness Λ > 2/η. In this phase, the sharpness exceeds 2/η and may keep increasing. We will show shortly that the fact that Λ > 2/η causes |D⊤v1| (where v1 is the first eigenvector of M) to increase exponentially (Lemma 3.2). This quickly leads ∥D∥ to exceed ∥Y∥ in a few iterations, which makes the sharpness start decreasing by Proposition 3.1, hence the training process enters Phase III.

Phase III: Sharpness Λ > 2/η yet begins to gradually drop. Before Λ drops below 2/η, Lemma 3.2 still holds, so |D⊤v1| keeps increasing. Proposition 3.1 still holds, and thus the sharpness keeps decreasing until it is below 2/η, at which point we enter Phase IV. A distinctive feature of this phase is that the loss may increase due to the exponential increase of |D⊤v1|.

Phase IV: Sharpness Λ < 2/η. When the sharpness is below 2/η, |D⊤v1| begins to decrease quickly, leading the loss to decrease quickly. At the same time, the sharpness keeps oscillating and gradually decreasing for some iterations. This lasts until the loss decreases to a level that is around its value right before Phase III. The sharpness is still below 2/η, and our training process gets back to Phase I.

3.2 The Norm of the Output Layer Weight

It is difficult to rigorously analyze the dynamics of the sharpness Λ(t). In this subsection, we make an interesting observation: the change of the norm of the output layer of the network (usually a fully-connected linear layer) is consistent with the change of the sharpness most of the time. In particular, consider a general neural network f(x) = A⊤h(W, x), where A ∈ Rm is the output layer weight and the feature extractor h : Rp × Rd → Rm outputs an m-dimensional feature vector (h corresponds to all but the last layer). W ∈ Rp is the parameter vector of the extractor h. Note that M = (∂F/∂θ)(∂F/∂θ)⊤ can be decomposed as follows:
M = (∂F/∂A)(∂F/∂A)⊤ + (∂F/∂W)(∂F/∂W)⊤ := MA + MW,
where the (i, j)-entry of MW is (MW)ij = ⟨∂f(xi)/∂W, ∂f(xj)/∂W⟩ = A⊤(∂h(W, xi)/∂W)(∂h(W, xj)/∂W)⊤A. In this expression, ∥A∥ should intuitively be positively related to ∥MW∥. We empirically observe that the part MA = (∂F/∂A)(∂F/∂A)⊤ has a much smaller spectral norm compared to the whole Gram matrix M (see Figure 3(a) and Appendix D), which means ∥MW∥ dominates ∥MA∥. Therefore, ∥A∥ should be positively correlated with ∥M∥. The benefit of analyzing ∥A∥² is that the gradient flow of ∥A∥² enjoys the following clean formula:
d∥A∥²/dt = −2(∂L/∂A)⊤A = −(4/n) D⊤(∂F/∂A)A = −(4/n) D⊤F.   (2)

In this work, we do experiments on two-layer linear networks, fully connected deep neural networks, and ResNet18, and all of them have such an output layer structure. From Figure 3(a), we can observe that the output layer norm ∥A∥² and the sharpness Λ change in the same direction most of the time along the gradient descent trajectory, i.e., they both increase or decrease at the same time.
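Equation (2) is easy to check numerically for any architecture of the form f(x) = A⊤h(W, x). The following sketch is ours (the tanh feature map is just an illustrative stand-in for h, not the paper's network); it verifies that −2(∂L/∂A)⊤A and −(4/n)D⊤F agree.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 12, 4, 16
X = rng.standard_normal((d, n))            # columns are inputs x_i
Y = rng.choice([-1.0, 1.0], size=n)
W = rng.standard_normal((m, d))
A = rng.standard_normal(m)

H = np.tanh(W @ X)                         # features h(W, x_i), shape (m, n)
F = A @ H                                  # predictions f(x_i) = A^T h(W, x_i)
D = F - Y                                  # residual vector

grad_A = (2.0 / n) * H @ D                 # dL/dA for L = (1/n) sum_i (f(x_i) - y_i)^2
lhs = -2.0 * grad_A @ A                    # d||A||^2/dt under gradient flow
rhs = -(4.0 / n) * D @ F                   # right-hand side of equation (2)
print(lhs, rhs)                            # the two values agree up to rounding error
```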
1. What are the contributions and strengths of the paper regarding the theoretical understanding of the "Edge of Stability" phenomenon? 2. What are the weaknesses of the paper, particularly when it comes to generalizing the analysis to non-linear networks? 3. How does the reviewer suggest reframing the paper to focus more on the two-layer linear network analysis? 4. Are there any questions regarding the dependence of sharpening on the input dataset or last-layer initialization? 5. How does the paper address its limitations?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper takes some steps towards theoretically understanding the "Edge of Stability" (EoS) phenomenon. Two big open questions are: (1) why does the sharpness tend to rise whenever gradient descent is stable, and (2) once gradient descent is destabilized, why does the sharpness go back down / stop rising? The paper has two parts: (1) a rigorous mathematical analysis of the two-layer linear network case (Section 4), and (2) a nonrigorous mathematical analysis of the general neural net case combined with thorough experiments (Section 3).
Rigorous analysis of two-layer linear network: For a two-layer linear network, the NTK is the sum of two terms: (a) a term originating from the gradients w.r.t. the first layer, and (b) a term originating from the gradients w.r.t. the second layer. Term (a) has a simple form: it is the product of the second-layer weight norm (a scalar) and the input data Gram matrix. Sharpness is defined as the maximum eigenvalue of the NTK (rather than the Hessian). The authors first prove (in Lemma C.3) that term (b) of the NTK will barely change during the first phase of training (the initial sharpening phase) -- rigorously, they prove that the delta of this term will be O(1/m) where m is the width of the network. This implies that the NTK will approximately stay diagonalizable by the eigenvectors of the input data Gram matrix, and also that the sharpness is essentially totally determined by the second-layer weight norm. Thus, to prove that progressive sharpening occurs, it suffices to prove that the second-layer weight norm increases during gradient descent. In the update equation of the second-layer weight norm, the first-order term is the negative inner product between the predictions and the residual. The authors are able to rigorously prove that the higher-order terms don't matter, and thus to prove progressive sharpening it suffices to prove that the inner product between the predictions and residual is always negative. This is proved in Lemma C.5. The authors next prove that after becoming destabilized, gradient descent will move to a flatter region where the sharpness is below 2/η. To use the authors' own words, the proof is "long and tedious," so I didn't attempt to parse the proof; but I assume that the intuition is similar to that of Proposition 3.2, which is that if the residual ever grows too big, then the last-layer weight norm will automatically decrease.
Beyond two-layer linear networks: The authors next attempt to show experimentally that the insights from the two-layer linear NN analysis carry over in a very literal way to the general neural network case. (Here, I write "next," but in the current draft this section occurs first chronologically.) In particular, they demonstrate experimentally that in many networks, the dynamics of the sharpness correlate well with those of the last-layer weight norm: when the sharpness rises, so does the last-layer weight norm, and when the sharpness drops, so does the last-layer weight norm. This seems to be more true in the initial phase of training, and becomes less true towards the end. To their credit, the authors admit that there are counterexamples to this trend.

Strengths And Weaknesses
Strength: this paper is the first to prove that some architecture (in this case, a two-layer linear network) undergoes progressive sharpening. Indeed, this is the first paper to even give any kind of explanation for why progressive sharpening might occur.
(Actually, it was not even previously known that 2-layer linear nets can exhibit progressive sharpening.) Strength: I think that this paper is the first to prove that for some architecture (in this case, a two-layer linear network), instability causes the sharpness to decrease. (The catapult paper includes a handwavy explanation for this phenomenon, but not a literal proof.) Weakness: the paper is at its weakest when it attempts to argue that its EoS analysis for two-layer linear networks carries over in a very literal manner to general neural networks. First, I would point out that the correlation between sharpness and last-layer weight norm is not very robust: in Appendix B.1.2, we see that after the first few cycles of instability, there are a huge number of 'anomaly points' (steps where the change in the sharpness is not positively correlated with the change in last-layer weight norm). Second, I would point out that "all layers seem to work together to influence the sharpness," as the authors write. Overall, I would recommend framing this paper very differently. I suggest centering the two-layer linear network analysis rather than the debatable claims about general neural networks. If the authors are concerned that this analysis is very tedious, I would recommend just providing the intuition in the main paper (e.g. the fact that the leading term in the change in the sharpness is the inner product between the residual and the predictions) while deferring the complete proofs to the appendix. Then, after discussing the two-layer linear network analysis, you could mention that some of the patterns might carry over to general networks. The authors are of course free to take or leave this advice, but I think that this restructuring would make the paper more compelling. Questions For the two-layer NN: does the degree of sharpening depend on the input dataset? If we trained on random Gaussian data, would there still be sharpening? does the degree of sharpening depend on the last-layer initialization (which is much larger in scale than the standard initialization)? Limitations The paper is honest about its limitations.
NIPS
Title Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics Abstract Applications of Reinforcement Learning (RL) in robotics are often limited by high data demand. On the other hand, approximate models are readily available in many robotics scenarios, making model-based approaches like planning a data-efficient alternative. Still, the performance of these methods suffers if the model is imprecise or wrong. In this sense, the respective strengths and weaknesses of RL and model-based planners are complementary. In the present work, we investigate how both approaches can be integrated into one framework that combines their strengths. We introduce Learning to Execute (L2E), which leverages information contained in approximate plans to learn universal policies that are conditioned on plans. In our robotic manipulation experiments, L2E exhibits increased performance when compared to pure RL, pure planning, or baseline methods combining learning and planning. 1 Introduction A central goal of robotics research is to design intelligent machines that can solve arbitrary and formerly unseen tasks while interacting with the physical world. Reinforcement Learning (RL) (Sutton & Barto, 2018) is a generic framework to automatically learn such intelligent behavior with little human engineering. Still, teaching an RL agent to actually exhibit general-purpose problem-solving behavior is, while possible in principle, prohibitive in practice. This is due to practical restrictions including limited computational resources and limited data availability. The latter limitation is particularly dramatic in robotics, where interaction with the physical world is costly. On the other hand, for many robotics scenarios, there is a rough model of the environment available. This can be exploited, e.g., using model-based planning approaches (Mordatch et al., 2012; Kuindersma et al., 2016; Toussaint et al., 2018). Model-based planners potentially offer a more data-efficient way to reason about an agent's interaction with the world. Model-based planners have been used in many areas of robotics, such as for indoor and aerial robots (Faust et al., 2018), visual manipulation (Jeong et al., 2020), or humanoid walking (Mordatch et al., 2015). Still, if the model does not account for stochasticity or contains systematic errors, directly following the resulting plan will not be successful. The present work starts from the observation that both pure RL approaches and pure planning approaches have strengths and limitations that are fairly complementary. RL makes no assumptions about the environment but is data-hungry, and model-based planning generally implies model simplifications but is data-efficient. For robotic manipulation tasks, it seems natural to try and integrate both approaches into one framework that combines the strengths of both. In the present work, we seek to add an additional perspective to the open question of how this can be achieved best. We introduce a novel approach that we call Learning to Execute (L2E). Our approach translates sparse-reward goal-conditioned Markov Decision Processes (MDPs) (Bellman, 1957) into plan-conditioned MDPs. L2E exploits a simple planning module to create crude plans, which are then used to teach any off-the-shelf off-policy RL agent to execute them.
L2E makes use of final-volume-preserving reward shaping (FV-RS) (Schubert et al., 2021), allowing it to train a universal plan-conditioned policy with high data efficiency. The contributions of this work are: • We introduce L2E, which uses RL to efficiently learn to execute approximate plans from a model-based planner in a plan-conditioned MDP. We describe formally how FV-RS can be used as a tool to construct such plan-conditioned MDPs from goal-conditioned MDPs. • We introduce plan replay strategies to efficiently learn universal plan-conditioned policies. • We demonstrate, using robotic pushing problems, that L2E exhibits increased performance when compared to pure RL methods, pure planning methods, or other methods combining learning and planning. We discuss work related to ours in section 2, explain background and notation in section 3, and introduce our method in section 4. We present our experimental results in section 5, discuss limitations in section 6, and conclude with section 7. 2 Related Work 2.1 Goal-Conditioned Policies Goal-conditioned or universal policies (Kaelbling, 1993; Moore et al., 1999; Foster & Dayan, 2002; Schaul et al., 2015; Veeriah et al., 2018; Nasiriany et al., 2019) not only act based on the state the agent finds itself in, but also based on the goal it tries to achieve. Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) is a particularly efficient way to learn universal policies. Here, achieved outcomes of the agent's interaction with the environment are interpreted as desired goals in order to improve sample efficiency in sparse-reward settings. L2E draws great inspiration from this work, but in contrast to HER, L2E learns a universal plan-conditioned policy. This means that the L2E policy in general can execute multiple plans leading to the same goal. Although this presents a more complex learning task, we show in our experiments that by incorporating plan information using plan-based FV-RS, the sample efficiency of L2E is significantly improved over HER. 2.2 Plan- and Trajectory-Conditioned Policies Plan-conditioned policies create behavior that depends on plans that are input to the decision making. Lynch et al. (2020) learn plans and how to execute them from data generated by a human "playing" with a teleoperated robot. The resulting policy is conditional on a latent space of encoded plans. Our work differs from this paradigm in that human interaction is not needed. Both Lynch et al. (2020) and Co-Reyes et al. (2018) directly imitate a planned trajectory by maximizing its likelihood. In contrast, the plans used in the present work are not directly imitated. Using FV-RS guarantees that the fully trained L2E agent will reach its goal after finite time even if the plan provided is wrong. Guo et al. (2019) learn trajectory-conditioned policies to self-imitate diverse (optimal and suboptimal) trajectories from the agent's past experience. We instead assume in this work that the plan is provided by an external model-based planner. This allows the L2E agent to use external information during training that could not be concluded from its own experience yet. 2.3 Learning from Demonstration L2E learns how to execute plans in order to achieve different tasks. In this sense, it is related to Learning from Demonstration (LfD) techniques that exploit demonstrations when learning a task.
Existing work (Argall et al., 2009; Hussein et al., 2017; Ravichandar et al., 2020) differs significantly both in how the demonstration examples are collected and how the policy is then derived. Taylor et al. (2011) derive an approximate policy from human demonstration, and then use this to bias the exploration in a final RL stage. Hester et al. (2017) train a policy on both expert data and collected data, combining supervised and temporal difference losses. Salimans & Chen (2018) use a single demonstration as starting points to which the RL agent is reset at the beginning of each episode. Peng et al. (2018) use motion capture data to guide exploration by rewarding the RL agent to imitate it. In Cabi et al. (2019), demonstrations are combined with reward sketching done by a human. Interactive human feedback during training is another source of information used in Thomaz et al. (2006); Knox & Stone (2010). Kinose & Taniguchi (2020) integrate RL and demonstrations using generative adversarial imitation learning by interpreting the discriminator loss as an additional optimality signal in multi-objective RL. While these LfD approaches are related to L2E in that external information is used to increase RL efficiency, it is in contrast assumed in L2E that this external information is provided by a planner. 2.4 Combining Learning with Planning Similarly to demonstrations, external plans can be exploited to facilitate learning. Faust et al. (2018) connect short-range goal-conditioned navigation policies into complex navigation tasks using probabilistic roadmaps. In contrast, L2E learns a single plan-conditioned policy for both short-term and long-term decision making. Sekar et al. (2020) use planning in a learned model to optimize for expected future novelty. In contrast, L2E encourages the agent to stay close to the planned behavior. Zhang et al. (2016) use model-predictive control to generate control policies that are then used to regularize the RL agent. In L2E, no such intermediate control policy is created, and a reward signal is computed directly from the plan. In Guided Policy Search (Levine & Koltun, 2013), differential dynamic programming is used to create informative guiding distributions from a transition model for policy search. These distributions are used to directly regularize the policy in a supervised fashion, while L2E makes use of FV-RS as a mechanism to interface planning and RL. Christiano et al. (2016) learn an inverse dynamics model to transfer knowledge from a policy in the source domain to a policy in the target domain. The idea of integrating model-based and model-free RL has also been studied independently of planning (Pong et al., 2018; Janner et al., 2019). In contrast, in L2E the model is translated by a planner into long-horizon plans. In the experiments section, we compare L2E against two representative examples from the literature mentioned above. The first is using a plan to identify subgoals that are then pursued by an RL agent, as done in Faust et al. (2018). The second is executing the plan using an inverse model, similar to the approach in Christiano et al. (2016). These two baselines and L2E can be seen as representatives of a continuum: Christiano et al. (2016) follow the plan very closely, trying to imitate the planner at each time step. Faust et al. (2018) relax this requirement and only train the agent to reach intermediate goals. 
Finally, in L2E, the agent is free to deviate arbitrarily from the plan (although it is biased to stay close), as long as it reaches the goal. We find that L2E results in significantly higher success rates when compared against both baselines. 3 Background 3.1 Goal-Conditioned MDPs and RL We consider settings that can be described as discrete-time MDPs M = ⟨S, A, T, γ, R, P_S⟩. S and A denote the set of all possible states and actions, respectively. T : S × A × S → R⁺₀ is the transition probability (density); T(s′|s, a) is the probability of the next state being s′ if the current state is s and a is chosen as the action. The agent receives a real-valued reward R(s, a, s′) after each transition. Immediate and future rewards are traded off by the discount factor γ ∈ [0, 1). P_S : S → R⁺₀ is the initial state distribution. The goal of RL is to learn an optimal policy π* : S × A → R⁺₀ that maximizes the expected discounted return. In other words, RL algorithms generally try to find

π* = argmax_π Σ_{t=0}^{∞} γ^t E_{s_{t+1} ∼ T(·|s_t, a_t), a_t ∼ π(·|s_t), s_0 ∼ P_S} [R(s_t, a_t, s_{t+1})]   (1)

from collected transition and reward data D = {(s_i, a_i, r_i, s′_i)}_{i=0}^{n}. More specifically for this work, we are interested in applications in robotics, where both S and A are typically continuous. There exists a wide range of algorithms for this case. For the experiments in this paper, soft actor-critic (SAC) (Haarnoja et al., 2018) is used. In a goal-conditioned MDP M_G = ⟨S, G, A, T, γ, R_G, P_S, P_G⟩, the reward function R_G(s, a, s′, g) has an additional input parameter, the goal g ∈ G. Here, P_G : G → R⁺₀ is the distribution of goals. The optimal goal-conditioned policy π*_G acts optimally with respect to any of these goals. 3.2 Final-Volume-Preserving Reward Shaping We use approximate plans as an additional source of information for the RL agent. For sparse-reward goal-driven MDPs, FV-RS (Schubert et al., 2021) offers an efficient way to include additional information by adding a term

R(s, a, s′) → R_FV(s, a, s′) = R(s, a, s′) + F_FV(s, a, s′)   (2)

to the reward function, accelerating exploration. In general, the optimal policy π* corresponding to the original MDP and the optimal policy π*_FV corresponding to the shaped MDP will be different. FV-RS however restricts the allowed modifications F_FV(s, a, s′) in such a way that after finite time, the optimally controlled agent ends up in a subset of the volume in which it would have ended up without shaping. As a result, external information can be made available for the RL algorithm without changing the long-term behavior of the resulting optimal policy. Specifically in the present work, we consider goal-conditioned MDPs in which the goal-conditioned reward R_G of the underlying MDP is either 1, if the goal is reached, or 0 everywhere else. We further assume that the L2E agent is given an external plan p, represented as an intended trajectory p = (p_1, p_2, . . .) in state space. We intend to reward the agent for staying close to the plan, and for advancing towards the goal along the plan. A natural way of achieving this is to use a plan-based shaping reward (Schubert et al., 2021). The single-plan shaping function introduced there can be generalized to the multi-plan setting in the present work in the following way:

F_FV(s, a, s′, p) = [(1 − R_G(s, a, s′, f(p))) / 2] · [(k(s) + 1) / L] · exp(−d²(s, p_{k(s)}) / (2σ²))   (3)

Here, f(p) denotes the goal that p leads to, L is the number of steps in the plan, σ ∈ (0, ∞), k(s) = argmin_i d(p_i, s), and d(·, ·) is a measure of distance in state space.
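To make eq. (3) concrete, a minimal Python sketch of the shaping term follows. It assumes that the state and the plan steps live in the same vector space and uses the Euclidean distance for d(·, ·); both are simplifications chosen for illustration, not the authors' implementation.

```python
import numpy as np

def shaping_reward_fv(state, plan, goal_reward, sigma=0.5):
    """Plan-based FV-RS shaping term F_FV from eq. (3).

    state       : (d,) current state s
    plan        : (L, d) intended trajectory p = (p_1, ..., p_L)
    goal_reward : binary sparse reward R_G(s, a, s', f(p)) in {0, 1}
    sigma       : width of the Gaussian closeness term
    """
    L = plan.shape[0]
    dists = np.linalg.norm(plan - state, axis=1)   # d(p_i, s) for every plan step
    k = int(np.argmin(dists))                      # k(s): index of the closest plan step
    progress = (k + 1) / L                         # rewards advancing along the plan
    closeness = np.exp(-dists[k] ** 2 / (2.0 * sigma ** 2))   # rewards staying near the plan
    # the (1 - R_G)/2 factor keeps the shaping strictly below the sparse goal reward
    return 0.5 * (1.0 - goal_reward) * progress * closeness
```

Because (k(s) + 1)/L grows monotonically along the plan, this term provides a dense learning signal even before the sparse goal reward is ever observed.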
For the pushing experiments discussed in this work, d(·, ·) is the Euclidean distance in state space ignoring the coordinates corresponding to the orientation of the box. The first term in eq. (3) ensures that the assigned shaping reward F_FV is always smaller than the maximum environment reward (at most 1/2), and that if the binary environment reward is 1, no shaping reward is assigned. The second term rewards the agent for advancing towards the goal along the plan, and the third term rewards the agent for staying close to the plan. For a sufficiently high discount factor γ, F_FV is final-volume preserving, meaning that the long-term behavior of the optimal agent is unchanged. 4 Learning to Execute L2E considers goal-conditioned MDPs M_G (see section 3.1), for which an approximate planner Ω is available. L2E uses FV-RS to construct a corresponding plan-conditioned MDP M_P from a goal-conditioned MDP M_G and a planner Ω. In the following sections 4.1 to 4.3, we introduce our notion of a plan-conditioned MDP M_P and describe the components of the L2E algorithm. We then summarize the L2E algorithm in section 4.4. 4.1 Plan-Conditioned MDPs Plans are provided by a model-based planner, which can be described as a distribution Ω : P × S × G → R⁺₀ over a set of plans P. Given an initial state and a goal, Ω(p|s, g) is the probability that the planner outputs p as a possible plan of how to achieve g from state s. The distinction between goals and plans is that plans are conditional on both a goal and an initial state. Therefore, both initial state and goal can be inferred using the plan only. In a plan-conditioned MDP M_P = ⟨S, P, A, T, γ, R_P, P_S, P_P⟩, a plan p ∈ P is given to the reward function R_P(s, a, s′, p) as an additional input parameter. P_P : P → R⁺₀ is the distribution of plans. The optimal plan-conditioned policy π*_P behaves optimally with respect to any of these plans, creating a distribution π*_P(· | s, p) over actions that is conditional on the current state and the current plan. 4.2 Constructing the Plan-Conditioned MDP We use FV-RS to shape the reward function R_G of the original goal-conditioned MDP M_G = ⟨S, G, A, T, γ, R_G, P_S, P_G⟩ with a plan-dependent term F_FV(s, a, s′, p) (see equation 3):

R_G(s, a, s′, g) → R_G^FV(s, a, s′, g, p) = R_G(s, a, s′, g) + F_FV(s, a, s′, p) .   (4)

We call g = f(p) the goal for which the plan p was created. If a planner Ω should be such that g cannot be recovered from the resulting plan p ∼ Ω(·|s, g), we can always construct a new p̃ ∼ Ω̃ such that p̃ = [p, g]. Since now g can be recovered from p̃ deterministically, we can assume that f always exists without loss of generality. We can interpret the shaped reward function

R_P(s, a, s′, p) = R_G^FV(s, a, s′, f(p), p)   (5)

as a plan-conditioned reward function of a plan-conditioned MDP M_P = ⟨S, P, A, T, γ, R_P, P_P⟩. The distribution over initial states and plans P_P of M_P is still missing, and can be constructed as

P_P(s, p) = ∫ Ω(p|s, g) P_S(s) P_G(g) dg .   (6)

In practice, P_P can be sampled from by first sampling s ∼ P_S, g ∼ P_G and then subsequently sampling p ∼ Ω(·|s, g). Thus, we have constructed a plan-conditioned MDP M_P by combining a goal-conditioned MDP M_G with an approximate planner Ω and an FV-RS shaping function F_FV. For reference later in this paper, we write as a shorthand notation M_P = C(M_G, Ω, F_FV). Furthermore, we will refer to M_P as the corresponding plan-conditioned MDP to M_G and vice versa. In contrast to potential-based reward shaping (Ng et al., 1999), FV-RS does not leave the optimal policy invariant. As a result, generally ∃ p ∈ P : π*_G(·|·, f(p)) ≢ π*_P(·|·, p). In words, the optimal policy of M_P and the optimal policy of M_G will not result in identical behavior. In fact, while π*_G(·|·, g) learns one policy for each goal g, π*_P(·|·, p) can learn different behavior for each plan in the set of plans {p ∈ P | f(p) = g} leading towards the same goal g.
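The construction of section 4.2 can be summarized in a few lines of Python. The function names and the factored planner/goal-sampler interface below are placeholders chosen for this sketch, not the released code.

```python
def sample_initial_state_and_plan(sample_state, sample_goal, planner):
    """Sample (s0, p) ~ P_P as in eq. (6): draw s0 ~ P_S and g ~ P_G, then p ~ Omega(.|s0, g)."""
    s0 = sample_state()
    g = sample_goal()
    plan = planner(s0, g)   # a plan leading to g, i.e. f(plan) = g
    return s0, plan, g

def make_plan_conditioned_reward(goal_reward_fn, shaping_fn):
    """Wrap a goal-conditioned reward R_G into the plan-conditioned reward R_P of eqs. (4)-(5)."""
    def reward(s, a, s_next, plan, goal):
        r_g = goal_reward_fn(s, a, s_next, goal)   # sparse goal reward R_G(s, a, s', f(p))
        return r_g + shaping_fn(s, plan, r_g)      # add the plan-based term F_FV
    return reward
```

An episode of the plan-conditioned MDP is then rolled out with a fixed plan, and the shaped reward is queried at every transition.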
4.3 Plan Replay Strategy In order to efficiently learn a universal plan-conditioned L2E policy, the reward for experienced episodes is evaluated with respect to many different plans. In HER (Andrychowicz et al., 2017), it is assumed that each state s ∈ S can be assigned an achieved goal. Recorded episodes are then replayed with respect to goals that were achieved during the episode, i.e., the recorded transitions are re-evaluated with respect to these goals. This ensures that the recorded transitions were successful in reaching the replayed goals, resulting in highly informative data. In L2E, transitions are replayed with respect to plans. However, there is no meaningful relation between each state s ∈ S and a unique "achieved plan". Therefore, the L2E agent replays transitions with past plans that were recorded at some point during training and were stored in its replay buffer D. The replay plans are chosen according to a plan replay strategy S_n. A plan replay strategy S_n provides a distribution over n replay plans, conditioned on the replay buffer D and the buffer containing the current episode D_ep (see algorithm 1 for a definition of D and D_ep). For replay, n plans are sampled according to this strategy {p_1, . . . , p_n} ∼ S_n(· | D_ep, D). We consider two types of replay strategies. Uniform replay S^uni_n samples n unique plans uniformly from the replay buffer D. Reward-biased replay S^bias_{n,m} first uniformly samples m unique plans from the replay buffer D, and then returns the n plans p_i that would have resulted in the highest sum of rewards Σ_{(s_k, a_k, s′_k) ∈ D_ep} R_P(s_k, a_k, s′_k, p_i) for the episode stored in D_ep. The idea behind using reward-biased replay is to bias the replay towards transitions resulting in higher reward.
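The two replay strategies can be sketched compactly. The snippet below assumes that an episode is stored as a list of (s, a, r, s', p) tuples and that reward_fn is the plan-conditioned reward R_P; both are assumptions made for illustration.

```python
import random

def uniform_replay(n, plan_buffer):
    """S^uni_n: sample n unique past plans uniformly from the replay buffer."""
    return random.sample(plan_buffer, k=min(n, len(plan_buffer)))

def reward_biased_replay(n, m, plan_buffer, episode, reward_fn):
    """S^bias_{n,m}: draw m candidate plans, keep the n with the highest replayed episode return."""
    candidates = random.sample(plan_buffer, k=min(m, len(plan_buffer)))

    def replayed_return(plan):
        # sum of plan-conditioned rewards the episode would have collected under this plan
        return sum(reward_fn(s, a, s_next, plan) for (s, a, _r, s_next, _p) in episode)

    return sorted(candidates, key=replayed_return, reverse=True)[:n]
```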
4.4 L2E Algorithm The L2E algorithm is outlined in algorithm 1. First, the corresponding plan-conditioned MDP M_P = C(M_G, Ω, F_FV) is constructed from the original goal-conditioned MDP M_G, the planner Ω and the shaping function F_FV as described in section 4.2. The agent acts in the environment trying to follow one randomly sampled plan per episode. The episode is then added to the replay buffer, along with data from episode replays with respect to other plans. These other plans are sampled from the replay buffer according to the replay strategy S_n. A generic off-policy RL algorithm is used to update the agent using the replay buffer. This process is repeated until convergence.

Algorithm 1: Learning to Execute (L2E)
Input: Goal-conditioned MDP M_G, approximate planner Ω, FV-RS shaping function F_FV, plan replay strategy S_n, off-policy RL algorithm A
Output: Universal plan-conditioned optimal policy π*_P for the corresponding plan-conditioned MDP M_P = C(M_G, Ω, F_FV)
1  Construct plan-conditioned MDP M_P = C(M_G, Ω, F_FV) as detailed in section 4.2;
2  Initialize replay buffer D ← {};
3  while π*_P not converged do
4      Initialize episode buffer D_ep ← {};
5      Sample initial state and goal (s_0, g) ∼ P_G;
6      Sample plan p ∼ Ω(·|s_0, g);
7      s ← s_0;
8      while episode not done do
9          Sample action a ∼ π*_P(· | s, p);
10         Sample transition s′ ∼ T(· | s, a);
11         Collect shaped reward r ← R_P(s, a, s′, p);
12         Add to episode buffer D_ep ← D_ep ∪ {(s, a, r, s′, p)};
13         s ← s′;
14     end
15     Add episode to replay buffer D ← D ∪ D_ep;
16     Get replay plans {p_1, . . . , p_n} ∼ S_n(· | D_ep, D);
17     for p_replay in p_1, . . . , p_n do
18         for (s, a, r, s′, p) in D_ep do
19             Calculate replay reward r_replay ← R_P(s, a, s′, p_replay);
20             Add replayed transition to buffer D ← D ∪ {(s, a, r_replay, s′, p_replay)};
21         end
22     end
23     Update policy using off-policy RL algorithm: π*_P ← A(π*_P, D)
24 end

We would like to emphasize that the L2E algorithm is agnostic to the exact type of off-policy RL algorithm. By combining state and plan into a "super state" for the purpose of passing the replay buffer to the off-policy RL algorithm, L2E can be interfaced with any off-the-shelf implementation. 5 Experiments We evaluate the L2E agent against several baselines using two simulated robotic manipulation tasks, namely a pushing task and an obstacle avoidance task. These two environments are chosen to compare different approaches on a variety of challenges. While the pushing task can be seen as an open-source version of the OpenAI Gym FetchPush-v1 task (Brockman et al., 2016), the obstacle task was chosen to represent robotic manipulation tasks with segmented state spaces. This allows us to discuss limitations of exploration in such environments as well. A video of the experiments is available in the supplementary material. The complete code to fully reproduce the figures in this paper from scratch can be found at github.com/ischubert/l2e and in the supplementary material. This includes the implementation of the environments, the implementation of the L2E agents and the baselines, and the specific code used for the experiments in this paper. The experiments section is structured as follows. In section 5.1 we discuss the environments and planners that are used in the experiments. We briefly introduce the plan embedding used for the L2E agent in section 5.2; additional experiments on this can be found in section A.5. In section 5.3 we introduce the baselines against which we compare our method. In section 5.4 we discuss our experimental results. Implementation details of the L2E agent are given in section A.1. 5.1 Environments and Planners Figure 1a and Figure 1c show renderings of the basic pushing environment and obstacle pushing environment, respectively. We use the open-source Nvidia PhysX engine (phy, 2021) to simulate a box of size 0.4 × 0.4 being pushed on a table of size 3 × 3 by a spherical end effector of radius 0.06. The 10D state space of both the goal-conditioned MDP M_G and the corresponding plan-conditioned MDP M_P consists of the 3D position of the end effector, the 3D position of the box, and the 4D quaternion for the orientation of the box. The agent controls the 3D velocity of the end effector. The maximum velocity in any direction is 0.1 per time step. The end effector movement resulting from the agent's actions is slightly distorted by random noise. In the obstacle pushing environment, the agent additionally has to evade an obstacle in the middle of the table. In the goal-conditioned MDP M_G, each goal is represented as a desired 2D box position on the table. The goal-dependent sparse reward function R_G is 1 if the box is within 0.1 of this desired goal, and 0 if not. The initial state-goal distribution P_G is uniform across the table for the non-colliding box position and goal position. The end effector is always initialized at the origin and the box is always initialized with a fixed orientation parallel to the table.
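As a concrete example of the sparse goal reward just described, a minimal sketch is given below. The index layout follows the state ordering listed above (end-effector position, box position, box orientation), but the exact indexing and helper name are assumptions for illustration rather than the released environment code.

```python
import numpy as np

def pushing_goal_reward(s, a, s_next, goal, threshold=0.1):
    """Sparse R_G for the pushing tasks: 1 if the box ends up within 0.1 of the desired 2D position."""
    box_xy = np.asarray(s_next)[3:5]   # box x, y, assuming [EE xyz, box xyz, box quaternion] ordering
    return float(np.linalg.norm(box_xy - np.asarray(goal)) <= threshold)
```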
For the basic pushing environment, we use a crude Manhattan-like planner Ω that deterministically outputs plans consisting of two separate contacts leading the initial state to the goal as shown in Figure 1a. For the obstacle pushing environment, plans consist of four contacts, corresponding to an additional intermediate box position which is chosen at random (see Figure 1c). Thus, the agent learns to execute an infinite number of plans for each combination of start and goal. Plans are represented as a trajectory of length 50 for the basic pushing environment and 100 for the obstacle pushing environment, consisting of 6D elements representing end effector position and box position. For the basic pushing environment, we additionally report results for less dense plans in section A.6. The orientation of the box is not specified in the plans. We construct the plan-conditioned MDP M_P as described in section 4.2, using this planner and the FV-RS function in equation 3. We use the width parameter σ = 0.5 throughout the experiments. 5.2 Plan Encoding The plans p are embedded before they are provided to the policy. A plan encoding is an injective function φ : P → C from the set of plans P to a latent space C. If P is a manifold in some high-dimensional space, the dimensionality of the latent space must be at least as high as the dimensionality of the manifold. Since P is task-dependent, the encoding will be task-dependent as well. For the basic pushing environment (Figure 1a), P is a 4D manifold (since the plans only depend on the initial and final 2D box positions). For the obstacle task (Figure 1c), P is a 6D manifold (since the plans depend on one intermediate box position as well). In the experiments discussed in the present work, we encode plans analytically using box positions as described above. We experimentally compare this with either learning the encoding or not using any encoding at all in section A.5. 5.3 Baselines We compare L2E against (1) direct plan execution, (2) plan execution with an inverse dynamics model, (3) using RL to reach subgoals, and (4) HER. We describe these baselines in detail in section A.2. 5.4 Results Both at training and evaluation time, we run episodes of length 250. For each method q (i.e., L2E and all baselines), we independently train A = 10 agents. After N environment transitions, we evaluate the agents. We reset to random initial positions and goals/plans and run the experiment until the goal is reached or until the episode ends. We repeat this process M = 30 times for each agent, and store whether the rollout was successful in reaching the goal. We denote the result of the m-th evaluation of the a-th agent for method q, evaluated after learning for N environment transitions, as F^(q)_{am}(N). As can be seen from the video given in the supplementary material, even though the L2E agent uses plan information as a guiding bias during exploration, and is encouraged to stay close to the plan by the shaping reward, it can also learn to deviate considerably from the plan if closely following it will be suboptimal for reaching the goal fast. For example, while the simple planner (see Figure 1a and Figure 1c) suggests re-establishing the contact during the sequence, the L2E agent almost always moves and turns the box using a single contact.
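Before turning to the per-environment results, the analytic plan encoding of section 5.2 can be sketched as follows. The plan-step ordering, the box-coordinate slice, and the choice of the middle plan step as the waypoint are assumptions made for this illustration, not the authors' implementation.

```python
import numpy as np

def encode_plan(plan, with_waypoint=False):
    """Analytic plan encoding phi(p) built from the box positions that define the plan.

    plan : (L, 6) array of plan steps, assumed ordered as [end-effector xyz, box xyz].
    Returns a 4D code (initial and final box xy) for the basic task, or a 6D code
    including an intermediate box position for the obstacle task.
    """
    start_xy = plan[0, 3:5]    # box position at the start of the plan
    goal_xy = plan[-1, 3:5]    # box position the plan leads to
    if not with_waypoint:
        return np.concatenate([start_xy, goal_xy])
    mid_xy = plan[len(plan) // 2, 3:5]   # crude stand-in for the planned intermediate position
    return np.concatenate([start_xy, mid_xy, goal_xy])
```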
5.4.1 Basic Pushing Environment To allow for a fair comparison, we spent a considerable amount of effort to optimize the HER replay strategy as well as the L2E strategy. Details on this are given in section A.4. The results for the pushing setup are summarized in Figure 1b. We observe that both L2E versions outperform all baselines in terms of asymptotic performance. L2E with biased replay strategy S^bias_{10,1000} exhibits a high sample efficiency especially in the beginning, resulting in success rates significantly higher than 50% after 4000 episode rollouts or 1 million time steps. Directly executing the plan results in very low success rates of significantly less than 20% on average. Executing the plan with an inverse model (IM) still shows significantly worse long-term performance than the RL methods. HER results in better policies than the IM baselines, but is relatively data-hungry. This can be improved slightly if the HER agent is only used to reach subgoals given by the planner. Pushing is a challenging interaction that requires reasoning for several time steps ahead. A typical failure mode of the IM baseline (see also videos) is that the box moves away from the intended trajectory too much, so that the agent is not able to correct for it within one time step. In contrast, the L2E agent learns to deviate from the planned trajectory if this is required to reach the goal. We find that L2E, combining a model-based planner and a universal plan-conditioned policy, outperforms our baselines that are pure planning or pure learning approaches. In addition, L2E outperforms the two baselines that also combine learning and planning. 5.4.2 Obstacle Pushing Environment L2E performs significantly better than the pure learning HER baselines, the pure planning baseline ("Plan"), and the "Planned Subgoals + RL" baseline. While using an inverse model is initially more efficient, L2E achieves significantly better results if given enough data. Comparing the basic pushing environment (section 5.4.1) to the obstacle environment, L2E learns slower in the latter. This is in part due to the higher dimensionality of the latent space of plan encodings (see also section 5.2), posing a more challenging learning problem to the L2E agent. In contrast, the "Plan+IM" baseline is independent of the size of the plan space, and performs comparably to the experimental setting in the original version. The obstacle in the middle segments the state space into two parts. In order to move from one side to the other, an agent already has to be able to reason about long-term results of its actions. As evidenced by the results for HER, this poses a significant challenge for pure RL. Incorporating planner knowledge helps the agent to overcome this chicken-and-egg problem. 6 Discussion Learning plan-dependent policies as opposed to goal-dependent policies has the additional advantage that the former can learn to execute multiple plans that lead from the same initial state to the same goal, as shown in the obstacle environment. Thus, the policy learns multiple strategies to achieve the same outcome. In principle, this allows it to adapt to changed scenarios where some of these strategies become infeasible. If, e.g., the environment changes, it suffices to only update the planner's crude model of the environment so that it creates plans that are feasible again. These can then be directly fed into the policy without retraining. We explore this possibility in section A.3, using a simple 2D maze environment with moving obstacles.
We find that the plan-conditioned L2E policy consistently achieves 90% success rate in this quickly changing environment, while the goal-conditioned HER policy does not improve beyond 60% success rate. We used rather simple plans to support the RL agent during training, and demonstrated that these are already sufficient to significantly speed up learning in our experiments. In fact we demonstrate in section A.6 that in the basic pushing example, the L2E agent is very robust against plans of even lower quality. Using simple plans enabled us to use an analytical encoding; for very complex scenarios it might be beneficial to learn the encoding using an auxiliary objective (see, e.g., Co-Reyes et al. (2018)). We present results on using a variational autoencoder (VAE) in section A.5. The use of FV-RS biases the RL agent towards following the plan. While it was shown in the experiments that the RL agent can learn to deviate from the plan, plans that are globally misleading can act as a distraction to the agent. In the present work, it is assumed that plans can be used to guide the agent during learning, increasing sample efficiency. Independently of the specific method used to achieve this, misleading plans will always break this assumption. Comparing the basic pushing environment to the obstacle pushing environment, the amount of data needed for learning a plan-conditioned policy clearly depends on the size of the plan spaces that are considered. For very large plan spaces, more data will be needed to master the task. Still, including planner information into the learning process makes a decisive difference, as demonstrated by the relative performance of L2E and HER in the obstacle example. While SAC was used for the experiments in section 5, L2E can be used in combination with any off-policy RL algorithm. L2E reformulates a goal-conditioned MDP as a plan-conditioned MDP, and provides a replay strategy to efficiently solve the latter. It is agnostic to how this data is then used by the RL agent. The specific FV-RS shaping function used in this work applies to MDPs with sparse rewards. We focused on this since sparse rewards are common in robotic manipulation. In addition, they often present notoriously hard exploration tasks, making external plan-based information as used by L2E particularly useful. However, FV-RS in general is not restricted to sparse-reward settings, and by using a different shaping function, L2E could be applied in other settings as well. Apart from FV-RS, there are alternative schemes of reward shaping such as potential-based reward shaping (PB-RS) Ng et al. (1999). In principle, these could also be used to increase the sample efficiency of the RL agent. We chose FV-RS for two reasons. First, in the original paper Schubert et al. (2021), it was demonstrated that FV-RS leads to significantly higher sample efficiency than PB-RS. Second, since PB-RS leaves the optimal policy invariant, the behavior of the fully converged policy trained with PB-RS will only be goal-dependent, and not depend on the rest of the plan. The original HER paper (Andrychowicz et al., 2017) considers the use of a simple form of reward shaping in combination with HER as well. It is found that reward shaping dramatically reduces the performance of HER in a robotic pushing task. In the present work, we show in contrast that including plan information using FV-RS shaping improves the performance of RL in a similar task. 
A possible explanation that reconciles these seemingly contradictory results is already offered by Andrychowicz et al. (2017): While simple domain-agnostic shaping functions can be a distraction for the RL agent, domain-specific reward shaping functions can be beneficial. This view is supported, e.g., by similar results by Popov et al. (2017). Andrychowicz et al. (2017) state, however, that "designing such shaped rewards requires a lot of domain knowledge". In this context, one could view L2E as an automated way to extract such domain-specific knowledge from model-based planners and make it available. We specifically believe that L2E can be useful in robotic manipulation tasks, where domain knowledge is in fact readily available in many cases. Here, L2E offers a way to exploit this. 7 Conclusion We introduced L2E, an algorithm that links RL and model-based planning using FV-RS. RL generally results in well-performing policies but needs large amounts of data, while model-based planning is data-efficient but does not always result in successful policies. By combining the two, L2E seeks to exploit the strengths of both approaches. We demonstrated that L2E in fact shows both higher sample efficiency when compared to purely model-free RL, and higher success rates when compared to executing plans of a model-based planner. In addition, L2E also outperformed baseline approaches that combine learning and planning in our experiments. Acknowledgments and Disclosure of Funding The authors would like to thank Valentin N Hartmann for stimulating discussions. The research has been supported by the International Max-Planck Research School for Intelligent Systems (IMPRS-IS), and by the German Research Foundation (DFG) under Germany's Excellence Strategy EXC 2120/1–390831618 "IntCDC" and EXC 2002/1–390523135 "Science of Intelligence".
1. What is the main contribution of the paper regarding sample complexity in reinforcement learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental evaluation? 3. How does the reviewer assess the significance and novelty of the paper's ideas compared to prior works in planning and learning? 4. What are some concerns or suggestions the reviewer has regarding the presentation and explanation of the technical content, such as Equation 3 and the use of notation?
Summary Of The Paper Review
Summary Of The Paper This paper addresses the problem of sample complexity in reinforcement learning. The approach is to drive down the sample complexity using reward shaping, where the reward shaping function is given by a planner. The paper shows how to use final-volume-preserving reward shaping (FV-RS) to convert a plan into a reward shaping function. The approach is evaluated on a simple pushing domain against planning in two forms, and Hindsight Experience Replay. The L2E approach proposed here outperforms the comparison algorithms. Review Overall, I like the idea of this paper and would argue for its acceptance. The technical ideas are strong, and the paper is (mostly) well-written. The primary limitation of this work is the experimental evaluation, especially in an increasingly crowded field of prior work combining planning and learning. It is not clear to me that the most relevant baseline is HER. There are a number of more relevant works, such as PRM-RL (Faust et al., 2018) and Hoel et al. (2020), both of which explicitly combine planning and learning for exactly the same purpose as L2E, and there are other references as well. I think the real novelty is the use of FV-RS as the shaping function, but the other papers in this line of research need to be considered. I would recommend substantially strengthening the experimental evaluation. The single domain of box pushing is not particularly compelling, and does not show off the strength of incorporating a planner in the reward shaping function; what would be better would be problems with a much longer horizon and places where a significant deviation from a greedy strategy would be important, but with stochasticity to be avoided. (BTW: the plan shown in figure 1 was very hard to interpret. I now realize that the reason the end effector is changing height is because it is going over the box to change sides to push, but it took quite a while to figure that out. Some more explanatory text would help considerably. Maybe also draw the box at different points along the trajectory with a significant alpha channel value.) Figure 1b was not a particularly effective way of communicating the results. It was not clear to me if these are the averages and variances of the 10 agents on one problem, or the averages and variances of the 10 agents times 30 random start/goal states per agent. Even for these experiments, I would recommend running L2E and HER longer; it is not clear to me that L2E S^bias_{10,1000} has converged, especially when looking at Figure 1d. I assume the loss in performance for 10 plans from 1000 samples is because it has experienced fewer transitions. And if indeed L2E S^bias_{10,1000} has converged, I would want to know why it is not exploring further. Equation 3 is very important, and it is confusing and feels somewhat arbitrary; the paper relies too much on explication from the FV-RS paper, including replicating notation without explanation. The fact that f(p) is the final state of the plan is never stated, although the fact that f(p) is the goal on the following page allows the reader to work backwards. p_{k(s)} should be explained, as well as the fact that the subscript is indexing the plan state. Even with the FV-RS paper in hand, the justification for the (k(s) + 1)/L term is unclear to me. I would encourage the authors to add explanatory text for the terms in the FV-RS reward shaping equation.
NIPS
Title Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics Abstract Applications of Reinforcement Learning (RL) in robotics are often limited by high data demand. On the other hand, approximate models are readily available in many robotics scenarios, making model-based approaches like planning a data-efficient alternative. Still, the performance of these methods suffers if the model is imprecise or wrong. In this sense, the respective strengths and weaknesses of RL and modelbased planners are complementary. In the present work, we investigate how both approaches can be integrated into one framework that combines their strengths. We introduce Learning to Execute (L2E), which leverages information contained in approximate plans to learn universal policies that are conditioned on plans. In our robotic manipulation experiments, L2E exhibits increased performance when compared to pure RL, pure planning, or baseline methods combining learning and planning. 1 Introduction A central goal of robotics research is to design intelligent machines that can solve arbitrary and formerly unseen tasks while interacting with the physical world. Reinforcement Learning (RL) (Sutton & Barto, 2018) is a generic framework to automatically learn such intelligent behavior with little human engineering. Still, teaching an RL agent to actually exhibit general-purpose problem-solving behavior is, while possible in principle, prohibitive in practice. This is due to practical restrictions including limited computational resources and limited data availability. The latter limitation is particularly dramatic in robotics, where interaction with the physical world is costly. On the other hand, for many robotics scenarios, there is a rough model of the environment available. This can be exploited, e.g., using model-based planning approaches (Mordatch et al., 2012; Kuindersma et al., 2016; Toussaint et al., 2018). Model-based planners potentially offer a more data-efficient way to reason about an agent’s interaction with the world. Model-based planners have been used in many areas of robotics, such as for indoor and aerial robots (Faust et al., 2018), visual manipulation (Jeong et al., 2020), or humanoid walking (Mordatch et al., 2015). Still, if the model does not account for stochasticity or contains systematic errors, directly following the resulting plan will not be successful. The present work starts from the observation that both pure RL approaches and pure planning approaches have strengths and limitations that are fairly complementary. RL makes no assumptions about the environment but is data-hungry, and model-based planning generally implies model simplifications but is data-efficient. For robotic manipulation tasks, it seems natural to try and integrate both approaches into one framework that combines the strengths of both. In the present work we seek to add an additional perspective to the open question of how this can be achieved best. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). We introduce a novel approach that we call Learning to Execute (L2E). Our approach translates sparsereward goal-conditioned Markov Decision Processes (MDPs) (Bellman, 1957) into plan-conditioned MDPs. L2E exploits a simple planning module to create crude plans, which are then used to teach any off-the-shelf off-policy RL agent to execute them. 
L2E makes use of final-volume-preserving reward shaping (FV-RS) (Schubert et al., 2021), allowing it to train a universal plan-conditioned policy with high data efficiency. The contributions of this work are: • We introduce L2E, which uses RL to efficiently learn to execute approximate plans from a model-based planner in a plan-conditioned MDP. We describe formally how FV-RS can be used as a tool to construct such plan-conditioned MDPs from goal-conditioned MDPs. • We introduce plan replay strategies to efficiently learn universal plan-conditioned policies. • We demonstrate, using robotic pushing problems, that L2E exhibits increased performance when compared to pure RL methods, pure planning methods, or other methods combining learning and planning. We discuss work related to ours in section 2, explain background and notation in section 3, and introduce our method in section 4. We present our experimental results in section 5, discuss limitations in section 6, and conclude with section 7. 2 Related Work 2.1 Goal-Conditioned Policies Goal-conditioned or universal policies (Kaelbling, 1993; Moore et al., 1999; Foster & Dayan, 2002; Schaul et al., 2015; Veeriah et al., 2018; Nasiriany et al., 2019) not only act based on the state the agent finds itself in, but also based on the goal it tries to achieve. Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) is a particularly efficient way to learn universal policies. Here, achieved outcomes of the agent’s interaction with the environment are interpreted as desired goals in order to improve sample efficiency in sparse-reward settings. L2E draws great inspiration from this work, but in contrast to HER, L2E learns a universal planconditioned policy. This means that the L2E policy in general can execute multiple plans leading to the same goal. Although this presents a more complex learning task, we show in our experiments that by incorporating plan information using plan-based FV-RS, the sample efficiency of L2E is significantly improved over HER. 2.2 Plan- and Trajectory-Conditioned Policies Plan-conditioned policies create behavior that depends on plans that are input to the decision making. Lynch et al. (2020) learn plans and how to execute them from data generated by a human “playing” with a teleoperated robot. The resulting policy is conditional on a latent space of encoded plans. Our work differs from this paradigm in that human interaction is not needed. Both Lynch et al. (2020) and Co-Reyes et al. (2018) directly imitate a planned trajectory by maximizing its likelihood. In contrast, the plans used in the present work are not directly imitated. Using FV-RS guarantees that the fully trained L2E agent will reach its goal after finite time even if the plan provided is wrong. Guo et al. (2019) learn trajectory-conditioned policies to self-imitate diverse (optimal and suboptimal) trajectories from the agent’s past experience. We instead assume in this work that the plan is provided by an external model-based planner. This allows the L2E agent to use external information during training that could not be concluded from its own experience yet. 2.3 Learning from Demonstration L2E learns how to execute plans in order to achieve different tasks. In this sense, it is related to Learning from Demonstration (LfD) techniques that exploit demonstrations when learning a task. 
Existing work (Argall et al., 2009; Hussein et al., 2017; Ravichandar et al., 2020) differs significantly both in how the demonstration examples are collected and how the policy is then derived. Taylor et al. (2011) derive an approximate policy from human demonstration, and then use this to bias the exploration in a final RL stage. Hester et al. (2017) train a policy on both expert data and collected data, combining supervised and temporal difference losses. Salimans & Chen (2018) use a single demonstration as starting points to which the RL agent is reset at the beginning of each episode. Peng et al. (2018) use motion capture data to guide exploration by rewarding the RL agent to imitate it. In Cabi et al. (2019), demonstrations are combined with reward sketching done by a human. Interactive human feedback during training is another source of information used in Thomaz et al. (2006); Knox & Stone (2010). Kinose & Taniguchi (2020) integrate RL and demonstrations using generative adversarial imitation learning by interpreting the discriminator loss as an additional optimality signal in multi-objective RL. While these LfD approaches are related to L2E in that external information is used to increase RL efficiency, it is in contrast assumed in L2E that this external information is provided by a planner. 2.4 Combining Learning with Planning Similarly to demonstrations, external plans can be exploited to facilitate learning. Faust et al. (2018) connect short-range goal-conditioned navigation policies into complex navigation tasks using probabilistic roadmaps. In contrast, L2E learns a single plan-conditioned policy for both short-term and long-term decision making. Sekar et al. (2020) use planning in a learned model to optimize for expected future novelty. In contrast, L2E encourages the agent to stay close to the planned behavior. Zhang et al. (2016) use model-predictive control to generate control policies that are then used to regularize the RL agent. In L2E, no such intermediate control policy is created, and a reward signal is computed directly from the plan. In Guided Policy Search (Levine & Koltun, 2013), differential dynamic programming is used to create informative guiding distributions from a transition model for policy search. These distributions are used to directly regularize the policy in a supervised fashion, while L2E makes use of FV-RS as a mechanism to interface planning and RL. Christiano et al. (2016) learn an inverse dynamics model to transfer knowledge from a policy in the source domain to a policy in the target domain. The idea of integrating model-based and model-free RL has also been studied independently of planning (Pong et al., 2018; Janner et al., 2019). In contrast, in L2E the model is translated by a planner into long-horizon plans. In the experiments section, we compare L2E against two representative examples from the literature mentioned above. The first is using a plan to identify subgoals that are then pursued by an RL agent, as done in Faust et al. (2018). The second is executing the plan using an inverse model, similar to the approach in Christiano et al. (2016). These two baselines and L2E can be seen as representatives of a continuum: Christiano et al. (2016) follow the plan very closely, trying to imitate the planner at each time step. Faust et al. (2018) relax this requirement and only train the agent to reach intermediate goals. 
Finally, in L2E, the agent is free to deviate arbitrarily from the plan (although it is biased to stay close), as long as it reaches the goal. We find that L2E results in significantly higher success rates when compared against both baselines. 3 Background 3.1 Goal-Conditioned MDPs and RL We consider settings that can be described as discrete-time MDPs M = 〈S,A, T, γ,R, PS〉. S and A denote the set of all possible states and actions, respectively. T : S× A× S→ R+0 is the transition probability (density); T (s′|s, a) is the probability of the next state being s′ if the current state is s and a is chosen as the action. The agent receives a real-valued reward R(s, a, s′) after each transition. Immediate and future rewards are traded off by the discount factor γ ∈ [0, 1). PS : S→ R+0 is the initial state distribution. The goal of RL is to learn an optimal policy π∗ : S×A→ R+0 that maximizes the expected discounted return. In other words, RL algorithms generally try to find π∗ = argmax π ∞∑ t=0 γtEst+1∼T (·|st,at), at∼π(·|st),s0∼PS [R(st, at, st+1)] (1) from collected transition and reward data D = {(si, ai, ri, s′i)}ni=0. More specifically for this work, we are interested in applications in robotics, where both S and A are typically continuous. There exists a wide range of algorithms for this case. For the experiments in this paper, soft actor-critic (SAC) (Haarnoja et al., 2018) is used. In a goal-conditioned MDP MG = 〈S,G,A, T, γ,RG, PS , PG〉, the reward function RG(s, a, s′, g) has an additional input parameter, the goal g ∈ G. Here, PG : G→ R+0 is the distribution of goals. The optimal goal-conditioned policy π∗G acts optimally with respect to any of these goals. 3.2 Final-Volume-Preserving Reward Shaping We use approximate plans as an additional source of information for the RL agent. For sparsereward goal-driven MDPs, FV-RS (Schubert et al., 2021) offers an efficient way to include additional information by adding an additional term R(s, a, s′)→ RFV(s, a, s′) = R(s, a, s′) + FFV(s, a, s′) (2) to the reward function, accelerating exploration. In general, the optimal policy π∗ corresponding to the original MDP and the optimal policy π∗FV corresponding to the shaped MDP will be different. FV-RS however restricts the allowed modifications FFV(s, a, s′) in such a way that after finite time, the optimally controlled agent ends up in a subset of the volume in which it would have ended up without shaping. As a result, external information can be made available for the RL algorithm without changing the long-term behavior of the resulting optimal policy. Specifically in the present work, we consider goal-conditioned MDPs in which the goal-conditioned reward RG of the underlying MDP is either 1, if the goal is reached, or 0 everywhere else. We further assume that the L2E agent is given an external plan p, represented as an intended trajectory p = (p1, p2, . . . ) in state space. We intend to reward the agent for staying close to the plan, and for advancing towards the goal along the plan. A natural way of achieving this is to use a plan-based shaping reward (Schubert et al., 2021). The single-plan shaping function introduced there can be generalized to the multi-plan setting in the present work in the following way: FFV(s, a, s ′, p) = 1−RG(s, a, s′, f(p)) 2 k(s) + 1 L exp ( − d2(s, pk(s)) 2σ2 ) (3) Here, f(p) denotes the goal that p leads to, σ ∈ (0,∞), k(s) = argmini(d(pi, s)), and d(·, ·) is a measure of distance in state space. 
For the pushing experiments discussed in this work, d(·, ·) is the euclidean distance in state space ignoring the coordinates corresponding to the orientation of the box. The first term in eq. (3) ensures that the assigned shaping reward FFV is always smaller than the maximum environment reward (at most 1/2), and that if the binary environment reward is 1, no shaping reward is assigned. The second term rewards the agent for advancing towards the goal along the plan, and the third term rewards the agent for staying close to the plan. For a sufficiently high discount factor γ, FFV is final-volume preserving, meaning that the long-term behavior of the optimal agent is unchanged. 4 Learning to Execute L2E considers goal-conditioned MDPs MG (see section 3.1), for which an approximate planner Ω is available. L2E uses FV-RS to construct a corresponding plan-conditioned MDP MP from a goal-conditioned MDP MG and a planner Ω. In the following sections 4.1 to 4.3, we introduce our notion of a plan-conditioned MDP MP and describe the components of the L2E algorithm. We then summarize the L2E algorithm in section 4.4. 4.1 Plan-Conditioned MDPs Plans are provided by a model-based planner, which can be described as a distribution Ω : P×S×G→ R+0 over a set of plans P. Given an initial state and a goal, Ω(p|s, g) is the probability that the planner outputs p as a possible plan of how to achieve g from state s. The distinction between goals and plans is that plans are conditional on both a goal and an initial state. Therefore, both initial state and goal can be inferred using the plan only. In a plan-conditioned MDP MP = 〈S,P,A, T, γ,RP , PS , PP 〉, a plan p ∈ P is given to the reward function RP (s, a, s′, p) as an additional input parameter. PP : P→ R+0 is the distribution of plans. The optimal plan-conditioned policy π∗P behaves optimally with respect to any of these plans, creating a distribution π∗P (· | s, p) over actions that is conditional on the current state and the current plan. 4.2 Constructing the Plan-Conditioned MDP We use FV-RS to shape the reward function RG of the original goal-conditioned MDP MG = 〈S,G,A, T, γ,RG, PS , PG〉 with a plan-dependent term FFV(s, a, s′, p) (see equation 3) RG(s, a, s ′, g)→ RFVG (s, a, s′, g, p) = RG(s, a, s′, g) + FFV(s, a, s′, p) . (4) We call g = f(p) the goal for which the plan p was created. If a planner Ω should be such that g can not be recovered from the resulting plan p ∼ Ω(.|s, g), we can always construct a new p̃ ∼ Ω̃ such that p̃ = [p, g]. Since now g can be recovered from p̃ deterministically, we can assume that f always exists without loss of generality. We can interpret the shaped reward function RP (s, a, s ′, p) = RFVG (s, a, s ′, f(p), p) (5) as a plan-conditioned reward function of a plan-conditioned MDP MP = 〈S,G,A, T, γ,RP , PP 〉. The distribution over initial states and plans PP of MP is still missing, and can be constructed as PP (s, p) = ∫ Ω(p|s, g)PS(s)PG(g)dg . (6) In practice, PP can be sampled from by first sampling s ∼ PS , g ∼ PG and then subsequently sampling p ∼ Ω(·|s, g). Thus, we have constructed a plan-conditioned MDP MP by combining a goal-conditioned MDP MG with an approximate planner Ω and a FV-RS shaping function FFV. For reference later in this paper, we write as a shorthand notation MP = C(MG,Ω, FFV). Furthermore, we will refer to MP as the corresponding plan-conditioned MDP to MG and vice versa. In contrast to potential-based reward shaping (Ng et al., 1999), FV-RS does not leave the optimal policy invariant. 
4.3 Plan Replay Strategy

In order to efficiently learn a universal plan-conditioned L2E policy, the reward for experienced episodes is evaluated with respect to many different plans. In HER (Andrychowicz et al., 2017), it is assumed that each state s ∈ S can be assigned an achieved goal. Recorded episodes are then replayed with respect to goals that were achieved during the episode, i.e., the recorded transitions are re-evaluated with respect to these goals. This ensures that the recorded transitions were successful in reaching the replayed goals, resulting in highly informative data. In L2E, transitions are replayed with respect to plans. However, there is no meaningful relation between each state s ∈ S and a unique “achieved plan”. Therefore, the L2E agent replays transitions with past plans that were recorded at some point during training and were stored in its replay buffer D. The replay plans are chosen according to a plan replay strategy Sn. A plan replay strategy Sn provides a distribution over n replay plans, conditioned on the replay buffer D and the buffer Dep containing the current episode (see algorithm 1 for a definition of D and Dep). For replay, n plans are sampled according to this strategy, {p1, . . . , pn} ∼ Sn(· | Dep, D).

We consider two types of replay strategies. Uniform replay S^uni_n samples n unique plans uniformly from the replay buffer D. Reward-biased replay S^bias_{n,m} first uniformly samples m unique plans from the replay buffer D, and then returns the n plans pi that would have resulted in the highest sum of rewards ∑_{(sk, ak, s′k) ∈ Dep} RP(sk, ak, s′k, pi) for the episode stored in Dep. The idea behind using reward-biased replay is to bias the replay towards transitions resulting in higher reward.
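Both replay strategies can be summarized in a few lines. The sketch below assumes that an episode is stored as a list of (s, a, r, s′, p) tuples and that reward_fn(s, a, s′, p) evaluates the plan-conditioned reward RP; the names and data layout are illustrative rather than taken from the released code.

```python
import random

def uniform_replay(replay_plans, n):
    # S^uni_n: sample n unique past plans uniformly from the replay buffer
    return random.sample(replay_plans, n)

def reward_biased_replay(replay_plans, episode, reward_fn, n, m):
    # S^bias_{n,m}: sample m candidate plans, keep the n plans under which
    # the current episode would have collected the highest summed reward
    candidates = random.sample(replay_plans, m)

    def episode_return(plan):
        return sum(reward_fn(s, a, s_next, plan) for (s, a, _, s_next, _) in episode)

    return sorted(candidates, key=episode_return, reverse=True)[:n]
```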
4.4 L2E Algorithm

The L2E algorithm is outlined in algorithm 1. First, the corresponding plan-conditioned MDP MP = C(MG, Ω, FFV) is constructed from the original goal-conditioned MDP MG, the planner Ω, and the shaping function FFV as described in section 4.2. The agent acts in the environment trying to follow one randomly sampled plan per episode. The episode is then added to the replay buffer, along with data from episode replays with respect to other plans. These other plans are sampled from the replay buffer according to the replay strategy Sn. A generic off-policy RL algorithm is used to update the agent using the replay buffer. This process is repeated until convergence.

Algorithm 1: Learning to Execute (L2E)
Input: Goal-conditioned MDP MG, approximate planner Ω, FV-RS shaping function FFV, plan replay strategy Sn, off-policy RL algorithm A
Output: Universal plan-conditioned optimal policy π∗P for the corresponding plan-conditioned MDP MP = C(MG, Ω, FFV)
1  Construct the plan-conditioned MDP MP = C(MG, Ω, FFV) as detailed in section 4.2;
2  Initialize the replay buffer D ← {};
3  while π∗P not converged do
4      Initialize the episode buffer Dep ← {};
5      Sample initial state and goal (s0, g) ∼ PG;
6      Sample plan p ∼ Ω(·|s0, g);
7      s ← s0;
8      while episode not done do
9          Sample action a ∼ π∗P(· | s, p);
10         Sample transition s′ ∼ T(· | s, a);
11         Collect shaped reward r ← RP(s, a, s′, p);
12         Add to episode buffer: Dep ← Dep ∪ {(s, a, r, s′, p)};
13         s ← s′;
14     end
15     Add episode to replay buffer: D ← D ∪ Dep;
16     Get replay plans {p1, . . . , pn} ∼ Sn(· | Dep, D);
17     for preplay in p1, . . . , pn do
18         for (s, a, r, s′, p) in Dep do
19             Calculate replay reward rreplay ← RP(s, a, s′, preplay);
20             Add replayed transition to buffer: D ← D ∪ {(s, a, rreplay, s′, preplay)};
21         end
22     end
23     Update the policy using the off-policy RL algorithm: π∗P ← A(π∗P, D);
24 end

We would like to emphasize that the L2E algorithm is agnostic to the exact type of off-policy RL algorithm. By combining state and plan into a “super state” for the purpose of passing the replay buffer to the off-policy RL algorithm, L2E can be interfaced with any off-the-shelf implementation.

5 Experiments

We evaluate the L2E agent against several baselines using two simulated robotic manipulation tasks, namely a pushing task and an obstacle avoidance task. These two environments are chosen to compare the different approaches on a variety of challenges. While the pushing task can be seen as an open-source version of the OpenAI Gym FetchPush-v1 task (Brockman et al., 2016), the obstacle task was chosen to represent robotic manipulation tasks with segmented state spaces. This also allows us to discuss the limitations of exploration in such environments. A video of the experiments is available in the supplementary material. The complete code to fully reproduce the figures in this paper from scratch can be found at github.com/ischubert/l2e and in the supplementary material. This includes the implementation of the environments, the implementation of the L2E agents and the baselines, and the specific code used for the experiments in this paper.

The experiments section is structured as follows. In section 5.1 we discuss the environments and planners that are used in the experiments. In section 5.2 we briefly introduce the plan embedding used for the L2E agent; additional experiments on this can be found in section A.5. In section 5.3 we introduce the baselines against which we compare our method. In section 5.4 we discuss our experimental results. Implementation details of the L2E agent are given in section A.1.

5.1 Environments and Planners

Figure 1a and Figure 1c show renderings of the basic pushing environment and the obstacle pushing environment, respectively. We use the open-source Nvidia PhysX engine (phy, 2021) to simulate a box of size 0.4 × 0.4 being pushed on a table of size 3 × 3 by a spherical end effector of radius 0.06. The 10D state space of both the goal-conditioned MDP MG and the corresponding plan-conditioned MDP MP consists of the 3D position of the end effector, the 3D position of the box, and the 4D quaternion for the orientation of the box. The agent controls the 3D velocity of the end effector. The maximum velocity in any direction is 0.1 per time step. The end effector movement resulting from the agent’s actions is slightly distorted by random noise. In the obstacle pushing environment, the agent additionally has to evade an obstacle in the middle of the table. In the goal-conditioned MDP MG, each goal is represented as a desired 2D box position on the table. The goal-dependent sparse reward function RG is 1 if the box is within 0.1 of this desired goal, and 0 otherwise. The initial state-goal distribution PG is uniform across the table over non-colliding box positions and goal positions. The end effector is always initialized at the origin, and the box is always initialized with a fixed orientation parallel to the table.

For the basic pushing environment, we use a crude Manhattan-like planner Ω that deterministically outputs plans consisting of two separate contacts leading from the initial state to the goal, as shown in Figure 1a. For the obstacle pushing environment, plans consist of four contacts, corresponding to an additional intermediate box position which is chosen at random (see Figure 1c). Thus, the agent learns to execute an infinite number of plans for each combination of start and goal. Plans are represented as a trajectory of length 50 for the basic pushing environment and 100 for the obstacle pushing environment, consisting of 6D elements representing end effector position and box position. For the basic pushing environment, we additionally report results for less dense plans in section A.6. The orientation of the box is not specified in the plans. We construct the plan-conditioned MDP MP as described in section 4.2, using this planner and the FV-RS function in equation 3. We use the width parameter σ = 0.5 throughout the experiments.
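As a small illustration of the reward structure just described, the sparse goal-conditioned reward for the pushing tasks could look as follows. The state layout (end-effector position first, box position second) and the helper name are assumptions made for this sketch.

```python
import numpy as np

GOAL_RADIUS = 0.1  # the box has to end up within 0.1 of the desired 2D position

def box_xy(state):
    # Illustrative indexing into the 10D state: entries 3 and 4 as the box's table position
    return state[3:5]

def sparse_goal_reward(s_next, goal_xy):
    # R_G(s, a, s', g) for the pushing tasks: 1 if the box has reached the goal, 0 otherwise
    return float(np.linalg.norm(box_xy(s_next) - goal_xy) <= GOAL_RADIUS)
```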
5.2 Plan Encoding

The plans p are embedded before they are provided to the policy. A plan encoding is an injective function φ : P → C from the set of plans P to a latent space C. If P is a manifold in some high-dimensional space, the dimensionality of the latent space must be at least as high as the dimensionality of the manifold. Since P is task-dependent, the encoding will be task-dependent as well. For the basic pushing environment (Figure 1a), P is a 4D manifold (since the plans only depend on the initial and final 2D box positions). For the obstacle task (Figure 1c), P is a 6D manifold (since the plans depend on one intermediate box position as well). In the experiments discussed in the present work, we encode plans analytically using box positions as described above. We experimentally compare this with either learning the encoding or not using any encoding at all in section A.5.
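A minimal sketch of such an analytic encoding is given below. It assumes plans are arrays of 6D elements (3D end-effector position followed by 3D box position), as described in section 5.1; the slicing indices, the choice of the mid-plan element as the intermediate waypoint, and the function name are simplifying assumptions made for illustration.

```python
import numpy as np

def encode_plan(plan, use_intermediate=False):
    """Analytic plan encoding phi(p): map a plan to the 2D box positions that determine it
    (initial and final for the basic task, plus one intermediate waypoint for the obstacle task)."""
    box_positions = plan[:, 3:5]                          # 2D box position along the plan
    keypoints = [box_positions[0], box_positions[-1]]     # initial and final box position (4D code)
    if use_intermediate:
        keypoints.insert(1, box_positions[len(plan) // 2])  # intermediate waypoint (6D code)
    return np.concatenate(keypoints)
```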
5.3 Baselines

We compare L2E against (1) direct plan execution, (2) plan execution with an inverse dynamics model, (3) using RL to reach subgoals given by the planner, and (4) HER. We describe these baselines in detail in section A.2.

5.4 Results

Both at training and evaluation time, we run episodes of length 250. For each method q (i.e., L2E and all baselines), we independently train A = 10 agents. After N environment transitions, we evaluate the agents. We reset to random initial positions and goals/plans and run the experiment until the goal is reached or until the episode ends. We repeat this process M = 30 times for each agent, and store whether the rollout was successful in reaching the goal. We denote the result of the m-th evaluation of the a-th agent for method q, evaluated after learning for N environment transitions, as F^(q)_am(N).

As can be seen from the video given in the supplementary material, even though the L2E agent uses plan information as a guiding bias during exploration, and is encouraged to stay close to the plan by the shaping reward, it can also learn to deviate considerably from the plan if closely following it would be suboptimal for reaching the goal quickly. For example, while the simple planner (see Figure 1a and Figure 1c) suggests re-establishing the contact during the sequence, the L2E agent almost always moves and turns the box using a single contact.

5.4.1 Basic Pushing Environment

To allow for a fair comparison, we spent a considerable amount of effort optimizing the HER replay strategy as well as the L2E replay strategy. Details on this are given in section A.4. The results for the pushing setup are summarized in Figure 1b. We observe that both L2E versions outperform all baselines in terms of asymptotic performance. L2E with the biased replay strategy S^bias_{10,1000} exhibits high sample efficiency especially in the beginning, resulting in success rates significantly higher than 50% after 4000 episode rollouts, or 1 million time steps. Directly executing the plan results in very low success rates of significantly less than 20% on average. Executing the plan with an inverse model (IM) still shows significantly worse long-term performance than the RL methods. HER results in better policies than the IM baselines, but is relatively data-hungry. This can be improved slightly if the HER agent is only used to reach subgoals given by the planner. Pushing is a challenging interaction that requires reasoning several time steps ahead. A typical failure mode of the IM baseline (see also the videos) is that the box moves away from the intended trajectory too much, so that the agent is not able to correct for it within one time step. In contrast, the L2E agent learns to deviate from the planned trajectory if this is required to reach the goal. We find that L2E, combining a model-based planner and a universal plan-conditioned policy, outperforms our baselines that are pure planning or pure learning approaches. In addition, L2E outperforms the two baselines that also combine learning and planning.

5.4.2 Obstacle Pushing Environment

L2E performs significantly better than the pure-learning HER baselines, the pure-planning baseline (“Plan”), and the “Planned Subgoals + RL” baseline. While using an inverse model is initially more efficient, L2E achieves significantly better results if given enough data. Comparing the basic pushing environment (section 5.4.1) to the obstacle environment, L2E learns more slowly in the latter. This is in part due to the higher dimensionality of the latent space of plan encodings (see also section 5.2), posing a more challenging learning problem to the L2E agent. In contrast, the “Plan+IM” baseline is independent of the size of the plan space and performs comparably to how it does in the basic pushing environment. The obstacle in the middle segments the state space into two parts. In order to move from one side to the other, an agent already has to be able to reason about the long-term results of its actions. As evidenced by the results for HER, this poses a significant challenge for pure RL. Incorporating planner knowledge helps the agent to overcome this chicken-and-egg problem.

6 Discussion

Learning plan-dependent policies as opposed to goal-dependent policies has the additional advantage that the former can learn to execute multiple plans that lead from the same initial state to the same goal, as shown in the obstacle environment. Thus, the policy learns multiple strategies to achieve the same outcome. In principle, this allows it to adapt to changed scenarios in which some of these strategies become infeasible. If, e.g., the environment changes, it suffices to update only the planner’s crude model of the environment so that it creates plans that are feasible again. These can then be fed directly into the policy without retraining. We explore this possibility in section A.3, using a simple 2D maze environment with moving obstacles.
We find that the plan-conditioned L2E policy consistently achieves a 90% success rate in this quickly changing environment, while the goal-conditioned HER policy does not improve beyond a 60% success rate. We used rather simple plans to support the RL agent during training, and demonstrated that these are already sufficient to significantly speed up learning in our experiments. In fact, we demonstrate in section A.6 that in the basic pushing example, the L2E agent is very robust against plans of even lower quality. Using simple plans enabled us to use an analytical encoding; for very complex scenarios it might be beneficial to learn the encoding using an auxiliary objective (see, e.g., Co-Reyes et al. (2018)). We present results on using a variational autoencoder (VAE) in section A.5. The use of FV-RS biases the RL agent towards following the plan. While the experiments showed that the RL agent can learn to deviate from the plan, plans that are globally misleading can act as a distraction to the agent. In the present work, it is assumed that plans can be used to guide the agent during learning, increasing sample efficiency. Independently of the specific method used to achieve this, misleading plans will always break this assumption. Comparing the basic pushing environment to the obstacle pushing environment, the amount of data needed for learning a plan-conditioned policy clearly depends on the size of the plan space that is considered. For very large plan spaces, more data will be needed to master the task. Still, including planner information in the learning process makes a decisive difference, as demonstrated by the relative performance of L2E and HER in the obstacle example. While SAC was used for the experiments in section 5, L2E can be used in combination with any off-policy RL algorithm. L2E reformulates a goal-conditioned MDP as a plan-conditioned MDP, and provides a replay strategy to efficiently solve the latter. It is agnostic to how this data is then used by the RL agent. The specific FV-RS shaping function used in this work applies to MDPs with sparse rewards. We focused on this setting since sparse rewards are common in robotic manipulation. In addition, they often present notoriously hard exploration tasks, making external plan-based information as used by L2E particularly useful. However, FV-RS in general is not restricted to sparse-reward settings, and by using a different shaping function, L2E could be applied in other settings as well. Apart from FV-RS, there are alternative schemes of reward shaping such as potential-based reward shaping (PB-RS) (Ng et al., 1999). In principle, these could also be used to increase the sample efficiency of the RL agent. We chose FV-RS for two reasons. First, in the original paper (Schubert et al., 2021), it was demonstrated that FV-RS leads to significantly higher sample efficiency than PB-RS. Second, since PB-RS leaves the optimal policy invariant, the behavior of the fully converged policy trained with PB-RS would only be goal-dependent, and would not depend on the rest of the plan. The original HER paper (Andrychowicz et al., 2017) also considers the use of a simple form of reward shaping in combination with HER. There, it is found that reward shaping dramatically reduces the performance of HER in a robotic pushing task. In the present work, we show in contrast that including plan information using FV-RS shaping improves the performance of RL in a similar task.
A possible explanation reconciling these seemingly contradictory results is already offered by Andrychowicz et al. (2017): while simple domain-agnostic shaping functions can be a distraction for the RL agent, domain-specific reward shaping functions can be beneficial. This view is supported, e.g., by similar results by Popov et al. (2017). However, Andrychowicz et al. (2017) note that “designing such shaped rewards requires a lot of domain knowledge”. In this context, one could view L2E as an automated way to extract such domain-specific knowledge from model-based planners and make it available to the RL agent. We specifically believe that L2E can be useful in robotic manipulation tasks, where domain knowledge is in fact readily available in many cases. Here, L2E offers a way to exploit this.

7 Conclusion

We introduced L2E, an algorithm that links RL and model-based planning using FV-RS. RL generally results in well-performing policies but needs large amounts of data, while model-based planning is data-efficient but does not always result in successful policies. By combining the two, L2E seeks to exploit the strengths of both approaches. We demonstrated that L2E indeed shows both higher sample efficiency when compared to purely model-free RL, and higher success rates when compared to executing the plans of a model-based planner. In addition, L2E also outperformed baseline approaches that combine learning and planning in our experiments.

Acknowledgments and Disclosure of Funding

The authors would like to thank Valentin N. Hartmann for stimulating discussions. The research has been supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS), and by the German Research Foundation (DFG) under Germany’s Excellence Strategy EXC 2120/1–390831618 “IntCDC” and EXC 2002/1–390523135 “Science of Intelligence”.
1. What is the focus of the paper in terms of its contribution to reinforcement learning? 2. How does the proposed approach differ from other pre-existing methods? 3. What are the strengths and weaknesses of the paper regarding its originality and significance? 4. Are there any concerns or limitations regarding the practical application of the proposed method?
Summary Of The Paper Review
Summary Of The Paper The paper proposes Learning to Execute (L2E), which can make use of a pre-designed planner in reinforcement learning (RL) in robotics. The authors extend the idea of final-volume-preserving reward shaping (FV-RS) and formulate plan-conditioned MDPs for designing L2E. The method was evaluated through a simulation experiment. Review The paper has certain originality regarding the idea of introducing FV-RS, especially from the theoretical viewpoint. However, the validity of this method was not demonstrated sufficiently. Comparison with more pre-existing methods is expected. The quality of this paper is generally reasonable. The clarity of this paper is also reasonable. However, the contribution is not significant. It is incremental because it is a straightforward application of FV-RS. For that application, some assumptions, which seem unrealistic in the real-world robotics environment, are introduced. This point relates to the definition of "model-based planner" in this paper, which will be discussed in this review's "limitation" part. At least, the significance is not demonstrated sufficiently through the experiment.
NIPS
1. What is the focus of the paper regarding robotic manipulation tasks? 2. What are the strengths of the proposed approach, particularly in terms of intuition and hyperparameter tuning? 3. What are the weaknesses of the paper, especially regarding experimentation and clarity? 4. Do you have any questions regarding the method's clarity, such as plan embedding or plan-conditioned policy? 5. What are some missing related works that the author should consider adding to the paper?
Summary Of The Paper Review
Summary Of The Paper This paper presents a method that combines plan-conditioned policies with reward shaping and a given approximate planner, applied to a sparse-reward, seemingly challenging robotic manipulation task. However, the paper is lacking in experiments, which is the main reason I am voting for rejection. Review Paper Strengths Intuitive Method This paper demonstrates that a plan-conditioned policy combined with an FV-RS reward shaping function allows for better performance on goal-conditioned, sparse-reward tasks. The method is novel, although the contribution isn’t too surprising. Hyperparameter Tuning The authors perform a proper hyperparameter tuning scheme for HER when comparing against the baseline. Performance Improvement The method presents a modest performance improvement over the baselines. Paper Weaknesses One environment, one task The authors should evaluate on at least 2, preferably 3+ environments/tasks to truly demonstrate the advantage of their method. This is one large reason I am currently not voting for acceptance. Method Clarity Question: Are plans embedded or given in full to the plan-conditioned policy? Ablation Studies How does planner quality affect L2E? How do plan lengths affect L2E? Can you show ablations here? FV-RS Introduction The F_FV reward is discussed but not given an intuitive explanation. This section, and the method in general, would be clearer if all three terms in Eq. 3 were explained intuitively and their effects on policy behavior were discussed explicitly. Missing Related Works Combining model-free and model-based RL is presented as a contribution; however, there are works that have done this before (see, for example, “When to Trust Your Model: Model-Based Policy Optimization” by Janner et al., and “Temporal Difference Models” by Pong et al.). The authors should add citations to these works and contrast L2E with them.
NIPS
Title Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics Abstract Applications of Reinforcement Learning (RL) in robotics are often limited by high data demand. On the other hand, approximate models are readily available in many robotics scenarios, making model-based approaches like planning a data-efficient alternative. Still, the performance of these methods suffers if the model is imprecise or wrong. In this sense, the respective strengths and weaknesses of RL and modelbased planners are complementary. In the present work, we investigate how both approaches can be integrated into one framework that combines their strengths. We introduce Learning to Execute (L2E), which leverages information contained in approximate plans to learn universal policies that are conditioned on plans. In our robotic manipulation experiments, L2E exhibits increased performance when compared to pure RL, pure planning, or baseline methods combining learning and planning. 1 Introduction A central goal of robotics research is to design intelligent machines that can solve arbitrary and formerly unseen tasks while interacting with the physical world. Reinforcement Learning (RL) (Sutton & Barto, 2018) is a generic framework to automatically learn such intelligent behavior with little human engineering. Still, teaching an RL agent to actually exhibit general-purpose problem-solving behavior is, while possible in principle, prohibitive in practice. This is due to practical restrictions including limited computational resources and limited data availability. The latter limitation is particularly dramatic in robotics, where interaction with the physical world is costly. On the other hand, for many robotics scenarios, there is a rough model of the environment available. This can be exploited, e.g., using model-based planning approaches (Mordatch et al., 2012; Kuindersma et al., 2016; Toussaint et al., 2018). Model-based planners potentially offer a more data-efficient way to reason about an agent’s interaction with the world. Model-based planners have been used in many areas of robotics, such as for indoor and aerial robots (Faust et al., 2018), visual manipulation (Jeong et al., 2020), or humanoid walking (Mordatch et al., 2015). Still, if the model does not account for stochasticity or contains systematic errors, directly following the resulting plan will not be successful. The present work starts from the observation that both pure RL approaches and pure planning approaches have strengths and limitations that are fairly complementary. RL makes no assumptions about the environment but is data-hungry, and model-based planning generally implies model simplifications but is data-efficient. For robotic manipulation tasks, it seems natural to try and integrate both approaches into one framework that combines the strengths of both. In the present work we seek to add an additional perspective to the open question of how this can be achieved best. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). We introduce a novel approach that we call Learning to Execute (L2E). Our approach translates sparsereward goal-conditioned Markov Decision Processes (MDPs) (Bellman, 1957) into plan-conditioned MDPs. L2E exploits a simple planning module to create crude plans, which are then used to teach any off-the-shelf off-policy RL agent to execute them. 
L2E makes use of final-volume-preserving reward shaping (FV-RS) (Schubert et al., 2021), allowing it to train a universal plan-conditioned policy with high data efficiency. The contributions of this work are: • We introduce L2E, which uses RL to efficiently learn to execute approximate plans from a model-based planner in a plan-conditioned MDP. We describe formally how FV-RS can be used as a tool to construct such plan-conditioned MDPs from goal-conditioned MDPs. • We introduce plan replay strategies to efficiently learn universal plan-conditioned policies. • We demonstrate, using robotic pushing problems, that L2E exhibits increased performance when compared to pure RL methods, pure planning methods, or other methods combining learning and planning. We discuss work related to ours in section 2, explain background and notation in section 3, and introduce our method in section 4. We present our experimental results in section 5, discuss limitations in section 6, and conclude with section 7. 2 Related Work 2.1 Goal-Conditioned Policies Goal-conditioned or universal policies (Kaelbling, 1993; Moore et al., 1999; Foster & Dayan, 2002; Schaul et al., 2015; Veeriah et al., 2018; Nasiriany et al., 2019) not only act based on the state the agent finds itself in, but also based on the goal it tries to achieve. Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) is a particularly efficient way to learn universal policies. Here, achieved outcomes of the agent’s interaction with the environment are interpreted as desired goals in order to improve sample efficiency in sparse-reward settings. L2E draws great inspiration from this work, but in contrast to HER, L2E learns a universal planconditioned policy. This means that the L2E policy in general can execute multiple plans leading to the same goal. Although this presents a more complex learning task, we show in our experiments that by incorporating plan information using plan-based FV-RS, the sample efficiency of L2E is significantly improved over HER. 2.2 Plan- and Trajectory-Conditioned Policies Plan-conditioned policies create behavior that depends on plans that are input to the decision making. Lynch et al. (2020) learn plans and how to execute them from data generated by a human “playing” with a teleoperated robot. The resulting policy is conditional on a latent space of encoded plans. Our work differs from this paradigm in that human interaction is not needed. Both Lynch et al. (2020) and Co-Reyes et al. (2018) directly imitate a planned trajectory by maximizing its likelihood. In contrast, the plans used in the present work are not directly imitated. Using FV-RS guarantees that the fully trained L2E agent will reach its goal after finite time even if the plan provided is wrong. Guo et al. (2019) learn trajectory-conditioned policies to self-imitate diverse (optimal and suboptimal) trajectories from the agent’s past experience. We instead assume in this work that the plan is provided by an external model-based planner. This allows the L2E agent to use external information during training that could not be concluded from its own experience yet. 2.3 Learning from Demonstration L2E learns how to execute plans in order to achieve different tasks. In this sense, it is related to Learning from Demonstration (LfD) techniques that exploit demonstrations when learning a task. 
Existing work (Argall et al., 2009; Hussein et al., 2017; Ravichandar et al., 2020) differs significantly both in how the demonstration examples are collected and how the policy is then derived. Taylor et al. (2011) derive an approximate policy from human demonstration, and then use this to bias the exploration in a final RL stage. Hester et al. (2017) train a policy on both expert data and collected data, combining supervised and temporal difference losses. Salimans & Chen (2018) use a single demonstration as starting points to which the RL agent is reset at the beginning of each episode. Peng et al. (2018) use motion capture data to guide exploration by rewarding the RL agent to imitate it. In Cabi et al. (2019), demonstrations are combined with reward sketching done by a human. Interactive human feedback during training is another source of information used in Thomaz et al. (2006); Knox & Stone (2010). Kinose & Taniguchi (2020) integrate RL and demonstrations using generative adversarial imitation learning by interpreting the discriminator loss as an additional optimality signal in multi-objective RL. While these LfD approaches are related to L2E in that external information is used to increase RL efficiency, it is in contrast assumed in L2E that this external information is provided by a planner. 2.4 Combining Learning with Planning Similarly to demonstrations, external plans can be exploited to facilitate learning. Faust et al. (2018) connect short-range goal-conditioned navigation policies into complex navigation tasks using probabilistic roadmaps. In contrast, L2E learns a single plan-conditioned policy for both short-term and long-term decision making. Sekar et al. (2020) use planning in a learned model to optimize for expected future novelty. In contrast, L2E encourages the agent to stay close to the planned behavior. Zhang et al. (2016) use model-predictive control to generate control policies that are then used to regularize the RL agent. In L2E, no such intermediate control policy is created, and a reward signal is computed directly from the plan. In Guided Policy Search (Levine & Koltun, 2013), differential dynamic programming is used to create informative guiding distributions from a transition model for policy search. These distributions are used to directly regularize the policy in a supervised fashion, while L2E makes use of FV-RS as a mechanism to interface planning and RL. Christiano et al. (2016) learn an inverse dynamics model to transfer knowledge from a policy in the source domain to a policy in the target domain. The idea of integrating model-based and model-free RL has also been studied independently of planning (Pong et al., 2018; Janner et al., 2019). In contrast, in L2E the model is translated by a planner into long-horizon plans. In the experiments section, we compare L2E against two representative examples from the literature mentioned above. The first is using a plan to identify subgoals that are then pursued by an RL agent, as done in Faust et al. (2018). The second is executing the plan using an inverse model, similar to the approach in Christiano et al. (2016). These two baselines and L2E can be seen as representatives of a continuum: Christiano et al. (2016) follow the plan very closely, trying to imitate the planner at each time step. Faust et al. (2018) relax this requirement and only train the agent to reach intermediate goals. 
Finally, in L2E, the agent is free to deviate arbitrarily from the plan (although it is biased to stay close), as long as it reaches the goal. We find that L2E results in significantly higher success rates when compared against both baselines.

3 Background

3.1 Goal-Conditioned MDPs and RL

We consider settings that can be described as discrete-time MDPs M = ⟨S, A, T, γ, R, P_S⟩. S and A denote the set of all possible states and actions, respectively. T : S × A × S → ℝ⁺₀ is the transition probability (density); T(s′|s, a) is the probability of the next state being s′ if the current state is s and a is chosen as the action. The agent receives a real-valued reward R(s, a, s′) after each transition. Immediate and future rewards are traded off by the discount factor γ ∈ [0, 1). P_S : S → ℝ⁺₀ is the initial state distribution. The goal of RL is to learn an optimal policy π∗ : S × A → ℝ⁺₀ that maximizes the expected discounted return. In other words, RL algorithms generally try to find

\pi^* = \operatorname*{argmax}_{\pi} \sum_{t=0}^{\infty} \gamma^t \, \mathbb{E}_{s_{t+1} \sim T(\cdot|s_t, a_t),\; a_t \sim \pi(\cdot|s_t),\; s_0 \sim P_S} \left[ R(s_t, a_t, s_{t+1}) \right] \quad (1)

from collected transition and reward data D = \{(s_i, a_i, r_i, s'_i)\}_{i=0}^{n}. More specifically for this work, we are interested in applications in robotics, where both S and A are typically continuous. There exists a wide range of algorithms for this case. For the experiments in this paper, soft actor-critic (SAC) (Haarnoja et al., 2018) is used. In a goal-conditioned MDP M_G = ⟨S, G, A, T, γ, R_G, P_S, P_G⟩, the reward function R_G(s, a, s′, g) has an additional input parameter, the goal g ∈ G. Here, P_G : G → ℝ⁺₀ is the distribution of goals. The optimal goal-conditioned policy π∗_G acts optimally with respect to any of these goals.

3.2 Final-Volume-Preserving Reward Shaping

We use approximate plans as an additional source of information for the RL agent. For sparse-reward goal-driven MDPs, FV-RS (Schubert et al., 2021) offers an efficient way to include additional information by adding a shaping term

R(s, a, s') \rightarrow R_{FV}(s, a, s') = R(s, a, s') + F_{FV}(s, a, s') \quad (2)

to the reward function, accelerating exploration. In general, the optimal policy π∗ corresponding to the original MDP and the optimal policy π∗_FV corresponding to the shaped MDP will be different. FV-RS, however, restricts the allowed modifications F_FV(s, a, s′) in such a way that after finite time, the optimally controlled agent ends up in a subset of the volume in which it would have ended up without shaping. As a result, external information can be made available for the RL algorithm without changing the long-term behavior of the resulting optimal policy. Specifically in the present work, we consider goal-conditioned MDPs in which the goal-conditioned reward R_G of the underlying MDP is either 1, if the goal is reached, or 0 everywhere else. We further assume that the L2E agent is given an external plan p, represented as an intended trajectory p = (p_1, p_2, . . .) in state space. We intend to reward the agent for staying close to the plan, and for advancing towards the goal along the plan. A natural way of achieving this is to use a plan-based shaping reward (Schubert et al., 2021). The single-plan shaping function introduced there can be generalized to the multi-plan setting in the present work in the following way:

F_{FV}(s, a, s', p) = \frac{1 - R_G(s, a, s', f(p))}{2} \cdot \frac{k(s) + 1}{L} \cdot \exp\!\left( - \frac{d^2(s, p_{k(s)})}{2\sigma^2} \right) \quad (3)

Here, f(p) denotes the goal that p leads to, σ ∈ (0, ∞), k(s) = argmin_i d(p_i, s), L is the length of the plan p, and d(·, ·) is a measure of distance in state space.
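To make the shaping term of Eq. (3) concrete, the following is a minimal Python sketch of a plan-based FV-RS reward. The NumPy implementation and the names (plan_shaping_reward, goal_reward) are illustrative stand-ins rather than the released L2E code, and the Euclidean distance is only one possible choice for d(·, ·).

```python
import numpy as np

def plan_shaping_reward(s, goal_reward, plan, sigma=0.5):
    """Plan-based shaping term F_FV(s, a, s', p) of Eq. (3).

    s:           current state (plan-relevant coordinates only), shape (dim,)
    goal_reward: sparse environment reward R_G(s, a, s', f(p)), either 0 or 1
    plan:        planned state trajectory p = (p_1, ..., p_L), shape (L, dim)
    sigma:       width of the "stay close to the plan" term
    """
    s, plan = np.asarray(s), np.asarray(plan)
    L = len(plan)
    dists = np.linalg.norm(plan - s[None, :], axis=1)   # d(p_i, s) for all i
    k = int(np.argmin(dists))                           # k(s): closest plan index
    gate = (1.0 - goal_reward) / 2.0         # caps F_FV at 1/2, switches off at the goal
    progress = (k + 1) / L                   # rewards advancing along the plan
    proximity = np.exp(-dists[k] ** 2 / (2.0 * sigma ** 2))  # rewards staying close
    return gate * progress * proximity

def shaped_reward(goal_reward, s, plan, sigma=0.5):
    """R_FV = R_G + F_FV as in Eq. (2), for the sparse goal-conditioned reward."""
    return goal_reward + plan_shaping_reward(s, goal_reward, plan, sigma)
```

With goal_reward in {0, 1}, the shaping term stays below 1/2 and vanishes once the goal reward is obtained, matching the discussion of the individual factors that follows.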
For the pushing experiments discussed in this work, d(·, ·) is the euclidean distance in state space ignoring the coordinates corresponding to the orientation of the box. The first term in eq. (3) ensures that the assigned shaping reward FFV is always smaller than the maximum environment reward (at most 1/2), and that if the binary environment reward is 1, no shaping reward is assigned. The second term rewards the agent for advancing towards the goal along the plan, and the third term rewards the agent for staying close to the plan. For a sufficiently high discount factor γ, FFV is final-volume preserving, meaning that the long-term behavior of the optimal agent is unchanged. 4 Learning to Execute L2E considers goal-conditioned MDPs MG (see section 3.1), for which an approximate planner Ω is available. L2E uses FV-RS to construct a corresponding plan-conditioned MDP MP from a goal-conditioned MDP MG and a planner Ω. In the following sections 4.1 to 4.3, we introduce our notion of a plan-conditioned MDP MP and describe the components of the L2E algorithm. We then summarize the L2E algorithm in section 4.4. 4.1 Plan-Conditioned MDPs Plans are provided by a model-based planner, which can be described as a distribution Ω : P×S×G→ R+0 over a set of plans P. Given an initial state and a goal, Ω(p|s, g) is the probability that the planner outputs p as a possible plan of how to achieve g from state s. The distinction between goals and plans is that plans are conditional on both a goal and an initial state. Therefore, both initial state and goal can be inferred using the plan only. In a plan-conditioned MDP MP = 〈S,P,A, T, γ,RP , PS , PP 〉, a plan p ∈ P is given to the reward function RP (s, a, s′, p) as an additional input parameter. PP : P→ R+0 is the distribution of plans. The optimal plan-conditioned policy π∗P behaves optimally with respect to any of these plans, creating a distribution π∗P (· | s, p) over actions that is conditional on the current state and the current plan. 4.2 Constructing the Plan-Conditioned MDP We use FV-RS to shape the reward function RG of the original goal-conditioned MDP MG = 〈S,G,A, T, γ,RG, PS , PG〉 with a plan-dependent term FFV(s, a, s′, p) (see equation 3) RG(s, a, s ′, g)→ RFVG (s, a, s′, g, p) = RG(s, a, s′, g) + FFV(s, a, s′, p) . (4) We call g = f(p) the goal for which the plan p was created. If a planner Ω should be such that g can not be recovered from the resulting plan p ∼ Ω(.|s, g), we can always construct a new p̃ ∼ Ω̃ such that p̃ = [p, g]. Since now g can be recovered from p̃ deterministically, we can assume that f always exists without loss of generality. We can interpret the shaped reward function RP (s, a, s ′, p) = RFVG (s, a, s ′, f(p), p) (5) as a plan-conditioned reward function of a plan-conditioned MDP MP = 〈S,G,A, T, γ,RP , PP 〉. The distribution over initial states and plans PP of MP is still missing, and can be constructed as PP (s, p) = ∫ Ω(p|s, g)PS(s)PG(g)dg . (6) In practice, PP can be sampled from by first sampling s ∼ PS , g ∼ PG and then subsequently sampling p ∼ Ω(·|s, g). Thus, we have constructed a plan-conditioned MDP MP by combining a goal-conditioned MDP MG with an approximate planner Ω and a FV-RS shaping function FFV. For reference later in this paper, we write as a shorthand notation MP = C(MG,Ω, FFV). Furthermore, we will refer to MP as the corresponding plan-conditioned MDP to MG and vice versa. In contrast to potential-based reward shaping (Ng et al., 1999), FV-RS does not leave the optimal policy invariant. 
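As a complement to the construction in section 4.2, the following schematic sketch shows how a goal-conditioned reward, a planner, and a shaping term combine into the plan-conditioned reward and plan distribution of Eqs. (4)-(6). The interfaces (R_G, F_FV, f, planner, sample_s0_and_goal) are hypothetical callables used only for illustration.

```python
def make_plan_conditioned_reward(R_G, F_FV, f):
    """Builds R_P(s, a, s', p) = R_G(s, a, s', f(p)) + F_FV(s, a, s', p), Eqs. (4)-(5).

    R_G:  goal-conditioned sparse reward, R_G(s, a, s_next, g) -> {0, 1}
    F_FV: plan-based shaping term, F_FV(s, a, s_next, p) -> [0, 1/2]
    f:    maps a plan p to the goal it leads to
    """
    def R_P(s, a, s_next, plan):
        return R_G(s, a, s_next, f(plan)) + F_FV(s, a, s_next, plan)
    return R_P

def sample_initial_state_and_plan(sample_s0_and_goal, planner):
    """Samples (s0, p) ~ P_P as in Eq. (6): draw (s0, g) ~ P_G, then p ~ Omega(.|s0, g)."""
    s0, g = sample_s0_and_goal()
    return s0, planner(s0, g)
```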
As a result, generally ∃p ∈ P : π∗G(·|·, f(p)) 6≡ π∗P (·|·, p). In words, the optimal policy of MP and the optimal policy of MG will not result in identical behavior. In fact, while π∗G(·|·, g) learns one policy for each goal g, π∗P (·|·, p) can learn different behavior for each plan in the set of plans {p ∈ P | f(p) = g} leading towards the same goal g. 4.3 Plan Replay Strategy In order to efficiently learn a universal plan-conditioned L2E policy, the reward for experienced episodes is evaluated with respect to many different plans. In HER (Andrychowicz et al., 2017), it is assumed that each state s ∈ S can be assigned an achieved goal. Recorded episodes are then replayed with respect to goals that were achieved during the episode, i.e. the recorded transitions are re-evaluated with respect to these goals. This ensures that the recorded transitions were successful in reaching the replayed goals, resulting in highly informative data. In L2E, transitions are replayed with respect to plans. However, there is no meaningful relation between each state s ∈ S and a unique “achieved plan”. Therefore, the L2E agent replays transitions with past plans that were recorded at some point during training and were stored in its replay buffer D. The replay plans are chosen according to a plan replay strategy Sn. A plan replay strategy Sn provides a distribution over n replay plans, conditioned on the replay buffer D and the buffer containing the current episode Dep (see algorithm 1 for a definition of D and Dep). For replay, n plans are sampled according to this strategy {p1, . . . , pn} ∼ Sn(· | Dep, D). We consider two types of replay strategies. Uniform replay Sunin samples n unique plans uniformly from the replay buffer D. Reward-biased replay Sbiasnm first uniformly samples m unique plans from the replay buffer D, and then returns the n plans pi that would have resulted in the highest sum of rewards ∑ (sk,ak,s′k)∈Dep RP (sk, ak, s ′ k, pi) for the episode stored in Dep. The idea behind using reward-biased replay is to bias the replay towards transitions resulting in higher reward. 4.4 L2E Algorithm The L2E algorithm is outlined in algorithm 1. First, the corresponding plan-conditioned MDP MP = C(MG,Ω, FFV) is constructed from the original goal-conditioned MDP MG, the planner Ω and the shaping function FFV as described in section 4.2. The agent acts in the environment trying to follow one randomly sampled plan per episode. The episode is then added to the replay buffer, Algorithm 1: Learning to Execute (L2E) Input :Goal-conditioned MDP MG, approximate planner Ω, FV-RS shaping function FFV, plan replay strategy Sn, off-policy RL Algorithm A Output :Universal plan-conditioned optimal policy π∗P for the corresponding plan-conditioned MDP MP = C(MG,Ω, FFV) 1 Construct plan-conditioned MDP MP = C(MG,Ω, FFV) as detailed in section 4.2; 2 Initialize replay buffer D ← {}; 3 while π∗P not converged do 4 Initialize episode buffer Dep ← {}; 5 Sample initial state and goal (s0, g) ∼ PG; 6 Sample plan p ∼ Ω(·|s0, g); 7 s← s0; 8 while Episode not done do 9 Sample action a ∼ π∗P (· | s, p); 10 Sample transition s′ ∼ T (· | s, a); 11 Collect shaped reward r ← RP (s, a, s′, p); 12 Add to episode buffer Dep ← Dep ∪ {(s, a, r, s′, p)}; 13 s← s′; 14 end 15 Add episode to replay buffer D ← D ∪Dep; 16 Get replay plans {p1, . . . , pn} ∼ Sn(· | Dep, D); 17 for preplay in p1, . . . 
, pn do 18 for (s, a, r, s′, p) in Dep do 19 Calculate replay reward rreplay ← RP (s, a, s′, preplay); 20 Add replayed transition to buffer D ← D ∪ {(s, a, rreplay, s′, preplay)}; 21 end 22 end 23 Update policy using off-policy RL algorithm π∗P ← A(π∗P , D) 24 end along with data from episode replays with respect to other plans. These other plans are sampled from the replay buffer according to the replay strategy Sn. A generic off-policy RL algorithm is used to update the agent using the replay buffer. This process is repeated until convergence. We would like to emphasize that the L2E algorithm is agnostic to the exact type of off-policy RL algorithm. By combining state and plan into a “super state” for the purpose of passing the replay buffer to the off-policy RL algorithm, L2E can be interfaced with any off-the-shelf implementation. 5 Experiments We evaluate the L2E agent against several baselines using two simulated robotic manipulation tasks, namely a pushing task and an obstacle avoidance task. These two environments are chosen to compare different approaches on a variety of challenges. While the pushing task can be seen as an open-source version of the opanAI gym FetchPush-v1 task (Brockman et al., 2016), the obstacle task was chosen to represent robotic manipulation tasks with segmented state spaces. This allows us to discuss limitations of exploration in such environments as well. A video of the experiments is available in the supplementary material. The complete code to fully reproduce the figures in this paper from scratch can be found at github.com/ischubert/l2e and in the supplementary material. This includes the implementation of the environments, the implementation of the L2E agents and the baselines, and the specific code used for the experiments in this paper. The experiments section is structured as follows. In section 5.1 we discuss the environments and planners that are used in the experiments. We briefly introduce the plan embedding used for the L2E agent in section 5.2, additional experiments on this can be found in section A.5 In section 5.3 we introduce the baselines against which we compare our method. In section 5.4 we discuss our experimental results. Implementation details of the L2E agent are given in section A.1 5.1 Environments and Planners Figure 1a and Figure 1c show renderings of the basic pushing environment and obstacle pushing environment, respectively. We use the open-source Nvidia PhysX engine (phy, 2021) to simulate a box of size 0.4× 0.4 being pushed on a table of size 3× 3 by a spherical end effector of radius 0.06. The 10D state space of both the goal-conditioned MDP MG and the corresponding plan-conditioned MDP MP consists of the 3D position of the end effector, the 3D position of the box, and the 4D quaternion for the orientation of the box. The agent controls the 3D velocity of the end effector. The maximum velocity in any direction is 0.1 per time step. The end effector movement resulting from the agent’s actions is slightly distorted by random noise. In the obstacle pushing environment, the agent additionally has to evade an obstacle in the middle of the table. In the goal-conditioned MDP MG, each goal is represented as a desired 2D box position on the table. The goal-dependent sparse reward function RG is 1 if the box is within 0.1 of this desired goal, and 0 if not. The initial state-goal distribution PG is uniform across the table for the non-colliding box position and goal position. 
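Before the remaining environment details, two pieces of the above can be made concrete in a short sketch: the sparse pushing reward just described, and the plan replay step of Algorithm 1 (lines 16-22), in which every recorded transition is re-evaluated under the replay plans. Only the 0.1 success threshold and the replay logic follow the text; the array layout and helper names are assumptions made for illustration.

```python
import numpy as np

def sparse_push_reward(box_xy, goal_xy, threshold=0.1):
    """R_G = 1 if the box is within 0.1 of the desired 2D goal position, else 0."""
    dist = np.linalg.norm(np.asarray(box_xy) - np.asarray(goal_xy))
    return float(dist <= threshold)

def replay_episode(episode, replay_plans, R_P):
    """Plan replay (Algorithm 1, lines 16-22): re-evaluate each recorded transition
    of one episode under every replay plan and return the relabeled transitions.

    episode:      list of (s, a, r, s_next, p) tuples from one rollout
    replay_plans: plans sampled by the replay strategy S_n
    R_P:          plan-conditioned reward function
    """
    relabeled = []
    for p_replay in replay_plans:
        for (s, a, _r, s_next, _p) in episode:
            r_replay = R_P(s, a, s_next, p_replay)
            relabeled.append((s, a, r_replay, s_next, p_replay))
    return relabeled
```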
The end effector is always initialized at the origin and the box is always initialized with a fixed orientation parallel to the table. For the basic pushing environment, we use a crude manhattan-like planner Ω that deterministically outputs plans consisting of two separate contacts leading the initial state to the goal as shown in Figure 1a. For the obstacle pushing environment, plans consist of four contacts, corresponding to an additional intermediate box position which is chosen at random (see Figure 1c). Thus, the agent learns to execute an infinite number of plans for each combination of start and goal. Plans are represented as a trajectory of length 50 for the basic pushing environment and 100 for the obstacle pushing environment, consisting of 6D elements representing end effector position and box position. For the basic pushing environment, we additionally report results for less dense plans in section A.6. The orientation of the box is not specified in the plans. We construct the plan-conditioned MDP MP as described in section 4.2, using this planner and the FV-RS function in equation 3. We use the width parameter σ = 0.5 throughout the experiments. 5.2 Plan Encoding The plans p are embedded before they are provided to the policy. A plan encoding is an injective function φ : P → C from the set of plans P to a latent space C. If P is a manifold in some highdimensional space, the dimensionality of the latent space must be at least as high as the dimensionality of the manifold. Since P is task-dependent, the encoding will be task-dependent as well. For the basic pushing environment (Figure 1a), P is a 4D manifold (since the plans only depend on the initial and final 2D box positions). For the obstacle task (Figure 1c), P is a 6D-manifold (since the plans depend on one intermediate box position as well). In the experiments discussed in the present work, we encode plans analytically using box positions as described above. We experimentally compare this with either learning the encoding or not using any encoding at all in section A.5. 5.3 Baselines We compare L2E against (1) direct plan execution, (2) plan execution with an inverse dynamics model, (3) using RL to reach subgoals, and (4) HER. We describe these baselines in detail in section A.2. 5.4 Results Both at training and evaluation time, we run episodes of length 250. For each method q (i.e., L2E and all baselines), we independently train A = 10 agents. After N environment transitions, we evaluate the agents. We reset to random initial positions and goals/plans and run the experiment until the goal is reached or until the episode ends. We repeat this process M = 30 times for each agent, and store whether the rollout was successful in reaching the goal. We denote the result of the m-th evaluation of the a-th agent for method q, evaluated after learning for N environment transitions, as F (q)am(N). As can be seen from the video given in the supplementary material, even though the L2E agent uses plan information as a guiding bias during exploration, and is encouraged to stay close to the plan by the shaping reward, it can also learn to deviate considerably from the plan if closely following it will be suboptimal for reaching the goal fast. For example, while the simple planner (see Figure 1a and Figure 1c) suggests to re-establish the contact during the sequence, the L2E agent almost always moves and turns the box using a single contact. 
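Referring back to the analytical plan encoding of section 5.2, one way such an encoder could look is sketched below. The 6D layout of plan elements follows section 5.1, but the exact column order and the choice of the intermediate waypoint index are assumptions made for illustration, not details of the released code.

```python
import numpy as np

def encode_plan(plan, use_intermediate=False):
    """Analytical plan encoding phi: P -> C (section 5.2).

    plan: array of shape (T, 6); each row is assumed to hold the planned 3D end
          effector position followed by the planned 3D box position.
    Returns the box positions that determine the plan: initial and final 2D box
    position (4D latent, basic task), plus one intermediate waypoint (6D latent,
    obstacle task).
    """
    plan = np.asarray(plan)
    box_xy = plan[:, 3:5]                    # planned box (x, y) along the trajectory
    code = [box_xy[0], box_xy[-1]]           # initial and final box position
    if use_intermediate:
        code.insert(1, box_xy[len(box_xy) // 2])  # illustrative waypoint choice
    return np.concatenate(code)
```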
5.4.1 Basic Pushing Environment To allow for a fair comparison, we spent a considerable amount of effort to optimize the HER replay strategy as well as the L2E strategy. Details on this are given in section A.4. The results for the pushing setup are summarized in Figure 1b. We observe that both L2E versions outperform all baselines in terms of the asymptotical performance. L2E with biased replay strategy S10,1000 exhibits a high sample efficiency especially in the beginning, resulting in success rates significantly higher than 50% after 4000 episode rollouts or 1 Million time steps. Directly executing the plan results in very low success rates of significantly less than 20% on average. Executing the plan with an inverse model (IM) still shows significantly worse long-term performance than the RL methods. HER results in better policies than the IM baselines, but is relatively data hungry. This can be improved slightly if the HER agent is only used to reach subgoals given by the planner. Pushing is a challenging interaction that requires reasoning for several time steps ahead. A typical failure mode of the IM baseline (see also videos) is that the box moves away from the intended trajectory too much, so that the agent is not able to correct for it within one time step. In contrast, the L2E agent learns to deviate from the planned trajectory if this is required to reach the goal. We find that L2E, combining a model-based planner and a universal plan-conditioned policy, outperforms our baselines that are pure planning or pure learning approaches. In addition, L2E outperforms the two baselines that also combine learning and planning. 5.4.2 Obstacle Pushing Environment L2E performs significantly better than the pure learning HER baselines, the pure planning baseline ("Plan"), and the “Planned Subgoals + RL” baseline. While using an inverse model is initially more efficient, L2E achieves significantly better results if given enough data. Comparing the basic pushing environment (section 5.4.1) to the obstacle environment, L2E learns slower in the latter. This is in part due to the higher dimensionality of the latent space of plan encodings (see also section 5.2), posing a more challenging learning problem to the L2E agent. In contrast, the "Plan+IM" baseline is independent of the size of the plan space, and performs comparably to the experimental setting in the original version. The obstacle in the middle segments the state space into two parts. In order to move from one side to the other, an agent already has to be able to reason about long-term results of its actions. As evidenced by the results for HER, this poses a significant challenge for pure RL. Incorporating planner knowledge helps the agent to overcome this chicken-and-egg problem. 6 Discussion Learning plan-dependent policies as opposed to goal-dependent policies has the additional advantage that the former can learn to execute multiple plans that lead from the same initial state to the same goal, as shown in the obstacle environment. Thus, the policy learns multiple strategies to achieve the same outcome. In principle, this allows it to adapt to changed scenarios where some of these strategies become infeasible. If, e.g., the environment changes, it suffices to only update the planner’s crude model of the environment so that it creates plans that are feasible again. These can then be directly fed into the policy without retraining. We explore this possibility in section A.3, using a simple 2D maze environment with moving obstacles. 
We find that the plan-conditioned L2E policy consistently achieves 90% success rate in this quickly changing environment, while the goal-conditioned HER policy does not improve beyond 60% success rate. We used rather simple plans to support the RL agent during training, and demonstrated that these are already sufficient to significantly speed up learning in our experiments. In fact we demonstrate in section A.6 that in the basic pushing example, the L2E agent is very robust against plans of even lower quality. Using simple plans enabled us to use an analytical encoding; for very complex scenarios it might be beneficial to learn the encoding using an auxiliary objective (see, e.g., Co-Reyes et al. (2018)). We present results on using a variational autoencoder (VAE) in section A.5. The use of FV-RS biases the RL agent towards following the plan. While it was shown in the experiments that the RL agent can learn to deviate from the plan, plans that are globally misleading can act as a distraction to the agent. In the present work, it is assumed that plans can be used to guide the agent during learning, increasing sample efficiency. Independently of the specific method used to achieve this, misleading plans will always break this assumption. Comparing the basic pushing environment to the obstacle pushing environment, the amount of data needed for learning a plan-conditioned policy clearly depends on the size of the plan spaces that are considered. For very large plan spaces, more data will be needed to master the task. Still, including planner information into the learning process makes a decisive difference, as demonstrated by the relative performance of L2E and HER in the obstacle example. While SAC was used for the experiments in section 5, L2E can be used in combination with any off-policy RL algorithm. L2E reformulates a goal-conditioned MDP as a plan-conditioned MDP, and provides a replay strategy to efficiently solve the latter. It is agnostic to how this data is then used by the RL agent. The specific FV-RS shaping function used in this work applies to MDPs with sparse rewards. We focused on this since sparse rewards are common in robotic manipulation. In addition, they often present notoriously hard exploration tasks, making external plan-based information as used by L2E particularly useful. However, FV-RS in general is not restricted to sparse-reward settings, and by using a different shaping function, L2E could be applied in other settings as well. Apart from FV-RS, there are alternative schemes of reward shaping such as potential-based reward shaping (PB-RS) Ng et al. (1999). In principle, these could also be used to increase the sample efficiency of the RL agent. We chose FV-RS for two reasons. First, in the original paper Schubert et al. (2021), it was demonstrated that FV-RS leads to significantly higher sample efficiency than PB-RS. Second, since PB-RS leaves the optimal policy invariant, the behavior of the fully converged policy trained with PB-RS will only be goal-dependent, and not depend on the rest of the plan. The original HER paper (Andrychowicz et al., 2017) considers the use of a simple form of reward shaping in combination with HER as well. It is found that reward shaping dramatically reduces the performance of HER in a robotic pushing task. In the present work, we show in contrast that including plan information using FV-RS shaping improves the performance of RL in a similar task. 
A possible way to reconcile these seemingly contradictory results is already offered by Andrychowicz et al. (2017): while simple domain-agnostic shaping functions can be a distraction for the RL agent, domain-specific reward shaping functions can be beneficial. This view is supported, e.g., by similar results by Popov et al. (2017). Andrychowicz et al. (2017) note, however, that “designing such shaped rewards requires a lot of domain knowledge”. In this context, one could view L2E as an automated way to extract such domain-specific knowledge from model-based planners and make it available. We specifically believe that L2E can be useful in robotic manipulation tasks, where domain knowledge is in fact readily available in many cases. Here, L2E offers a way to exploit this. 7 Conclusion We introduced L2E, an algorithm that links RL and model-based planning using FV-RS. RL generally results in well-performing policies but needs large amounts of data, while model-based planning is data-efficient but does not always result in successful policies. By combining the two, L2E seeks to exploit the strengths of both approaches. We demonstrated that L2E in fact shows both higher sample efficiency when compared to purely model-free RL, and higher success rates when compared to executing plans of a model-based planner. In addition, L2E also outperformed baseline approaches that combine learning and planning in our experiments. Acknowledgments and Disclosure of Funding The authors would like to thank Valentin N Hartmann for stimulating discussions. The research has been supported by the International Max-Planck Research School for Intelligent Systems (IMPRSIS), and by the German Research Foundation (DFG) under Germany’s Excellence Strategy EXC 2120/1–390831618 “IntCDC” and EXC 2002/1–390523135 “Science of Intelligence”.
1. What is the novel idea proposed by the paper regarding policy conditioning? 2. How does the proposed method differ from goal-conditioned RL and other imitation learning methods like GPS and DeepMimic? 3. What are the strengths and weaknesses of the proposed approach, particularly in terms of sample efficiency and success rate? 4. How does the choice of encoding plans into a 4D latent space affect the performance of the algorithm? 5. Why is plan conditioning preferred over goal conditioning, and what advantages does it offer in general? 6. How does the proposed method compare to other learning + planning algorithms, and what are the limitations of its applicability to various environments?
Summary Of The Paper Review
Summary Of The Paper The paper proposes to condition policies on plans. Therefore, the policy learns to execute an approximate plan and correct it if necessary. This idea is instantiated in an algorithm that uses SAC for policy optimization and a "crude manhattan-like planner" for planning. The algorithm is evaluated on a box pushing task, where an improvement in sample-efficiency over goal-condition RL is shown, and an improvement in success rate compared to model-based plan execution with learned inverse dynamics model is demonstrated. Review The idea of conditioning policies on expert plans as presented in this paper is novel according to my knowledge. The paper is of good quality and it is written clearly. Together with the code in the supplementary materials, the paper provides sufficient details to reproduce the results. However, the evaluation is performed only in one environment and there is no discussion or evaluation of the influence of the function that encodes the plans before passing them to the policy. This lack of evaluations makes it hard to judge the significance of the proposed method as it is not clear if it generalizes to other settings. Since the main competitive approach considered in the paper is HER, it might make sense to add evaluations of the proposed L2E algorithm in the environments from the original HER paper. Major comments line 231: "we analytically encode the plans into a 4D latent space". Provide the details of this mapping in the paper since this is one of the crucial elements of the proposed approach. Why 4D? What properties should this mapping satisfy? Does the dimensionality of the latent space depend on the task? Either theoretical study or empirical evaluations of the choices of this mapping should definitely be provided, because conditioning the policy on plans instead of goals is the main difference of the proposed method compared to goal-conditioned RL. lines 272-276 say that L2E agent significantly deviates from the plans. This appears to imply that conditioning on the plan may not be so informative to the agent. What is the advantage of having a plan-conditioned policy instead of a goal-conditioned one? It is shown in the pushing experiment that with the chosen hyperparameters one achieves faster convergence, but are there any benefits in general that one should expect from feeding in plans instead of goals? it would be beneficial to the reader to contrast the proposed approach to guided policy search (GPS) and DeepMimic, as those methods also involve imitation of an imperfect teacher, although they don't explicitly feed the plans as input to the policy Minor comments line 178: plan replay strategy S_n is defined as "a distribution over n replay plans". However, in lines 181-182 it is said that the uniform replay strategy samples n unique plans uniformly from the replay buffer D, which contains more than n plans. Therefore, I think the authors mean that S_n is a distribution over integers 1, 2, ..., len(D). So, the subscript n seems misleading in S_n because the distribution itself does not depend on how many samples are drawn from it. lines 198-199: what is meant by "unstable dynamics" in the pushing task? Every state of the system seems to be a stable equilibrium state if no control input is applied. Maybe a different term instead of "unstable" would be more appropriate here. The same in line 285. 
typo: line 146, policy should be conditioned on state instead of action Comments after rebuttal I thank the authors for addressing my questions and for providing additional experiments. I raise my score from 5 to 6. There are still quite a few concerns raised by other reviewers, such as providing comparisons to more directly related learning+planning algorithms instead of HER. Furthermore, the environment used for additional experiments is still a custom made environment, therefore it is hard to directly relate it to other papers. I am also not convinced by the argument that having a plan-conditioned policy is necessary to have a multi-modal policy: in soft Q-learning this is achieved by using an energy-based model for the policy, and if it is goal-conditioned, it may produce different solutions as samples from a multimodal distribution. For all these reasons, I can only say that the paper is marginally above the acceptance threshold. Final comments The authors addressed all my concerns and promised to add the results provided in the anonymous github repository to the paper. With these additions, I consider the paper appropriate for publication and raise my score to 7.
NIPS
Title Towards Learning Universal Hyperparameter Optimizers with Transformers Abstract Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OPTFORMER, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google’s Vizier database, one of the world’s largest HPO datasets. Our extensive experiments demonstrate that the OPTFORMER can simultaneously imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OPTFORMER also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the path to future extensions for training a Transformer-based model as a general HPO optimizer. 1 Introduction The emergence of public machine learning data platforms such as OpenML [1] and hyperparameter optimization (HPO) services such as Google Vizier [2], Amazon SageMaker [3] and Microsoft Azure [4] have made large-scale datasets containing hyperparameter evaluations accessible. For our use-case in this paper, Google Vizier is the de-facto HPO service across Google, having optimized some of Google’s largest products and research efforts, and contains a collection of valuable tuning data within the last 5 years. While there is growing interest in leveraging such data to meta-learn hyperparameter optimization algorithms [5–8], dealing with large datasets consisting of experimental trials in the wild can be challenging, due to large variations in HPO problems and their associated text metadata (e.g. shown later in Table 1). Thus, most meta and transfer-learning HPO methods [7–16] consider a restrictive setting where all tasks must share the same set of hyperparameters so that the input data can be represented as fixed-sized vectors. Consequently, such methods only exploit a small portion of the available data to learn priors. This drawback is more severe for large datasets which contain significant amounts of useful information. To overcome these limitations, we introduce the OPTFORMER, a general hyperparameter optimization framework based on Transformers [17]. Transformers have demonstrated excellent performance in many data tasks, ranging from natural language [18], images [19, 20], biological data [21, 22], code [23, 24], and control [25, 26]. Here, we investigate how to use a Transformer as a universal interface for modelling experimental data and learn HPO algorithms, as given a sufficient amount of data, a Transformer can potentially learn a more complex prior distribution than standard Bayesian Optimization (BO) with Gaussian Processes (GPs), especially as the Transformer possesses certain computational advantages over GPs for large datasets. Code: https://github.com/google-research/optformer. Google AI Blog: https:// ai.googleblog.com/2022/08/optformer-towards-universal.html. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
We introduce a serialization scheme to convert a combination of any metadata and an optimization trajectory into text, represented as a sequence of tokens, and formulate the HPO task as a sequence modeling problem. We adopt a supervised learning approach, by learning to predict parameters and hyperparameter response functions from offline tuning data (See Fig. 1). In order to further improve optimization performance, we augment the model by utilizing its own function prediction during inference (Section 4.3). Extensive experiments on both public and private datasets demonstrate the OPTFORMER’s competitive tuning and generalization abilities. In summary, our contributions are as follows: • We formulate, to the best of our knowledge, the first meta-learning HPO framework to learn both policy and function priors from data across different search spaces. • The OPTFORMER is capable of learning the behaviors of 7 diverse blackbox optimization algorithms relying on a broad class of methods (non-adaptive, evolutionary, and Bayesian). • Furthermore, the OPTFORMER learns the prior over objective functions and provides both accurate and well calibrated predictions, in many cases significantly surpassing GPs in log-predictive likelihood and expected calibration error (ECE) [27]. • Lastly, OPTFORMER policies augmented with model-based optimization, such as the use of Expected Improvement acquisition functions, are competitive HPO algorithms. To the best of our knowledge, this is the first time Transformers are augmented with acquisition functions for online adaptation. 2 Preliminaries 2.1 Meta-learning for hyperparameter optimization HPO aims to find a set of hyperparameters x from search space X to maximize a model performance metric, y = f(x), often referred to as a response function. Table 1 shows an example of HPO experimental data. Following the HPO nomenclature [2, 28], an experimental study consists of metadata (m) and a history of trials (h). The metadata contains arbitrary unstructured information, including but not limited to descriptions of the problem, optimization algorithm, names, types and value ranges of hyperparameters. The history after t trials, ht = (x1, y1, . . . ,xt, yt), contains a sequence of trials, each of which consists of a parameter suggestion x and function value y. The goal of the meta-learning approach for HPO is to learn the shared knowledge among the objective functions f from a dataset of multiple tuning experiments represented as studies and to obtain an optimal HPO algorithm for new hyperparameter tuning tasks from a similar distribution to those in the dataset. An HPO algorithm π maps the metadata and history to a distribution over hyperparameter suggestions, i.e. π(xt+1|m,ht). Using the terminology of offline RL [29], we refer to the algorithm used to generate the trajectories in a dataset as the behavior policy πb. We primarily consider search spaces X with a fixed number D of hyperparameters per task, and hence x = (x(1), . . . , x(D)), with each hyperparameter x(d) being of type DOUBLE, INTEGER, DISCRETE, or CATEGORICAL (see Appendix A.1 for details). More complex search spaces can be supported as discussed in Section 7. 2.2 Transformer model }} The Transformer model is an efficient attention-based neural network architecture for sequence modeling [17]. We adopt the T5 Transformer encoder-decoder architecture [30]. 
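For concreteness, a study as described in section 2.1 can be pictured as the following data structure. This is a hypothetical sketch for illustration only, not the schema used by Vizier or the OPTFORMER code.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Trial:
    """One trial: a parameter suggestion x and the observed value y = f(x)."""
    parameters: Dict[str, Any]   # e.g. {"learning_rate": 3e-4, "optimizer": "adam"}
    objective: float

@dataclass
class Study:
    """Metadata m plus the history h_t = (x_1, y_1, ..., x_t, y_t)."""
    metadata: Dict[str, Any]     # problem description, search space, algorithm name, ...
    trials: List[Trial] = field(default_factory=list)
```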
The encoder and decoder each consist of a stack of multi-head selfattention layers which construct pairwise interactions between positions, followed by position-wise feed-forward networks. The encoder converts a sequence of input token representations m, to a sequence of continuous embeddings, which is fed to the decoder to generate a sequence of output tokens h one element at a time (see Fig. 1). 3 Related work There has been a rich set of works in meta-learning and transfer learning by modifying specific core components of the BO pipeline, such as the acquisition function or the GP, in order to tackle BO’s myopic behavior, or obtaining more information from similar tasks. For instance, approaches include learning new acquisition functions [31], multi-task BO [7–13] and BO for transfer learning using contextual GPs [14–16]. [32] also studies the use of meta-BO for hyperparameter tuning tasks in machine learning. However, all of these works consider a fixed search space. A more radical meta-learning approach to non-differentiable optimization trains recurrent neural networks (RNNs) as neural optimizers from scratch. [33] first proposed training an RNN with gradient descent to optimize blackbox functions and hyperparameters while [34, 35] train RNNs using reinforcement learning (RL) to solve RL tasks. Unfortunately, prior works are limited to fixed search spaces and only use online generated data, constraining the training objectives to be cheaply computable. In this work, we wish to overcome the limitations of previous works by exploiting the Transformer architecture. Numerous works have demonstrated Transformers’ strong capabilities in flexible symbolic and numerical manipulation. On the symbolic side, Transformers have been shown able to manipulate symbolic mathematical expressions [36–38] and generate code [23, 24]. Furthermore, on the numerical side, Transformers have also been shown able to perform linear algebra computations [39], Bayesian Inference [40], and offline RL [25, 26, 41]. For AutoML specifically, [42] has demonstrated Transformers’ and analogous graph neural networks’ abilities to use dataset descriptions and metadata to generate classification and data preprocessing pipelines. However, to date, there has been little effort in attacking the full problem of hyperparameter tuning in the blackbox optimization setting. In this paper, the challenging task of learning algorithms from blackbox optimization trajectories can be seen as a significant extension of both symbolic and numerical manipulation. Since the underlying algorithm can be composed of multiple symbolic and mathematical operations with unbounded complexity, the model must infer potentially very complex behavior over long horizons. 4 Universal interface and model for hyperparameter optimization In this section, we provide a universal interface for modeling HPO studies with mixed textual and numerical information as a sequence of discrete tokens. We train our OPTFORMER as a generative model on a given dataset and explain how to use the OPTFORMER’s parameter and function prediction abilities to implement an HPO policy. 4.1 Study tokenization To generalize over HPO problems of different parameter sizes, types, and metadata, we propose to serialize the study as a one-dimensional textual sequence, also advocated in [26]. Unfortunately, a naive serialization approach, e.g. via JSON [43], will produce unnecessarily long sequences. 
To improve scalability, we compress the textual representation of metadata m by removing redundant phrases and punctuation (e.g., "parameter", quotes), encoding keywords (e.g., "name", "algorithm") and enumerating types (e.g., "DOUBLE") into single tokens. For the historical sequence h, we convert every DOUBLE and INTEGER parameter along with every function value into a single token, by normalizing and discretizing them into integers, with a quantization level of Q = 1000, i.e.

\bar{x} = \mathrm{int}[x_{\mathrm{norm}} \cdot Q], \quad \text{where } x_{\mathrm{norm}} = (x - x_{\min}) / (x_{\max} - x_{\min}). \quad (1)

The range of x is defined by the search space and the range of y is obtained from observed values in h. For other types, we use the index in their value set. The shortened text string is then converted to a sequence of tokens via the SentencePiece tokenizer [44] (see Table 2 for an example). Every trial is thus represented as a sequence of normalized and quantized tokens, [\bar{x}^{(1)}_t, \ldots, \bar{x}^{(D)}_t, ?, \bar{y}_t, "|"], where the token ? separates parameter and function values and "|" separates trials. See Appendix A.2 for further details on tokenization.

4.2 Model and training loss

After tokenization, the converted historical sequence is as follows:

\bar{h}_t = \left[ \bar{x}^{(1)}_1, \bar{x}^{(2)}_1, \ldots, \bar{x}^{(D)}_1, ?, \bar{y}_1, "|", \ldots, \bar{x}^{(1)}_t, \bar{x}^{(2)}_t, \ldots, \bar{x}^{(D)}_t, ?, \bar{y}_t \right]. \quad (2)

We can now apply a Transformer model to learn the conditional distribution of tokens in \bar{h} using the chain rule, given the metadata \bar{m}, as depicted in Fig. 1. The joint distribution is presented in Appendix D.1. Given a dataset D of hyperparameter optimization studies, we train the OPTFORMER by maximizing the weighted log-likelihood for each study (m, h) ∼ D:

L(\theta; m, h) = \sum_n w_n \log P_\theta(\bar{h}^{(n)} \mid \bar{m}, \bar{h}^{(1:n-1)}), \quad (3)

with w_n = 0 if \bar{h}^{(n)} ∈ {?, "|"} and w_n = 1 otherwise. That is, we mask out the separator tokens (?, "|") and predict parameter tokens \bar{x} and function tokens \bar{y} only. Note that \bar{h}^{(n)} denotes the n-th token, that is, the n-th element of the list in Equation (2), and \bar{h}^{(1:n-1)} denotes all tokens up to the (n−1)-th token. Further details and data augmentations are provided in Appendix D.2.

4.3 Inference and decoding

Parameter prediction: To decode the predicted parameter token \bar{x}^{(d)}_t back to its original parameter range, we truncate the output distribution to the vocabulary range corresponding to valid parameter values [0, Q) and reverse our tokenization procedure in Section 4.1. For a DOUBLE or INTEGER parameter x, we use a piecewise constant distribution:

p_\theta(x \mid \ldots) = Q \cdot P_\theta(\bar{x} \mid \ldots) / (x_{\max} - x_{\min}) \ \text{ if } x ∈ [x_{\min}, x_{\max}], \ \text{ otherwise } 0. \quad (4)

For all other parameter types, \bar{x} corresponds to the index of the set of feasible values. Putting these together, we may now sample parameter x_t from the model’s prior distribution and thus define an HPO policy:

\pi_{\mathrm{prior}}(x_t \mid m, h_{t-1}) = \prod_{d=1}^{D} p_\theta\!\left(x^{(d)}_t \mid m, h_{t-1}, x^{(1:d-1)}_t\right). \quad (5)

As we use a supervised learning loss, we expect \pi_{\mathrm{prior}} to approximate the behavior policy \pi_b. Note that traditional BO algorithms require running Bayesian inference and then conducting a global search in the hyperparameter space with an acquisition function. Thus the runtime complexity of making one hyperparameter suggestion is cubic in t for a typical GP-based BO method that performs ARD each iteration [45].
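A minimal sketch of the normalization and quantization of Eq. (1), together with the corresponding piecewise-constant decoding of Eq. (4), is given below. The clipping of the upper edge case and the within-bin sampling are implementation assumptions rather than details taken from the paper.

```python
import numpy as np

Q = 1000  # quantization level used in the paper

def tokenize_value(x, x_min, x_max, q=Q):
    """Eq. (1): map a DOUBLE/INTEGER value to an integer token in [0, Q)."""
    x_norm = (x - x_min) / (x_max - x_min)
    return int(np.clip(int(x_norm * q), 0, q - 1))   # keep x == x_max in the last bin

def detokenize_value(token, x_min, x_max, q=Q, rng=None):
    """Invert Eq. (1): sampling uniformly within the bin realizes the
    piecewise-constant density of Eq. (4); the bin center would be a
    deterministic alternative."""
    rng = np.random.default_rng() if rng is None else rng
    return x_min + (token + rng.uniform()) / q * (x_max - x_min)
```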
In contrast, generating one suggestion with the OPTFORMER consists of decoding D parameter tokens from an input sequence of (D + 3)t tokens, which are then parsed into the D parameter values, producing a runtime of O(D²t), i.e. linear in t, with proper caching. Function prediction: To decode the real-valued function y_t from the discrete distribution P_θ(ȳ_t | m̄, h̄_{t−1}, x̄_t), we construct the same piecewise constant distribution as in Eq. (4) with the range [y_min, y_max] used in tokenization. Note that the limited support of y will not be a concern for HPO when either the range is known or we set the range large enough compared to observed values. For more general use as a few-shot function prediction model, one could consider adopting the Riemann Distribution in [40], which supports an unbounded range. Augmented HPO policies with function prediction: At best, the learned policy π_prior can only perform as well as the original policy π_b when using behavioral cloning. However, we can take advantage of the model's simultaneous function prediction ability to improve the policy with model-based planning or offline RL techniques. While a comprehensive study of policy improvements for Transformers is out of the scope of this work, we consider here a simple yet effective policy improvement operator: sampling M = 100 candidate suggestions from π_prior and choosing the suggestion with the highest score defined by an acquisition function u(·), as follows: π_u(x_t | m, h_{t−1}) = argmax_{x^(i), i=1,...,M} u(p_θ(· | m, h_{t−1}, x^(i))), with x^(i) i.i.d. ∼ π_prior(x | m, h_{t−1}). (6) Common acquisition functions include Expected Improvement (EI), Probability of Improvement (PI), Upper Confidence Bound (UCB), and Thompson Sampling; see for example [46]. At a high level, this approach to combining imitated policies with function prediction is reminiscent of the idea behind the offline RL approach of BCQ [47]. Because we apply a linear mapping from the original y value to the quantized value ȳ before discretization, we can simply define the acquisition functions on the discrete distribution P_θ(ȳ | m̄, h̄_{t−1}, x̄_t) as follows: u_EI(x | ȳ*) = E_{P_θ(ȳ | m, h_{t−1}, x)}[max(ȳ − ȳ*, 0)], (7) u_UCB(x | α) = Quantile(P_θ(ȳ | m, h_{t−1}, x), α), (8) u_PI(x | ȳ*) = Σ_{ȳ > ȳ*} P_θ(ȳ | m, h_{t−1}, x), (9) u_TS(x) = ȳ, with ȳ ∼ P_θ(ȳ | m, h_{t−1}, x), (10) where ȳ* = max_{τ ≤ t−1} ȳ_τ is the threshold used to measure improvement in EI and PI. We define the UCB acquisition function with a quantile parameter α. Our TS acquisition is defined as a sampled function value at a given location from the marginal predictive distribution. It is inspired by the traditional Thompson Sampling method [45] but differs in that the correlation between different locations is ignored. 5 Data Training the OPTFORMER requires HPO studies with optimization trajectories. The most natural dataset we possess is the entire Google Vizier [2] database, one of the world's largest collections of real-world hyperparameter tuning studies, which we denote as RealWorldData. There are around 750K studies, each with on average 300 trials, covering a vast class of production and machine learning applications at Google, ranging from vision, speech, and NLP to robotics, and constituting one of the most representative distributions of HPO tasks for machine learning models in practice. These studies were generated with a mixture of non-adaptive, evolutionary, and BO algorithms. However, as the dataset does not contain sufficient algorithm information, we have to treat the corresponding behavior policy as a randomly mixed algorithm π_b.
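Returning briefly to Section 4.3: because the acquisition functions in Eqs. (7)–(10) act on a discrete distribution over the Q quantized function values, they reduce to simple vector operations. Below is a small sketch of the sample-and-rank operator of Eq. (6) with a stand-in predictor; all names and the stub model are ours and purely illustrative, not the released implementation:

import numpy as np

Q = 1000
rng = np.random.default_rng(0)

def expected_improvement(probs, ybar_star):
    # Eq. (7): E[max(ybar - ybar*, 0)] under the discrete distribution `probs`.
    ybar = np.arange(len(probs))
    return float(np.sum(probs * np.maximum(ybar - ybar_star, 0.0)))

def prob_improvement(probs, ybar_star):
    # Eq. (9): total probability mass strictly above the incumbent token value.
    return float(np.sum(probs[ybar_star + 1:]))

def ucb(probs, alpha=0.9):
    # Eq. (8): the alpha-quantile of the discrete distribution.
    return int(np.searchsorted(np.cumsum(probs), alpha))

def thompson(probs):
    # Eq. (10): one sample from the marginal predictive distribution.
    return int(rng.choice(len(probs), p=probs))

def stub_predict_probs(x):
    # Stand-in for P_theta(ybar | m, h, x); a real model conditions on the token sequence.
    logits = rng.normal(size=Q)
    return np.exp(logits) / np.exp(logits).sum()

def stub_sample_prior():
    # Stand-in for one draw from pi_prior; here just a random point in [0, 1]^2.
    return rng.uniform(size=2)

def suggest(M=100, ybar_star=910):
    # Eq. (6): sample M candidates from the prior policy, keep the best EI score.
    candidates = [stub_sample_prior() for _ in range(M)]
    scores = [expected_improvement(stub_predict_probs(x), ybar_star) for x in candidates]
    return candidates[int(np.argmax(scores))]

print(suggest())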
In addition, we create two new datasets based on public benchmarks. HPO-B is the largest public benchmark for HPO containing about 1.9K tuning tasks, most of which use one of 16 shared search spaces. In the continuous evaluation setting, it fits an XGBoost model to the trial data of every tuning task as the objective function. For further control over specific function dimensions and properties, we use the blackbox optimization benchmark BBOB [48], consisting of 24 types of synthetic functions with customizable properties (dimension sizes, rotations, shifts, discretizations, noise types) we randomize over. For each of the two public benchmarks (HPO-B and BBOB), we apply a fixed set of 7 HPO algorithms to generate a dataset of optimization trajectories. In contrast to RealWorldData, we specify the algorithm name in the metadata m as part of the conditioning input for our model. The controlled algorithms used are: (1) Grid Search, (2) Shuffled Grid Search, (3) Random Search, (4) Regularized Evolution [49], (5) Hill-Climbing, (6) Eagle Strategy [50], and (7) Vizier’s GP-UCB [2]. Appendix B contains detailed explanations of the algorithms. 6 Experiments We train a single Transformer model with 250M parameters on the union of the three datasets described above, RealWorldData, HPO-B, and BBOB (hyperparameter details in Appendix D.2). Each dataset contains a corresponding “test” set of functions, either using synthetic functions (BBOB) or fitting a machine learning model to obtain the objective (RealWorldData, HPO-B). We evaluate mainly on the two natural HPO benchmarks, RealWorldData and HPO-B. The train/test subsets of RealWorldData are split temporally to avoid information leak (see Appendix C for details). To aggregate results across functions with different output scaling, we normalize all the test functions. This is standard practice in the literature [2, 5, 51–54]. We define our performance metric at trial t as the best-so-far normalized function value maxi∈{1:t}(yi − yrand)/(ymax − yrand), where yrand is the median of function values randomly sampled in the search space to be robust to outliers, and ymax is the maximum, if known, or best value found by any algorithm. For the HPO-B benchmark, we use the recommended bounds provided in [5]. We also consider other metrics when comparing different algorithms in Appendix E.3, including the performance profile and average ranking. We find our results are consistent over different metrics. Because the OPTFORMER is trained to predict the conditional distributions of parameter and function values, we would like to answer the following questions when evaluating on unseen test problems: 1. Can the OPTFORMER learn to imitate multiple HPO algorithms with one model? (Section 6.1) 2. Can the OPTFORMER learn a good prior over hyperparameter response functions? (Section 6.2) 3. Is the OPTFORMER a competitive approach for HPO? (Section 6.3) 6.1 Imitating HPO policies We first evaluate how well the OPTFORMER can learn the conditional distribution of parameter suggestions given by the behavior policies in the dataset, and how well it can imitate multiple algorithms. As the algorithm’s name is contained in the metadata m, we can modify the behaviour of the policy πprior(xt+1|m,ht) simply by altering this variable. Fig. 2a compares two different policies to the OPTFORMER, when it is conditioned on the corresponding policy name. We observe a good match between the imitated algorithms and the OPTFORMER (additional algorithms are shown in Appendix E.1). In Fig. 
2b we run target policies on the BBOB dataset's test functions and compare the optimization trajectories of the algorithms and the OPTFORMER. In Fig. 2c we compare the average and standard deviation of the best normalized function values at trial 100. Our model imitates most algorithms very accurately in both the mean and variance, except for the most complicated algorithm, Vizier, where π_prior is slightly worse on the LUNACEK benchmark. We expand on this in Appendix E.1. Because Vizier is the best performing HPO algorithm among all considered, the OPTFORMER will imitate Vizier faithfully, although not perfectly, in the following experiments. 6.2 Learning priors for hyperparameter response functions In this section, we assess the OPTFORMER's ability to learn the conditional distribution of the function value as a few-shot function regressor. Specifically, for every function in each test dataset, we repeatedly sample up to 200 random trials (x_1, y_1, ..., x_t, y_t), t ≤ 200, and predict the conditional distribution p(y_t | x_1, y_1, ..., x_t). We compare with a GP model with output warping — details provided in Appendix B. We report the log-predictive likelihood log p(y_t | x_t, ...) in Table 4. As uncertainty estimation is important for HPO, we also evaluate how well the function predictive distribution is calibrated. When a predictive distribution p_θ(y | ...) matches the true distribution, the estimated CDF F(y) = ∫_{−∞}^{y} p_θ(y′ | ...) dy′ will be uniformly distributed. In Fig. 3, we plot the cumulative histogram of F(y) on the RealWorldData test set and check the deviation from the diagonal line to assess goodness-of-fit, as proposed by Rosenblatt [55]. The OPTFORMER has a smaller deviation than the GP almost across the entire range. We also compare calibration performance using the expected calibration error (ECE) [27]. Readers are referred to [27] and Appendix E.2 for a detailed explanation of ECE. We observe from Table 4 that the OPTFORMER achieves better predictive likelihood and ECE than the GP on both datasets.
Table 4: Log-predictive likelihood (with one standard error; higher is better, ↑) and ECE (in percent; lower is better, ↓) on the RealWorldData and HPO-B test sets.
Model     | Log-pred. likelihood ↑ RealWorldData | Log-pred. likelihood ↑ HPO-B | ECE % ↓ RealWorldData | ECE % ↓ HPO-B
GP        | 0.83 (0.06) | 4.03 (0.04) | 5.34 (0.06) | 2.39 (0.05)
OPTFORMER | 2.12 (0.05) | 6.16 (0.04) | 1.11 (0.02) | 1.89 (0.01)
Figure 3: Cumulative histogram of predicted CDF(y) on the RealWorldData test set (x-axis: CDF level F; y-axis: percentage of data with CDF(y) ≤ F; curves shown for the GP and the OPTFORMER).
6.3 Augmenting a prior policy with function prediction We evaluate the OPTFORMER as a hyperparameter optimization algorithm on two benchmarks, RealWorldData and HPO-B. We compare our prior policy, the OPTFORMER, and an augmented policy with Expected Improvement, the OPTFORMER (EI), against standard HPO baselines, including Random Search, our implementation of GP-UCB, and the well-tuned Vizier service. For HPO-B, we also include the GP (not to be confused with our GP-UCB) and DGP (GP with deep kernel) baseline results provided by the original paper [5]. Additionally, we include three recent transfer-learning methods based on multi-task GP models: ABLR [12, 56], FSBO [7], and HyperBO [57, 58] (implementation details in Appendix B). Please note that all of these transfer-learning methods require learning GPs on multiple tasks sharing the same search space.
Therefore, none of them apply to the RealWorldData benchmark where every study has its own search space. We show the trajectory of the best normalized function value averaged over all functions from each benchmark in Fig. 4. While the prior policy returned by the OPTFORMER does not perform as well as Vizier, it is comparable or slightly better than our GP-UCB baseline and ABLR. The most significant improvement is achieved when we augment our prior policy with the Expected Improvement acquisition function. The resulting OPTFORMER (EI) outperforms all baselines across the board on both benchmarks. This illustrates that the OPTFORMER is able to learn the distribution of functions in the meta-training split and transfers to the meta-testing split. It is worth noting that to run 100 trials for about half of the test functions, the required history token sequence is longer than the 1024-token length used in training, with the maximum length about twice the training horizon. The superior performance of the OPTFORMER (EI) thus demonstrates its good generalization performance beyond the optimization horizon it is trained for. 6.4 Ablations We provide further ablations on three important components for our policy: Training dataset. To understand the impact of the training datasets on the OPTFORMER, we train three variants on individual datasets (OPTFORMER-"R","H","B" respectively for RealWorldData, HPO-B, BBOB) and study their transfer learning performances on HPO-B. Fig. 5a verifies that training with in-domain data ("H") gives better performance than training over the more diverse across-domain RealWorldData HPO dataset ("R"), which is better than training over the synthetic BBOB data ("B"). Nonetheless, training on RealWorldData is enough to give comparable performance to the best transfer learning baseline at the end of 100 trials. Lastly, training on all of the datasets (OPTFORMER) gives a further advantage over OPTFORMER-H. This suggests that more data does not hurt the model’s performance but rather may improve it, even if the extra data is out-of-domain. Meta-data m. We have demonstrated how the OPTFORMER’s behavior can be controlled by the algorithm name in metadata m in Section 6.1. Here we study whether the OPTFORMER learns to depend on other meta information. At inference time, we provide minimum information in m (OPTFORMER-min) by excluding all textual information and parameter value ranges. We only keep necessary information such as parameter types and algorithm names. Fig. 5b shows that the prior policy of OPTFORMER-min performs comparably with the OPTFORMER, partly due to the use of data augmentation (see Appendix D.2). The augmented policy OPTFORMER-min (EI) (dashed orange) improves upon the prior policy but is significantly worse than the full model, suggesting that the missing metadata impacts the model’s predictions on function values. Prior policy. Section 6.3 demonstrated the benefit of adding an acquisition function to the prior policy. A natural question is whether a good prior policy is needed at all. In Fig. 5c, we replace the prior policy in the OPTFORMER (EI) with random search (Random Search (EI), dashed blue line). While adding Expected Improvement still improves this random search policy’s performance, the best method requires both a good prior policy and the acquisition function. Choice of acquisition function. In Fig. 
5d, we compare Expected Improvement (EI) with Thompson Sampling (TS), Probability of Improvement (PI), and Upper Confidence Bound (UCB) with a confidence level of 0.9. We observe that the prior policy is improved by all the acquisition functions. In particular, OPTFORMER (EI) is the best among all the choices, though the differences between acquisition functions are relatively small compared to their advantage over the other baselines and the OPTFORMER prior policy. We provide additional analysis with results on both the RealWorldData and HPO-B datasets, as well as other evaluation metrics, in Appendix E.4. 7 Limitations and future extensions We list a few limitations of this work and discuss some potential extensions. (1) We did not consider parameters that do not always apply or are subject to dynamic constraints depending on other parameter values. Such parameters are common in AutoML [59] and NAS applications [60]. Our work can be extended to support these applications by providing the conditional specifications as text in metadata m. (2) We also considered only sequential optimization with a batch size of 1. To support parallel suggestions, one could apply random masking to input function value observations to simulate scenarios with parallel pending trials [33]. (3) While we trained the Transformer to clone the behavior policy offline, there is an extensive literature on offline RL [29] that could be applied here [25, 47, 61–64]. One could also consider meta-training acquisition functions as in [31] within the same model, and online fine-tuning as in [7, 41]. (4) We considered a single objective function, though multiple objectives can easily be included by outputting multiple function tokens in a trial. (5) The maximum sequence length is limited by the quadratic memory requirement of a Transformer, which could be mitigated with more scalable architecture variants such as Performer [65]. 8 Conclusion We presented a first step towards learning a universal Transformer model for hyperparameter optimization from large-scale datasets containing tuning experiments with vastly different search spaces and experiment descriptions. By training on a diverse set of synthetic and real-world tuning trajectories, we demonstrated the capacity of a single Transformer model to imitate 7 fundamentally different HPO policies, learn to make well-calibrated few-shot function predictions, and provide competitive optimization performance on unseen test functions, comparable with existing, well-established GP-based baselines. Many extensions are readily conceivable for future exploration. Acknowledgments We would like to thank Chris Dyer, Luke Metz, Kevin Murphy, Yannis Assael, and Esteban Real for providing valuable feedback during their reviews of this paper. We further thank Sebastian Pineda Arango for technical discussions on the HPO-B benchmark and Christof Angermueller on biological benchmarks. In addition, we thank Daniel Golovin, Daiyi Peng, Yingjie Miao, Jack Parker-Holder, Jie Tan, Lucio Dery, and Aleksandra Faust for multiple useful conversations.
1. What is the focus and contribution of the paper regarding Transformer and hyper-parameter optimization? 2. What are the strengths and weaknesses of the proposed approach, particularly in its comprehensiveness and novelty? 3. Do you have any concerns or questions about the evaluation metric used in the paper? 4. How do the authors justify their claims regarding the performance of OPT-Former compared to GP-UCB? 5. Are there any limitations or potential improvements regarding the proposed method's ability to capture the landscape of black-box optimization solvers?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors propose using a Transformer to imitate hyper-parameter optimization algorithms. The authors claim this is the first work in the area, and claim the proposed method exceeds several classic HPO methods. Strengths And Weaknesses Strengths: very comprehensive evaluations; a seemingly reasonable idea. Weaknesses: limited novelty: perhaps applying a transformer to HPO could be counted as a novel point, but I feel this is not enough. The paper is difficult to read; please try to improve the readability. Many places in the paper try to impress the readers with sophisticated terms even for very simple concepts. Questions How do you justify that max_{i∈{1:t}} (y_i − y_rand)/(y_max − y_rand) is a good evaluation metric? Why don't you use the y_i values and plot them as a range? Fig. 4 makes a few strong claims, especially that OPT-Former performs better than GP-UCB. I'm trying to understand the underlying causes. Here is my guess: OPT-Former is trained on datasets that potentially contain tasks with distributions similar to those tested in Fig. 4. The advantage of GP-UCB is to start without any priors and gradually approximate the underlying function contours by sampling. Without any prior data, I'm surprised that OPT-Former can perform better than GP-UCB. I'd be happy to see if I'm wrong, and it would be compelling if the authors could provide an anonymous link to the code for a quick comparison. (key factor for me to improve the score) The methods used in this paper do not fully capture the landscape of black-box optimization solvers today. The authors may find the following repos to be useful (feel free to use at your discretion, it is just a suggestion): a. https://botorch.org/ b. https://github.com/facebookresearch/nevergrad c. https://github.com/facebookresearch/LaMCTS d. https://github.com/uber-research/TuRBO These repos encapsulate several exciting BBO algorithms today. Limitations No limitations found.
NIPS
Title Towards Learning Universal Hyperparameter Optimizers with Transformers Abstract Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OPTFORMER, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google’s Vizier database, one of the world’s largest HPO datasets. Our extensive experiments demonstrate that the OPTFORMER can simultaneously imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OPTFORMER also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the path to future extensions for training a Transformer-based model as a general HPO optimizer. 1 Introduction The emergence of public machine learning data platforms such as OpenML [1] and hyperparameter optimization (HPO) services such as Google Vizier [2], Amazon SageMaker [3] and Microsoft Azure [4] have made large-scale datasets containing hyperparameter evaluations accessible. For our use-case in this paper, Google Vizier is the de-facto HPO service across Google, having optimized some of Google’s largest products and research efforts, and contains a collection of valuable tuning data within the last 5 years. While there is growing interest in leveraging such data to meta-learn hyperparameter optimization algorithms [5–8], dealing with large datasets consisting of experimental trials in the wild can be challenging, due to large variations in HPO problems and their associated text metadata (e.g. shown later in Table 1). Thus, most meta and transfer-learning HPO methods [7–16] consider a restrictive setting where all tasks must share the same set of hyperparameters so that the input data can be represented as fixed-sized vectors. Consequently, such methods only exploit a small portion of the available data to learn priors. This drawback is more severe for large datasets which contain significant amounts of useful information. To overcome these limitations, we introduce the OPTFORMER, a general hyperparameter optimization framework based on Transformers [17]. Transformers have demonstrated excellent performance in many data tasks, ranging from natural language [18], images [19, 20], biological data [21, 22], code [23, 24], and control [25, 26]. Here, we investigate how to use a Transformer as a universal interface for modelling experimental data and learn HPO algorithms, as given a sufficient amount of data, a Transformer can potentially learn a more complex prior distribution than standard Bayesian Optimization (BO) with Gaussian Processes (GPs), especially as the Transformer possesses certain computational advantages over GPs for large datasets. Code: https://github.com/google-research/optformer. Google AI Blog: https:// ai.googleblog.com/2022/08/optformer-towards-universal.html. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
We introduce a serialization scheme to convert a combination of any metadata and an optimization trajectory into text, represented as a sequence of tokens, and formulate the HPO task as a sequence modeling problem. We adopt a supervised learning approach, by learning to predict parameters and hyperparameter response functions from offline tuning data (See Fig. 1). In order to further improve optimization performance, we augment the model by utilizing its own function prediction during inference (Section 4.3). Extensive experiments on both public and private datasets demonstrate the OPTFORMER’s competitive tuning and generalization abilities. In summary, our contributions are as follows: • We formulate, to the best of our knowledge, the first meta-learning HPO framework to learn both policy and function priors from data across different search spaces. • The OPTFORMER is capable of learning the behaviors of 7 diverse blackbox optimization algorithms relying on a broad class of methods (non-adaptive, evolutionary, and Bayesian). • Furthermore, the OPTFORMER learns the prior over objective functions and provides both accurate and well calibrated predictions, in many cases significantly surpassing GPs in log-predictive likelihood and expected calibration error (ECE) [27]. • Lastly, OPTFORMER policies augmented with model-based optimization, such as the use of Expected Improvement acquisition functions, are competitive HPO algorithms. To the best of our knowledge, this is the first time Transformers are augmented with acquisition functions for online adaptation. 2 Preliminaries 2.1 Meta-learning for hyperparameter optimization HPO aims to find a set of hyperparameters x from search space X to maximize a model performance metric, y = f(x), often referred to as a response function. Table 1 shows an example of HPO experimental data. Following the HPO nomenclature [2, 28], an experimental study consists of metadata (m) and a history of trials (h). The metadata contains arbitrary unstructured information, including but not limited to descriptions of the problem, optimization algorithm, names, types and value ranges of hyperparameters. The history after t trials, ht = (x1, y1, . . . ,xt, yt), contains a sequence of trials, each of which consists of a parameter suggestion x and function value y. The goal of the meta-learning approach for HPO is to learn the shared knowledge among the objective functions f from a dataset of multiple tuning experiments represented as studies and to obtain an optimal HPO algorithm for new hyperparameter tuning tasks from a similar distribution to those in the dataset. An HPO algorithm π maps the metadata and history to a distribution over hyperparameter suggestions, i.e. π(xt+1|m,ht). Using the terminology of offline RL [29], we refer to the algorithm used to generate the trajectories in a dataset as the behavior policy πb. We primarily consider search spaces X with a fixed number D of hyperparameters per task, and hence x = (x(1), . . . , x(D)), with each hyperparameter x(d) being of type DOUBLE, INTEGER, DISCRETE, or CATEGORICAL (see Appendix A.1 for details). More complex search spaces can be supported as discussed in Section 7. 2.2 Transformer model }} The Transformer model is an efficient attention-based neural network architecture for sequence modeling [17]. We adopt the T5 Transformer encoder-decoder architecture [30]. 
The encoder and decoder each consist of a stack of multi-head selfattention layers which construct pairwise interactions between positions, followed by position-wise feed-forward networks. The encoder converts a sequence of input token representations m, to a sequence of continuous embeddings, which is fed to the decoder to generate a sequence of output tokens h one element at a time (see Fig. 1). 3 Related work There has been a rich set of works in meta-learning and transfer learning by modifying specific core components of the BO pipeline, such as the acquisition function or the GP, in order to tackle BO’s myopic behavior, or obtaining more information from similar tasks. For instance, approaches include learning new acquisition functions [31], multi-task BO [7–13] and BO for transfer learning using contextual GPs [14–16]. [32] also studies the use of meta-BO for hyperparameter tuning tasks in machine learning. However, all of these works consider a fixed search space. A more radical meta-learning approach to non-differentiable optimization trains recurrent neural networks (RNNs) as neural optimizers from scratch. [33] first proposed training an RNN with gradient descent to optimize blackbox functions and hyperparameters while [34, 35] train RNNs using reinforcement learning (RL) to solve RL tasks. Unfortunately, prior works are limited to fixed search spaces and only use online generated data, constraining the training objectives to be cheaply computable. In this work, we wish to overcome the limitations of previous works by exploiting the Transformer architecture. Numerous works have demonstrated Transformers’ strong capabilities in flexible symbolic and numerical manipulation. On the symbolic side, Transformers have been shown able to manipulate symbolic mathematical expressions [36–38] and generate code [23, 24]. Furthermore, on the numerical side, Transformers have also been shown able to perform linear algebra computations [39], Bayesian Inference [40], and offline RL [25, 26, 41]. For AutoML specifically, [42] has demonstrated Transformers’ and analogous graph neural networks’ abilities to use dataset descriptions and metadata to generate classification and data preprocessing pipelines. However, to date, there has been little effort in attacking the full problem of hyperparameter tuning in the blackbox optimization setting. In this paper, the challenging task of learning algorithms from blackbox optimization trajectories can be seen as a significant extension of both symbolic and numerical manipulation. Since the underlying algorithm can be composed of multiple symbolic and mathematical operations with unbounded complexity, the model must infer potentially very complex behavior over long horizons. 4 Universal interface and model for hyperparameter optimization In this section, we provide a universal interface for modeling HPO studies with mixed textual and numerical information as a sequence of discrete tokens. We train our OPTFORMER as a generative model on a given dataset and explain how to use the OPTFORMER’s parameter and function prediction abilities to implement an HPO policy. 4.1 Study tokenization To generalize over HPO problems of different parameter sizes, types, and metadata, we propose to serialize the study as a one-dimensional textual sequence, also advocated in [26]. Unfortunately, a naive serialization approach, e.g. via JSON [43], will produce unnecessarily long sequences. 
To improve scalability, we compress the textual representation of metadata m by removing redundant phrases and punctuation (e.g., "parameter", quotes) and encoding keywords (e.g., "name", "algorithm") and enumerating types (e.g. "DOUBLE") into single tokens. For the historical sequence h, we convert every DOUBLE and INTEGER parameter along with every function value into a single token, by normalizing and discretizing them into integers, with an quantization level of Q = 1000; e.g. x̄ = int[xnorm ·Q], where xnorm = (x− xmin)/(xmax − xmin). (1) The range of x is defined by the search space and the range of y is obtained from observed values in h. For other types, we use the index in their value set. The shortened text string is then converted to a sequence of tokens via the SentencePiece tokenizer [44] (see Table 2 for an example). Every trial is represented by text, which is represented as a sequence of normalized and quantized tokens, [ x̄ (1) t , . . . , x̄ (D) t , ?, ȳt, "|" ] , where the token ? separates parameter and function values and "|" separates trials. See Appendix A.2 for further details on tokenization. 4.2 Model and training loss After tokenization, the converted historical sequence is as follows: h̄t = [ x̄ (1) 1 , x̄ (2) 1 , . . . , x̄ (D) 1 , ?, ȳ1, "|", . . . , x̄ (1) t , x̄ (2) t , . . . , x̄ (D) t , ?, ȳt ] . (2) We can now apply a Transformer model to learn the conditional distribution of tokens in h̄ using the chain rule, given the metadata m̄, as depicted in Fig. 1. The joint distribution is presented in Appendix D.1. Given a dataset D of hyperparameter optimization studies, we train the OPTFORMER by maximizing the weighted log-likelihood for each study (m,h) ∼ D: L(θ;m,h) = ∑ n wn logPθ(h̄ (n)|m̄, h̄(1:n−1)), (3) with wn = 0 if h̄(n) ∈ {?, "|"} and wn = 1 otherwise. That is, we mask out the separator tokens (?, "|") and predict parameter x̄ and function tokens ȳ only. Note that h̄(n) denotes the n-th token, that is the n-th element of the list in Equation (2), and h̄(1:n−1) denotes all tokens up to the (n− 1)-th token. Further details and data augmentations are provided in Appendix D.2. 4.3 Inference and decoding Parameter prediction: To decode the predicted parameter token x̄(d)t back to its original parameter range, we truncate the output distribution to the vocabulary range corresponding to valid parameter values [0, Q) and reverse our tokenization procedure in Section 4.1. For a DOUBLE or INTEGER parameter x, we use a piecewise constant distribution: pθ(x| . . . ) = Q · Pθ(x̄| . . . )/(xmax − xmin), if x ∈ [xmin, xmax], otherwise 0 . (4) For all other parameter types, x̄ corresponds to the index of the set of feasible values. Putting these together, we may now sample parameter xt from the model’s prior distribution and thus define an HPO policy: πprior(xt|m,ht−1) = D∏ d=1 pθ(x (d) t |m,ht−1,x (1:d−1) t ). (5) As we use a supervised learning loss, we expect πprior to approximate the behavior policy πb. Note that traditional BO algorithms require running Bayesian inference and then conducting a global search in the hyperparameter space with an acquisition function. Thus the runtime complexity of making one hyperparameter suggestion is cubic in t for a typical GP-based BO method that performs ARD each iteration [45]. 
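One way to picture the separator masking in Eq. (3) — a short sketch with made-up token ids, not the actual training code — before the runtime comparison resumes below:

import numpy as np

SEPARATORS = {1001, 1002}  # hypothetical ids for the "?" and "|" tokens

def masked_log_likelihood(token_ids, log_probs):
    # Eq. (3): sum of log P(h_n | m, h_{<n}) with weight 0 on separator tokens.
    weights = np.array([0.0 if t in SEPARATORS else 1.0 for t in token_ids])
    return float(np.sum(weights * np.asarray(log_probs)))

# Toy trial "x1 x2 ? y |" with per-token log-probs from a (hypothetical) model.
tokens = [29, 200, 1001, 910, 1002]
log_p = [-2.3, -1.7, -0.1, -3.0, -0.1]
print(masked_log_likelihood(tokens, log_p))  # separator tokens contribute nothing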
In contrast, generating one suggestion by the OPTFORMER consists of decoding D parameter tokens with an input sequence of (D + 3)t tokens, which are then parsed into the D parameter values, producing a runtime of O(D2t) linear in t, with proper caching. Function prediction: To decode the real-valued function yt from the discrete distribution Pθ(ȳt|m̄, h̄t−1, x̄t), we construct the same piecewise constant distribution as in Eq. (4) with the range [ymin, ymax] used in tokenization. Note that the limited support of y will not be a concern for HPO when either the range is known or we set the range large enough compared to observed values. For more general use as a few-shot function prediction model, one could consider adopting the Riemann Distribution in [40], which supports an unbounded range. Augmented HPO policies with function prediction: At best, the learned policy πprior can only perform as well as the original policy πb when using behavioral cloning. However, we can take advantage of the model’s simultaneous function prediction ability to improve the policy with modelbased planning or offline RL techniques. While a comprehensive study of policy improvements for Transformers is out of the scope of this work, we consider here a simple yet effective policy improvement operator: sampling M = 100 candidate suggestions from πprior and choosing the suggestion with the highest score defined by an acquisition function u(·) as follows: πu(xt|m,ht−1) = argmax {x(i)}Mi=1 u(pθ(·|m,ht−1,x(i))), with x(i) i.i.d.∼ πprior(x|m,ht−1). (6) Common acquisition functions include Expected Improvement (EI), Probability of Improvement (PI), Upper Confidence Bound (UCB), and Thompson Sampling, see for example [46]. At a high level, this approach to combining imitated policies with function prediction is reminiscent of the idea behind the offline RL approach of BCQ [47]. Because we apply a linear mapping from the original y value to the quantized value ȳ before discretization, we can simply define the acquisition functions on the discrete distribution Pθ(ȳ|m̄, h̄t−1, x̄t) as follows: uEI(x|ȳ∗) = EPθ(ȳ|m,ht−1,x) [max(ȳ − ȳ ∗, 0)] , (7) uUCB(x|α) = Quantile(Pθ(ȳ|m,ht−1,xt), α) , (8) uPI(x|ȳ∗) = ∑ ȳ>ȳ∗ Pθ(ȳ|m,ht−1,x) , (9) uTS(x) = ȳ, with ȳ ∼ Pθ(ȳ|m,ht−1,xt) , (10) where ȳ∗ = maxτ≤t−1 ȳτ in EI and PI is the threshold to measure improvement. We define the UCB acquisition function with a quantile parameter α. Our TS acquisition is defined as a sampled function value at a given location from the marginal predictive distribution. It is inspired by the traditional Thompson Sampling method [45] but different in that the correlation between different locations is ignored. 5 Data Training the OPTFORMER requires HPO studies with optimization trajectories. The most natural dataset we possess is the entire Google Vizier [2] database, one of the world’s largest collections of real world hyperparameter tuning studies, which we denote as RealWorldData. There are around 750K studies, each with on average 300 trials, covering a vast class of production and machine learning applications at Google, ranging from vision, speech, NLP and robotics, and representing one of the most representative distributions of HPO tasks for machine learning models in practice. These studies were generated with a mixture of non-adaptive, evolutionary, and BO algorithms. However, as the dataset does not contain sufficient algorithm information, we have to treat the corresponding behavior policy as a randomly mixed algorithm πb. 
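Returning to the function-prediction decoding above: turning the discrete distribution over ȳ into a density over [y_min, y_max] is a direct application of Eq. (4). The sketch below (our own, with illustrative names and a toy uniform distribution) computes a predictive mean and a quantile from it:

import numpy as np

Q = 1000

def decode_function_distribution(probs, y_min, y_max):
    # Piecewise-constant density over [y_min, y_max] built from the Q-bin distribution.
    edges = np.linspace(y_min, y_max, Q + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mean = float(np.sum(probs * centres))
    cdf = np.cumsum(probs)
    def quantile(alpha):
        # Smallest bin centre whose cumulative mass reaches alpha.
        return float(centres[np.searchsorted(cdf, alpha)])
    return mean, quantile

probs = np.full(Q, 1.0 / Q)  # uniform toy distribution over the Q bins
mean, quantile = decode_function_distribution(probs, y_min=0.0, y_max=1.0)
print(round(mean, 3), round(quantile(0.9), 3))  # roughly 0.5 and 0.9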
In addition, we create two new datasets based on public benchmarks. HPO-B is the largest public benchmark for HPO containing about 1.9K tuning tasks, most of which use one of 16 shared search spaces. In the continuous evaluation setting, it fits an XGBoost model to the trial data of every tuning task as the objective function. For further control over specific function dimensions and properties, we use the blackbox optimization benchmark BBOB [48], consisting of 24 types of synthetic functions with customizable properties (dimension sizes, rotations, shifts, discretizations, noise types) we randomize over. For each of the two public benchmarks (HPO-B and BBOB), we apply a fixed set of 7 HPO algorithms to generate a dataset of optimization trajectories. In contrast to RealWorldData, we specify the algorithm name in the metadata m as part of the conditioning input for our model. The controlled algorithms used are: (1) Grid Search, (2) Shuffled Grid Search, (3) Random Search, (4) Regularized Evolution [49], (5) Hill-Climbing, (6) Eagle Strategy [50], and (7) Vizier’s GP-UCB [2]. Appendix B contains detailed explanations of the algorithms. 6 Experiments We train a single Transformer model with 250M parameters on the union of the three datasets described above, RealWorldData, HPO-B, and BBOB (hyperparameter details in Appendix D.2). Each dataset contains a corresponding “test” set of functions, either using synthetic functions (BBOB) or fitting a machine learning model to obtain the objective (RealWorldData, HPO-B). We evaluate mainly on the two natural HPO benchmarks, RealWorldData and HPO-B. The train/test subsets of RealWorldData are split temporally to avoid information leak (see Appendix C for details). To aggregate results across functions with different output scaling, we normalize all the test functions. This is standard practice in the literature [2, 5, 51–54]. We define our performance metric at trial t as the best-so-far normalized function value maxi∈{1:t}(yi − yrand)/(ymax − yrand), where yrand is the median of function values randomly sampled in the search space to be robust to outliers, and ymax is the maximum, if known, or best value found by any algorithm. For the HPO-B benchmark, we use the recommended bounds provided in [5]. We also consider other metrics when comparing different algorithms in Appendix E.3, including the performance profile and average ranking. We find our results are consistent over different metrics. Because the OPTFORMER is trained to predict the conditional distributions of parameter and function values, we would like to answer the following questions when evaluating on unseen test problems: 1. Can the OPTFORMER learn to imitate multiple HPO algorithms with one model? (Section 6.1) 2. Can the OPTFORMER learn a good prior over hyperparameter response functions? (Section 6.2) 3. Is the OPTFORMER a competitive approach for HPO? (Section 6.3) 6.1 Imitating HPO policies We first evaluate how well the OPTFORMER can learn the conditional distribution of parameter suggestions given by the behavior policies in the dataset, and how well it can imitate multiple algorithms. As the algorithm’s name is contained in the metadata m, we can modify the behaviour of the policy πprior(xt+1|m,ht) simply by altering this variable. Fig. 2a compares two different policies to the OPTFORMER, when it is conditioned on the corresponding policy name. We observe a good match between the imitated algorithms and the OPTFORMER (additional algorithms are shown in Appendix E.1). In Fig. 
2b we run target policies on the BBOB dataset's test functions and compare the optimization trajectories of the algorithms and the OPTFORMER. In Fig. 2c we compare the average and standard deviation of the best normalized function values at trial 100. Our model imitates most algorithms very accurately in both the mean and variance, except for the most complicated algorithm, Vizier, where π_prior is slightly worse on the LUNACEK benchmark. We expand on this in Appendix E.1. Because Vizier is the best performing HPO algorithm among all considered, the OPTFORMER will imitate Vizier faithfully, although not perfectly, in the following experiments. 6.2 Learning priors for hyperparameter response functions In this section, we assess the OPTFORMER's ability to learn the conditional distribution of the function value as a few-shot function regressor. Specifically, for every function in each test dataset, we repeatedly sample up to 200 random trials (x_1, y_1, ..., x_t, y_t), t ≤ 200, and predict the conditional distribution p(y_t | x_1, y_1, ..., x_t). We compare with a GP model with output warping — details provided in Appendix B. We report the log-predictive likelihood log p(y_t | x_t, ...) in Table 4. As uncertainty estimation is important for HPO, we also evaluate how well the function predictive distribution is calibrated. When a predictive distribution p_θ(y | ...) matches the true distribution, the estimated CDF F(y) = ∫_{−∞}^{y} p_θ(y′ | ...) dy′ will be uniformly distributed. In Fig. 3, we plot the cumulative histogram of F(y) on the RealWorldData test set and check the deviation from the diagonal line to assess goodness-of-fit, as proposed by Rosenblatt [55]. The OPTFORMER has a smaller deviation than the GP almost across the entire range. We also compare calibration performance using the expected calibration error (ECE) [27]. Readers are referred to [27] and Appendix E.2 for a detailed explanation of ECE. We observe from Table 4 that the OPTFORMER achieves better predictive likelihood and ECE than the GP on both datasets.
Table 4: Log-predictive likelihood (with one standard error; higher is better, ↑) and ECE (in percent; lower is better, ↓) on the RealWorldData and HPO-B test sets.
Model     | Log-pred. likelihood ↑ RealWorldData | Log-pred. likelihood ↑ HPO-B | ECE % ↓ RealWorldData | ECE % ↓ HPO-B
GP        | 0.83 (0.06) | 4.03 (0.04) | 5.34 (0.06) | 2.39 (0.05)
OPTFORMER | 2.12 (0.05) | 6.16 (0.04) | 1.11 (0.02) | 1.89 (0.01)
Figure 3: Cumulative histogram of predicted CDF(y) on the RealWorldData test set (x-axis: CDF level F; y-axis: percentage of data with CDF(y) ≤ F; curves shown for the GP and the OPTFORMER).
6.3 Augmenting a prior policy with function prediction We evaluate the OPTFORMER as a hyperparameter optimization algorithm on two benchmarks, RealWorldData and HPO-B. We compare our prior policy, the OPTFORMER, and an augmented policy with Expected Improvement, the OPTFORMER (EI), against standard HPO baselines, including Random Search, our implementation of GP-UCB, and the well-tuned Vizier service. For HPO-B, we also include the GP (not to be confused with our GP-UCB) and DGP (GP with deep kernel) baseline results provided by the original paper [5]. Additionally, we include three recent transfer-learning methods based on multi-task GP models: ABLR [12, 56], FSBO [7], and HyperBO [57, 58] (implementation details in Appendix B). Please note that all of these transfer-learning methods require learning GPs on multiple tasks sharing the same search space.
Therefore, none of them apply to the RealWorldData benchmark where every study has its own search space. We show the trajectory of the best normalized function value averaged over all functions from each benchmark in Fig. 4. While the prior policy returned by the OPTFORMER does not perform as well as Vizier, it is comparable or slightly better than our GP-UCB baseline and ABLR. The most significant improvement is achieved when we augment our prior policy with the Expected Improvement acquisition function. The resulting OPTFORMER (EI) outperforms all baselines across the board on both benchmarks. This illustrates that the OPTFORMER is able to learn the distribution of functions in the meta-training split and transfers to the meta-testing split. It is worth noting that to run 100 trials for about half of the test functions, the required history token sequence is longer than the 1024-token length used in training, with the maximum length about twice the training horizon. The superior performance of the OPTFORMER (EI) thus demonstrates its good generalization performance beyond the optimization horizon it is trained for. 6.4 Ablations We provide further ablations on three important components for our policy: Training dataset. To understand the impact of the training datasets on the OPTFORMER, we train three variants on individual datasets (OPTFORMER-"R","H","B" respectively for RealWorldData, HPO-B, BBOB) and study their transfer learning performances on HPO-B. Fig. 5a verifies that training with in-domain data ("H") gives better performance than training over the more diverse across-domain RealWorldData HPO dataset ("R"), which is better than training over the synthetic BBOB data ("B"). Nonetheless, training on RealWorldData is enough to give comparable performance to the best transfer learning baseline at the end of 100 trials. Lastly, training on all of the datasets (OPTFORMER) gives a further advantage over OPTFORMER-H. This suggests that more data does not hurt the model’s performance but rather may improve it, even if the extra data is out-of-domain. Meta-data m. We have demonstrated how the OPTFORMER’s behavior can be controlled by the algorithm name in metadata m in Section 6.1. Here we study whether the OPTFORMER learns to depend on other meta information. At inference time, we provide minimum information in m (OPTFORMER-min) by excluding all textual information and parameter value ranges. We only keep necessary information such as parameter types and algorithm names. Fig. 5b shows that the prior policy of OPTFORMER-min performs comparably with the OPTFORMER, partly due to the use of data augmentation (see Appendix D.2). The augmented policy OPTFORMER-min (EI) (dashed orange) improves upon the prior policy but is significantly worse than the full model, suggesting that the missing metadata impacts the model’s predictions on function values. Prior policy. Section 6.3 demonstrated the benefit of adding an acquisition function to the prior policy. A natural question is whether a good prior policy is needed at all. In Fig. 5c, we replace the prior policy in the OPTFORMER (EI) with random search (Random Search (EI), dashed blue line). While adding Expected Improvement still improves this random search policy’s performance, the best method requires both a good prior policy and the acquisition function. Choice of acquisition function. In Fig. 
5d, we compare the Expected Improvement (EI) with Thompson Sampling (TS), Probability of Improvement (PI), and Upper Confidence Bound (UCB) with a confidence level of 0.9. We observe that the prior policy is improved by all the acquisition functions. Particularly, OPTFORMER (EI) is the best among all the choices though the difference is relatively small compared to the advantage over other baselines and OPTFORMER prior policy. We provide additional analysis with results on both the RealWorldData and HPO-B datasets, as well as other evaluation metrics in Appendix E.4. 7 Limitations and future extensions We list a few limitations of this work and discuss some potential extensions. (1) We did not consider parameters that do not always apply or are subject to dynamic constraints depending on other parameter values. Such parameters are common in AutoML [59] and NAS applications [60]. Our work can be extended to support these applications, by providing the conditional specifications as text in metadata m. (2) We also considered only sequential optimization with a batch size of 1. To support parallel suggestions, one could apply random masking to input function value observations to simulate scenarios with parallel pending trials [33]. (3) While we trained the Transformer to clone the behavior policy offline, there are extensive literature on offline RL [29] that could be applied here [25, 47, 61–64]. One could also consider meta-training acquisition functions as in [31] within the same model and online fine-tuning as in [7, 41]. (4) We considered a single objective function, though multiple objectives can be easily included by outputting multiple function tokens in a trial. (5) The maximum sequence length is limited by the quadratic memory size requirement of a Transformer, which could be mitigated with more scalable architecture variants such as Performer [65]. 8 Conclusion We presented first step to learning a universal Transformer model for hyperparameter optimization from large scale datasets containing tuning experiments with vastly different search spaces and experiment descriptions. By training on a diverse set of synthetic and real-world tuning trajectories, we demonstrated the capacity of a single Transformer model to imitate 7 fundamentally different HPO policies, learn to make well calibrated few-shot function predictions, and provide competitive optimization performance on unseen test functions comparable with the existing, long-tried GP-based baselines. Many extensions are readily conceivable for future exploration. Acknowledgments We would like to thank Chris Dyer, Luke Metz, Kevin Murphy, Yannis Assael, and Esteban Real for providing valuable feedback during their reviews of this paper. We further thank Sebastian Pineda Arango for technical discussions on the HPO-B benchmark and Christof Angermueller on biological benchmarks. In addition, we thank Daniel Golovin, Daiyi Peng, Yingjie Miao, Jack Parker-Holder, Jie Tan, Lucio Dery, and Aleksandra Faust for multiple useful conversations.
1. What is the focus and contribution of the paper regarding meta-learning for hyperparameter optimization? 2. What are the strengths of the proposed approach, particularly in its ability to generalize across different dimensions? 3. What are the weaknesses of the paper, especially regarding the method's ability to generalize beyond the training data and scalability with search space dimensionality? 4. Do you have any questions regarding the pre-training process of the model and its predictive distribution? 5. What are the limitations of the method compared to other state-of-the-art methods in hyperparameter optimization?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper describes a new meta-learning approach for hyperparameter optimization (HPO) based on a transformer model. The model is trained on offline generated data that includes the metadata that characterizes the optimization problem, for example the search space and and the history of observed trials, i.e function values and input configuration. During inference time, the model can be combined with HPO policies, such as Thompson sampling or upper confidence bounds to suggest new hyperparameter configurations. Strengths And Weaknesses Reason for overall rating Current transfer learning approaches for HPO are limited to a fixed search space and the same underlying machine learning model and only transfer knowledge across different datasets. This paper presents, to the best of my knowledge, the first approach that enables meta-learning across these different dimensions. While I don't think the method is ready for a practice yet, the paper marks a first important step towards more universal HPO methods. Strengths The paper aims to learn a more general meta-learning approach for HPO, that generalizes not only across datasets, but also machine learning methods and search spaces. This, in theory, allows to access a much large amount of offline data and allows to generalize across different domains. Overall, I found the different parts of the paper, e.g tokenization, inference and decoding of the model, well motivated and clearly explained. `The empirical evaluation of the paper contains sensible set of baselines. Also the ablation study provides convincing insights in the proposed approach. Weaknesses It remains a bit unclear how well this model generalizes beyond the training data. For example, what would happen if the method is applied to other problem domains, such as neural architecture search or general gradient-free optimization problems. Similarly, how does the method scale with the dimensionality of the search space? The dataset generation seems a bit ad-hoc. Is it really necessary to include trajectories of such a large variety of optimizers or would it be sufficient to limit to few state-of-the-art optimizers? This could potentially reduce the dataset size and would allow to us a smaller architecture. The paper could elaborate on the pre-training of the model. For example, how did different design decision of the network architecture effect downstream performance? How difficult was the pre-training, e.g did you have to restart from previous checkpoints, etc ? Questions Section 6.2: How do you compute the predictive distribution p(y|...)? My understanding is that the transformer only predicts discrete outputs with [0, Q) Section 6.4 prior policy: How is Random Search combined with Thompson sampling (Random Search-TS)? What was the computational budget to train the transformer model and how long did it train? Do you plan to open-source the dataset and the code to reproduce the results? Limitations While the method improves across a set of baselines, it does not improve yet over more sophisticated algorithms such as Vizier on real world datasets. I assume that it would also not outperform current state-of-the-art methods that early stop poorly performing configurations, such as Hyperband or BOHB. However, I think this is fine for a research paper, but it would not be sufficient for production.
NIPS
Title Towards Learning Universal Hyperparameter Optimizers with Transformers Abstract Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OPTFORMER, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google’s Vizier database, one of the world’s largest HPO datasets. Our extensive experiments demonstrate that the OPTFORMER can simultaneously imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OPTFORMER also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the path to future extensions for training a Transformer-based model as a general HPO optimizer. 1 Introduction The emergence of public machine learning data platforms such as OpenML [1] and hyperparameter optimization (HPO) services such as Google Vizier [2], Amazon SageMaker [3] and Microsoft Azure [4] have made large-scale datasets containing hyperparameter evaluations accessible. For our use-case in this paper, Google Vizier is the de-facto HPO service across Google, having optimized some of Google’s largest products and research efforts, and contains a collection of valuable tuning data within the last 5 years. While there is growing interest in leveraging such data to meta-learn hyperparameter optimization algorithms [5–8], dealing with large datasets consisting of experimental trials in the wild can be challenging, due to large variations in HPO problems and their associated text metadata (e.g. shown later in Table 1). Thus, most meta and transfer-learning HPO methods [7–16] consider a restrictive setting where all tasks must share the same set of hyperparameters so that the input data can be represented as fixed-sized vectors. Consequently, such methods only exploit a small portion of the available data to learn priors. This drawback is more severe for large datasets which contain significant amounts of useful information. To overcome these limitations, we introduce the OPTFORMER, a general hyperparameter optimization framework based on Transformers [17]. Transformers have demonstrated excellent performance in many data tasks, ranging from natural language [18], images [19, 20], biological data [21, 22], code [23, 24], and control [25, 26]. Here, we investigate how to use a Transformer as a universal interface for modelling experimental data and learn HPO algorithms, as given a sufficient amount of data, a Transformer can potentially learn a more complex prior distribution than standard Bayesian Optimization (BO) with Gaussian Processes (GPs), especially as the Transformer possesses certain computational advantages over GPs for large datasets. Code: https://github.com/google-research/optformer. Google AI Blog: https:// ai.googleblog.com/2022/08/optformer-towards-universal.html. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
We introduce a serialization scheme to convert a combination of any metadata and an optimization trajectory into text, represented as a sequence of tokens, and formulate the HPO task as a sequence modeling problem. We adopt a supervised learning approach by learning to predict parameters and hyperparameter response functions from offline tuning data (see Fig. 1). In order to further improve optimization performance, we augment the model by utilizing its own function prediction during inference (Section 4.3). Extensive experiments on both public and private datasets demonstrate the OPTFORMER’s competitive tuning and generalization abilities. In summary, our contributions are as follows: • We formulate, to the best of our knowledge, the first meta-learning HPO framework to learn both policy and function priors from data across different search spaces. • The OPTFORMER is capable of learning the behaviors of 7 diverse blackbox optimization algorithms relying on a broad class of methods (non-adaptive, evolutionary, and Bayesian). • Furthermore, the OPTFORMER learns the prior over objective functions and provides both accurate and well calibrated predictions, in many cases significantly surpassing GPs in log-predictive likelihood and expected calibration error (ECE) [27]. • Lastly, OPTFORMER policies augmented with model-based optimization, such as the use of Expected Improvement acquisition functions, are competitive HPO algorithms. To the best of our knowledge, this is the first time Transformers are augmented with acquisition functions for online adaptation. 2 Preliminaries 2.1 Meta-learning for hyperparameter optimization HPO aims to find a set of hyperparameters x from search space X to maximize a model performance metric, y = f(x), often referred to as a response function. Table 1 shows an example of HPO experimental data. Following the HPO nomenclature [2, 28], an experimental study consists of metadata (m) and a history of trials (h). The metadata contains arbitrary unstructured information, including but not limited to descriptions of the problem, the optimization algorithm, and the names, types, and value ranges of the hyperparameters. The history after t trials, h_t = (x_1, y_1, . . . , x_t, y_t), contains a sequence of trials, each of which consists of a parameter suggestion x and a function value y. The goal of the meta-learning approach for HPO is to learn the shared knowledge among the objective functions f from a dataset of multiple tuning experiments represented as studies, and to obtain an optimal HPO algorithm for new hyperparameter tuning tasks from a similar distribution to those in the dataset. An HPO algorithm π maps the metadata and history to a distribution over hyperparameter suggestions, i.e. π(x_{t+1}|m, h_t). Using the terminology of offline RL [29], we refer to the algorithm used to generate the trajectories in a dataset as the behavior policy π_b. We primarily consider search spaces X with a fixed number D of hyperparameters per task, and hence x = (x^(1), . . . , x^(D)), with each hyperparameter x^(d) being of type DOUBLE, INTEGER, DISCRETE, or CATEGORICAL (see Appendix A.1 for details). More complex search spaces can be supported as discussed in Section 7. 2.2 Transformer model The Transformer model is an efficient attention-based neural network architecture for sequence modeling [17]. We adopt the T5 Transformer encoder-decoder architecture [30].
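To make the notation of Section 2.1 concrete, the following minimal Python sketch shows one way a study (metadata m plus the trial history h_t) and the policy interface π(x_{t+1}|m, h_t) could be represented. The class and field names are illustrative assumptions only and are not taken from the paper's released code or the Vizier schema.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Union

ParamValue = Union[float, int, str]

@dataclass
class Trial:
    # One trial: a suggested configuration x and its observed objective y = f(x).
    params: Dict[str, ParamValue]
    objective: float

@dataclass
class Study:
    # Unstructured metadata m: problem description, algorithm name, and the
    # search space (parameter names, types, and value ranges).
    metadata: Dict[str, str]
    search_space: Dict[str, dict]
    history: List[Trial] = field(default_factory=list)

def suggest_next(study: Study, policy: Callable) -> Dict[str, ParamValue]:
    """An HPO algorithm pi maps (m, h_t) to a suggestion x_{t+1};
    `policy` is any callable sampling from that distribution."""
    return policy(study.metadata, study.search_space, study.history)
```

In the OPTFORMER, the hand-designed `policy` above is replaced by a learned sequence model, whose architecture is described next.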
The encoder and decoder each consist of a stack of multi-head selfattention layers which construct pairwise interactions between positions, followed by position-wise feed-forward networks. The encoder converts a sequence of input token representations m, to a sequence of continuous embeddings, which is fed to the decoder to generate a sequence of output tokens h one element at a time (see Fig. 1). 3 Related work There has been a rich set of works in meta-learning and transfer learning by modifying specific core components of the BO pipeline, such as the acquisition function or the GP, in order to tackle BO’s myopic behavior, or obtaining more information from similar tasks. For instance, approaches include learning new acquisition functions [31], multi-task BO [7–13] and BO for transfer learning using contextual GPs [14–16]. [32] also studies the use of meta-BO for hyperparameter tuning tasks in machine learning. However, all of these works consider a fixed search space. A more radical meta-learning approach to non-differentiable optimization trains recurrent neural networks (RNNs) as neural optimizers from scratch. [33] first proposed training an RNN with gradient descent to optimize blackbox functions and hyperparameters while [34, 35] train RNNs using reinforcement learning (RL) to solve RL tasks. Unfortunately, prior works are limited to fixed search spaces and only use online generated data, constraining the training objectives to be cheaply computable. In this work, we wish to overcome the limitations of previous works by exploiting the Transformer architecture. Numerous works have demonstrated Transformers’ strong capabilities in flexible symbolic and numerical manipulation. On the symbolic side, Transformers have been shown able to manipulate symbolic mathematical expressions [36–38] and generate code [23, 24]. Furthermore, on the numerical side, Transformers have also been shown able to perform linear algebra computations [39], Bayesian Inference [40], and offline RL [25, 26, 41]. For AutoML specifically, [42] has demonstrated Transformers’ and analogous graph neural networks’ abilities to use dataset descriptions and metadata to generate classification and data preprocessing pipelines. However, to date, there has been little effort in attacking the full problem of hyperparameter tuning in the blackbox optimization setting. In this paper, the challenging task of learning algorithms from blackbox optimization trajectories can be seen as a significant extension of both symbolic and numerical manipulation. Since the underlying algorithm can be composed of multiple symbolic and mathematical operations with unbounded complexity, the model must infer potentially very complex behavior over long horizons. 4 Universal interface and model for hyperparameter optimization In this section, we provide a universal interface for modeling HPO studies with mixed textual and numerical information as a sequence of discrete tokens. We train our OPTFORMER as a generative model on a given dataset and explain how to use the OPTFORMER’s parameter and function prediction abilities to implement an HPO policy. 4.1 Study tokenization To generalize over HPO problems of different parameter sizes, types, and metadata, we propose to serialize the study as a one-dimensional textual sequence, also advocated in [26]. Unfortunately, a naive serialization approach, e.g. via JSON [43], will produce unnecessarily long sequences. 
To improve scalability, we compress the textual representation of metadata m by removing redundant phrases and punctuation (e.g., "parameter", quotes) and by encoding keywords (e.g., "name", "algorithm") and enumerated types (e.g. "DOUBLE") as single tokens. For the historical sequence h, we convert every DOUBLE and INTEGER parameter, along with every function value, into a single token by normalizing and discretizing them into integers with a quantization level of Q = 1000; e.g. x̄ = int[x_norm · Q], where x_norm = (x − x_min)/(x_max − x_min). (1) The range of x is defined by the search space and the range of y is obtained from observed values in h. For other types, we use the index in their value set. The shortened text string is then converted to a sequence of tokens via the SentencePiece tokenizer [44] (see Table 2 for an example). Every trial is thus represented as a sequence of normalized and quantized tokens, [x̄_t^(1), . . . , x̄_t^(D), ?, ȳ_t, "|"], where the token ? separates parameter and function values and "|" separates trials. See Appendix A.2 for further details on tokenization. 4.2 Model and training loss After tokenization, the converted historical sequence is as follows: h̄_t = [x̄_1^(1), x̄_1^(2), . . . , x̄_1^(D), ?, ȳ_1, "|", . . . , x̄_t^(1), x̄_t^(2), . . . , x̄_t^(D), ?, ȳ_t]. (2) We can now apply a Transformer model to learn the conditional distribution of tokens in h̄ using the chain rule, given the metadata m̄, as depicted in Fig. 1. The joint distribution is presented in Appendix D.1. Given a dataset D of hyperparameter optimization studies, we train the OPTFORMER by maximizing the weighted log-likelihood for each study (m, h) ∼ D: L(θ; m, h) = ∑_n w_n log P_θ(h̄^(n) | m̄, h̄^(1:n−1)), (3) with w_n = 0 if h̄^(n) ∈ {?, "|"} and w_n = 1 otherwise. That is, we mask out the separator tokens (?, "|") and predict parameter tokens x̄ and function tokens ȳ only. Note that h̄^(n) denotes the n-th token, that is, the n-th element of the list in Equation (2), and h̄^(1:n−1) denotes all tokens up to the (n−1)-th token. Further details and data augmentations are provided in Appendix D.2. 4.3 Inference and decoding Parameter prediction: To decode the predicted parameter token x̄_t^(d) back to its original parameter range, we truncate the output distribution to the vocabulary range corresponding to valid parameter values [0, Q) and reverse our tokenization procedure in Section 4.1. For a DOUBLE or INTEGER parameter x, we use a piecewise constant distribution: p_θ(x| . . . ) = Q · P_θ(x̄| . . . )/(x_max − x_min) if x ∈ [x_min, x_max], and 0 otherwise. (4) For all other parameter types, x̄ corresponds to the index in the set of feasible values. Putting these together, we may now sample a parameter x_t from the model’s prior distribution and thus define an HPO policy: π_prior(x_t|m, h_t−1) = ∏_{d=1}^{D} p_θ(x_t^(d) | m, h_t−1, x_t^(1:d−1)). (5) As we use a supervised learning loss, we expect π_prior to approximate the behavior policy π_b. Note that traditional BO algorithms require running Bayesian inference and then conducting a global search in the hyperparameter space with an acquisition function. Thus the runtime complexity of making one hyperparameter suggestion is cubic in t for a typical GP-based BO method that performs ARD each iteration [45].
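As a concrete illustration of the tokenization in Section 4.1 and the loss masking in Eq. (3), the following Python sketch quantizes DOUBLE/INTEGER values, serializes a trial, and builds per-token loss weights. The helper names and separator strings are illustrative assumptions and are not taken from the paper's released code.

```python
Q = 1000  # quantization level for DOUBLE/INTEGER parameters and function values

def quantize(x: float, x_min: float, x_max: float) -> int:
    # Eq. (1): normalize to [0, 1], then discretize into one of Q integer bins.
    # The clamp is added here only so that x == x_max falls into the last bin.
    x_norm = (x - x_min) / (x_max - x_min)
    return min(int(x_norm * Q), Q - 1)

def dequantize(x_bar: int, x_min: float, x_max: float) -> float:
    # Map a bin index back to the centre of its value range (reversing Eq. (1)).
    return x_min + (x_bar + 0.5) / Q * (x_max - x_min)

def serialize_trial(x_bars, y_bar) -> list:
    # One trial becomes [x_1, ..., x_D, "?", y, "|"]:
    # "?" separates parameters from the function value, "|" separates trials.
    return list(x_bars) + ["?", y_bar, "|"]

def loss_weights(history_tokens) -> list:
    # Eq. (3): separator tokens get weight 0; parameter/function tokens get weight 1.
    return [0.0 if tok in ("?", "|") else 1.0 for tok in history_tokens]

# Example: a 2-parameter trial with objective 0.73 observed on the range [0, 1].
trial = serialize_trial(
    [quantize(3e-4, 1e-5, 1e-2), quantize(64, 16, 512)],
    quantize(0.73, 0.0, 1.0),
)
weights = loss_weights(trial)
```

The decoding step of Section 4.3 reverses this mapping through the piecewise constant distribution over the Q bins (Eq. 4).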
In contrast, generating one suggestion by the OPTFORMER consists of decoding D parameter tokens with an input sequence of (D + 3)t tokens, which are then parsed into the D parameter values, producing a runtime of O(D²t), linear in t, with proper caching. Function prediction: To decode the real-valued function y_t from the discrete distribution P_θ(ȳ_t|m̄, h̄_t−1, x̄_t), we construct the same piecewise constant distribution as in Eq. (4) with the range [y_min, y_max] used in tokenization. Note that the limited support of y will not be a concern for HPO when either the range is known or we set the range large enough compared to observed values. For more general use as a few-shot function prediction model, one could consider adopting the Riemann distribution in [40], which supports an unbounded range. Augmented HPO policies with function prediction: At best, the learned policy π_prior can only perform as well as the original policy π_b when using behavioral cloning. However, we can take advantage of the model’s simultaneous function prediction ability to improve the policy with model-based planning or offline RL techniques. While a comprehensive study of policy improvements for Transformers is outside the scope of this work, we consider here a simple yet effective policy improvement operator: sampling M = 100 candidate suggestions from π_prior and choosing the suggestion with the highest score defined by an acquisition function u(·) as follows: π_u(x_t|m, h_t−1) = argmax_{x ∈ {x^(i)}_{i=1}^{M}} u(p_θ(·|m, h_t−1, x)), with x^(i) i.i.d. ∼ π_prior(x|m, h_t−1). (6) Common acquisition functions include Expected Improvement (EI), Probability of Improvement (PI), Upper Confidence Bound (UCB), and Thompson Sampling; see for example [46]. At a high level, this approach to combining imitated policies with function prediction is reminiscent of the idea behind the offline RL approach of BCQ [47]. Because we apply a linear mapping from the original y value to the quantized value ȳ before discretization, we can simply define the acquisition functions on the discrete distribution P_θ(ȳ|m̄, h̄_t−1, x̄_t) as follows: u_EI(x|ȳ*) = E_{P_θ(ȳ|m, h_t−1, x)}[max(ȳ − ȳ*, 0)], (7) u_UCB(x|α) = Quantile(P_θ(ȳ|m, h_t−1, x), α), (8) u_PI(x|ȳ*) = ∑_{ȳ>ȳ*} P_θ(ȳ|m, h_t−1, x), (9) u_TS(x) = ȳ, with ȳ ∼ P_θ(ȳ|m, h_t−1, x), (10) where ȳ* = max_{τ≤t−1} ȳ_τ is the threshold used to measure improvement in EI and PI. We define the UCB acquisition function with a quantile parameter α. Our TS acquisition is defined as a sampled function value at a given location from the marginal predictive distribution. It is inspired by the traditional Thompson Sampling method [45] but differs in that the correlation between different locations is ignored. 5 Data Training the OPTFORMER requires HPO studies with optimization trajectories. The most natural dataset we possess is the entire Google Vizier [2] database, one of the world’s largest collections of real-world hyperparameter tuning studies, which we denote as RealWorldData. There are around 750K studies, each with on average 300 trials, covering a vast class of production and machine learning applications at Google (vision, speech, NLP, robotics, and more) and constituting one of the most representative distributions of HPO tasks for machine learning models in practice. These studies were generated with a mixture of non-adaptive, evolutionary, and BO algorithms. However, as the dataset does not contain sufficient algorithm information, we have to treat the corresponding behavior policy as a randomly mixed algorithm π_b.
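Returning to the augmented policy of Section 4.3 (Eqs. 6–10), the sketch below shows how the acquisition scores can be computed directly on the discrete predictive distribution P_θ(ȳ|·) over the Q quantized function bins, and how the best of M sampled candidates is selected. It assumes a `model` object exposing `sample_suggestion` and `predict_y_dist`; these are hypothetical interfaces introduced only for illustration, not the paper's implementation.

```python
import numpy as np

def expected_improvement(p_y: np.ndarray, y_star: int) -> float:
    # Eq. (7): E[max(y_bar - y_star, 0)] over bins 0..Q-1; p_y must sum to 1.
    bins = np.arange(len(p_y))
    return float(np.sum(p_y * np.maximum(bins - y_star, 0)))

def probability_of_improvement(p_y: np.ndarray, y_star: int) -> float:
    # Eq. (9): total probability mass strictly above the incumbent bin.
    return float(np.sum(p_y[y_star + 1:]))

def ucb(p_y: np.ndarray, alpha: float = 0.9) -> float:
    # Eq. (8): the alpha-quantile of the discrete predictive distribution.
    cdf = np.cumsum(p_y)
    return float(np.searchsorted(cdf, alpha))

def thompson_sample(p_y: np.ndarray, rng: np.random.Generator) -> float:
    # Eq. (10): one function value drawn from the marginal predictive distribution.
    return float(rng.choice(len(p_y), p=p_y))

def suggest(model, metadata, history, y_star: int, num_candidates: int = 100, rng=None):
    # Eq. (6): sample M candidates from the prior policy and keep the best EI score.
    rng = rng or np.random.default_rng()
    candidates = [model.sample_suggestion(metadata, history) for _ in range(num_candidates)]
    scores = [expected_improvement(model.predict_y_dist(metadata, history, x), y_star)
              for x in candidates]
    return candidates[int(np.argmax(scores))]
```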
In addition, we create two new datasets based on public benchmarks. HPO-B is the largest public benchmark for HPO, containing about 1.9K tuning tasks, most of which use one of 16 shared search spaces. In the continuous evaluation setting, it fits an XGBoost model to the trial data of every tuning task as the objective function. For further control over specific function dimensions and properties, we use the blackbox optimization benchmark BBOB [48], consisting of 24 types of synthetic functions with customizable properties (dimension sizes, rotations, shifts, discretizations, noise types) which we randomize over. For each of the two public benchmarks (HPO-B and BBOB), we apply a fixed set of 7 HPO algorithms to generate a dataset of optimization trajectories. In contrast to RealWorldData, we specify the algorithm name in the metadata m as part of the conditioning input for our model. The controlled algorithms used are: (1) Grid Search, (2) Shuffled Grid Search, (3) Random Search, (4) Regularized Evolution [49], (5) Hill-Climbing, (6) Eagle Strategy [50], and (7) Vizier’s GP-UCB [2]. Appendix B contains detailed explanations of the algorithms. 6 Experiments We train a single Transformer model with 250M parameters on the union of the three datasets described above, RealWorldData, HPO-B, and BBOB (hyperparameter details in Appendix D.2). Each dataset contains a corresponding “test” set of functions, either using synthetic functions (BBOB) or fitting a machine learning model to obtain the objective (RealWorldData, HPO-B). We evaluate mainly on the two natural HPO benchmarks, RealWorldData and HPO-B. The train/test subsets of RealWorldData are split temporally to avoid information leakage (see Appendix C for details). To aggregate results across functions with different output scaling, we normalize all the test functions. This is standard practice in the literature [2, 5, 51–54]. We define our performance metric at trial t as the best-so-far normalized function value max_{i∈{1:t}} (y_i − y_rand)/(y_max − y_rand), where y_rand is the median of function values randomly sampled in the search space (to be robust to outliers), and y_max is the maximum, if known, or the best value found by any algorithm. For the HPO-B benchmark, we use the recommended bounds provided in [5]. We also consider other metrics when comparing different algorithms in Appendix E.3, including the performance profile and average ranking. We find that our results are consistent across different metrics. Because the OPTFORMER is trained to predict the conditional distributions of parameter and function values, we would like to answer the following questions when evaluating on unseen test problems: 1. Can the OPTFORMER learn to imitate multiple HPO algorithms with one model? (Section 6.1) 2. Can the OPTFORMER learn a good prior over hyperparameter response functions? (Section 6.2) 3. Is the OPTFORMER a competitive approach for HPO? (Section 6.3) 6.1 Imitating HPO policies We first evaluate how well the OPTFORMER can learn the conditional distribution of parameter suggestions given by the behavior policies in the dataset, and how well it can imitate multiple algorithms. As the algorithm’s name is contained in the metadata m, we can modify the behavior of the policy π_prior(x_{t+1}|m, h_t) simply by altering this variable. Fig. 2a compares two different policies to the OPTFORMER, when it is conditioned on the corresponding policy name. We observe a good match between the imitated algorithms and the OPTFORMER (additional algorithms are shown in Appendix E.1).
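For concreteness, the best-so-far normalized metric defined above can be computed as in the following NumPy sketch; the function values and normalization constants shown are made-up numbers used only for illustration.

```python
import numpy as np

def best_so_far_normalized(ys, y_rand: float, y_max: float) -> np.ndarray:
    # At each trial t, report max_{i<=t} (y_i - y_rand) / (y_max - y_rand),
    # where y_rand is the median of randomly sampled function values and
    # y_max is the known maximum or the best value found by any algorithm.
    ys = np.asarray(ys, dtype=float)
    best_so_far = np.maximum.accumulate(ys)
    return (best_so_far - y_rand) / (y_max - y_rand)

# Example trajectory of raw objective values over four trials.
curve = best_so_far_normalized([0.31, 0.55, 0.52, 0.70], y_rand=0.30, y_max=0.80)
# -> array([0.02, 0.5, 0.5, 0.8])
```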
In Fig. 2b we run target policies on the BBOB dataset’s test functions and compare the optimization trajectories of the algorithms and the OPTFORMER. In Fig. 2c we compare the average and standard deviation of the best normalized function values at trial 100. Our model imitates most algorithms very accurately in both the mean and the variance, except for the most complicated algorithm, Vizier, where π_prior is slightly worse on the LUNACEK benchmark. We expand on this in Appendix E.1. Because Vizier is the best performing HPO algorithm among all considered, the OPTFORMER will imitate Vizier faithfully, although not perfectly, in the following experiments. 6.2 Learning priors for hyperparameter response functions In this section, we assess the OPTFORMER’s ability to learn the conditional distribution of the function value as a few-shot function regressor. Specifically, for every function in each test dataset, we repeatedly sample up to 200 random trials (x_1, y_1, . . . , x_t, y_t), t ≤ 200, and predict the conditional distribution p(y_t|x_1, y_1, . . . , x_t). We compare with a GP model with output warping (details provided in Appendix B). We report the log-predictive likelihood log p(y_t|x_t, . . . ) in Table 4. As uncertainty estimation is important for HPO, we also evaluate how well the function predictive distribution is calibrated. When a predictive distribution p_θ(y| . . . ) matches the true distribution, the estimated CDF F(y) = ∫_{−∞}^{y} p_θ(y′| . . . ) dy′ will be uniformly distributed. In Fig. 3, we plot the cumulative histogram of F(y) on the RealWorldData test set and check the deviation from the diagonal line to assess goodness-of-fit, as proposed by Rosenblatt [55]. The OPTFORMER has a smaller deviation than the GP almost across the entire range.
Table 4: Log-predictive likelihood (with one standard error; higher is better ↑) and ECE (percentage of error; lower is better ↓) on the RealWorldData and HPO-B test sets.
Log-predictive likelihood ↑: GP 0.83 (0.06) on RealWorldData, 4.03 (0.04) on HPO-B; OPTFORMER 2.12 (0.05) on RealWorldData, 6.16 (0.04) on HPO-B.
ECE (%) ↓: GP 5.34 (0.06) on RealWorldData, 2.39 (0.05) on HPO-B; OPTFORMER 1.11 (0.02) on RealWorldData, 1.89 (0.01) on HPO-B.
Figure 3: Cumulative histogram of predicted CDF(y) on the RealWorldData test set (x-axis: CDF level F; y-axis: percentage of data with CDF(y) ≤ F; curves shown for the GP and the OptFormer).
We also compare calibration performance using the expected calibration error (ECE) [27]. Readers are referred to [27] and Appendix E.2 for a detailed explanation of ECE. We observe from Table 4 that the OPTFORMER achieves better predictive likelihood and ECE than the GP on both datasets. 6.3 Augmenting a prior policy with function prediction We evaluate the OPTFORMER as a hyperparameter optimization algorithm on two benchmarks, RealWorldData and HPO-B. We compare our prior policy, the OPTFORMER, and an augmented policy with Expected Improvement, the OPTFORMER (EI), against standard HPO baselines, including Random Search, our implementation of GP-UCB, and the well-tuned Vizier service. For HPO-B, we also include the GP (not to be confused with our GP-UCB) and DGP (GP with deep kernel) baseline results provided by the original paper [5]. Additionally, we include three recent transfer-learning methods based on multi-task GP models: ABLR [12, 56], FSBO [7], and HyperBO [57, 58] (implementation details in Appendix B). Please note that all of these transfer-learning methods require learning GPs on multiple tasks sharing the same search space.
Therefore, none of them apply to the RealWorldData benchmark where every study has its own search space. We show the trajectory of the best normalized function value averaged over all functions from each benchmark in Fig. 4. While the prior policy returned by the OPTFORMER does not perform as well as Vizier, it is comparable or slightly better than our GP-UCB baseline and ABLR. The most significant improvement is achieved when we augment our prior policy with the Expected Improvement acquisition function. The resulting OPTFORMER (EI) outperforms all baselines across the board on both benchmarks. This illustrates that the OPTFORMER is able to learn the distribution of functions in the meta-training split and transfers to the meta-testing split. It is worth noting that to run 100 trials for about half of the test functions, the required history token sequence is longer than the 1024-token length used in training, with the maximum length about twice the training horizon. The superior performance of the OPTFORMER (EI) thus demonstrates its good generalization performance beyond the optimization horizon it is trained for. 6.4 Ablations We provide further ablations on three important components for our policy: Training dataset. To understand the impact of the training datasets on the OPTFORMER, we train three variants on individual datasets (OPTFORMER-"R","H","B" respectively for RealWorldData, HPO-B, BBOB) and study their transfer learning performances on HPO-B. Fig. 5a verifies that training with in-domain data ("H") gives better performance than training over the more diverse across-domain RealWorldData HPO dataset ("R"), which is better than training over the synthetic BBOB data ("B"). Nonetheless, training on RealWorldData is enough to give comparable performance to the best transfer learning baseline at the end of 100 trials. Lastly, training on all of the datasets (OPTFORMER) gives a further advantage over OPTFORMER-H. This suggests that more data does not hurt the model’s performance but rather may improve it, even if the extra data is out-of-domain. Meta-data m. We have demonstrated how the OPTFORMER’s behavior can be controlled by the algorithm name in metadata m in Section 6.1. Here we study whether the OPTFORMER learns to depend on other meta information. At inference time, we provide minimum information in m (OPTFORMER-min) by excluding all textual information and parameter value ranges. We only keep necessary information such as parameter types and algorithm names. Fig. 5b shows that the prior policy of OPTFORMER-min performs comparably with the OPTFORMER, partly due to the use of data augmentation (see Appendix D.2). The augmented policy OPTFORMER-min (EI) (dashed orange) improves upon the prior policy but is significantly worse than the full model, suggesting that the missing metadata impacts the model’s predictions on function values. Prior policy. Section 6.3 demonstrated the benefit of adding an acquisition function to the prior policy. A natural question is whether a good prior policy is needed at all. In Fig. 5c, we replace the prior policy in the OPTFORMER (EI) with random search (Random Search (EI), dashed blue line). While adding Expected Improvement still improves this random search policy’s performance, the best method requires both a good prior policy and the acquisition function. Choice of acquisition function. In Fig. 
5d, we compare the Expected Improvement (EI) with Thompson Sampling (TS), Probability of Improvement (PI), and Upper Confidence Bound (UCB) with a confidence level of 0.9. We observe that the prior policy is improved by all the acquisition functions. In particular, OPTFORMER (EI) is the best among all the choices, though the difference is relatively small compared to the advantage over the other baselines and the OPTFORMER prior policy. We provide additional analysis with results on both the RealWorldData and HPO-B datasets, as well as other evaluation metrics, in Appendix E.4. 7 Limitations and future extensions We list a few limitations of this work and discuss some potential extensions. (1) We did not consider parameters that do not always apply or are subject to dynamic constraints depending on other parameter values. Such parameters are common in AutoML [59] and NAS applications [60]. Our work can be extended to support these applications by providing the conditional specifications as text in the metadata m. (2) We also considered only sequential optimization with a batch size of 1. To support parallel suggestions, one could apply random masking to input function value observations to simulate scenarios with parallel pending trials [33]. (3) While we trained the Transformer to clone the behavior policy offline, there is an extensive literature on offline RL [29] that could be applied here [25, 47, 61–64]. One could also consider meta-training acquisition functions as in [31] within the same model, and online fine-tuning as in [7, 41]. (4) We considered a single objective function, though multiple objectives can easily be included by outputting multiple function tokens in a trial. (5) The maximum sequence length is limited by the quadratic memory requirement of a Transformer, which could be mitigated with more scalable architecture variants such as Performer [65]. 8 Conclusion We presented a first step towards learning a universal Transformer model for hyperparameter optimization from large-scale datasets containing tuning experiments with vastly different search spaces and experiment descriptions. By training on a diverse set of synthetic and real-world tuning trajectories, we demonstrated the capacity of a single Transformer model to imitate 7 fundamentally different HPO policies, learn to make well-calibrated few-shot function predictions, and provide competitive optimization performance on unseen test functions, comparable with existing, long-established GP-based baselines. Many extensions are readily conceivable for future exploration. Acknowledgments We would like to thank Chris Dyer, Luke Metz, Kevin Murphy, Yannis Assael, and Esteban Real for providing valuable feedback during their reviews of this paper. We further thank Sebastian Pineda Arango for technical discussions on the HPO-B benchmark and Christof Angermueller on biological benchmarks. In addition, we thank Daniel Golovin, Daiyi Peng, Yingjie Miao, Jack Parker-Holder, Jie Tan, Lucio Dery, and Aleksandra Faust for multiple useful conversations.
1. What is the focus and contribution of the paper regarding hyperparameter transfer with metadata? 2. What are the strengths of the proposed approach, particularly in its ability to relax limitations and improve efficiency? 3. What are the weaknesses of the paper, especially regarding the uncertainty of learned meta knowledge and potential bias? 4. Do you have any questions or concerns regarding the experimental results and comparisons with other works? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper studies hyperparameter transfer with metadata in an "open-set" setting, allowing different configuration spaces across tasks. A transformer-based hyperparameter tuner, namely OptFormer, is proposed to predict the policy and response function values (e.g., validation performance) in a sequence-to-sequence training style, where the learned policy maps text-based metadata to pre-discretized hyperparameter configurations. To the best of my knowledge, the proposed method is the first HPO method that learns prior knowledge from collected text-based configurations. Experimental results on one collected real-world dataset and two public benchmarks are provided in terms of policy behavior imitation, response function prediction, and HPO performance. Strengths And Weaknesses Pros: The proposed hyperparameter prior learning method is orthogonal to existing GP-based hyperparameter transfer methods in two ways: 1) it relaxes the limitation of sharing the same configuration space across different tasks, and 2) the data-driven approach given by training a transformer model significantly improves efficiency. It is interesting and novel to learn prior knowledge from text-based metadata. The seq-2-seq supervised learning framework is technically sound with proper practical treatments. Moreover, the transformer structure is well motivated to capture both symbolic and numerical manipulation. While some necessary implementation details are missing from the manuscript, the augmented HPO policy with Thompson Sampling provides a good implementation similar in spirit to offline RL. Extensive experimental results demonstrate the effectiveness of the learned HPO policy in terms of its well-calibrated predictions and utility performance. Cons: The main concern with this work is that it is unclear what meta-knowledge is learned from the text-based metadata. How does the proposed OptFormer actually imitate the other HPO algorithms? Does the transformer simply memorize the choices given by different HPO algorithms? Can the proposed method adapt to more complex algorithms (e.g., hypergradient-based) and to large-scale hyperparameter spaces? The learned prior risks being biased towards a closed set of model architectures and tasks (datasets). It remains unclear whether the HPO policy can generalize to unseen tasks. I may have missed something in the appendix; still, it would be helpful to give more details about the training/test split of RealWorldData. A non-overlapping split over tasks or algorithms would be more convincing. One major technical contribution of this work is the introduction of transformers for learning HPO priors. Hence, an ablation study regarding this architecture choice is expected. A baseline based on RNNs (e.g., LSTMs or GRUs) would be useful to validate this point empirically. Questions Table 4 shows better-calibrated results for OptFormer than for the GP-based methods. It is well known that GPs can provide well-calibrated uncertainty estimates. Yet, according to Eqs. (3–5), the proposed method seems to follow a standard supervised training strategy. Any insights into why OptFormer shows a better calibration result? Is the reason the use of the transformer architecture [40]? I'm also curious about the ECE comparison between OptFormer and OptFormer (TS). As shown in Fig. 4, Vizier performs slightly better than OptFormer (TS) on the RealWorldData dataset. While the paper implies that the reason is the GP-surrogate-based test functions, it is unclear why OptFormer performs much better than GP-UCB.
Also, the comparison results between RealWorldData (mixed algorithms) and HPO-B (controlled algorithms) raise concerns about a potential bias issue in the proposed OptFormer. Post after rebuttal Thanks for providing a detailed response to the questions. Due to my travel schedule, I haven't had a chance to discuss further with the authors. Yet, most of my previous concerns were well addressed. In particular, it would be interesting to add an LSTM baseline in future work and to further explore the calibration of HPO from a pre-training perspective. I would like to champion this work by upgrading my score. Limitations N/A
NIPS
Title Towards Learning Universal Hyperparameter Optimizers with Transformers Abstract Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OPTFORMER, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google’s Vizier database, one of the world’s largest HPO datasets. Our extensive experiments demonstrate that the OPTFORMER can simultaneously imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OPTFORMER also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the path to future extensions for training a Transformer-based model as a general HPO optimizer. 1 Introduction The emergence of public machine learning data platforms such as OpenML [1] and hyperparameter optimization (HPO) services such as Google Vizier [2], Amazon SageMaker [3] and Microsoft Azure [4] have made large-scale datasets containing hyperparameter evaluations accessible. For our use-case in this paper, Google Vizier is the de-facto HPO service across Google, having optimized some of Google’s largest products and research efforts, and contains a collection of valuable tuning data within the last 5 years. While there is growing interest in leveraging such data to meta-learn hyperparameter optimization algorithms [5–8], dealing with large datasets consisting of experimental trials in the wild can be challenging, due to large variations in HPO problems and their associated text metadata (e.g. shown later in Table 1). Thus, most meta and transfer-learning HPO methods [7–16] consider a restrictive setting where all tasks must share the same set of hyperparameters so that the input data can be represented as fixed-sized vectors. Consequently, such methods only exploit a small portion of the available data to learn priors. This drawback is more severe for large datasets which contain significant amounts of useful information. To overcome these limitations, we introduce the OPTFORMER, a general hyperparameter optimization framework based on Transformers [17]. Transformers have demonstrated excellent performance in many data tasks, ranging from natural language [18], images [19, 20], biological data [21, 22], code [23, 24], and control [25, 26]. Here, we investigate how to use a Transformer as a universal interface for modelling experimental data and learn HPO algorithms, as given a sufficient amount of data, a Transformer can potentially learn a more complex prior distribution than standard Bayesian Optimization (BO) with Gaussian Processes (GPs), especially as the Transformer possesses certain computational advantages over GPs for large datasets. Code: https://github.com/google-research/optformer. Google AI Blog: https:// ai.googleblog.com/2022/08/optformer-towards-universal.html. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
We introduce a serialization scheme to convert a combination of any metadata and an optimization trajectory into text, represented as a sequence of tokens, and formulate the HPO task as a sequence modeling problem. We adopt a supervised learning approach, by learning to predict parameters and hyperparameter response functions from offline tuning data (See Fig. 1). In order to further improve optimization performance, we augment the model by utilizing its own function prediction during inference (Section 4.3). Extensive experiments on both public and private datasets demonstrate the OPTFORMER’s competitive tuning and generalization abilities. In summary, our contributions are as follows: • We formulate, to the best of our knowledge, the first meta-learning HPO framework to learn both policy and function priors from data across different search spaces. • The OPTFORMER is capable of learning the behaviors of 7 diverse blackbox optimization algorithms relying on a broad class of methods (non-adaptive, evolutionary, and Bayesian). • Furthermore, the OPTFORMER learns the prior over objective functions and provides both accurate and well calibrated predictions, in many cases significantly surpassing GPs in log-predictive likelihood and expected calibration error (ECE) [27]. • Lastly, OPTFORMER policies augmented with model-based optimization, such as the use of Expected Improvement acquisition functions, are competitive HPO algorithms. To the best of our knowledge, this is the first time Transformers are augmented with acquisition functions for online adaptation. 2 Preliminaries 2.1 Meta-learning for hyperparameter optimization HPO aims to find a set of hyperparameters x from search space X to maximize a model performance metric, y = f(x), often referred to as a response function. Table 1 shows an example of HPO experimental data. Following the HPO nomenclature [2, 28], an experimental study consists of metadata (m) and a history of trials (h). The metadata contains arbitrary unstructured information, including but not limited to descriptions of the problem, optimization algorithm, names, types and value ranges of hyperparameters. The history after t trials, ht = (x1, y1, . . . ,xt, yt), contains a sequence of trials, each of which consists of a parameter suggestion x and function value y. The goal of the meta-learning approach for HPO is to learn the shared knowledge among the objective functions f from a dataset of multiple tuning experiments represented as studies and to obtain an optimal HPO algorithm for new hyperparameter tuning tasks from a similar distribution to those in the dataset. An HPO algorithm π maps the metadata and history to a distribution over hyperparameter suggestions, i.e. π(xt+1|m,ht). Using the terminology of offline RL [29], we refer to the algorithm used to generate the trajectories in a dataset as the behavior policy πb. We primarily consider search spaces X with a fixed number D of hyperparameters per task, and hence x = (x(1), . . . , x(D)), with each hyperparameter x(d) being of type DOUBLE, INTEGER, DISCRETE, or CATEGORICAL (see Appendix A.1 for details). More complex search spaces can be supported as discussed in Section 7. 2.2 Transformer model }} The Transformer model is an efficient attention-based neural network architecture for sequence modeling [17]. We adopt the T5 Transformer encoder-decoder architecture [30]. 
The encoder and decoder each consist of a stack of multi-head selfattention layers which construct pairwise interactions between positions, followed by position-wise feed-forward networks. The encoder converts a sequence of input token representations m, to a sequence of continuous embeddings, which is fed to the decoder to generate a sequence of output tokens h one element at a time (see Fig. 1). 3 Related work There has been a rich set of works in meta-learning and transfer learning by modifying specific core components of the BO pipeline, such as the acquisition function or the GP, in order to tackle BO’s myopic behavior, or obtaining more information from similar tasks. For instance, approaches include learning new acquisition functions [31], multi-task BO [7–13] and BO for transfer learning using contextual GPs [14–16]. [32] also studies the use of meta-BO for hyperparameter tuning tasks in machine learning. However, all of these works consider a fixed search space. A more radical meta-learning approach to non-differentiable optimization trains recurrent neural networks (RNNs) as neural optimizers from scratch. [33] first proposed training an RNN with gradient descent to optimize blackbox functions and hyperparameters while [34, 35] train RNNs using reinforcement learning (RL) to solve RL tasks. Unfortunately, prior works are limited to fixed search spaces and only use online generated data, constraining the training objectives to be cheaply computable. In this work, we wish to overcome the limitations of previous works by exploiting the Transformer architecture. Numerous works have demonstrated Transformers’ strong capabilities in flexible symbolic and numerical manipulation. On the symbolic side, Transformers have been shown able to manipulate symbolic mathematical expressions [36–38] and generate code [23, 24]. Furthermore, on the numerical side, Transformers have also been shown able to perform linear algebra computations [39], Bayesian Inference [40], and offline RL [25, 26, 41]. For AutoML specifically, [42] has demonstrated Transformers’ and analogous graph neural networks’ abilities to use dataset descriptions and metadata to generate classification and data preprocessing pipelines. However, to date, there has been little effort in attacking the full problem of hyperparameter tuning in the blackbox optimization setting. In this paper, the challenging task of learning algorithms from blackbox optimization trajectories can be seen as a significant extension of both symbolic and numerical manipulation. Since the underlying algorithm can be composed of multiple symbolic and mathematical operations with unbounded complexity, the model must infer potentially very complex behavior over long horizons. 4 Universal interface and model for hyperparameter optimization In this section, we provide a universal interface for modeling HPO studies with mixed textual and numerical information as a sequence of discrete tokens. We train our OPTFORMER as a generative model on a given dataset and explain how to use the OPTFORMER’s parameter and function prediction abilities to implement an HPO policy. 4.1 Study tokenization To generalize over HPO problems of different parameter sizes, types, and metadata, we propose to serialize the study as a one-dimensional textual sequence, also advocated in [26]. Unfortunately, a naive serialization approach, e.g. via JSON [43], will produce unnecessarily long sequences. 
To improve scalability, we compress the textual representation of metadata m by removing redundant phrases and punctuation (e.g., "parameter", quotes) and encoding keywords (e.g., "name", "algorithm") and enumerating types (e.g. "DOUBLE") into single tokens. For the historical sequence h, we convert every DOUBLE and INTEGER parameter along with every function value into a single token, by normalizing and discretizing them into integers, with an quantization level of Q = 1000; e.g. x̄ = int[xnorm ·Q], where xnorm = (x− xmin)/(xmax − xmin). (1) The range of x is defined by the search space and the range of y is obtained from observed values in h. For other types, we use the index in their value set. The shortened text string is then converted to a sequence of tokens via the SentencePiece tokenizer [44] (see Table 2 for an example). Every trial is represented by text, which is represented as a sequence of normalized and quantized tokens, [ x̄ (1) t , . . . , x̄ (D) t , ?, ȳt, "|" ] , where the token ? separates parameter and function values and "|" separates trials. See Appendix A.2 for further details on tokenization. 4.2 Model and training loss After tokenization, the converted historical sequence is as follows: h̄t = [ x̄ (1) 1 , x̄ (2) 1 , . . . , x̄ (D) 1 , ?, ȳ1, "|", . . . , x̄ (1) t , x̄ (2) t , . . . , x̄ (D) t , ?, ȳt ] . (2) We can now apply a Transformer model to learn the conditional distribution of tokens in h̄ using the chain rule, given the metadata m̄, as depicted in Fig. 1. The joint distribution is presented in Appendix D.1. Given a dataset D of hyperparameter optimization studies, we train the OPTFORMER by maximizing the weighted log-likelihood for each study (m,h) ∼ D: L(θ;m,h) = ∑ n wn logPθ(h̄ (n)|m̄, h̄(1:n−1)), (3) with wn = 0 if h̄(n) ∈ {?, "|"} and wn = 1 otherwise. That is, we mask out the separator tokens (?, "|") and predict parameter x̄ and function tokens ȳ only. Note that h̄(n) denotes the n-th token, that is the n-th element of the list in Equation (2), and h̄(1:n−1) denotes all tokens up to the (n− 1)-th token. Further details and data augmentations are provided in Appendix D.2. 4.3 Inference and decoding Parameter prediction: To decode the predicted parameter token x̄(d)t back to its original parameter range, we truncate the output distribution to the vocabulary range corresponding to valid parameter values [0, Q) and reverse our tokenization procedure in Section 4.1. For a DOUBLE or INTEGER parameter x, we use a piecewise constant distribution: pθ(x| . . . ) = Q · Pθ(x̄| . . . )/(xmax − xmin), if x ∈ [xmin, xmax], otherwise 0 . (4) For all other parameter types, x̄ corresponds to the index of the set of feasible values. Putting these together, we may now sample parameter xt from the model’s prior distribution and thus define an HPO policy: πprior(xt|m,ht−1) = D∏ d=1 pθ(x (d) t |m,ht−1,x (1:d−1) t ). (5) As we use a supervised learning loss, we expect πprior to approximate the behavior policy πb. Note that traditional BO algorithms require running Bayesian inference and then conducting a global search in the hyperparameter space with an acquisition function. Thus the runtime complexity of making one hyperparameter suggestion is cubic in t for a typical GP-based BO method that performs ARD each iteration [45]. 
In contrast, generating one suggestion by the OPTFORMER consists of decoding D parameter tokens with an input sequence of (D + 3)t tokens, which are then parsed into the D parameter values, producing a runtime of O(D2t) linear in t, with proper caching. Function prediction: To decode the real-valued function yt from the discrete distribution Pθ(ȳt|m̄, h̄t−1, x̄t), we construct the same piecewise constant distribution as in Eq. (4) with the range [ymin, ymax] used in tokenization. Note that the limited support of y will not be a concern for HPO when either the range is known or we set the range large enough compared to observed values. For more general use as a few-shot function prediction model, one could consider adopting the Riemann Distribution in [40], which supports an unbounded range. Augmented HPO policies with function prediction: At best, the learned policy πprior can only perform as well as the original policy πb when using behavioral cloning. However, we can take advantage of the model’s simultaneous function prediction ability to improve the policy with modelbased planning or offline RL techniques. While a comprehensive study of policy improvements for Transformers is out of the scope of this work, we consider here a simple yet effective policy improvement operator: sampling M = 100 candidate suggestions from πprior and choosing the suggestion with the highest score defined by an acquisition function u(·) as follows: πu(xt|m,ht−1) = argmax {x(i)}Mi=1 u(pθ(·|m,ht−1,x(i))), with x(i) i.i.d.∼ πprior(x|m,ht−1). (6) Common acquisition functions include Expected Improvement (EI), Probability of Improvement (PI), Upper Confidence Bound (UCB), and Thompson Sampling, see for example [46]. At a high level, this approach to combining imitated policies with function prediction is reminiscent of the idea behind the offline RL approach of BCQ [47]. Because we apply a linear mapping from the original y value to the quantized value ȳ before discretization, we can simply define the acquisition functions on the discrete distribution Pθ(ȳ|m̄, h̄t−1, x̄t) as follows: uEI(x|ȳ∗) = EPθ(ȳ|m,ht−1,x) [max(ȳ − ȳ ∗, 0)] , (7) uUCB(x|α) = Quantile(Pθ(ȳ|m,ht−1,xt), α) , (8) uPI(x|ȳ∗) = ∑ ȳ>ȳ∗ Pθ(ȳ|m,ht−1,x) , (9) uTS(x) = ȳ, with ȳ ∼ Pθ(ȳ|m,ht−1,xt) , (10) where ȳ∗ = maxτ≤t−1 ȳτ in EI and PI is the threshold to measure improvement. We define the UCB acquisition function with a quantile parameter α. Our TS acquisition is defined as a sampled function value at a given location from the marginal predictive distribution. It is inspired by the traditional Thompson Sampling method [45] but different in that the correlation between different locations is ignored. 5 Data Training the OPTFORMER requires HPO studies with optimization trajectories. The most natural dataset we possess is the entire Google Vizier [2] database, one of the world’s largest collections of real world hyperparameter tuning studies, which we denote as RealWorldData. There are around 750K studies, each with on average 300 trials, covering a vast class of production and machine learning applications at Google, ranging from vision, speech, NLP and robotics, and representing one of the most representative distributions of HPO tasks for machine learning models in practice. These studies were generated with a mixture of non-adaptive, evolutionary, and BO algorithms. However, as the dataset does not contain sufficient algorithm information, we have to treat the corresponding behavior policy as a randomly mixed algorithm πb. 
In addition, we create two new datasets based on public benchmarks. HPO-B is the largest public benchmark for HPO containing about 1.9K tuning tasks, most of which use one of 16 shared search spaces. In the continuous evaluation setting, it fits an XGBoost model to the trial data of every tuning task as the objective function. For further control over specific function dimensions and properties, we use the blackbox optimization benchmark BBOB [48], consisting of 24 types of synthetic functions with customizable properties (dimension sizes, rotations, shifts, discretizations, noise types) we randomize over. For each of the two public benchmarks (HPO-B and BBOB), we apply a fixed set of 7 HPO algorithms to generate a dataset of optimization trajectories. In contrast to RealWorldData, we specify the algorithm name in the metadata m as part of the conditioning input for our model. The controlled algorithms used are: (1) Grid Search, (2) Shuffled Grid Search, (3) Random Search, (4) Regularized Evolution [49], (5) Hill-Climbing, (6) Eagle Strategy [50], and (7) Vizier’s GP-UCB [2]. Appendix B contains detailed explanations of the algorithms. 6 Experiments We train a single Transformer model with 250M parameters on the union of the three datasets described above, RealWorldData, HPO-B, and BBOB (hyperparameter details in Appendix D.2). Each dataset contains a corresponding “test” set of functions, either using synthetic functions (BBOB) or fitting a machine learning model to obtain the objective (RealWorldData, HPO-B). We evaluate mainly on the two natural HPO benchmarks, RealWorldData and HPO-B. The train/test subsets of RealWorldData are split temporally to avoid information leak (see Appendix C for details). To aggregate results across functions with different output scaling, we normalize all the test functions. This is standard practice in the literature [2, 5, 51–54]. We define our performance metric at trial t as the best-so-far normalized function value maxi∈{1:t}(yi − yrand)/(ymax − yrand), where yrand is the median of function values randomly sampled in the search space to be robust to outliers, and ymax is the maximum, if known, or best value found by any algorithm. For the HPO-B benchmark, we use the recommended bounds provided in [5]. We also consider other metrics when comparing different algorithms in Appendix E.3, including the performance profile and average ranking. We find our results are consistent over different metrics. Because the OPTFORMER is trained to predict the conditional distributions of parameter and function values, we would like to answer the following questions when evaluating on unseen test problems: 1. Can the OPTFORMER learn to imitate multiple HPO algorithms with one model? (Section 6.1) 2. Can the OPTFORMER learn a good prior over hyperparameter response functions? (Section 6.2) 3. Is the OPTFORMER a competitive approach for HPO? (Section 6.3) 6.1 Imitating HPO policies We first evaluate how well the OPTFORMER can learn the conditional distribution of parameter suggestions given by the behavior policies in the dataset, and how well it can imitate multiple algorithms. As the algorithm’s name is contained in the metadata m, we can modify the behaviour of the policy πprior(xt+1|m,ht) simply by altering this variable. Fig. 2a compares two different policies to the OPTFORMER, when it is conditioned on the corresponding policy name. We observe a good match between the imitated algorithms and the OPTFORMER (additional algorithms are shown in Appendix E.1). In Fig. 
2b we run target policies on the BBOB dataset’s test functions and compare the optimization trajectories of the algorithms and the OPTFORMER. In Fig. 2c we compare the average and standard deviation of the best normalized function values at trial 100. Our model imitates most algorithms very accurately in both the mean and variance except for the most complicated algorithm, Vizier, where πprior is slightly worse in the LUNACEK benchmark. We expand on this in Appendix E.1. Because Vizier is the best performing HPO algorithm among all considered, the OPTFORMER will imitate Vizier faithfully, although not perfectly, in the following experiments. 6.2 Learning priors for hyperparameter response functions In this section, we assess the OPTFORMER’s ability to learn the conditional distribution of the function value as a few-shot function regressor. Specifically, for every function in each test dataset, we repeatedly sample up to 200 random trials (x1, y1, . . .xt, yt), t ≤ 200, and predict the conditional distribution p(yt|x1, y1, . . . ,xt). We compare with a GP model with output warping — details provided in Appendix B. We report the log-predictive likelihood log p(yt|xt, . . . ) in Table 4. As uncertainty estimation is important for HPO, we also evaluate how well the function predictive distribution is calibrated. When a predictive distribution pθ(y| . . . ) matches the true distribution, the estimated CDF F (y) = ∫ y −∞ pθ(y ′| . . . )dy′ will be uniformly distributed. In Fig. 3, we plot the cumulative histogram of F (y) on RealWorldData test set and check the deviation from the diagonal line to assess goodness-of-fit as proposed by Rosenblatt [55]. The OPTFORMER has a smaller Table 4: Log-predictive likelihood (with 1-std. standard error, higher is better (↑)) and ECE (percentage of error, lower is better (↓)) on RealWorldData and HPO-B test sets. Log-predictive likelihood ↑ Model RealWorldData HPO-B GP 0.83(0.06) 4.03(0.04) OPTFORMER 2.12 (0.05) 6.16 (0.04) ECE (percent %) ↓ Model RealWorldData HPO-B GP 5.34 (0.06) 2.39 (0.05) OPTFORMER 1.11 (0.02) 1.89 (0.01) 0.0 0.2 0.4 0.6 0.8 1.0 CDF level F 0.0 0.2 0.4 0.6 0.8 1.0 Pe rc en ta ge o f d at a wi th CD F( y) F GP OptFormer Figure 3: Cumulative histogram of predicted CDF(y) on RealWorldData test set. deviation than the GP almost across the entire range. We also compare calibration performance using the expected calibration error (ECE) [27]. Readers are referred to [27] and Appendix E.2 for a detailed explanation of ECE. We observe from Table 4 that the OPTFORMER achieves better predictive likelihood and ECE than the GP on both datasets. 6.3 Augmenting a prior policy with function prediction We evaluate the OPTFORMER as a hyperparameter optimization algorithm on two benchmarks, RealWorldData and HPO-B. We compare our prior policy, the OPTFORMER, and an augmented policy with Expected Improvement, the OPTFORMER (EI), against standard HPO baselines, including Random Search, our implementation of GP-UCB, and the well-tuned Vizier service. For HPO-B, we also include the GP (not to be confused with our GP-UCB) and DGP (GP with deep kernel) baseline results provided by the original paper [5]. Additionally, we include three recent transferlearning methods based on multi-task GP models: ABLR [12, 56], FSBO [7], and HyperBO [57, 58] (implementation details in Appendix B). Please note that all of these transfer learning methods require learning GPs on multiple tasks sharing the same search space. 
Therefore, none of them apply to the RealWorldData benchmark where every study has its own search space. We show the trajectory of the best normalized function value averaged over all functions from each benchmark in Fig. 4. While the prior policy returned by the OPTFORMER does not perform as well as Vizier, it is comparable or slightly better than our GP-UCB baseline and ABLR. The most significant improvement is achieved when we augment our prior policy with the Expected Improvement acquisition function. The resulting OPTFORMER (EI) outperforms all baselines across the board on both benchmarks. This illustrates that the OPTFORMER is able to learn the distribution of functions in the meta-training split and transfers to the meta-testing split. It is worth noting that to run 100 trials for about half of the test functions, the required history token sequence is longer than the 1024-token length used in training, with the maximum length about twice the training horizon. The superior performance of the OPTFORMER (EI) thus demonstrates its good generalization performance beyond the optimization horizon it is trained for. 6.4 Ablations We provide further ablations on three important components for our policy: Training dataset. To understand the impact of the training datasets on the OPTFORMER, we train three variants on individual datasets (OPTFORMER-"R","H","B" respectively for RealWorldData, HPO-B, BBOB) and study their transfer learning performances on HPO-B. Fig. 5a verifies that training with in-domain data ("H") gives better performance than training over the more diverse across-domain RealWorldData HPO dataset ("R"), which is better than training over the synthetic BBOB data ("B"). Nonetheless, training on RealWorldData is enough to give comparable performance to the best transfer learning baseline at the end of 100 trials. Lastly, training on all of the datasets (OPTFORMER) gives a further advantage over OPTFORMER-H. This suggests that more data does not hurt the model’s performance but rather may improve it, even if the extra data is out-of-domain. Meta-data m. We have demonstrated how the OPTFORMER’s behavior can be controlled by the algorithm name in metadata m in Section 6.1. Here we study whether the OPTFORMER learns to depend on other meta information. At inference time, we provide minimum information in m (OPTFORMER-min) by excluding all textual information and parameter value ranges. We only keep necessary information such as parameter types and algorithm names. Fig. 5b shows that the prior policy of OPTFORMER-min performs comparably with the OPTFORMER, partly due to the use of data augmentation (see Appendix D.2). The augmented policy OPTFORMER-min (EI) (dashed orange) improves upon the prior policy but is significantly worse than the full model, suggesting that the missing metadata impacts the model’s predictions on function values. Prior policy. Section 6.3 demonstrated the benefit of adding an acquisition function to the prior policy. A natural question is whether a good prior policy is needed at all. In Fig. 5c, we replace the prior policy in the OPTFORMER (EI) with random search (Random Search (EI), dashed blue line). While adding Expected Improvement still improves this random search policy’s performance, the best method requires both a good prior policy and the acquisition function. Choice of acquisition function. In Fig. 
5d, we compare the Expected Improvement (EI) with Thompson Sampling (TS), Probability of Improvement (PI), and Upper Confidence Bound (UCB) with a confidence level of 0.9. We observe that the prior policy is improved by all the acquisition functions. In particular, OPTFORMER (EI) is the best among all the choices, though the difference is relatively small compared to its advantage over the other baselines and the OPTFORMER prior policy. We provide additional analysis with results on both the RealWorldData and HPO-B datasets, as well as other evaluation metrics, in Appendix E.4.

7 Limitations and future extensions

We list a few limitations of this work and discuss some potential extensions. (1) We did not consider parameters that do not always apply or are subject to dynamic constraints depending on other parameter values. Such parameters are common in AutoML [59] and NAS applications [60]. Our work can be extended to support these applications by providing the conditional specifications as text in metadata m. (2) We also considered only sequential optimization with a batch size of 1. To support parallel suggestions, one could apply random masking to input function value observations to simulate scenarios with parallel pending trials [33]. (3) While we trained the Transformer to clone the behavior policy offline, there is an extensive literature on offline RL [29] that could be applied here [25, 47, 61–64]. One could also consider meta-training acquisition functions as in [31] within the same model, and online fine-tuning as in [7, 41]. (4) We considered a single objective function, though multiple objectives can be easily included by outputting multiple function tokens in a trial. (5) The maximum sequence length is limited by the quadratic memory requirement of a Transformer, which could be mitigated with more scalable architecture variants such as Performer [65].

8 Conclusion

We presented a first step toward learning a universal Transformer model for hyperparameter optimization from large-scale datasets containing tuning experiments with vastly different search spaces and experiment descriptions. By training on a diverse set of synthetic and real-world tuning trajectories, we demonstrated the capacity of a single Transformer model to imitate 7 fundamentally different HPO policies, learn to make well-calibrated few-shot function predictions, and provide competitive optimization performance on unseen test functions, comparable with existing, long-established GP-based baselines. Many extensions are readily conceivable for future exploration.

Acknowledgments

We would like to thank Chris Dyer, Luke Metz, Kevin Murphy, Yannis Assael, and Esteban Real for providing valuable feedback during their reviews of this paper. We further thank Sebastian Pineda Arango for technical discussions on the HPO-B benchmark and Christof Angermueller for discussions on biological benchmarks. In addition, we thank Daniel Golovin, Daiyi Peng, Yingjie Miao, Jack Parker-Holder, Jie Tan, Lucio Dery, and Aleksandra Faust for multiple useful conversations.
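To make the policy-augmentation idea of Secs. 6.3–6.4 concrete, below is a minimal sketch of re-ranking a prior policy's candidate suggestions by an acquisition value computed from Monte Carlo samples of the model's predicted function value. The interfaces `sample_candidates` and `sample_function_values` are hypothetical stand-ins for decoding parameter and function-value tokens from the model, and the formulas are the standard Monte Carlo estimates of EI, PI, UCB, and TS rather than the paper's exact implementation.

```python
import numpy as np

def acquisition(y_samples, y_best, kind="ei", ucb_level=0.9):
    """Monte Carlo acquisition value from samples of the predicted function value.

    y_samples: 1-D array drawn from the model's predictive p(y | history, x).
    y_best: best function value observed so far (maximization convention).
    """
    y_samples = np.asarray(y_samples, dtype=float)
    if kind == "ei":   # Expected Improvement
        return np.mean(np.maximum(y_samples - y_best, 0.0))
    if kind == "pi":   # Probability of Improvement
        return np.mean(y_samples > y_best)
    if kind == "ucb":  # Upper Confidence Bound via an empirical quantile
        return np.quantile(y_samples, ucb_level)
    if kind == "ts":   # Thompson Sampling: score with a single posterior sample
        return y_samples[0]
    raise ValueError(kind)

def augmented_suggest(history, y_best, sample_candidates, sample_function_values,
                      num_candidates=100, num_y_samples=64, kind="ei"):
    """Pick the prior-policy candidate with the highest acquisition value.

    sample_candidates(history, n): draws n suggestions x from the prior policy.
    sample_function_values(history, x, m): draws m samples of y from the model's
    predictive distribution at x. Both interfaces are assumed for illustration.
    """
    candidates = sample_candidates(history, num_candidates)
    scores = [acquisition(sample_function_values(history, x, num_y_samples), y_best, kind)
              for x in candidates]
    return candidates[int(np.argmax(scores))]
```

Note that only candidates drawn from the prior policy are scored, so with a poor prior (e.g., random search) the acquisition function can merely re-rank uninformative suggestions, which is consistent with the ablation in Fig. 5c.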
1. What is the focus and contribution of the paper regarding transformer language models and hyperparameter optimization? 2. What are the strengths and weaknesses of the proposed approach, particularly in its training process and evaluation? 3. Do you have any concerns or questions about the ablation study discussed in the review? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any additional experiments or comparisons that could enhance the paper's findings?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors train a single Transformer language model (LM) on a variety of tokenized HPO trajectories from a variety of tasks and HPO algorithms. For each HPO trajectory the LM is primed on the task/algorithm metadata (name, search space, metric, algorithm) and is then trained to predict hyperparameters as well as the optimized metric step by step for each trial in order.

Strengths And Weaknesses
Strengths: The results are strong: reproducing the performance of most algorithms up to 100 trials as well as improving upon them. This could be the path forward for the HPO community. The authors provide insightful ablations. The method is clearly described and simple.
Weaknesses: This paper neither open-sources its codebase (built upon an open-source codebase), nor the trained model, nor even the training data (built upon open datasets). Actually, even a performance number for optimization for follow-up work to compare with is missing, as scores are calculated using an undisclosed metric and the main results are only reported in the form of plots and not tables. For RealWorldData (and HPO-B actually, even though less interesting there) a SOTA BO method would be an interesting baseline to add, like HEBO. Is the conclusion of the meta-data ablation (line 287) based on a model trained with meta-data? In that case, I would guess the worse performance stems from a train/test distribution shift, rather than from the missing metadata itself.
Summary: The results are strong, with some evaluation problems. It is not reproducible, though, which in this case is a particularly big problem, as this paper proposes a very new direction for a field of research in which the expertise to reproduce the results based only on descriptions (previous methods require a very different background) and the resources to reproduce the results without data (running HPO with different optimizers on millions of problems) are missing.

Questions
Is the discretization of scalars for the input important? How would a model perform where these numbers are normalized in some way and fed to the network directly? Do you have an explanation for the strong performance of using RealWorldData for training (Fig. 5a) compared to the larger HPO-B dataset, when comparing on HPO-B? (Transfer from a smaller dataset is better than in-domain training. This is unusual.) What do you mean by "temporal train/test splits" in line 269? How do you calculate the Thompson Sampling utility function?

Limitations
The listed limitations are fair, even though one limitation I would expect to be there is missing: handling much longer sequences, as the Transformer is trained with a maximum sequence length.
NIPS
Title Recovering Latent Causal Factor for Generalization to Distributional Shifts

Abstract Distributional shifts between training and target domains may degrade the prediction accuracy of learned models, mainly because these models often learn features that possess only correlation rather than a causal relation with the output. Such a correlation, which is known statistically as “spurious correlation”, is domain-dependent and hence may fail to generalize to unseen domains. To avoid such a spurious correlation, we propose Latent Causal Invariance Models (LaCIM) that specify the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only the causal factor for prediction. Specifically, the LaCIM introduces a pair of correlated latent factors: (a) the causal factor and (b) others, while the extent of this correlation is governed by a domain variable that characterizes the distributional shifts. On the basis of this, we prove that the distribution of observed variables conditioning on latent variables is shift-invariant. Equipped with such an invariance, we prove that the causal factor can be recovered without mixing information from others, which induces the ground-truth predicting mechanism. We propose a Variational-Bayesian-based method to learn this invariance for prediction. The utility of our approach is verified by improved generalization to distributional shifts on various real-world data. Our code is freely available at https://github.com/wubotong/LaCIM.

1 Introduction

Current data-driven deep learning models, though revolutionary in various tasks, often exploit all types of correlations to fit the data well. Among such correlations, there can be spurious ones corresponding to biases (e.g., confounding bias due to the presence of a third unseen factor) inherited from the data provided. Such data-dependent spurious correlations can erode the prediction power on unseen domains with distributional shifts, which can cause serious consequences, especially in safety-critical tasks such as healthcare. Recently, there has been a renaissance of causality in machine learning, which is expected to pursue causal relationships [59] to achieve stable generalization across domains. The so-called area of “causality” is pioneered by Structural Causal Models [51], as a mathematical formulation of this metaphysical concept grasped in the human mind. The incorporation of these human priors about cause and effect endows the model with the ability to identify the causal structure [51], which entails not only the data but also the underlying process of how they are generated. To achieve causal modeling, the old-school methods [52, 10] directly causally related the output label Y to a subset of covariates X, which is, however, not conceptually reasonable in applications with sensory-level data (e.g., modeling pixels as causal factors of the output does not make sense in image classification [11]).

∗Corresponding author. †Work done during an internship at Microsoft Research Asia.

For such applications, we rather adopt the manner of human visual perception [8, 9, 80] to causally relate the label Y to unobserved abstractions denoted by S, i.e., Y ← S. We further assume the existence of another non-causal latent factor (of Y) denoted as Z that, together with S, generates the input X: X ← (S,Z). Such an assumption is similarly adopted in the literature [25, 27, 35, 75, 71].
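As a toy illustration of this assumed generating process — and of the domain-varying S–Z correlation introduced in the next paragraphs and later simulated in Sec. 5.1 — the sketch below draws data from several environments in which only the strength of the spurious correlation between the causal factor S and the non-causal factor Z changes. All functional forms, dimensions, and coefficients here are invented for illustration and are not the paper's actual simulation setup.

```python
import numpy as np

def sample_environment(n, corr, rng):
    """Draw n samples from one environment.

    corr controls the domain-dependent spurious correlation between S and Z;
    the mechanisms X <- (S, Z) and Y <- S are identical across environments,
    which is the invariance the paper exploits.
    """
    c = rng.normal(size=(n, 1))                          # unobserved confounder C
    s = c + rng.normal(scale=0.5, size=(n, 1))           # causal factor S
    z = corr * c + rng.normal(scale=0.5, size=(n, 1))    # non-causal factor Z
    x = np.concatenate([np.tanh(s) + 0.1 * rng.normal(size=(n, 1)),
                        np.tanh(z) + 0.1 * rng.normal(size=(n, 1))], axis=1)  # X <- (S, Z)
    y = (s + 0.1 * rng.normal(size=(n, 1)) > 0).astype(int).ravel()           # Y <- S
    return x, y, s, z

rng = np.random.default_rng(0)
# Training environments differ only in the S-Z correlation strength; a test
# environment may reverse it, breaking the spurious correlation at test time.
envs = {f"train_{i}": sample_environment(1000, corr, rng)
        for i, corr in enumerate([0.9, 0.5])}
envs["test"] = sample_environment(1000, -0.9, rng)
```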
To model shifts across domains, we allow Z to be spuriously correlated with S (hence also with the output), as marked by the bidirected arrow in Fig. 1 (a). Taking image classification as an example, S and Z respectively refer to object-related abstractions (e.g., contour, texture) and contextual information (e.g., background, view). Due to this correlation, the model can absorb contextual information into its prediction, which may fail to generalize to domains in which this correlation is broken. We encapsulate the above assumptions into the skeleton illustrated in Fig. 1 (a), in which the spurious correlation between S and Z varies across domains, as marked by the red bi-directed arrow in Fig. 1 (b). On closer inspection, such a domain-dependent spurious correlation is governed by an auxiliary domain variable D in Fig. 1 (c), which causes the domain shifts. We call the set of causal models augmented with D Latent Causal Invariance Models (LaCIM). Here, the “Causal Invariance” refers to P(Y|S), which, together with P(X|S,Z), can be proved to be stable to the shifts across domains, under the assumptions embedded in the causal structure of LaCIM. Equipped with such an invariance, we prove that S and the ground-truth predictor P(Y|s⋆), for x generated from (s⋆, z⋆), are identifiable up to transformations that do not mix in the non-causal information. Under such an identifiability guarantee, we propose to learn P(Y|S) and P(X|S,Z) by reformulating the Variational Auto-encoder (VAE) [37] to fit the joint distribution of the input and output variables from the training domains. During the test stage, we first infer the value of S by optimizing the estimated P(X|S,Z) over the latent space, followed by the learned P(Y|S) for prediction. We first use simulated data to verify the correctness of the identifiability claim. Then, to demonstrate the utility, we test our approach on real-world data, consistently achieving better generalization to the new distribution; besides, we find that our inferred causal factor concentrates on highly explainable semantic regions for the task of image classification. We summarize our contributions as follows: Methodologically (in sec. 4.1), we propose LaCIM, in which the causal assumptions of two latent factors and the distributional shifts are incorporated; Theoretically (in theorem 4.4), we prove the identifiability of the causal factor and the ground-truth predicting mechanism; Algorithmically (in sec. 4.3), guided by the identifiability, we reformulate the variational Bayesian method to learn P(X|S,Z), P(Y|S) for prediction; Experimentally (in sec. 5.2), our approach generalizes better to distributional shifts, compared with others.

2 Related Work

Causality for Domain Generalization. Due to its stable transferability, the concept of causality has been introduced in many recent works for domain generalization [39, 59, 52, 10, 40, 21, 68]. Most of these works learn an assumed (causal) invariance for generalizing to unseen domains. However, they suffer from either i) lacking explicit causal modeling, or ii) inappropriate causal relations made for the output. Specifically, for i), [39, 59] are still data-driven methods that learn stable correlation (i.e., invariance) without incorporating causal assumptions [51] beyond the data, which may impede generalization to a broader set of domains; for ii), [52, 10, 40, 21, 68] causally relate the output with covariates, which is inappropriate for sensory-level data.

Our Specification.
We explicitly incorporate the causal assumptions. Specifically, we introduce i) latent factors and separate them into the causal and the non-causal factor; and ii) the domain variable D, as a selecting mechanism that generates the varied S-Z correlation across domains. Such a causal modeling makes it possible to recover the causal factor S for generalization. In independent and concurrent works, [75] and [28] also explore latent variables in causal relations. In comparison, [75] did not differentiate S from Z. The spurious correlation in [28] is limited to the correlation between domains and the output, while in our setting it is allowed to exist within a single domain, which is more aligned with real scenarios, e.g., the dog is more associated with grass than snow in a domain where most samples are collected on sunny mornings.

Other Conceptually Related Works: (i) transfer learning, which leverages invariance in the context of domain adaptation [60, 81, 17] or domain generalization [43, 63]; (ii) causal inference [51, 53], which builds structural causal models and defines interventions (a.k.a. “do-calculus”) for cause-effect reasoning and counterfactual learning; and (iii) latent generative models, which assume generation from a latent space to observed data [37, 71] but aim at learning the generator in the unsupervised scenario.

3 Preliminaries

Problem Setting. Let X, Y respectively denote the input and output variables. The training data {D^e}_{e∈E_train} are collected from multiple environments e ∈ E_train, where each e is associated with a distribution P^e(X,Y) over X × Y and D^e := {x^e_i, y^e_i}_{i∈[n_e]} i.i.d.∼ P^e, with [k] := {1, ..., k} for any k ∈ Z^+. Our goal is to learn a robust predictor f : X → Y that only exploits the causal factor for prediction and generalizes well to all domains E ⊃ E_train. We use upper-case, lower-case, and calligraphic letters to denote, respectively, a random variable, an instance, and the space, e.g., a is an instance in the space A of the random variable A. For A := f(X) ∩ B with B := R_{p[i_1]} × R_{p[i_2]} × ... × R_{p[i_k]}, [f(x)]_A denotes f(x) restricted to the dimensions of A, i.e., [f(x)]_A := [f_{i_1}(x), ..., f_{i_k}(x)]. The Sobolev space W^{k,p}(A) contains all f such that ∫_A |∂^α_A f(a)|^p dµ(a) < ∞ for all α ≤ k.

Structural Causal Model. The structural causal model (SCM) is defined as a triplet M := ⟨G, F, P(ε)⟩, in which i) the causal structure G := (V, E) (V, E respectively denote the node and edge sets) is described by a directed acyclic graph (DAG); ii) the structural equations F := {f_k}_{V_k∈V} are autonomous, i.e., intervening on V_k does not affect the others, based on which we can define the do-operator and calculate causal effects; iii) P(ε) are the probability measures of the exogenous variables {ε_k}_k. By assuming independence among {ε_k}_k, we obtain, according to the Causal Markov Condition, that each P compatible with G satisfies P({V_k = v_k}_{V_k∈V}) = Π_k P(V_k = v_k | Pa(k) = pa(k)). An acyclic directed mixed graph (ADMG) further allows the existence of bidirectional arrows ↔, meaning a spurious correlation between the two variables connected.

4 Methodology

We first incorporate the causal assumptions into LaCIM in sec. 4.1. Under such assumptions, we identify the invariant distributions P(X|S,Z) and P(Y|S), which are respectively dubbed generative invariance and causal invariance and are robust to domain shifts. Equipped with these invariances, we show in sec. 4.2 that the causal factor can be identified without mixing information from the non-causal one during prediction. Finally, we introduce our learning method in sec.
4.3 to estimate P(X|S,Z) and P(Y|S), which are respectively used for the inference and prediction steps that constitute a robust predictor at test stage.

4.1 Latent Causal Invariance Models

In this section, we introduce a set of structural causal models dubbed Latent Causal Invariance Models (LaCIM), which incorporate the causal assumptions mentioned above and also the source of distributional shifts. The corresponding causal structure of LaCIM is illustrated in Fig. 1 (c), which we introduce step by step from the skeleton in Fig. 1 (a).

Fig. 1 (a). Specifically, the ADMG in Fig. 1 (a) introduces latent factors V := {S, Z} to model the abstractions/concepts that generate the observed variables (X, Y), as similarly assumed in unsupervised latent generative models [37] for image generation. Further, we explicitly separate V into S and Z, with only S causally related to the label Y. In image classification, such a causal factor refers to the shape and contour of the object to be classified, while the image X is additionally affected by contextual factors such as light and view.

Fig. 1 (a) → Fig. 1 (b). In addition, we assume that S is spuriously correlated with Z, as marked by the red “↔” in Fig. 1 (a). Such a spurious correlation corresponds to the bias inherited from the data, e.g., the contextual information in image classification. Therefore, the magnitude of this correlation is distribution-dependent and thus can vary across domains. Statistically, the “spurious correlation” implies the presence of a third unobserved confounder (we use a dotted circle to represent unobserved variables), which is denoted as C in Fig. 1 (b). The unblocked path from Z to Y induced by C can lead to learning the non-causal factor during data fitting, which can degrade the performance on unseen domains if the correlation between this non-causal factor and the output is broken.

Fig. 1 (b) → Fig. 1 (c). Inspecting Fig. 1 (b) further, the varying degree of correlation can be due either to the distributional shift of S, Z | C or to that of C itself across domains (we use red color to mark varied distributions). As both shifts are domain-dependent, we in Fig. 1 (c) ascribe them to a domain variable D, which drives the change of its children nodes’ distributions, i.e., those of S, Z and C. Such a domain variable has been similarly introduced in [69, 68] to generate mutable variables. In our scenario, we do not require D to be observed; rather, we only need the domain index d̃^e (a one-hot encoded vector of length m := |E_train|). The set of SCMs augmented with D, with the SCM Markovian compatible with the DAG over C, S, Z, X, Y in Fig. 1 (c), is dubbed Latent Causal Invariance Models (LaCIM) and is formally defined as follows:

Definition 4.1 (LaCIM). The LaCIM denotes a set of SCMs augmented with the domain variable D, i.e., {⟨M^e, d^e⟩}_{e∈E}, in which d^e denotes the value of D and M^e := ⟨G, F^e, P(ε)⟩ for e. G denotes the DAG restricted on C, S, Z, X, Y. For each environment/domain e, F^e := {f_x, f_y, f^e_s, f^e_z, f^e_c} corresponds to the generating mechanisms of X, Y, S, Z, C, with f^e_c(ε_c) := g_c(ε_c, d^e), f^e_s(c, ε_s) := g_s(c, ε_s, d^e) and f^e_z(c, ε_z) := g_z(c, ε_z, d^e) for some g_c, g_s, g_z.

Remark 1. Different from scenarios in which X generates Y [28] or is generated from Y [1], we consider the scenario in which X and Y are generated concurrently, which widely exists but is ignored in the literature.
For example, during medical diagnosis, clinicians record the disease status while performing the ultrasound test at the same time. As an illustration, we consider the following example, in which the distributional shifts caused by the domain variable D correspond to sampling bias in the data.

Example 4.1 (Sampling Bias). Consider the cat/dog classification, in which the animal in each image is associated with either snow or grass. The D refers to the sampler, which generates C, the time and weather at which each sample is collected. S, Z respectively refer to the features of the animals and of the context. Since each sampler may have a fixed sampling pattern (e.g., being used to going out in the sunny morning, or in the snowy evening), the data one collects may exhibit sampling bias: dogs (cats) are more associated with grass (snow) in the sunny morning (snowy evening).

Def. 4.1 specifies the generating mechanisms across environments and how they differ. Equipped with such a specification, we can identify the invariant mechanisms that are stable to domain shifts:

Proposition 4.2 (Causal Invariance & Generative Invariance). For the LaCIM in Def. 4.1, P(Y|S) and P(X|S,Z) are invariant to shifts across E, and are respectively denoted as the Causal Invariance (CI) and the Generative Invariance (GI).

Remark 2. The generating process from latent variables to observed variables follows physical laws, e.g., the shape, contour, color, view and light should satisfy physical constraints to generate a reasonable image. Therefore, it naturally holds that such generating processes are invariant.

P(X|S,Z) and P(Y|S) can induce an invariant predicting mechanism. Specifically, for a new sample x ← f_x(s⋆, z⋆, ε_x), y ← f_y(s⋆, ε_y), we can first infer the causal factor s⋆ from p_{f_x}(x|s, z) by maximizing the log-likelihood of p_{f_x}(x|s, z) over S × Z, and then feed the estimated s⋆ into p_{f_y}(y|s⋆) for prediction. To ensure the robustness of such a two-step invariant prediction, we need to answer the following two identifiability questions: 1. Can the inferred causal factor S avoid mixing the information of (i.e., be disentangled from) others? 2. Can such an invariant predictor recover the ground-truth predictor P(Y|s⋆)? We will answer these questions in the subsequent section, followed by our learning methods to identify the causal factor and the causal/generative invariance for prediction.

4.2 Identifiability Analysis

We present the identifiability results regarding (i) the disentanglement of the inferred causal factor S from the non-causal Z, and (ii) the induced true predicting mechanism P(Y|s⋆) for x ← f_x(s⋆, z⋆, ε_x), which respectively echo the two questions posed in the last section. Our main results are presented in theorem 4.4. To distinguish the causal factor S from others, our results require that the degree of diversity of the S-Z correlation across environments is large enough, which has been similarly assumed in the literature on identifiability [52, 1]. Such a diversity condition implies a dramatic change of the correlation between Z and Y, thus providing a clue to disentangle S. Such a disentanglement analysis is crucial to causal prediction but is ignored in the existing literature on identifiability, such as works identifying discrete latent confounders [32, 62], or those relying on the Additive Noise Model (ANM) assumption [31] or linear Independent Component Analysis (ICA) [14, 35, 36, 75] (please refer to supplement D.1 for more exhaustive reviews).
More importantly, we will later in theorem 4.5 show the extension of the above analysis from the exponential family of P(S,Z|C) to the Sobolev space, and from the ANM for Y to a categorical distribution for Y. We assume the ANM for f_x(s, z, ε_x) = f̂_x(s, z) + ε_x (we replace f̂_x with f_x for simplicity), which has been widely adopted to identify the causal factor [30, 54, 35]. We assume f_x to be bijective and invertible (we discuss this later). We first narrow our interest to a subset of LaCIM denoted as P_exp, in which any model satisfies that (i) S, Z belong to the exponential family; and (ii) Y is generated from the ANM:

P_exp = { LaCIM with any m > 0 | y = f_y(s) + ε_y, p^e(s, z|c) := Π_{t=s,z} p_{T^t, Γ^t_{c,d^e}}(t|c), ∀e }, with

p_{T^t, Γ^t_{c,d^e}}(t) = Π_{i=1}^{q_t} exp( Σ_{j=1}^{k_t} T^t_{i,j}(t_i) Γ^t_{c,d^e,i,j} + B_i(t_i) − A^t_{c,d^e,i} ), ∀ k_t, q_t,   (1)

for t = s, z and e ∈ E, with q_t, k_t respectively denoting the dimension of t = s, z and the number of natural parameters in each dimension. The {T^t_{i,j}(t_i)} and {Γ^t_{c,d^e,i,j}} denote the sufficient statistics and natural parameters, and {B_i} and {A^t_{c,d^e,i}} denote the base measures and normalizing constants that ensure the distribution integrates to 1. Let T^t(t) := [T^t_1(t_1), ..., T^t_{q_t}(t_{q_t})] ∈ R^{k_t × q_t} (with T^t_i(t_i) := [T^t_{i,1}(t_i), ..., T^t_{i,k_t}(t_i)], ∀i ∈ [q_t]), and Γ^t_{c,d^e} := [Γ^t_{c,d^e,1}, ..., Γ^t_{c,d^e,q_t}] ∈ R^{k_t × q_t} (with Γ^t_{c,d^e,i} := [Γ^t_{c,d^e,i,1}, ..., Γ^t_{c,d^e,i,k_t}], ∀i ∈ [q_t]). We further assume that P^e(C) is a discrete distribution on the set {c_1, ..., c_R}, with which p^e(s, z) := ∫ p(s|c) p(z|c) dP^e(c) = Σ_r p^e(s, z|c_r) p^e(c_r) can be regarded as a mixture of exponential family distributions. Rather than unique inference, we aim at disentangling S from Z and also recovering the ground-truth predictor, which is formally defined as ∼exp-identifiability as follows:

Definition 4.3 (∼exp-identifiability). Suppose that X ⊇ f_x(S × Z). We define a binary relation θ ∼exp θ̃ on the parameter space of X × Y: there exist two sets of permutation matrices and vectors, (M_s, a_s) and (M_z, a_z) for s and z respectively, such that for any (x, y) ∈ X × Y the following hold:

T̃^s([f̃_x^{-1}]_S(x)) = M_s T^s([f_x^{-1}]_S(x)) + a_s,   T̃^z([f̃_x^{-1}]_Z(x)) = M_z T^z([f_x^{-1}]_Z(x)) + a_z;   (2)

p_{f̃_y}(y | [f̃_x^{-1}]_S(x)) = p_{f_y}(y | [f_x^{-1}]_S(x)).   (3)

We then say that θ is ∼exp-identifiable if, for any θ̃, p^e_θ(x, y) = p^e_{θ̃}(x, y) ∀e ∈ E_train implies θ ∼exp θ̃.

This definition is inspired by, but goes beyond, the unsupervised scenario considered in nonlinear ICA [27, 35] in that it further disentangles S from Z (in Eq. (2)) and identifies the true predicting mechanism (in Eq. (3)). To see the disentanglement, note that for any clean (noise-free) sample x ← f_x(s⋆, z⋆), Eq. (2) ensures that the inferred causal factor T̃^s([f̃_x^{-1}]_S(x)) does not mix the information of others, except in the extreme case that there is a deterministic function between S and Z, in which case it is impossible for S to be identified. With such an identification of s, Eq. (3) further guarantees that the learned p_{f̃_y}(y|[f̃^{-1}]_S(x)) recovers the ground-truth prediction probability density, i.e., p_{f_y}(y|[f_x^{-1}]_S(x)) = p_{f_y}(y|s⋆). With noise, s⋆ can be inferred with some indeterminacy. The formal result is presented in theorem 4.4.

Theorem 4.4 (∼exp-identifiability). For θ of P_exp in Def. 4.1 with m := |E_train|, we have that θ is ∼exp-identifiable under the following assumptions: 1. The characteristic functions of ε_x, ε_y are almost everywhere nonzero. 2. f_x, f′_x, f′′_x are continuous and f_x, f_y are bijective; 3.
The {T^t_{i,j}}_{1≤j≤k_t} are linearly independent in S or Z for each i ∈ [q_t] for any t = s, z; and the T^t_{i,j} are twice differentiable for any t = s, z, i ∈ [q_t], j ∈ [k_t]; 4. The set {(T^s([f^{-1}]_S(x)), T^z([f^{-1}]_Z(x))) ; B(x) > 0} contains a non-empty open set in R^{q_s×k_s + q_z×k_z}, with B(x) := Π_{i_s∈[q_s]} B_{i_s}([f^{-1}]_{i_s}(x)) · Π_{i_z∈[q_z]} B_{i_z}([f^{-1}]_{i_z}(x)); 5. The matrices L := [P^{e_1}(C)^T, ..., P^{e_m}(C)^T]^T ∈ R^{m×R} and [[Γ^{t=s,z}_{c_2,d^{e_1}} − Γ^{t=s,z}_{c_1,d^{e_1}}]^T, ..., [Γ^{t=s,z}_{c_R,d^{e_m}} − Γ^{t=s,z}_{c_1,d^{e_1}}]^T]^T ∈ R^{(R×m)×(q_t×k_t)} have full column rank.

The assumptions 1-3 are trivial and easy to satisfy. The characteristic functions of ε_x, ε_y are almost everywhere non-zero for most continuous variables, such as the Gaussian, exponential, beta and gamma distributions. This assumption ensures the identifiability of p(f^{-1}(x)), as will be shown in the appendix. The bijectivity of f_x and f_y has been widely assumed in [30, 54, 53, 35, 75] as a basic condition for identifiability. It naturally holds for f_x to be bijective, since it has been empirically shown for auto-encoders [38] that low-dimensional embeddings (i.e., q_s + q_z < q_x) can recover the original input well, and also that the variational auto-encoder can extract meaningful representations from x. For θ with categorical Y such that p(y = k|s) = [f_y]_k(s) / (Σ_k [f_y]_k(s)), f_y may not satisfy the bijectivity condition. We will show identifiability for such a categorical case later in theorem 4.5. Assumption 3 is uniformly satisfied for all distributions in the strongly exponential family. The containment of an open set in assumption 4 for {(T^s([f^{-1}]_S(x)), T^z([f^{-1}]_Z(x))) ; B(x) > 0} implies that the space spanned by the sufficient statistics is dense in some open set, as a sufficient condition for the mixture distribution P^e(C) and also P^e(X,Y|c) to be identified. The diversity assumption 5 implies that i) m ≥ R and m·R ≥ max(k_z·q_z, k_s·q_s) + 1; and that ii) different environments are diverse enough in terms of the S-Z correlation, which is almost a necessary condition for the invariant one to be identified (a different version is assumed in [1]). In supplement B.2, we show that ii) holds unless the space of Γ belongs to a set of zero (Lebesgue) measure. As indicated by the formulation, a larger m makes it easier to satisfy the condition, which agrees with the intuition that more environments provide more complementary information. Besides, our result can be extended to the non-independent case among {s_1, ..., s_{q_s}} (or {z_1, ..., z_{q_z}}), i.e., p_{T^t, Γ^t_{c,d^e}}(t) = exp(⟨T^t(t), Γ^t_{c,d^e}⟩ + B(t) − A^t_{c,d^e}) (t = s, z), which is shown in supplement B.2.

Extension to the general forms of LaCIM. Theorem 4.5 extends the result to general forms of LaCIM, as long as P(S,Z|C = c) ∈ W^{r,2}(S × Z) (for some r ≥ 2), and to categorical Y. This is accomplished by proving that any model in LaCIM can be approximated by a sequence of distributions with parameterization in P_exp, motivated by the result in [3] that the exponential family is dense in the set of distributions with bounded support, and the result in [44] that a continuous variable with a multinomial logit model can be approximated by a series of distributions with i.i.d. Gumbel noise as the temperature converges to infinity. The proof is left to the supplement.

Theorem 4.5 (Asymptotic ∼exp-identifiability). Suppose the LaCIM satisfies that p(x|s, z) and p(y|s) are smooth w.r.t. s, z and s, respectively.
For each e and c ∈ C, suppose P^e(S,Z|c) ∈ W^{r,2}(S × Z) for some r ≥ 2; then the LaCIM is asymptotically ∼exp-identifiable: ∀ε > 0, there exists a ∼exp-identifiable P̃_θ ∈ P_exp such that d_Pok(p^e(X,Y), p̃^e_θ(X,Y)) < ε, ∀e ∈ E_train.³ Our proof is built on the result of [3] that any probability distribution in the Sobolev space can be approximated by a sequence of distributions with the number of natural parameters going to infinity, i.e., k_t → ∞.

4.3 Learning and Inference

Guided by the identifiability result, we propose to learn P(X|S,Z) and P(Y|S) via generative modeling following Fig. 1 (c). Then, to predict the label for a new sample x generated from (s⋆, z⋆), we first leverage the learned p(x|s, z) to infer s⋆, which is guaranteed not to mix in the non-causal information, followed by the learned p(y|s̃⋆) for prediction.

³ d_Pok(µ_1, µ_2) denotes the Prokhorov distance between µ_1 and µ_2, with lim_{n→∞} d_Pok(µ_n, µ) → 0 ⟺ µ_n →_d µ.

4.3.1 Learning Method

To learn P(X|S,Z) and P(Y|S) for invariant prediction, we reformulate the objective function of the Variational Auto-Encoder (VAE) in the supervised scenario, in order to fit {p^e(x, y)}_{e∈E_train}. As a latent generative model, the VAE was originally proposed for unsupervised generation from latent variables V to the high-dimensional input variable X. To make such a generation tractable, the VAE introduces a variational distribution q_ψ, parameterized by ψ, to approximate the intractable posterior by maximizing the following Evidence Lower Bound (ELBO):

−L_{θ,ψ} = E_{p(x)}[ E_{q_ψ(v|x)} log ( p_θ(x, v) / q_ψ(v|x) ) ] ≤ E_{p(x)}[log p_θ(x)],

where the equality is achieved when q_ψ(v|x) = p_θ(v|x). Therefore, maximizing the ELBO over p_θ and q_ψ will drive (i) q_ψ(v|x) to approximate p_θ(v|x); and (ii) p_θ to estimate the ground-truth model p. To adapt the above surrogate loss to our DAG in Fig. 1 (c), we introduce the variational distribution q^e_ψ(s, z|x, y) for each environment e. The corresponding ELBO for e is

−L^e_{θ,ψ} := E_{p^e(x,y)}[ E_{q^e_ψ(s,z|x,y)} log ( p^e_θ(x, y, s, z) / q^e_ψ(s, z|x, y) ) ],

where p^e_θ(x, y, s, z) = p_θ(x|s, z) p_θ(y|s) p^e(s, z). Similarly, minimizing L^e_{θ,ψ} drives p_θ(x|s, z), p_θ(y|s) to approximate p(x|s, z), p(y|s), and also q^e_ψ(s, z|x, y) to estimate p^e_θ(s, z|x, y). Therefore, q_ψ can inherit the properties of p_θ. As p^e_θ(s, z|x, y) = p^e_θ(s, z|x) p_θ(y|s) / p^e_θ(y|x) for our DAG in Fig. 1 (c), we can similarly reparameterize q^e_ψ(s, z|x, y) as q^e_ψ(s, z|x) p_θ(y|s) / q^e_ψ(y|x), with q_ψ(y|s) replaced by p_θ(y|s) (since the goal of q_ψ is to mimic the behavior of p_θ). Then, L^e_{θ,ψ} can be rewritten as:

L^e_{θ,ψ} = E_{p^e(x,y)}[ − log q^e_ψ(y|x) − E_{q^e_ψ(s,z|x)} ( p_θ(y|s) / q^e_ψ(y|x) ) log ( p_θ(x|s, z) p^e_θ(s, z) / q^e_ψ(s, z|x) ) ],   (4)

where q^e_ψ(y|x) = ∫_S q^e_ψ(s|x) p_θ(y|s) ds. We correspondingly parameterize the prior model p^e_θ(s, z) and the inference model q^e_ψ(s, z|x) as p_θ(s, z|d̃^e) and q_ψ(s, z|x, d̃^e), in which d̃^e (of environment e) denotes the domain index, represented by a one-hot encoded vector of length m := |E_train|. The overall loss function is:

L_{θ,ψ} := Σ_{e∈E_train} L^e_{θ,ψ}.   (5)

The training datasets {D^e}_{e∈E_train} are applied to optimize the prior models {p(s, z|d̃^e)}_e, the inference models {q_ψ(s, z|x, d̃^e)}_e, the generative model p_θ(x|s, z) and the predictive model p_θ(y|s). In particular, the parameters of p_θ(x|s, z) and p_θ(y|s) are shared among all environments, motivated by the invariance property of P(X|S,Z) and P(Y|S) across all domains.

4.3.2 Inference & Prediction. We leverage the learned P(X|S,Z) and P(Y|S) for prediction. According to Prop. 4.2 and Eq.
(3) in theorem 4.4, the predictor induced by P(X|S,Z) and P(Y|S) can recover the true predicting mechanism for any distributional shift from E. Specifically, for any x generated by (s⋆, z⋆), we first maximize the following penalized log-likelihood of p_θ(x|s, z) over S × Z to infer s⋆, z⋆:

max_{s,z} log p_θ(x|s, z) − λ_s ‖s‖₂² − λ_z ‖z‖₂²,   (6)

with hyperparameters λ_s > 0 and λ_z > 0 that keep the learned s, z at a reasonable scale. Note that Eq. (6) is different from maximum a posteriori estimation, since the posterior q^e_ψ(s, z|x) is parameterized differently for different e, while p_θ(x|s, z) is invariantly parameterized for E (this is because p(x|s, z) is invariant). For optimization, we adopt the strategy in [61]: we first sample some candidate points from N(0, I) and select the best one in terms of Eq. (6) as the initial point, and then use Adam to optimize for another T iterations. The implementation details and optimization effect are shown in supplement E.2. Finally, with the estimated s̃⋆, z̃⋆, we apply the learned p_θ(y|s̃⋆) for prediction: ỹ := arg max_y p_θ(y|s̃⋆).

5 Experiments

We first verify the identifiability claims of theorem 4.4 in sec. 5.1. Then we evaluate LaCIM on real-world data in sec. 5.2: the Non-I.I.D. Image dataset with Contexts (NICO); Colored MNIST (CMNIST); and the Alzheimer’s Disease Neuroimaging Initiative (ADNI, www.loni.ucla.edu/ADNI, for early prediction of Alzheimer’s Disease), to verify the generalization ability of our method on target domains with distributional shifts.

5.1 Simulation

To verify the identifiability claims, we implement LaCIM on synthetic data. We generate C, S, Z, X, Y following Fig. 1 (with details left to the supplementary material). We choose m = 3, 5 with the same total number of samples. To verify the advantage of learning on multiple diverse domains (m > 1), we compare with pool-LaCIM: minimizing the loss in Eq. (4) on the pooled data from all m domains. We compute the mean correlation coefficient (MCC) adopted in [35], which measures the goodness of identifiability under permutation by introducing cost optimization to assign each learned component to the source component. We run all methods 100 times, with the average recorded in Fig. 2a. The superiority of LaCIM over pool-LaCIM, together with the fact that LaCIM with m = 5 performs better than with m = 3, verifies the benefit of more domains for satisfying the diversity condition. To illustrate the learning effect, we visualize the learned Z (with S left to supplement E.1) in Fig. 2b.

5.2 Real-world Data

We verify the generalization ability of LaCIM on three datasets: NICO, CMNIST and ADNI. Dataset. We describe the datasets as follows (X, Y denote the input and output; D is unobserved): • NICO. We consider the cat/dog classification in the “Animal” dataset of NICO, a benchmark for the non-i.i.d. problem [20]. Each animal is associated with “grass” or “snow” contexts. The D denotes the attributes of the sampler. The C denotes the time and weather of sampling, which generates S, Z that respectively denote the semantic and contextual features. We split the dataset into m training domains and the test domain, in which each domain has different proportions of contexts associated with each animal, i.e., (%cat in grass, %cat in snow, %dog in grass, %dog in snow), due to different sampling strategies determined by D. The proportion vectors of all domains are introduced in Tab. 3. The distributional shift refers to the spurious correlation between the context and the label.
• CMNIST: We relabel the digits 0-4 and 5-9 as y = 0 and y = 1, based on MNIST. Then we color p^e (1 − p^e) of the images with y = 0 (y = 1) green and the others red. We set m = 2 with p^{e_1} = 0.95, p^{e_2} = 0.99, while p^{e_test} for the test domain is set to 0.1. The D denotes the attributes of the painter. The Z, S respectively represent the features related to the color and the digit. Their confounder C denotes the time and weather in which the painter D draws the number and color, e.g., the painter tends to draw red 0 more often than green 1 in the sunny morning. In this regard, the distributional shift refers to the spurious correlation between the color and the label. • ADNI. The Y := {0, 1, 2}, with 0, 1, 2 respectively denoting Normal Control, Mild Cognitive Impairment and AD. The X is a structural magnetic resonance image. We split the data into m = 2 training domains and the test domain, with different values of D, which denotes Age and TAU (a biomarker [24]). The C, S (Z) respectively denote the hormone level that affects brain structure development and the disease-related (-unrelated) brain regions. The distributional shifts among all domains are due to different values of D.

Compared Baselines & Implementation Details. We compare with (i) Empirical Risk Minimization from X → Y (ERM), (ii) domain-adversarial neural network (DANN) [15], (iii) Maximum Mean Discrepancy with Adversarial Auto-Encoder (MMD-AAE) [43], (iv) Domain Invariant Variational Autoencoders (DIVA) [29], (v) Invariant Risk Minimization (IRM) [1], and (vi) Supervised VAE (sVAE): our LaCIM implemented by a VAE without disentangling S, Z. For all methods, the network structures of q^e_ψ(s, z|x), p_θ(x|s, z) and p_θ(y|s) for CMNIST, NICO and ADNI are shared (details introduced in supplement E.4, E.5, E.6, Tab. 7, 8). We use SGD as the optimizer, with learning rate (lr) 0.5 and weight decay (wd) 1e-5 for CMNIST; lr 0.01 with 0.2× decay every 60 epochs, wd 5e-5 for NICO and ADNI (wd is 2e-4). The batch sizes are set to 256, 30 and 4 for CMNIST, NICO and ADNI.

Main Results & Discussions. We report accuracy over 10 runs for each method. As shown in Tab. 1, our LaCIM consistently outperforms the others on all datasets. Specifically, the advantage over IRM and ERM may be due to the incorporation of the causal assumptions embedded in Fig. 1 (c). Further, the improvement over sVAE benefits from the separation of S from others to avoid spurious correlation. Besides, a larger m (with the total sample size fixed) can bring further benefit on NICO, which may be due to the easier satisfaction of the diversity condition in theorem 4.4.

Interpretability. We visualize the learned S and Z on CMNIST and NICO. Specifically, for CMNIST, we visualize the generated image (with only digit “0” among all classes that belong to Y = 0 and digit “7” among all classes that belong to Y = 1) by interpolating S (and Z) with fixed Z (and S); for NICO, we adopt the gradient method [67], which visualizes the derivatives of S⋆ (i.e., the dimension of S that has the highest correlation with Y) with respect to each image. As shown in Fig. 3a, the generated sequential images in the 1st and 2nd rows look more and more like “7” (starting from “0”) as s increases, while the sequential images in the 2nd row change from red to green as z increases. Besides, different dimensions of S can learn different differentiating semantic information. For example, the first dimension can learn to add the dash of the hand-written "7", while the second dimension can learn to remove the left part of "0" towards "7" as interpolated.
The dimensions of Z, in contrast, learned other, non-differentiating factors such as width and color. This result reflects that the learned S and Z correspond to the digit (the causal factor of Y) and the color-related features, respectively. For NICO, Fig. 3b shows that LaCIM identifies more explainable semantic features than ERM, whose learned features can mix in background information. Supplement E.5 provides more results.

6 Conclusions & Discussions

We propose recovering the latent causal factor that is robust to distributional shifts caused by a domain variable. We introduce the causal and non-causal latent factors, which are spuriously correlated with each other and generate the input and the output via invariant mechanisms. Under this invariance, the causal factor is guaranteed to be disentangled from the non-causal one, which induces the ground-truth predictor that holds on all domains. A reformulated generative model is proposed for inferring the causal factor and for prediction. A possible drawback of our model lies in the required number of environments for identifiability, the relaxation of which is left to future work.

Broader Impact We claim that this work does not present any foreseeable negative social impact.
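To make the test-time procedure of Sec. 4.3.2 concrete, below is a minimal sketch of the inference-and-prediction step around Eq. (6). It assumes a learned decoder p_θ(x|s,z) and classifier p_θ(y|s) are available as PyTorch modules; the interface names, the Gaussian observation model, the latent dimensions, and the default penalty weights are all illustrative assumptions rather than the paper's exact implementation.

```python
import torch

def infer_and_predict(x, decoder, classifier, lambda_s=1e-3, lambda_z=1e-3,
                      dim_s=8, dim_z=8, num_init=64, num_steps=100, lr=0.1):
    """Two-step invariant prediction: infer (s, z) by maximizing the penalized
    log-likelihood of Eq. (6), then predict y from the causal factor s only.

    x: input tensor with a leading batch dimension of 1.
    decoder(s, z): mean of an assumed Gaussian p(x|s, z).
    classifier(s): logits of p(y|s).
    """
    def objective(s, z):
        recon = decoder(s, z)
        # Gaussian observation model: log p(x|s,z) up to an additive constant.
        log_lik = -((recon - x) ** 2).flatten(1).sum(dim=1)
        return log_lik - lambda_s * (s ** 2).sum(dim=1) - lambda_z * (z ** 2).sum(dim=1)

    # Sample candidate initial points from N(0, I) and keep the best one.
    s0 = torch.randn(num_init, dim_s)
    z0 = torch.randn(num_init, dim_z)
    with torch.no_grad():
        best = int(objective(s0, z0).argmax())
    s = s0[best:best + 1].clone().requires_grad_(True)
    z = z0[best:best + 1].clone().requires_grad_(True)

    # Refine with Adam for a fixed number of iterations.
    opt = torch.optim.Adam([s, z], lr=lr)
    for _ in range(num_steps):
        opt.zero_grad()
        loss = -objective(s, z).sum()
        loss.backward()
        opt.step()

    # Prediction uses only the inferred causal factor s.
    with torch.no_grad():
        return classifier(s).argmax(dim=1)
```

Because the decoder and classifier are invariantly parameterized across environments (Prop. 4.2), this test-time optimization does not use the domain index; only the environment-specific priors and encoders depend on it during training.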
1. What are the strengths and weaknesses of the proposed LaCIM model in dealing with distributional shifts between training and testing domains? 2. How practical is the proposed method, and what are the concerns regarding its applicability in real-world scenarios? 3. Are there any out-of-distribution generalization guarantees provided in the paper for the learned invariant predictor? 4. How clear and concise are the notations used in Section 4.2, particularly for readers who may be unfamiliar with the domain variable approach? 5. Are there any other approaches or references that could help relax the conditional independence assumptions made in the paper?
Summary Of The Paper Review
Summary Of The Paper
To deal with the issue of distributional shifts between training and testing domains, the authors propose Latent Causal Invariance Models (LaCIM), consisting of both causal latent factors and non-causal latent factors, where the extent of the correlation between them is governed by a domain variable. They theoretically show the identifiability of the causal latent factors and the ground-truth predicting mechanism in the proposed LaCIM. Based on the identifiability, they learn the model by reformulating the VAE and then verify it on various real-world data.

Review
Overall I like the idea and technically the paper makes sense. Here are some of my concerns. My main concern is how practical the proposed LaCIM is. If I understand correctly, from Definition 4.1 we know that for any e, we have S ⫫ Z | C, that is, S ⫫ Z | C, D. Also, from the assumption over the prior p_{T,Γ}, we see that S_i ⫫ S_j | C, D for any i ≠ j and Z_i ⫫ Z_j | C, D for any i ≠ j. These conditional independence (given C and D) assumptions play a key role in proving the identifiability. Conversely, if the latent variables S and Z do not satisfy these conditional independence assumptions, the identifiability would not hold true and the proposed method would fail to identify the causal factors. Am I right? Actually, in many real-world scenarios, these assumptions do not hold. For example, when some part of Z is affected by Y [1-5], or Z is directly affected by S [6], etc. In fact, [3,7] provided some more general approaches covering the dependent cases for identifiability, which might be helpful to relax the assumptions in this paper. Another concern is that the authors do not seem to explicitly provide out-of-distribution generalization guarantees, i.e., some theoretical guarantee that the learned invariant predictor generalizes from the training environments in E_train to all the environments in E. Proposition 4.2 is more of a definition than a proof. It might be better if the authors could provide more details about it. The notations in Section 4.2 seem a bit messy. For example, what is q_t? What is k_t? etc. All these appear for the first time and should be explained better. In the last sentence of the caption of Figure 1, is it right that "the red and blue respectively means the invariant and varied values/distributions"? Or the opposite?

References
[1] Invariant Risk Minimization. Arjovsky et al., 2019.
[2] Invariant Risk Minimization Games. Ahuja et al., 2020.
[3] Nonlinear Invariant Risk Minimization: A Causal Approach. Lu et al., 2021.
[4] Elements of Causal Inference: Foundations and Learning Algorithms. Peters et al., 2017.
[5] Domain Adaptation under Target and Conditional Shift. Zhang et al., 2013.
[6] Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style. Kügelgen et al., 2021.
[7] ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA. Khemakhem et al., 2020.
NIPS
Title Recovering Latent Causal Factor for Generalization to Distributional Shifts Abstract Distributional shifts between training and target domains may degrade the prediction accuracy of learned models, mainly because these models often learn features that possess only correlation rather than causal relation with the output. Such a correlation, which is known as “spurious correlation” statistically, is domaindependent hence may fail to generalize to unseen domains. To avoid such a spurious correlation, we propose Latent Causal Invariance Models (LaCIM) that specifies the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only causal factor for prediction. Specifically, the LaCIM introduces a pair of correlated latent factors: (a) causal factor and (b) others, while the extent of this correlation is governed by a domain variable that characterizes the distributional shifts. On the basis of this, we prove that the distribution of observed variables conditioning on latent variables is shift-invariant. Equipped with such an invariance, we prove that the causal factor can be recovered without mixing information from others, which induces the ground-truth predicting mechanism. We propose a Variational-Bayesian-based method to learn this invariance for prediction. The utility of our approach is verified by improved generalization to distributional shifts on various real-world data. Our code is freely available at https://github.com/wubotong/LaCIM. 1 Introduction Current data-driven deep learning models, revolutionary in various tasks though, often exploit all types of correlations to fit data well. Among such correlations, there can be spurious ones corresponding to biases (e.g., confounding bias due to the presence of a third unseen factor) inherited from the data provided. Such data-dependent spurious correlations can erode the prediction power on unseen domains with distributional shifts, which can cause serious consequences especially in safety-critical tasks such as healthcare. Recently, there is a Renaissance of causality in machine learning, expected to pursue causal relationships [59] to achieve stable generalization across domains. The so-called area of “causality” is pioneered by Structural Causal Models [51], as a mathematical formulation of this metaphysical concept grasped in the human mind. The incorporation of these human priors about cause and effect endows the model with the ability to identify the causal structure [51] which entails not only the data but also the underlying process of how they are generated. To achieve causal modeling, the old-school methods [52, 10] directly causally related the output label Y to a subset of covariates X , which is however not conceptually reasonable in applications with sensory-level data (e.g. model pixels as causal factors of the output does not make sense in image classification [11]). ∗Corresponding author †Work done during an internship at Microsoft Research Asia. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). For such applications, we rather adopt the manner of human visual perception [8, 9, 80] to causally relate the label Y to unobserved abstractions denoted by S, i.e., Y ← S. We further assume the existence of another non-causal latent factor (of Y ) denoted as Z, that together with S generate the input X: X ← (S,Z). Such an assumption is similarly adopted in the literature [25, 27, 35, 75, 71]. 
To model shifts across domains, we allow Z to be spuriously correlated with S (hence also the output), as marked by the bidirected arrow in Fig. 1 (a). Taking image classification as an example, the S and Z respectively refer to object-related abstractions (e.g., contour, texture) and contextual information (e.g., background, view). Due to this correlation, the model can learn contextual information into prediction, which may fail to generalize to the domain such that this correlation is broken. We encapsulate above assumptions into the skeleton illustrated in Fig. 1 (a), in which the spurious correlation between S and Z varies across domains, as marked by the red bi-directed arrow in Fig. 1 (b). Taking a closer inspection, such a domain-dependent spurious correlation is governed by an auxiliary domain variable D in Fig. 1 (c), which causes the domain shifts. We call the set of causal models augmented with D as Latent Causal Invariance Models (LaCIM). Here, the “Causal Invariance” refers to P (Y |S), which together with P (X|S,Z), can be proved to be stable to the shifts across domains, under the assumptions embedded in the causal structure of LaCIM. Equipped with such an invariance, we prove that the S and the ground-truth predictor: P (Y |s?) for x generated from (s?, z?), are identifiable up to transformations that do not mix the non-causal information. Under such an identifiability guarantee, we propose to learn the P (Y |S) and P (X|S,Z) by reformulating the Variational Auto-encoder (VAE) [37] to fit the joint distribution of the input and output variables from training domains. During the test stage, we first infer the value of S by optimizing the estimated P (X|S,Z) over latent space, followed by the learned P (Y |S) for prediction. We first use simulated data to verify the correctness of the identifiability claim. Then, to demonstrate the utility, we test our approach on real-world data, consistently achieving better generalization to the new distribution; besides, we find that our inferred causal factor can be concentrated in highly explainable semantic regions for the task of image classification. We summarize our contribution as follows: Methodologically (in sec. 4.1), we propose LaCIM in which the causal assumptions of two latent factors and the distributional shifts are incorporated; Theoretically (in theorem 4.4), we prove the identifiability of the causal factor and the ground-truth predicting mechanism; Algorithmically (in sec. 4.3), guided by the identifiability, we reformulate Variational Bayesian method to learn P (X|S,Z), P (Y |S) for prediction; Experimentally (in sec. 5.2), our approach generalizes better to distributional shifts, compared with others. 2 Related Work Causality for Domain Generalization. Due to its stable transferability, the concept of causality has been introduced in many recent works for domain generalization [39, 59, 52, 10, 40, 21, 68]. Most of these works learned the assumed (causal) invariance for generalizing to unseen domains. However, they suffer from either i) lacking explicit causal modeling; or ii) inappropriate causal relations made for the output. Specifically, for i), the [39, 59] are still data-driven methods to learn stable correlation (i.e., invariance) without incorporating causal assumptions [51] beyond data, which may impede its generalization to a broader set of domains; for ii), the [52, 10, 40, 21, 68] causally relate the output with covariates, which is inappropriate for sensory-level data. Our Specification. 
We explicitly incorporate the causal assumptions. Specifically, we introduce i) latent factors and separate them into the causal and the non-causal factor; ii) the domain variable D, as a selecting mechanism to generate the varied S-Z correlation across domains. Such a causal modeling makes it possible to recover the causal factor S for generalization. In independent and concurrent works, [75] and [28] also explore latent variables in causal relation. As comparisons, [75] did not differentiate S from Z. The spurious correlation in [28] is limited in the correlation between domains and the output; while it is allowed in our setting to exist in a single domain, which is more aligned with real scenarios, e.g., the dog is more associated with grass than snow in a domain when most samples are collected in sunny morning. Other Conceptually Related Works: i) transfer learning that leverages invariance in the context of domain adaptation [60, 81, 17] or domain generalization [43, 63]; (ii) causal inference [51, 53] which builds structural causal models and define intervention (a.k.a, “do-calculus”) for cause-effect reasoning and counterfactual learning; and (iii) latent generative model that assumes generation from latent space to observed data [37, 71] but aims at learning generator in the unsupervised scenario. 3 Preliminaries Problem Setting. Let X,Y respectively denote the input and output variables. The training data {De}e∈Etrain are collected from multiple environments e ∈ Etrain, where each e is associated with a distribution Pe(X,Y ) over X × Y and De := {xei , yei }i∈[ne] i.i.d∼ Pe with [k] := {1, ..., k} for any k ∈ Z+. Our goal is to learn a robust predictor f : X → Y that only exploit the causal factor for prediction and generalize well to all domains E ⊃ Etrain. We use respectively upper, lower case letter and Cursive letter to denote the random variable, the instance and the space, e.g., a is an instance in the spaceA of random variable A. ForA := f(X )∩B with B := Rp[i1]×Rp[i2]× ...×Rp[ik], the [f(x)]A denotes the f(x) restricted on dimensions of A, i.e., [f(x)]A := [fi1(x), ..., fik(x)]. The Sobolev space W k,p(A) contains all f such that ∫ A ∣∣∂Afα|A=a∣∣pdµ(a) <∞,∀α ≤ k. Structural Causal Model. The structural causal model (SCM) is defined as a triplet M := 〈G,F , P (ε)〉, in which i) the causal structure G := (V,E) (V,E respectively denote the node and edge set) is described by a directed acyclic graph (DAG); ii) the structural equations F := {fk}Vk∈V are autonomous, i.e., intervening on Vk does not affect others, based on which we can define the dooperator and calculate the causal effect; iii) the P (ε) are probability measure for exogenous variables {εk}k. By assuming independence among {εk}k, we obtain according to Causal Markov Condition that each P that is compatible with G has P({Vk = vk}Vk∈V ) = ΠkP(Vk = vk|Pa(k) = pa(k)). An acyclic directed mixed graph (ADMG) can further allow the existence of bidirectional arrows↔, meaning the spurious correlation between two variables connected. 4 Methodology We first incorporate the causal assumptions into LaCIM in sec. 4.1. Under such assumptions, we identify the invariant distributions P (X|S,Z) and P (Y |S), which are repectively dubbed as generative invariance and causal invariance that are robust to domain shifts. Equipped with these invariances, we in sec. 4.2 show that the causal factor can be identified without mixing information from non-causal one during prediction. Finally, we introduce our learning method in sec. 
4.3 to estimate the P (X|S,Z) and P (Y |S), which are respectively resorted in the inference and prediction that constitute a robust predictor during test stage. 4.1 Latent Causal Invariance Models In this section, we introduce a set of structural causal models dubbed as Latent Causal Invariance Model (LaCIM), which incorporates the causal assumptions mentioned above and also the source of distributional shifts. The corresponding causal structure of LaCIM is illustrated in Fig. 1 (c), which we will introduce step-by-step from the skeleton in Fig. 1 (a). Fig. 1 (a). Specifically, the ADMG in Fig. 1 (a) introduces latent factors V := {S,Z} to model the abstractions/concepts that generate the observed variables (X,Y ), as similarly assumed in unsupervised latent generative models [37] for image generation. Further, we explicitly separate the V into S and Z, with only S causally related to the label Y . In image classification, such a causal factor refers to the (shape,contour) of the object need to be classified; while the image X is additionally affected by contextual factor such as light, view. Fig. 1 (a)→ Fig. 1 (b). In addition, we assume that S is spuriously correlated with Z, as marked by the red “↔” in Fig. 1 (a). Such a spurious correlation corresponds to the bias inherited from data, e.g. the contextual information in image classification. Therefore, the magnitude of this correlation is distribution-dependent and thus can vary across domains. Statistically, the “spurious correlation" implicates the presence of a third unobserved (we use dot circle to represent unobserved variables) confounder, which is denoted as C in Fig. 1 (b). The unblocked path from Z to Y induced by C can lead to learning the non-causal factor during data-fitting, which can degrade the performance on unseen domains if the correlation between this non-causal factor and the output is broken. Fig. 1 (b)→ Fig. 1 (c). Taking a further inspection in Fig. 1 (b), the varying degree of correlation can be either due to the distributional shift of S,Z|C or of the C itself across domains (we use red color to mean varied distributions). As both shifts are domain-dependent, we in Fig. 1 (c) ascribe them to a domain variable D, which causes the mutation of its children nodes’ distribution, i.e., S,Z and C. Such a domain variable has been similarly introduced in [69, 68] to generate mutable variables. In our scenario, we do not require D to be observed; rather, we only need the domain index d̃e (one-hot encoded vector with length m := |Etrain|). The set of SCMs augmented with D, with the SCM Markovian compatible to the DAG of C, S, Z,X, Y in Fig. 1 (c), is dubbed as Latent Causal Invariance Models (LaCIM) that is formally defined as follows: Definition 4.1 (LaCIM). The LaCIM denotes a set of SCMs augmented with the domain variable D, i.e., {〈Me, de〉}e∈E , in which de denotes the value of D and Me := 〈G,Fe, P (ε)〉 for e. The G denotes the DAG restricted on C, S, Z,X, Y . For each environment/domain e, the Fe := {fx, fy, fes , fez , fec } correspond to generating mechanism ofX,Y, S, Z,C, with fec (εc) := gc(εc, de), fes (c, εs) := gs(c, εs, d e) and fez (c, εz) := gz(c, εz, d e) from some gc, gs, gz . Remark 1. Different from scenarios in which X generates [28] nor generated from Y [1], we consider the scenario when the X and Y are generated concurrently, which can widely exist but ignored in the literature. 
Remark 1. Different from scenarios in which X generates Y [28] or is generated from Y [1], we consider the scenario in which X and Y are generated concurrently, which widely exists but is ignored in the literature. For example, during medical diagnosis, clinicians record the disease status while carrying out the ultrasound test at the same time.

As an illustration, we consider the following example, in which the distributional shifts caused by the domain variable D can correspond to sampling bias in the data.

Example 4.1 (Sampling Bias). Consider cat/dog classification, in which the animal in each image is associated with either snow or grass. The D refers to the sampler, which generates the C that denotes the time and weather at which each sample is collected. The S, Z respectively refer to the features of the animal and of the context. Since each sampler may have a fixed sampling pattern (e.g., one is used to going out in the sunny morning, or in the snowy evening), the data one collects may have sampling bias: dogs (cats) are more associated with grass (snow) in the sunny morning (or snowy evening).

Def. 4.1 specifies the generating mechanisms across environments and how they differ. Equipped with such a specification, we can identify the invariant mechanisms that are stable to domain shifts:

Proposition 4.2 (Causal Invariance & Generative Invariance). For the LaCIM in Def. 4.1, the P(Y|S) and P(X|S,Z) are invariant to shifts across $\mathcal{E}$, and are respectively denoted as Causal Invariance (CI) and Generative Invariance (GI).

Remark 2. The generating process from latent variables to observed variables follows physical law, e.g., the shape, contour, color, view and light should satisfy physical constraints to generate a reasonable image. Therefore, it naturally holds that such generating processes are invariant.

The P(X|S,Z) and P(Y|S) can induce an invariant predicting mechanism. Specifically, for a new sample $x \leftarrow f_x(s^\star, z^\star, \varepsilon_x)$, $y \leftarrow f_y(s^\star, \varepsilon_y)$, we can first infer the causal factor $s^\star$ from $p_{f_x}(x|s,z)$ by maximizing the log-likelihood of $p_{f_x}(x|s,z)$ over $\mathcal{S} \times \mathcal{Z}$, and then feed the estimated s into $p_{f_y}(y|s^\star)$ for prediction. To ensure the robustness of such a two-step invariant prediction, we need to answer the following two identifiability questions: 1. Can the inferred causal factor S avoid mixing in the information of (i.e., be disentangled from) others? 2. Can such an invariant predictor recover the ground-truth predictor $P(Y|s^\star)$? We will answer these questions in the subsequent section, followed by our learning method to identify the causal factor and the causal/generative invariance for prediction.

4.2 Identifiability Analysis

We present identifiability results regarding (i) the disentanglement of the inferred causal factor S from the non-causal Z, and (ii) the induced true predicting mechanism $P(Y|s^\star)$ for $x \leftarrow f_x(s^\star, z^\star, \varepsilon_x)$, which respectively echo the two questions posed in the last section. Our main results are presented in theorem 4.4. To distinguish the causal factor S from others, our results require that the degree of diversity of the S-Z correlation across environments is large enough, which has been similarly assumed in the identifiability literature [52, 1]. Such a diversity condition implies a dramatic change of the correlation between Z and Y, thus providing a clue to disentangle S. Such a disentanglement analysis is crucial to causal prediction but is ignored in the existing identifiability literature, such as works identifying discrete latent confounders [32, 62], relying on the Additive Noise Model (ANM) assumption [31], or on linear Independent Component Analysis (ICA) [14, 35, 36, 75] (please refer to supplement D.1 for a more exhaustive review).
More importantly, we will later show in theorem 4.5 the extension of the above analysis from the exponential family for P(S,Z|C) to a Sobolev space, and from an ANM for Y to a categorical distribution for Y. We assume the ANM for $f_x$, i.e., $f_x(s, z, \varepsilon_x) = \hat{f}_x(s, z) + \varepsilon_x$ (we replace $\hat{f}_x$ with $f_x$ for simplicity), which has been widely adopted to identify the causal factor [30, 54, 35]. We assume $f_x$ to be bijective, hence invertible (we will discuss this later). We first narrow our interest to a subset of LaCIM, denoted as $\mathcal{P}_{\exp}$, in which any model satisfies that (i) the S, Z belong to the exponential family; and (ii) the Y is generated from the ANM:
$$\mathcal{P}_{\exp} = \Big\{ \text{LaCIM with any } m > 0 \ \Big|\ y = f_y(s) + \varepsilon_y,\ \ p^e(s, z|c) := \prod_{t=s,z} p_{\mathbf{T}^t, \Gamma^t_{c,d^e}}(t|c),\ \forall e \Big\}, \ \text{with}$$
$$p_{\mathbf{T}^t, \Gamma^t_{c,d^e}}(t) = \prod_{i=1}^{q_t} \exp\Big( \sum_{j=1}^{k_t} T^t_{i,j}(t_i)\, \Gamma^t_{c,d^e,i,j} + B_i(t_i) - A^t_{c,d^e,i} \Big), \quad \forall k_t, q_t \qquad (1)$$
for t = s, z and $e \in \mathcal{E}$, with $q_t, k_t$ respectively denoting the dimension of t = s, z and the number of natural parameters in each dimension. The $\{T^t_{i,j}(t_i)\}$ and $\{\Gamma^t_{c,d^e,i,j}\}$ denote the sufficient statistics and natural parameters; the $\{B_i\}$ and $\{A^t_{c,d^e,i}\}$ denote the base measures and normalizing constants that ensure the distribution integrates to 1. Let $\mathbf{T}^t(t) := [\mathbf{T}^t_1(t_1), \dots, \mathbf{T}^t_{q_t}(t_{q_t})] \in \mathbb{R}^{k_t \times q_t}$ (with $\mathbf{T}^t_i(t_i) := [T^t_{i,1}(t_i), \dots, T^t_{i,k_t}(t_i)]$, $\forall i \in [q_t]$), and $\Gamma^t_{c,d^e} := [\Gamma^t_{c,d^e,1}, \dots, \Gamma^t_{c,d^e,q_t}] \in \mathbb{R}^{k_t \times q_t}$ (with $\Gamma^t_{c,d^e,i} := [\Gamma^t_{c,d^e,i,1}, \dots, \Gamma^t_{c,d^e,i,k_t}]$, $\forall i \in [q_t]$). We further assume that $P^e(C)$ is a discrete distribution on the set $\{c_1, \dots, c_R\}$, with which $p^e(s, z) := \int p(s|c)\, p(z|c)\, dP^e(c) = \sum_r p^e(s, z|c_r)\, p^e(c_r)$ can be regarded as a mixture of exponential family distributions.
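For a concrete reading of Eq. (1) (this instance is added for illustration and is not from the original text): if each component $t_i$ given $c$ is Gaussian, $t_i \,|\, c \sim \mathcal{N}\big(\mu^t_{c,d^e,i}, (\sigma^t_{c,d^e,i})^2\big)$, then $k_t = 2$ with sufficient statistics $\mathbf{T}^t_i(t_i) = [t_i, t_i^2]$, natural parameters $\Gamma^t_{c,d^e,i} = \big[\mu^t_{c,d^e,i}/(\sigma^t_{c,d^e,i})^2,\ -1/(2(\sigma^t_{c,d^e,i})^2)\big]$, base measure $B_i(t_i) = 0$, and the usual Gaussian log-normalizer as $A^t_{c,d^e,i}$; the diversity condition of theorem 4.4 below then asks that these domain- and confounder-dependent means and variances vary sufficiently across environments.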
Rather than unique inference, we target disentangling S from Z and also recovering the ground-truth predictor, which is formalized as ∼exp-identifiability as follows:

Definition 4.3 (∼exp-identifiability). Suppose $\mathcal{X} \supseteq f_x(\mathcal{S} \times \mathcal{Z})$. We define a binary relation $\theta \sim_{\exp} \tilde{\theta}$ on the parameter space of $\mathcal{X} \times \mathcal{Y}$: there exist two sets of permutation matrices and vectors, $(M_s, a_s)$ and $(M_z, a_z)$ for s and z respectively, such that for any $(x, y) \in \mathcal{X} \times \mathcal{Y}$, the following hold:
$$\tilde{\mathbf{T}}^s([\tilde{f}^{-1}_x]_{\mathcal{S}}(x)) = M_s \mathbf{T}^s([f^{-1}_x]_{\mathcal{S}}(x)) + a_s, \qquad \tilde{\mathbf{T}}^z([\tilde{f}^{-1}_x]_{\mathcal{Z}}(x)) = M_z \mathbf{T}^z([f^{-1}_x]_{\mathcal{Z}}(x)) + a_z; \qquad (2)$$
$$p_{\tilde{f}_y}(y\,|\,[\tilde{f}^{-1}_x]_{\mathcal{S}}(x)) = p_{f_y}(y\,|\,[f^{-1}_x]_{\mathcal{S}}(x)). \qquad (3)$$
We then say that $\theta$ is ∼exp-identifiable if, for any $\tilde{\theta}$, $p^e_\theta(x, y) = p^e_{\tilde{\theta}}(x, y)\ \forall e \in \mathcal{E}_{\mathrm{train}}$ implies $\theta \sim_{\exp} \tilde{\theta}$.

This definition is inspired by, but goes beyond, the unsupervised scenario considered in nonlinear ICA [27, 35], in that it further disentangles S from Z (in Eq. (2)) and identifies the true predicting mechanism (in Eq. (3)). To see the disentanglement, note that for any clean (noise-free) sample $x \leftarrow f_x(s^\star, z^\star)$, Eq. (2) ensures that the inferred causal factor $\tilde{\mathbf{T}}^s([\tilde{f}^{-1}_x]_{\mathcal{S}}(x))$ does not mix in the information of others, except in the extreme case that there is a deterministic function between S and Z, in which case it is impossible for S to be identified. With such an identification of s, Eq. (3) further guarantees that the learned $p_{\tilde{f}_y}(y\,|\,[\tilde{f}^{-1}]_{\mathcal{S}}(x))$ can recover the ground-truth prediction probability density, i.e., $p_{f_y}(y\,|\,[f^{-1}_x]_{\mathcal{S}}(x)) = p_{f_y}(y|s^\star)$. With noise, the $s^\star$ can be inferred with some indeterminacy. The formal result is presented in theorem 4.4.

Theorem 4.4 (∼exp-identifiability). For θ of $\mathcal{P}_{\exp}$ in Def. 4.1 with $m := |\mathcal{E}_{\mathrm{train}}|$, the θ is ∼exp-identifiable under the following assumptions:
1. The characteristic functions of $\varepsilon_x, \varepsilon_y$ are almost everywhere nonzero.
2. $f_x, f'_x, f''_x$ are continuous and $f_x, f_y$ are bijective;
3. The $\{T^t_{i,j}\}_{1 \le j \le k_t}$ are linearly independent on $\mathcal{S}$ or $\mathcal{Z}$ for each $i \in [q_t]$ and any t = s, z; and the $T^t_{i,j}$ are twice differentiable for any t = s, z, $i \in [q_t]$, $j \in [k_t]$;
4. The set $\big\{\big(\mathbf{T}^s([f^{-1}]_{\mathcal{S}}(x)), \mathbf{T}^z([f^{-1}]_{\mathcal{Z}}(x))\big);\ B(x) > 0\big\}$ contains a non-empty open set in $\mathbb{R}^{q_s \times k_s + q_z \times k_z}$, with $B(x) := \prod_{i_s \in [q_s]} B_{i_s}([f^{-1}]_{i_s}(x)) \prod_{i_z \in [q_z]} B_{i_z}([f^{-1}]_{i_z}(x))$.
5. The matrices $L := [P^{e_1}(C)^{\mathsf{T}}, \dots, P^{e_m}(C)^{\mathsf{T}}]^{\mathsf{T}} \in \mathbb{R}^{m \times R}$ and $\big[[\Gamma^{t}_{c_2,d^{e_1}} - \Gamma^{t}_{c_1,d^{e_1}}]^{\mathsf{T}}, \dots, [\Gamma^{t}_{c_R,d^{e_m}} - \Gamma^{t}_{c_1,d^{e_1}}]^{\mathsf{T}}\big]^{\mathsf{T}} \in \mathbb{R}^{(R \times m) \times (q_t \times k_t)}$ (for t = s, z) have full column rank.

Assumptions 1-3 are mild and easy to satisfy. The characteristic functions of $\varepsilon_x, \varepsilon_y$ are almost everywhere nonzero for most continuous distributions, such as the Gaussian, exponential, beta and gamma distributions; this assumption ensures the identifiability of $p(f^{-1}(x))$, as will be shown in the appendix. The bijectivity of $f_x$ and $f_y$ has been widely assumed in [30, 54, 53, 35, 75] as a basic condition for identifiability. It is natural for $f_x$ to be bijective, since it has been empirically shown for auto-encoders [38] that low-dimensional embeddings (i.e., $q_s + q_z < q_x$) can recover the original input well, and that the variational auto-encoder can extract meaningful representations from x. For θ with categorical Y such that $p(y = k|s) = [f_y]_k(s) / \big(\sum_k [f_y]_k(s)\big)$, the $f_y$ may not satisfy the bijectivity condition; we show identifiability for such a categorical case later in theorem 4.5. Assumption 3 is uniformly satisfied for all distributions in the strongly exponential family. The containment of an open set in assumption 4 implies that the space spanned by the sufficient statistics is dense in some open set, as a sufficient condition for the mixture distribution $P^e(C)$, and also $P^e(X, Y|c)$, to be identified. The diversity assumption 5 implies that i) $m \ge R$ and $m \cdot R \ge \max(k_z \cdot q_z, k_s \cdot q_s) + 1$; and ii) different environments are diverse enough in terms of the S-Z correlation, which is almost necessary for the invariant one to be identified (a different version is assumed in [1]). In supplement B.2, we show that ii) holds unless the space of Γ belongs to a set of zero Lebesgue measure. As indicated by the formulation, a larger m makes the condition easier to satisfy, which agrees with the intuition that more environments provide more complementary information. Besides, our result can be extended to the non-independent case among $\{s_1, \dots, s_{q_s}\}$ (or $\{z_1, \dots, z_{q_z}\}$), i.e., $p_{\mathbf{T}^t, \Gamma^t_{c,d^e}}(t) = \exp(\langle \mathbf{T}^t(t), \Gamma^t_{c,d^e}\rangle + B(t) - A^t_{c,d^e})$ for t = s, z, which is shown in supplement B.2.

Extension to the general forms of LaCIM. We extend to general forms of LaCIM, as long as $P(S, Z|C = c) \in W^{r,2}(\mathcal{S} \times \mathcal{Z})$ (for some $r \ge 2$) and Y is categorical, in the following theorem. This is accomplished by proving that any model in LaCIM can be approximated by a sequence of distributions with parameterization in $\mathcal{P}_{\exp}$, motivated by [3], in which the exponential family is shown to be dense in the set of distributions with bounded support, and by [44], in which a continuous variable with a multinomial logit model is approximated by a series of distributions with i.i.d. Gumbel noise as the temperature converges to infinity. The proof is left to the supplement.

Theorem 4.5 (Asymptotic ∼exp-identifiability). Suppose the LaCIM satisfies that p(x|s, z) and p(y|s) are smooth w.r.t. s, z and s, respectively.
For each e and $c \in \mathcal{C}$, suppose $P^e(S, Z|c) \in W^{r,2}(\mathcal{S} \times \mathcal{Z})$ for some $r \ge 2$. Then the LaCIM is asymptotically ∼exp-identifiable: $\forall \epsilon > 0$, there exists a ∼exp-identifiable $\tilde{P}_\theta \in \mathcal{P}_{\exp}$ such that $d_{\mathrm{Pok}}(p^e(X, Y), \tilde{p}^e_\theta(X, Y)) < \epsilon$, $\forall e \in \mathcal{E}_{\mathrm{train}}$ (see footnote 3). Our proof builds on [3], in which any probability in the Sobolev space can be approximated by a sequence of distributions with the number of natural parameters going to infinity, i.e., $k_t \to \infty$.

4.3 Learning and Inference

Guided by the identifiability results, we propose to learn P(X|S,Z) and P(Y|S) via generative modeling following Fig. 1 (c). Then, to predict the label for a new sample x generated from $(s^\star, z^\star)$, we first leverage the learned p(x|s, z) to infer $s^\star$, which is guaranteed not to mix in the non-causal information, followed by the learned $P(y|\tilde{s}^\star)$ for prediction.

(Footnote 3: $d_{\mathrm{Pok}}(\mu_1, \mu_2)$ denotes the Prokhorov distance between $\mu_1$ and $\mu_2$, with $\lim_{n\to\infty} d_{\mathrm{Pok}}(\mu_n, \mu) = 0 \iff \mu_n \xrightarrow{d} \mu$.)

4.3.1 Learning Method

To learn the P(X|S,Z) and P(Y|S) for invariant prediction, we reformulate the objective function of the Variational Auto-Encoder (VAE) in the supervised scenario, in order to fit $\{p^e(x, y)\}_{e \in \mathcal{E}_{\mathrm{train}}}$. As a latent generative model, the VAE was originally proposed for unsupervised generation from latent variables V to the high-dimensional input variable X. To make such generation tractable, the VAE introduces a variational distribution $q_\psi$, parameterized by ψ, to approximate the intractable posterior by maximizing the following Evidence Lower Bound (ELBO): $-\mathcal{L}_{\theta,\psi} = \mathbb{E}_{p(x)}\big[\mathbb{E}_{q_\psi(v|x)} \log \frac{p_\theta(x, v)}{q_\psi(v|x)}\big] \le \mathbb{E}_{p(x)}[\log p_\theta(x)]$, where equality is achieved when $q_\psi(v|x) = p_\theta(v|x)$. Therefore, maximizing the ELBO over $p_\theta$ and $q_\psi$ drives (i) $q_\psi(v|x)$ to approximate $p_\theta(v|x)$, and (ii) $p_\theta$ to estimate the ground-truth model p.

To adapt the above surrogate loss to our DAG in Fig. 1 (c), we introduce a variational distribution $q^e_\psi(s, z|x, y)$ for each environment e. The corresponding ELBO for e is $-\mathcal{L}^e_{\theta,\psi} \triangleq \mathbb{E}_{p^e(x,y)}\big[\mathbb{E}_{q^e_\psi(s,z|x,y)} \log \frac{p^e_\theta(x, y, s, z)}{q^e_\psi(s, z|x, y)}\big]$, where $p^e_\theta(x, y, s, z) = p_\theta(x|s, z)\, p_\theta(y|s)\, p^e(s, z)$. Similarly, minimizing $\mathcal{L}^e_{\theta,\psi}$ drives $p_\theta(x|s, z), p_\theta(y|s)$ to approximate $p(x|s, z), p(y|s)$, and also $q^e_\psi(s, z|x, y)$ to estimate $p^e_\theta(s, z|x, y)$; therefore, $q_\psi$ can inherit the properties of $p_\theta$. As $p^e_\theta(s, z|x, y) = \frac{p^e_\theta(s, z|x)\, p_\theta(y|s)}{p^e_\theta(y|x)}$ for our DAG in Fig. 1 (c), we can similarly reparameterize $q^e_\psi(s, z|x, y)$ as $\frac{q^e_\psi(s, z|x)\, p_\theta(y|s)}{q^e_\psi(y|x)}$, with $q_\psi(y|s)$ replaced by $p_\theta(y|s)$ (since the goal of $q_\psi$ is to mimic the behavior of $p_\theta$). Then, $\mathcal{L}^e_{\theta,\psi}$ can be rewritten as:
$$\mathcal{L}^e_{\theta,\psi} = \mathbb{E}_{p^e(x,y)}\Big[-\log q^e_\psi(y|x) - \mathbb{E}_{q^e_\psi(s,z|x)} \frac{p_\theta(y|s)}{q^e_\psi(y|x)} \log \frac{p_\theta(x|s, z)\, p^e_\theta(s, z)}{q^e_\psi(s, z|x)}\Big], \qquad (4)$$
where $q^e_\psi(y|x) = \int_{\mathcal{S}} q^e_\psi(s|x)\, p_\theta(y|s)\, ds$. We correspondingly parameterize the prior model $p^e_\theta(s, z)$ and the inference model $q^e_\psi(s, z|x)$ as $p_\theta(s, z|\tilde{d}^e)$ and $q_\psi(s, z|x, \tilde{d}^e)$, in which $\tilde{d}^e$ (of environment e) denotes the domain index represented by a one-hot encoded vector of length $m := |\mathcal{E}_{\mathrm{train}}|$. The overall loss function is:
$$\mathcal{L}_{\theta,\psi} \triangleq \sum_{e \in \mathcal{E}_{\mathrm{train}}} \mathcal{L}^e_{\theta,\psi}. \qquad (5)$$
The training datasets $\{\mathcal{D}^e\}_{e \in \mathcal{E}_{\mathrm{train}}}$ are applied to optimize the prior models $\{p(s, z|\tilde{d}^e)\}_e$, the inference models $\{q_\psi(s, z|x, \tilde{d}^e)\}_e$, the generative model $p_\theta(x|s, z)$ and the predictive model $p_\theta(y|s)$. In particular, the parameters of $p_\theta(x|s, z)$ and $p_\theta(y|s)$ are shared among all environments, motivated by the invariance of P(X|S,Z) and P(Y|S) across all domains.
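To make the objective concrete, below is a minimal sketch of how a Monte-Carlo estimate of Eq. (4) (summed over environments as in Eq. (5)) could be organized. The module names encoder, prior, decoder and predictor, the Gaussian/softmax parameterizations, and the sample count K are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_log_prob(v, mu, logvar):
    # log N(v; mu, diag(exp(logvar))) summed over the last dimension (additive constants dropped).
    return (-0.5 * (logvar + (v - mu) ** 2 / logvar.exp())).sum(-1)

def lacim_loss(x, y, d_e, encoder, prior, decoder, predictor, K=8):
    """Single-environment Monte-Carlo estimate of Eq. (4).

    Assumed (hypothetical) interfaces:
      encoder(x, d_e) -> (mu, logvar) of q^e(s,z|x)   # environment-specific
      prior(d_e)      -> (mu, logvar) of p^e(s,z)      # environment-specific
      decoder(s, z)   -> mean of p_theta(x|s,z)        # shared across environments
      predictor(s)    -> logits of p_theta(y|s)        # shared across environments
    x: [B, D] inputs, y: [B] integer labels, d_e: one-hot domain index of this environment.
    """
    q_mu, q_logvar = encoder(x, d_e)
    p_mu, p_logvar = prior(d_e)

    log_py_samples, weighted_terms = [], []
    for _ in range(K):
        sz = q_mu + torch.randn_like(q_mu) * (0.5 * q_logvar).exp()    # (s, z) ~ q^e(s,z|x)
        s, z = sz.chunk(2, dim=-1)                                      # first half = s, second half = z
        log_px = gaussian_log_prob(x, decoder(s, z), torch.zeros_like(x))              # log p_theta(x|s,z)
        log_py = F.log_softmax(predictor(s), -1).gather(-1, y[:, None]).squeeze(-1)    # log p_theta(y|s)
        log_ratio = log_px + gaussian_log_prob(sz, p_mu, p_logvar) \
                           - gaussian_log_prob(sz, q_mu, q_logvar)      # log[p_theta(x|s,z) p^e(s,z) / q^e(s,z|x)]
        log_py_samples.append(log_py)
        weighted_terms.append(log_py.exp() * log_ratio)                 # p_theta(y|s) * log(...)

    q_y_x = torch.stack(log_py_samples).exp().mean(0)                   # q^e(y|x) = E_{q^e(s|x)} p_theta(y|s)
    inner = torch.stack(weighted_terms).mean(0) / q_y_x                 # E_q[ p_theta(y|s)/q^e(y|x) * log(...) ]
    return (-torch.log(q_y_x) - inner).mean()                           # Eq. (4); summing over e gives Eq. (5)
```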
4.3.2 Inference & Prediction

We leverage the learned P(X|S,Z) and P(Y|S) for prediction. According to Prop. 4.2 and Eq. (3) in theorem 4.4, the predictor induced by P(X|S,Z), P(Y|S) can recover the true predicting mechanism under any distributional shift within $\mathcal{E}$. Specifically, for any x generated by $(s^\star, z^\star)$, we first optimize the following log-likelihood of $p_\theta(x|s, z)$ over $\mathcal{S} \times \mathcal{Z}$ to infer $s^\star, z^\star$:
$$\max_{s,z}\ \log p_\theta(x|s, z) + \lambda_s \|s\|_2^2 + \lambda_z \|z\|_2^2, \qquad (6)$$
with hyperparameters $\lambda_s > 0$ and $\lambda_z > 0$ in order to keep the learned s, z at a reasonable scale. Note that Eq. (6) is different from maximum a posteriori estimation, since the posterior $q^e_\psi(s, z|x)$ is parameterized differently for different e, while $p_\theta(x|s, z)$ is invariantly parameterized over $\mathcal{E}$ (this is because p(x|s, z) is invariant). For optimization, we adopt the strategy in [61]: first sample some candidate points from $\mathcal{N}(0, I)$ and select the best one in terms of Eq. (6) as the initial point, then run Adam for another T iterations. The implementation details and optimization effect are shown in supplement E.2. Finally, with the estimated $\tilde{s}^\star, \tilde{z}^\star$, we apply the learned $p_\theta(y|\tilde{s}^\star)$ for prediction: $\tilde{y} := \arg\max_y p_\theta(y|\tilde{s}^\star)$.
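Continuing the previous sketch (and reusing its gaussian_log_prob helper and module interfaces), the test-time procedure above could look as follows; the numbers of candidates and iterations, the learning rate and the λ values are illustrative assumptions.

```python
def infer_and_predict(x, decoder, predictor, dim_sz,
                      lam_s=1e-3, lam_z=1e-3, n_candidates=64, T=100, lr=0.05):
    """Infer (s, z) for one test sample x (shape [D]) by optimizing Eq. (6), then predict y."""
    def objective(sz):
        s, z = sz.chunk(2, dim=-1)
        log_px = gaussian_log_prob(x, decoder(s, z), torch.zeros_like(x))
        return log_px + lam_s * (s ** 2).sum(-1) + lam_z * (z ** 2).sum(-1)   # Eq. (6)

    # 1) Sample candidate latents from N(0, I) and keep the best one under Eq. (6).
    candidates = torch.randn(n_candidates, dim_sz)
    sz = candidates[objective(candidates).argmax()][None, :].clone().requires_grad_(True)

    # 2) Refine with Adam for T iterations (maximize Eq. (6) by minimizing its negative).
    opt = torch.optim.Adam([sz], lr=lr)
    for _ in range(T):
        opt.zero_grad()
        (-objective(sz)).sum().backward()
        opt.step()

    # 3) Predict with the learned p_theta(y|s) only; the inferred z is discarded.
    s_hat = sz.detach().chunk(2, dim=-1)[0]
    return predictor(s_hat).argmax(-1)
```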
5 Experiments

We first verify the identifiability claims of theorem 4.4 in sec. 5.1. Then we evaluate LaCIM on real-world data in sec. 5.2: the Non-I.I.D. Image dataset with Contexts (NICO); Colored MNIST (CMNIST); and the Alzheimer's Disease Neuroimaging Initiative (ADNI, www.loni.ucla.edu/ADNI) for early prediction of Alzheimer's Disease, to verify the generalization ability of our method on target domains with distributional shifts.

5.1 Simulation

To verify the identifiability claims, we implement LaCIM on synthetic data. We generate C, S, Z, X, Y following Fig. 1 (with details left to the supplement). We choose m = 3, 5 with the same total number of samples. To verify the advantage of learning on multiple diverse domains (m > 1), we compare with pool-LaCIM: minimizing the loss in Eq. (4) on the pooled data from all m domains. We compute the mean correlation coefficient (MCC) adopted in [35], which measures the goodness of identifiability under permutation by introducing a cost optimization to assign each learned component to a source component. We run all methods 100 times, with the averages reported in Fig. 2a. The superiority of LaCIM over pool-LaCIM, together with the fact that LaCIM with m = 5 performs better than with m = 3, verifies the benefit of more domains for satisfying the diversity condition. To illustrate the learning effect, we visualize the learned Z (with S left to supplement E.1) in Fig. 2b.

5.2 Real-world Data

We verify the generalization ability of LaCIM on three datasets: NICO, CMNIST and ADNI.

Dataset. We describe the datasets as follows (X, Y denote the input and output; D is unobserved):
• NICO. We consider the cat/dog classification in the “Animal” dataset of NICO, a benchmark for the non-i.i.d. problem [20]. Each animal is associated with a “grass” or “snow” context. The D denotes the attributes of the sampler. The C denotes the time and weather of sampling, which generates the S, Z that respectively denote the semantic and contextual features. We split the dataset into m training domains and the test domain, in which each domain has different proportions of contexts associated with each animal, i.e., (%cat in grass, %cat in snow, %dog in grass, %dog in snow), due to the different sampling strategies determined by D. The proportion vectors of all domains are given in Tab. 3. The distributional shift refers to the spurious correlation between the context and the label.
• CMNIST: We relabel digits 0-4 and 5-9 as y = 0 and y = 1, based on MNIST. We then color a fraction $p_e$ (resp. $1 - p_e$) of the images with y = 0 (resp. y = 1) green and the others red. We set m = 2 with $p_{e_1} = 0.95$, $p_{e_2} = 0.99$, while $p_{e_{\mathrm{test}}}$ for the test domain is set to 0.1. The D denotes the attributes of the painter. The Z, S respectively represent the features related to the color and the digit. Their confounder C denotes the time and weather at which the painter D draws the digit and color, e.g., the painter tends to draw a red 0 more often than a green 1 in the sunny morning. In this regard, the distributional shift refers to the spurious correlation between the color and the label.
• ADNI. The Y := {0, 1, 2}, with 0, 1, 2 respectively denoting Normal Control, Mild Cognitive Impairment and AD. The X is the structural magnetic resonance image. We split the data into m = 2 training domains and the test domain, with different values of D, which denotes Age and TAU (a biomarker [24]). The C and S (Z) respectively denote the hormone level that affects brain structure development and the disease-related (-unrelated) brain regions. The distributional shifts among all domains are due to the different values of D.

Compared Baselines & Implementation Details. We compare with (i) Empirical Risk Minimization from X → Y (ERM), (ii) the domain-adversarial neural network (DANN) [15], (iii) Maximum Mean Discrepancy with Adversarial Auto-Encoder (MMD-AAE) [43], (iv) Domain Invariant Variational Autoencoders (DIVA) [29], (v) Invariant Risk Minimization (IRM) [1], and (vi) a supervised VAE (sVAE): our LaCIM implemented by a VAE without disentangling S, Z. For all methods, the network structures of $q^e_\psi(s, z|x)$, $p_\theta(x|s, z)$ and $p_\theta(y|s)$ for CMNIST, NICO and ADNI are shared (details in supplement E.4, E.5, E.6 and Tab. 7, 8). We use SGD as the optimizer, with learning rate (lr) 0.5 and weight decay (wd) 1e-5 for CMNIST; for NICO and ADNI, the lr is 0.01 with 0.2× decay every 60 epochs, and the wd is 5e-5 for NICO and 2e-4 for ADNI. The batch sizes are set to 256, 30 and 4 for CMNIST, NICO and ADNI, respectively.

Main Results & Discussions. We report accuracy over 10 runs for each method. As shown in Tab. 1, our LaCIM consistently outperforms the others on all datasets. Specifically, the advantage over IRM and ERM may be due to the incorporation of the causal assumptions embedded in Fig. 1 (c). Further, the improvement over sVAE benefits from the separation of S from the others to avoid spurious correlation. Besides, a larger m (with the total sample size fixed) brings further benefit on NICO, which may be due to the easier satisfaction of the diversity condition in theorem 4.4.

Interpretability. We visualize the learned S and Z on CMNIST and NICO. Specifically, for CMNIST, we visualize the generated image (with only digit “0” among all classes belonging to Y = 0 and digit “7” among all classes belonging to Y = 1) by interpolating S (and Z) with fixed Z (and S); for NICO, we adopt the gradient method [67], which visualizes the derivatives of $S^\star$ (i.e., the dimension of S that has the highest correlation with Y) with respect to each image. As shown in Fig. 3a, the generated sequences of images in the 1st and 2nd rows look more like “7” than “0” as s increases, while the sequence in the 2nd row changes from red to green as z increases. Besides, different dimensions of S can learn different differentiating semantic information: for example, the first dimension learns to add the dash of the hand-written “7”, while the second dimension learns to remove the left part of “0” towards “7” as it is interpolated.
The dimensions of Z, in contrast, learn other, non-differentiating factors such as width and color. This result reflects that the learned S and Z correspond to the digit (the causal factor of Y) and to color-related features, respectively. For NICO, Fig. 3b shows that LaCIM identifies more explainable semantic features than ERM, whose learned features can mix in background information. Supplement E.5 provides more results.

6 Conclusions & Discussions

We propose recovering a latent causal factor that is robust to distributional shifts caused by a domain variable. We introduce causal and non-causal latent factors that are spuriously correlated with each other and that generate the input and the output via invariant mechanisms. Under this invariance, the causal factor is guaranteed to be disentangled from the non-causal one, which induces the ground-truth predictor that holds on all domains. A reformulated generative model is proposed for inferring the causal factor and for prediction. A possible drawback of our model lies in the number of environments required for identifiability, the relaxation of which is left to future work.

Broader Impact

We claim that this work does not present any foreseeable negative social impact.
1. What is the novel approach introduced by the paper in avoiding "spurious correlations" to improve model generalizability?
2. What are the strengths of the proposed LaCIM approach, particularly regarding its identifiability results and experimental evaluation?
3. What are the weaknesses of the paper, especially concerning the causal structure assumption and the untestable assumptions regarding data quality?
4. How does the reviewer assess the clarity and writing style of the paper, particularly in Section 4.2?
5. What are the concerns regarding the experimental methodology, including the modification of datasets to satisfy the latent causal structure assumption?
6. How does the reviewer suggest improving the paper's discussion of possible limitations of the method for it to be a comprehensive work?
Summary Of The Paper Review
Summary Of The Paper
This paper introduces the "Latent Causal Invariance Model" (LaCIM), a structural variational autoencoder-like approach for learning to isolate causal factors from "spurious" ones in prediction to improve generalization to new environments. The authors assume a latent generative structure for the data, with separate latent constructs for causal and contextual factors. Under strong parametric assumptions and requirements on the number of source environments, the authors prove identifiability of the causal factors. Experiments on imaging datasets (2 benchmarks and a small medical dataset) show LaCIM outperforms some existing domain adaptation baselines as well as Invariant Risk Minimization (IRM). Further, qualitative assessment of images generated by ablating the causal and contextual factors shows disentanglement of these factors.

Review
To the best of my knowledge, the proposed LaCIM approach is a novel and interesting way of avoiding "spurious correlations" to improve model generalizability to new contexts. The theoretical results are quite daunting to parse, in part because the methodological ideas build off of both the literature on additive noise models (ANM) in causal discovery and independent component analysis (ICA). The authors define a two-step process for learning: first learning the generative model (effectively a VAE), then computing the latent representation that maximizes the likelihood for each example and fitting a classifier on top of this. The approach is reasonable, and the experiments show its potential. I did have a number of minor concerns. The assumed causal structure makes some questionable decisions, there are untestable assumptions regarding the quality of the input data, and the experiments are designed to satisfy the assumptions of LaCIM (meaning comparisons to other methods are likely over-optimistic).

Strengths:
The approach is novel, synthesizing ideas from ANM and ICA to new effect.
The authors prove identifiability of their model. These results are hidden behind rather dense theory, though, so I was unable to check them.
Promising experimental evaluation: both quantitative comparisons to other approaches as well as qualitative analysis of the disentanglement of factors.

Weaknesses:
Under the causal model assumed by LaCIM, X and Y are generated concurrently from the contextual factors Z and causal factors S. The authors note this as a strength or unexplored direction in Remark 1 (Ln 148), but for many examples it doesn't seem to make as much sense as assuming X generates or is generated by Y. For example, in Remark 1 the authors talk about disease status (Y) and ultrasound tests (X), but surely whether or not a patient has the disease will affect the results observed in the ultrasound! In most classification tasks, even in imaging, the label is the concept of interest while the observed data (i.e., the image) is a realization of that label. In the qualitative analysis (Line 349), fixing S kept the class label fixed (which the authors state as evidence that LaCIM has learned the "causal" factors), but is it not inefficient to have this redundancy in the role of Y and S? Further, what value is there in having D and C be separate? Isn't it sufficient to just have D in Fig 1c? No additional conditional-independence value is added.

The identifiability results in Section 4.2 are very interesting, but are at times quite hard to follow.
This is in part because of a general lack of clarity in this section (but in general the paper would benefit from one pass to improve the writing). The authors justify many of the assumptions in Section 4.2 by pointing to other works that make these assumptions, but readers not familiar with these other works will not understand the significance or consequences of these assumptions. I think the authors assume readers will be familiar with, e.g., assumptions and techniques from ICA.

The authors discuss the diversity of environments. This is very important and I'm glad to see this! Condition i) (on Line 234) is untestable, right? It depends on R, the cardinality of the latent confounding construct... Does this mean, in general, a user will have no idea if they have enough data, or high enough quality data, to ensure the correctness of LaCIM?

In the experiments, the authors explicitly set up different environments to try to follow the generative structure of LaCIM. But this structure does not match up with, e.g., the way the original colored MNIST dataset was generated, and thus favors LaCIM. Could LaCIM be applied to the original colored MNIST dataset? Does IRM (which was the paper that developed the original colored MNIST datasets) outperform LaCIM then? This would speak to the sensitivity of the results to the causal structural assumptions, which would be important to know.

In the simulated experiment, I think it would be more relevant to examine how the number of domains and the diversity of the data affect the identifiability. Currently, the simulated experiment looks at 3 vs 5 environments, but according to diversity condition i), isn't the relevant factor how the number of environments relates to the cardinality of the latent factor C?

Minor question: Regarding the learning procedure, won't there be a posterior over S and Z? Couldn't predictions be made by marginalizing over this posterior, rather than optimizing to compute an estimate of s*, z* and then predicting P(Y | s*)?

UPDATES: Thanks to the authors for their response. I think the greatest improvements to the paper can be made by increasing the clarity in Section 4. I think elements from the discussion of the identifiability and diversity conditions in the authors' response to my comments should be included in the main paper. I believe this will really help readers better understand the method. I would like to make one comment regarding the experimental methodology: In their response, the authors state that "our method can outperform all compared methods on other benchmarks that satisfy our causal structure." It should be noted that this is expected in order to demonstrate the empirical soundness and validity of the method. For a comprehensive evaluation of the utility of the method, including its possible failure cases, it should also be applied in realistic settings which may not satisfy the untestable assumptions of the method. The authors assume a latent causal structure and, in all experiments in the main paper, modify the datasets to satisfy this assumption. But in a real-world practical application, a user will not know the latent causal structure. The authors pointed to experiments on the original colored MNIST dataset in the appendix. However, the authors modified the proposed LaCIM method (i.e., used LaCIM-REx) to fit the generative structure of the original colored MNIST dataset.
In my view, this defeats the purpose of this experiment, because what we want to see is how the unmodified LaCIM method behaves on the original colored MNIST dataset. For example, from a user perspective, there is no way to differentiate the data-generating process (DGP) of the original colored MNIST dataset from the DGP of the modified colored MNIST dataset used by the authors. Thus, in a real use case, the user would be unable to modify the LaCIM methodology as the authors did in the supplemental experiment. Put another way, users should be able to apply the method to problems as they appear (consider, e.g., the WILDS distribution shift benchmark tasks), or be able to determine that the method is not applicable to their particular problem/application. Thus, while I remain positive about the work, I think the authors should more clearly investigate and discuss possible limitations of the method for this paper to be a strong and comprehensive work. For this reason, I am maintaining my score.
NIPS
Title Recovering Latent Causal Factor for Generalization to Distributional Shifts Abstract Distributional shifts between training and target domains may degrade the prediction accuracy of learned models, mainly because these models often learn features that possess only correlation rather than causal relation with the output. Such a correlation, which is known as “spurious correlation” statistically, is domaindependent hence may fail to generalize to unseen domains. To avoid such a spurious correlation, we propose Latent Causal Invariance Models (LaCIM) that specifies the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only causal factor for prediction. Specifically, the LaCIM introduces a pair of correlated latent factors: (a) causal factor and (b) others, while the extent of this correlation is governed by a domain variable that characterizes the distributional shifts. On the basis of this, we prove that the distribution of observed variables conditioning on latent variables is shift-invariant. Equipped with such an invariance, we prove that the causal factor can be recovered without mixing information from others, which induces the ground-truth predicting mechanism. We propose a Variational-Bayesian-based method to learn this invariance for prediction. The utility of our approach is verified by improved generalization to distributional shifts on various real-world data. Our code is freely available at https://github.com/wubotong/LaCIM. 1 Introduction Current data-driven deep learning models, revolutionary in various tasks though, often exploit all types of correlations to fit data well. Among such correlations, there can be spurious ones corresponding to biases (e.g., confounding bias due to the presence of a third unseen factor) inherited from the data provided. Such data-dependent spurious correlations can erode the prediction power on unseen domains with distributional shifts, which can cause serious consequences especially in safety-critical tasks such as healthcare. Recently, there is a Renaissance of causality in machine learning, expected to pursue causal relationships [59] to achieve stable generalization across domains. The so-called area of “causality” is pioneered by Structural Causal Models [51], as a mathematical formulation of this metaphysical concept grasped in the human mind. The incorporation of these human priors about cause and effect endows the model with the ability to identify the causal structure [51] which entails not only the data but also the underlying process of how they are generated. To achieve causal modeling, the old-school methods [52, 10] directly causally related the output label Y to a subset of covariates X , which is however not conceptually reasonable in applications with sensory-level data (e.g. model pixels as causal factors of the output does not make sense in image classification [11]). ∗Corresponding author †Work done during an internship at Microsoft Research Asia. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). For such applications, we rather adopt the manner of human visual perception [8, 9, 80] to causally relate the label Y to unobserved abstractions denoted by S, i.e., Y ← S. We further assume the existence of another non-causal latent factor (of Y ) denoted as Z, that together with S generate the input X: X ← (S,Z). Such an assumption is similarly adopted in the literature [25, 27, 35, 75, 71]. 
To model shifts across domains, we allow Z to be spuriously correlated with S (hence also the output), as marked by the bidirected arrow in Fig. 1 (a). Taking image classification as an example, the S and Z respectively refer to object-related abstractions (e.g., contour, texture) and contextual information (e.g., background, view). Due to this correlation, the model can learn contextual information into prediction, which may fail to generalize to the domain such that this correlation is broken. We encapsulate above assumptions into the skeleton illustrated in Fig. 1 (a), in which the spurious correlation between S and Z varies across domains, as marked by the red bi-directed arrow in Fig. 1 (b). Taking a closer inspection, such a domain-dependent spurious correlation is governed by an auxiliary domain variable D in Fig. 1 (c), which causes the domain shifts. We call the set of causal models augmented with D as Latent Causal Invariance Models (LaCIM). Here, the “Causal Invariance” refers to P (Y |S), which together with P (X|S,Z), can be proved to be stable to the shifts across domains, under the assumptions embedded in the causal structure of LaCIM. Equipped with such an invariance, we prove that the S and the ground-truth predictor: P (Y |s?) for x generated from (s?, z?), are identifiable up to transformations that do not mix the non-causal information. Under such an identifiability guarantee, we propose to learn the P (Y |S) and P (X|S,Z) by reformulating the Variational Auto-encoder (VAE) [37] to fit the joint distribution of the input and output variables from training domains. During the test stage, we first infer the value of S by optimizing the estimated P (X|S,Z) over latent space, followed by the learned P (Y |S) for prediction. We first use simulated data to verify the correctness of the identifiability claim. Then, to demonstrate the utility, we test our approach on real-world data, consistently achieving better generalization to the new distribution; besides, we find that our inferred causal factor can be concentrated in highly explainable semantic regions for the task of image classification. We summarize our contribution as follows: Methodologically (in sec. 4.1), we propose LaCIM in which the causal assumptions of two latent factors and the distributional shifts are incorporated; Theoretically (in theorem 4.4), we prove the identifiability of the causal factor and the ground-truth predicting mechanism; Algorithmically (in sec. 4.3), guided by the identifiability, we reformulate Variational Bayesian method to learn P (X|S,Z), P (Y |S) for prediction; Experimentally (in sec. 5.2), our approach generalizes better to distributional shifts, compared with others. 2 Related Work Causality for Domain Generalization. Due to its stable transferability, the concept of causality has been introduced in many recent works for domain generalization [39, 59, 52, 10, 40, 21, 68]. Most of these works learned the assumed (causal) invariance for generalizing to unseen domains. However, they suffer from either i) lacking explicit causal modeling; or ii) inappropriate causal relations made for the output. Specifically, for i), the [39, 59] are still data-driven methods to learn stable correlation (i.e., invariance) without incorporating causal assumptions [51] beyond data, which may impede its generalization to a broader set of domains; for ii), the [52, 10, 40, 21, 68] causally relate the output with covariates, which is inappropriate for sensory-level data. Our Specification. 
We explicitly incorporate the causal assumptions. Specifically, we introduce i) latent factors and separate them into the causal and the non-causal factor; ii) the domain variable D, as a selecting mechanism to generate the varied S-Z correlation across domains. Such a causal modeling makes it possible to recover the causal factor S for generalization. In independent and concurrent works, [75] and [28] also explore latent variables in causal relation. As comparisons, [75] did not differentiate S from Z. The spurious correlation in [28] is limited in the correlation between domains and the output; while it is allowed in our setting to exist in a single domain, which is more aligned with real scenarios, e.g., the dog is more associated with grass than snow in a domain when most samples are collected in sunny morning. Other Conceptually Related Works: i) transfer learning that leverages invariance in the context of domain adaptation [60, 81, 17] or domain generalization [43, 63]; (ii) causal inference [51, 53] which builds structural causal models and define intervention (a.k.a, “do-calculus”) for cause-effect reasoning and counterfactual learning; and (iii) latent generative model that assumes generation from latent space to observed data [37, 71] but aims at learning generator in the unsupervised scenario. 3 Preliminaries Problem Setting. Let X,Y respectively denote the input and output variables. The training data {De}e∈Etrain are collected from multiple environments e ∈ Etrain, where each e is associated with a distribution Pe(X,Y ) over X × Y and De := {xei , yei }i∈[ne] i.i.d∼ Pe with [k] := {1, ..., k} for any k ∈ Z+. Our goal is to learn a robust predictor f : X → Y that only exploit the causal factor for prediction and generalize well to all domains E ⊃ Etrain. We use respectively upper, lower case letter and Cursive letter to denote the random variable, the instance and the space, e.g., a is an instance in the spaceA of random variable A. ForA := f(X )∩B with B := Rp[i1]×Rp[i2]× ...×Rp[ik], the [f(x)]A denotes the f(x) restricted on dimensions of A, i.e., [f(x)]A := [fi1(x), ..., fik(x)]. The Sobolev space W k,p(A) contains all f such that ∫ A ∣∣∂Afα|A=a∣∣pdµ(a) <∞,∀α ≤ k. Structural Causal Model. The structural causal model (SCM) is defined as a triplet M := 〈G,F , P (ε)〉, in which i) the causal structure G := (V,E) (V,E respectively denote the node and edge set) is described by a directed acyclic graph (DAG); ii) the structural equations F := {fk}Vk∈V are autonomous, i.e., intervening on Vk does not affect others, based on which we can define the dooperator and calculate the causal effect; iii) the P (ε) are probability measure for exogenous variables {εk}k. By assuming independence among {εk}k, we obtain according to Causal Markov Condition that each P that is compatible with G has P({Vk = vk}Vk∈V ) = ΠkP(Vk = vk|Pa(k) = pa(k)). An acyclic directed mixed graph (ADMG) can further allow the existence of bidirectional arrows↔, meaning the spurious correlation between two variables connected. 4 Methodology We first incorporate the causal assumptions into LaCIM in sec. 4.1. Under such assumptions, we identify the invariant distributions P (X|S,Z) and P (Y |S), which are repectively dubbed as generative invariance and causal invariance that are robust to domain shifts. Equipped with these invariances, we in sec. 4.2 show that the causal factor can be identified without mixing information from non-causal one during prediction. Finally, we introduce our learning method in sec. 
4.3 to estimate the P (X|S,Z) and P (Y |S), which are respectively resorted in the inference and prediction that constitute a robust predictor during test stage. 4.1 Latent Causal Invariance Models In this section, we introduce a set of structural causal models dubbed as Latent Causal Invariance Model (LaCIM), which incorporates the causal assumptions mentioned above and also the source of distributional shifts. The corresponding causal structure of LaCIM is illustrated in Fig. 1 (c), which we will introduce step-by-step from the skeleton in Fig. 1 (a). Fig. 1 (a). Specifically, the ADMG in Fig. 1 (a) introduces latent factors V := {S,Z} to model the abstractions/concepts that generate the observed variables (X,Y ), as similarly assumed in unsupervised latent generative models [37] for image generation. Further, we explicitly separate the V into S and Z, with only S causally related to the label Y . In image classification, such a causal factor refers to the (shape,contour) of the object need to be classified; while the image X is additionally affected by contextual factor such as light, view. Fig. 1 (a)→ Fig. 1 (b). In addition, we assume that S is spuriously correlated with Z, as marked by the red “↔” in Fig. 1 (a). Such a spurious correlation corresponds to the bias inherited from data, e.g. the contextual information in image classification. Therefore, the magnitude of this correlation is distribution-dependent and thus can vary across domains. Statistically, the “spurious correlation" implicates the presence of a third unobserved (we use dot circle to represent unobserved variables) confounder, which is denoted as C in Fig. 1 (b). The unblocked path from Z to Y induced by C can lead to learning the non-causal factor during data-fitting, which can degrade the performance on unseen domains if the correlation between this non-causal factor and the output is broken. Fig. 1 (b)→ Fig. 1 (c). Taking a further inspection in Fig. 1 (b), the varying degree of correlation can be either due to the distributional shift of S,Z|C or of the C itself across domains (we use red color to mean varied distributions). As both shifts are domain-dependent, we in Fig. 1 (c) ascribe them to a domain variable D, which causes the mutation of its children nodes’ distribution, i.e., S,Z and C. Such a domain variable has been similarly introduced in [69, 68] to generate mutable variables. In our scenario, we do not require D to be observed; rather, we only need the domain index d̃e (one-hot encoded vector with length m := |Etrain|). The set of SCMs augmented with D, with the SCM Markovian compatible to the DAG of C, S, Z,X, Y in Fig. 1 (c), is dubbed as Latent Causal Invariance Models (LaCIM) that is formally defined as follows: Definition 4.1 (LaCIM). The LaCIM denotes a set of SCMs augmented with the domain variable D, i.e., {〈Me, de〉}e∈E , in which de denotes the value of D and Me := 〈G,Fe, P (ε)〉 for e. The G denotes the DAG restricted on C, S, Z,X, Y . For each environment/domain e, the Fe := {fx, fy, fes , fez , fec } correspond to generating mechanism ofX,Y, S, Z,C, with fec (εc) := gc(εc, de), fes (c, εs) := gs(c, εs, d e) and fez (c, εz) := gz(c, εz, d e) from some gc, gs, gz . Remark 1. Different from scenarios in which X generates [28] nor generated from Y [1], we consider the scenario when the X and Y are generated concurrently, which can widely exist but ignored in the literature. 
For example, the clinicians are recording the disease status while implementing the ultrasound test at the same time, during medical diagnosis. As an illustration, we consider the following example, in which the distributional shifts caused by domain variable D can refer to sampling bias in data. Example 4.1 (Sampling Bias). Consider the cat/dog classification, in which the animal in each image is either associated with the snow or grass. The D refers to the sampler, which generates the C that denotes time and weather to collect each sample. The S,Z respectively refer to the features of animals and context. Since each sampler may have a fixed sampling pattern (e.g. gets used to going out in the sunny morning (or in the snowy evening)), the data one collects may have sampling bias: dogs (cats) more associated with grass (snow) in the sunny morning (or snowy evening). The Def. 4.1 specifies the generating mechanisms across environments and how they differ. Equipped with such a specification, we can identify the invariant mechanisms that are stable to domain shifts: Proposition 4.2 (Causal Invariance & Generative Invariance). For LaCIM in Def. 4.1, the P (Y |S) and P (X|S,Z) are invariant to shifts across E , and are respectively denoted as Causal Invariance (CI) and Generative Invariance (GI). Remark 2. The generating process from latent variables to observed variables follows from physical law, e.g., the shape, contour, color, view, light should satisfy physical constraints to generate a reasonable image. Therefore, it is naturally hold that such generating processes are invariant. The P (X|S,Z) and P (Y |S) can induce an invariant predicting mechanism. Specifically, for a new sample x← fx(s?, z?, εx), y ← fy(s?, εy), we can first infer the causal factor s? from pfx(x|s, z) by maximizing log-likelihood of pfx(x|s, z) over S ×Z and then feed the estimated s into pfy (y|s?) for prediction. To ensure the robustness of such a two-step invariant prediction, we need to answer two following identifiability questions: 1. Can the inferred causal factor S not mix the information of (disentangled from) others? 2. Can such an invariant predictor recover the ground-truth predictor P (Y |s?)? We will answer these questions in the subsequent section, followed by our learning methods to identify the causal factor and the causal/generative invariance for prediction. 4.2 Identifiability Analysis We present the identifiability results regarding (i) the disentanglement of inferred causal factor S from non-causal Z, and (ii) the induced true predicting mechanism P (Y |s?) for x← fx(s?, z?, εx), which respectively echo the two questions imposed in the last section. Our main results are presented in theorem 4.4. To distinguish the causal factor S from others, our results require that the degree of diversity of S-Z correlation across environments is large enough, which has been similarly assumed in the literature of identifiability [52, 1]. Such a diversity condition implies the dramatical change of correlation between Z and Y , thus providing a clue to disentangle the S. Such a disentanglement analysis, is crucial to causal prediction but is ignored in existing literature about identifiability, such as those identifying the discrete latent confounders [32, 62], or those relying on Additive Noise Model (ANM) assumption [31], or linear Independent Component Analysis (ICA) [14, 35, 36, 75] (Please refer to supplement D.1 for more exhaustive reviews). 
More importantly, we will later in theorem 4.5 show the extension of above analysis from exponential family of P (S,Z|C) to Sobelev space; and from ANM for Y to categorical distribution for Y . We assume the ANM for fx(s, z, εx)= f̂x(s, z) + εx (we replace f̂x with fx for simplicity), which has been widely adopted to identify the causal factor [30, 54, 35]. We assume the fx to be bijective and invertible (we will discuss it later). We first narrow our interest to a subset of LaCIM denoted as Pexp in which any model in Pexp satisfies that (i) the S,Z belong to the exponential family; and (ii) the Y is generated from the ANM: Pexp = { LaCIM with any m > 0| y = fy(s) + εy, pe(s, z|c) := Πt=s,zpTt,Γt c,de (t|c),∀e } ,with pTt,Γt c,de (t) = qt∏ i=1 exp ( kt∑ j=1 T ti,j(ti)Γ t c,de,i,j +Bi(ti)−Atc,de,i ) ,∀kt, qt (1) for t = s, z and e ∈ E , with qt, kt respectively denoting the dimension of t = s, z and the number of natural parameters in each dimension. The {T ti,j(ti)}, {Γtc,de,i,j} denote the sufficient statistics and natural parameters, {Bi} and {Atc,de,i} denote the base measures and normalizing constants to ensure the integral of distribution equals to 1. Let Tt(t) := [Tt1(t1), ...,Ttqt(tqt)] ∈ Rkt×qt ( Tti(ti) := [T t i,1(ti), ..., T t i,kt(ti)], ∀i ∈ [qt] ) , Γtc,de := [ Γtc,de,1, ...,Γ t c,de,qt ] ∈ Rkt×qt ( Γtc,de,i := [Γtc,de,i,1, ...,Γ t c,de,i,kt ], ∀i ∈ [qt] ) . We further assume that the P e(C) serves to discrete distributions on the set {c1, ..., cR}, with which the pe(s, z) := ∫ p(s|c)p(z|c)dP e(c) = ∑ r p e(s, z|cr)pe(cr) can be regarded as the mixture of exponential family distributions. Rather than uniquely inference, we target on disentangling the S from Z and also recovering the ground-truth predictor, which is formally defined as ∼exp-identifiability as follows: Definition 4.3 (∼exp-identifiability). Suppose the X ⊇ fx(S × Z). We define a binary relation θ ∼exp θ̃ on the parameter space of X × Y: there exist two sets of permutation matrices and vectors, (Ms, as) and (Mz, az) for s and z respectively, such that for any (x, y) ∈ X ×Y , the following hold: T̃s([f̃−1x ]S(x)) = MsT s([f−1x ]S(x)) + as, T̃ z([f̃−1x ]Z(x)) = MzT z([f−1x ]Z(x)) + az; (2) pf̃y (y|[f̃ −1 x ]S(x)) = pfy (y|[f−1x ]S(x)). (3) We then say that θ is∼exp-identifiable, if for any θ̃, peθ(x, y) = peθ̃(x, y) ∀e ∈ Etrain, implies θ ∼exp θ̃. This definition is inspired by but beyond the scope of unsupervised scenario considered in nonlinear ICA [27, 35] in that, the former further disentangle S from Z (in Eq. (2)) and identify the true predicting mechanism (in Eq. (3)). To see disentanglement, note that for any clean (noise-free) sample x← fx(s?, z?), the Eq. (2) ensures that the inferred causal factor T̃s([f̃−1x ]S(x)) does not mix the information of others, unless the extreme case that there is a deterministic function between S and Z, in which it is impossible for S to be identified. With such an identification of s, the Eq. (3) further guarantees that the learned pf̃y (y|[f̃ −1]S(x)) can recover the ground-truth prediction probability density, i.e., pfy (y|[f−1x ]S(x)) = pfy (y|s?). With noise, the s? can be inferred with some indeterminacy. The formal result is presented in theorem 4.4. Theorem 4.4 (∼exp-identifiability). For θ of Pexp in Def. 4.1 with m := |Etrain|, we have that the θ is ∼exp identifiable under following assumptions: 1. The characteristic functions of εx, εy are almost everywhere nonzero. 2. fx, f ′x, f ′′ x are continuous and fx, fy are bijective; 3. 
The {T ti,j}1≤j≤kt are linearly independent in S or Z for each i ∈ [qt] for any t = s, z; and T ti,j are twice differentiable for any t = s, z, i ∈ [qt], j ∈ [kt]; 4. The { ( Ts([f−1]S(x)),T z([f−1]Z(x)) ) ;B(x) > 0} contains a non-empty open set in Rqs×ks+qz×kz , with B(x) := ∏ is∈[qs]Bis([f −1]is(x)) ∏ iz∈[qz ]Biz ([f −1]iz (x)). 5. The L := [P e1(C)T, ..., P em(C)T]T ∈ Rm×R and [ [Γt=s,zc2,de1 − Γ t=s,z c1,de1 ]T, ..., [Γt=s,zcR,dem − Γt=s,zc1,de1 ] T ]T ∈ R(R×m)×(qt×kt) have full column rank. The assumptions 1-3 are trivial and easy to satisfy. The characteristics functions of εx, εy can be almost everywhere non-zero for most continuous variables, such as Gaussian, exponential, beta, gamma distribution. This assumption can ensure the identifiability of p(f−1(x), as will be shown in the appendix. The bijectivity of fx and fy have been widely assumed in [30, 54, 53, 35, 75] as a basic condition for identifiability. It naturally holds for fx to be bijective since it has been empirically proven in auto-encoder [38] that the low-dimension embeddings (i.e., qs + qz < qx) can recover the original input well and also that the variational auto-encoder can extract meaningful representations from x. For the θ with categorical Y such that p(y = k|s) = [fy]k(s)/ ( ∑ k[fy]k(s)), the fy may not satisfy the bijectivity condition. We will shown identifiability for such a categorical case later in theorem 4.5. The assumption 3 can be uniformly satisfied for all distributions in the strongly exponential family. The containment of an open set in assumption (4) for { ( Ts([f−1]S(x)),T z([f−1]Z(x)) ) ;B(x) > 0} implies that space expanded by sufficient statistics are dense in some open set, as a sufficient condition for the mixture distribution P e(C) and also P e(X,Y |c) to be identified. The diversity assumption (5) implies that i) m ≥ R and m ∗ R ≥ max(kz ∗ qz, ks ∗ qs) + 1; and that ii) different environments are diverse enough in terms of S-Z correlation, as an almost a necessary for the invariant one to be identified (a different version is assumed in [1]). In supplement B.2, we will show that the ii) can hold unless the space of Γ belong to a zero-(Lebesgue) measure set. As indicated by the formulation, a larger m would be easier to satisfy the condition, which agrees with the intuition that more environments can provide more complementary information. Besides, our result can be extended to non-independent case among {s1, ..., sqs} (or {z1, ..., zqz}), i.e., pTt,Γt c,de (t) = exp(〈Tt(t),Γtc,de〉+B(t)−Atc,de) (t = s, z), which will shown in supplement B.2. Extension to the general forms of LaCIM. We extend to general forms of LaCIM in theorem 4.5 as long as its P(S,Z|C = c) ∈W r,2(S × Z) (for some r ≥ 2) and categorical Y , in the following theorem. This is accomplished by proving that any model in LaCIM can be approximated by a sequence of distributions with parameterization in Pexp, motivated by [3] that the exponential family is dense in the set of distributions with bounded support, and in [44] that the continuous variable with multinomial logit model can be approximated by a series of distributions with i.i.d Gumbel noise as the temperature converges to infinity. The proof is left in the supplement. Theorem 4.5 (Asymptotic∼exp-identifiability). Suppose the LaCIM satisfy that p(x|s, z) and p(y|s) are smooth w.r.t s, z and s respectively. 
For each e and c ∈ C, suppose Pe(S,Z|c) ∈W r,2(S×Z) for some r ≥ 2, we have that the LaCIM is asymptotically∼exp-identifiable: ∀ > 0, ∃ ∼exp-identifiable P̃θ ∈ Pexp, s.t. dPok(pe(X,Y ), p̃eθ(X,Y )) < ,∀e ∈ Etrain 3. Our proof is built on [3] that any probability in Sobolev space can be approximated by a sequence of distribution with the number of natural paramters going to infinity, i.e., kt →∞. 4.3 Learning and Inference Guided by the identifiability result, we propose to learn P (X|S,Z) and P (Y |S) via generative modeling following from Fig. 1 (c). Then to predict the label for a new sample x generated from (s?, z?), we first leverage the learned p(x|s, z) to infer s? that is ensured to be able to not mix the non-causal information, followed by learned P (y|s̃?) for prediction. 3The dPok(µ1, µ2) denotes the Pokorov distance between µ1 and µ2, with limn→∞ dPok(µn, µ) → 0 ⇐⇒ µn d→ µ. 4.3.1 Learning Method To learn the P (X|S,Z), P (Y |S) for invariant prediction, we reformulate the objective function of Variational Auto-Encoder (VAE) in the supervised scenario, in order to fit {pe(x, y)}e∈Etrain . As a latent generative model, the VAE was originally proposed for unsupervised generation from latent variables V to high-dimensional input variable X . To make such a generation tractable, the VAE introduced a variational distribution qψ parameterized by ψ to approximate the intractable posterior by maximizing the following Evidence Lower Bound (ELBO):−Lθ,ψ = Ep(x) [ Eqψ(v|x) log pθ(x,v) qψ(v|x) ] ≤ Ep(x)[log pθ(x)], where the equality is achieved when qψ(v|x) = pθ(v|x). Therefore, maximizing the ELBO over pθ and qψ will drive (i) qψ(v|x) to approximate pθ(v|x); (ii) pθ to estimate the ground-truth model p. To adapt the above surrogate loss to our DAG in Fig. 1 (c), we introduce the variational distribution qeψ(s, z|x, y) for each environment e. The corresponding ELBO for e is −Leθ,ψ ∆ =Epe(x,y) [ Eqeψ(s,z|x,y) log peθ(x, y, s, z) qeψ(s, z|x, y) ] , where peθ(x, y, s, z) = pθ(x|s, z)pθ(y|s)pe(s, z). Similarly, minimizing Leθ,ψ can drive pθ(x|s, z), pθ(y|s) to approximate the p(x|s, z), p(y|s), and also qeψ(s, z|x, y) to estimate peθ(s, z|x, y). Therefore, the qψ can inherit the properties of pθ. As peθ(s, z|x, y)= peθ(s,z|x)pθ(y|s) peθ(y|x) for our DAG in Fig. 1 (c), we can similarly reparameterize qeψ(s, z|x, y) as qeψ(s,z|x)pθ(y|s) qeψ(y|x) with qψ(y|s) replaced by pθ(y|s) (since the goal of qψ is to mimic the behavior of pθ). Then, the Leθ,ψ can be rewritten as: Leθ,ψ = Epe(x,y) [ − log qeψ(y|x)− Eqeψ(s,z|x) pθ(y|s) qeψ(y|x) log pθ(x|s, z)peθ(s, z) qeψ(s, z|x) ] , (4) where qeψ(y|x) = ∫ S q e ψ(s|x)pθ(y|s)ds. We correspondingly parameterize the prior model peθ(s, z) and inference model qeψ(s, z|x) as pθ(s, z|d̃e) and qψ(s, z|x, d̃e), in which d̃e (of environment e) denotes the domain index that can be represented by the one-hot encoded vector with length m := |Etrain|. The overall loss function is: Lθ,ψ ∆ = ∑ e∈Etrain Leθ,ψ. (5) The training datasets {De}e∈Etrain are applied to optimize the prior models {p(s, z|d̃e)}e, inference models {qψ(s, z|x, d̃e)}e, generative model pθ(x|s, z) and predictive model pθ(y|s). Particularly, the parameters of pθ(x|s, z) and pθ(y|s) are shared among all environments, motivated by the the invariance property of P (X|S,Z) and P (Y |S) across all domains. 4.3.2 Inference & Prediction. We leverage the learned P (X|S,Z), P (Y |S) for prediction. According to Prop. 4.2 and Eq. 
(3) in theorem 4.4, the predictor induced by P(X|S,Z), P(Y|S) can recover the true predicting mechanism under any distributional shift from E. Specifically, for any x generated by (s⋆, z⋆), we first optimize the following regularized log-likelihood of p_θ(x|s, z) over S × Z to infer s⋆, z⋆: max_{s,z} log p_θ(x|s, z) − λ_s‖s‖₂² − λ_z‖z‖₂², (6) with hyperparameters λ_s > 0 and λ_z > 0 that keep the learned s, z at a reasonable scale. Note that Eq. (6) is different from maximum a posteriori estimation, since the posterior q^e_ψ(s, z|x) is parameterized differently for each e, while p_θ(x|s, z) is invariantly parameterized over E (because p(x|s, z) is invariant). For optimization, we adopt the strategy of [61]: first sample candidate points from N(0, I) and select the best one in terms of Eq. (6) as the initial point, then run Adam for another T iterations. Implementation details and the optimization behavior are shown in supplement E.2. Finally, with the estimated s̃⋆, z̃⋆, we use the learned p_θ(y|s̃⋆) for prediction: ỹ := argmax_y p_θ(y|s̃⋆). 5 Experiments We first verify the identifiability claims of theorem 4.4 in sec. 5.1. We then evaluate LaCIM on real-world data in sec. 5.2: the Non-I.I.D. Image dataset with Contexts (NICO); Colored MNIST (CMNIST); and the Alzheimer's Disease Neuroimaging Initiative (ADNI, www.loni.ucla.edu/ADNI, for early prediction of Alzheimer's Disease), to verify the generalization ability of our method on target domains with distributional shifts. 5.1 Simulation To verify the identifiability claims, we implement LaCIM on synthetic data. We generate C, S, Z, X, Y following Fig. 1 (details left to the supplement). We choose m = 3, 5 with the same total number of samples. To verify the advantage of learning on multiple diverse domains (m > 1), we compare with pool-LaCIM: minimizing the loss in Eq. (4) on the data pooled from all m domains. We compute the mean correlation coefficient (MCC) adopted in [35], which measures the goodness of identifiability up to permutation by using cost optimization to assign each learned component to a source component. We run all methods 100 times, with the averages recorded in Fig. 2a. The superiority of LaCIM over pool-LaCIM, together with the fact that LaCIM with m = 5 performs better than with m = 3, verifies the benefit of more domains for satisfying the diversity condition. To illustrate the learning effect, we visualize the learned Z (with S left to supplement E.1) in Fig. 2b. 5.2 Real-world Data We verify the generalization ability of LaCIM on three datasets: NICO, CMNIST and ADNI. Dataset. We describe the datasets as follows (X, Y denote the input and output; D is unobserved): • NICO. We consider cat/dog classification in the "Animal" dataset of NICO, a benchmark for the non-i.i.d. problem [20]. Each animal is associated with "grass" and "snow" contexts. The D denotes the attributes of the sampler. The C denotes the time and weather of sampling, which generates the S, Z that respectively denote the semantic and contextual features. We split the dataset into m training domains and the test domain, in which each domain has different proportions of contexts associated with each animal, i.e., (%cat in grass, %cat in snow, %dog in grass, %dog in snow), due to different sampling strategies determined by D. The proportion vectors of all domains are given in Tab. 3. The distributional shift refers to the spurious correlation between the context and the label.
• CMNIST: Based on MNIST, we relabel digits 0-4 as y = 0 and digits 5-9 as y = 1. We then color a fraction p_e of the images with y = 0 (and a fraction 1 − p_e of those with y = 1) green, and the rest red. We set m = 2 with p_{e1} = 0.95, p_{e2} = 0.99, while p_e for the test domain is set to 0.1 (a construction sketch is given in the code example below). The D denotes the attributes of the painter. The Z, S respectively represent the features related to the color and to the digit. Their confounder C denotes the time and weather in which the painter D draws the digit and chooses the color, e.g., the painter tends to draw a red 0 more often than a green 1 on a sunny morning. In this regard, the distributional shift refers to the spurious correlation between the color and the label. • ADNI. The Y := {0, 1, 2}, with 0, 1, 2 respectively denoting Normal Control, Mild Cognitive Impairment and AD. The X is a structural magnetic resonance image. We split the data into m = 2 training domains and the test domain, with different values of D, which denotes Age and TAU (a biomarker [24]). The C and S (Z) respectively denote the hormone level that affects brain structure development and the disease-related (-unrelated) brain regions. The distributional shifts among the domains are due to different values of D. Compared Baselines & Implementation Details. We compare with (i) Empirical Risk Minimization from X → Y (ERM), (ii) domain-adversarial neural network (DANN) [15], (iii) Maximum Mean Discrepancy with Adversarial Auto-Encoder (MMD-AAE) [43], (iv) Domain Invariant Variational Autoencoders (DIVA) [29], (v) Invariant Risk Minimization (IRM) [1], and (vi) Supervised VAE (sVAE): our LaCIM implemented as a VAE without disentangling S, Z. For all methods, the network structures of q^e_ψ(s, z|x), p_θ(x|s, z) and p_θ(y|s) for CMNIST, NICO and ADNI are shared (details in supplement E.4, E.5, E.6, Tab. 7, 8). We use SGD as the optimizer, with learning rate (lr) 0.5 and weight decay (wd) 1e-5 for CMNIST; lr 0.01 decayed by 0.2× every 60 epochs for NICO and ADNI, with wd 5e-5 for NICO and 2e-4 for ADNI. The batch sizes are set to 256, 30 and 4 for CMNIST, NICO and ADNI respectively. Main Results & Discussions. We report accuracy over 10 runs for each method. As shown in Tab. 1, our LaCIM consistently outperforms the others on all datasets. Specifically, the advantage over IRM and ERM may be due to the incorporation of the causal assumptions embedded in Fig. 1 (c). Further, the improvement over sVAE benefits from the separation of S from the other factors, which avoids spurious correlation. Besides, a larger m (with the total sample size fixed) brings further benefit on NICO, which may be due to easier satisfaction of the diversity condition in theorem 4.4. Interpretability. We visualize the learned S and Z on CMNIST and NICO. Specifically, for CMNIST, we visualize generated images (using only digit "0" among the classes belonging to Y = 0 and digit "7" among those belonging to Y = 1) obtained by interpolating S (and Z) with Z (and S) fixed; for NICO, we adopt the gradient method [67], which visualizes the derivatives of S⋆ (i.e., the dimension of S with the highest correlation with Y) with respect to each image. As shown in Fig. 3a, the generated images in the 1st and 2nd rows gradually change from "0" towards "7" as s increases, while the images in the 2nd row change from red to green as z increases. Besides, different dimensions of S learn different differentiating semantic information. For example, the first dimension learns to add the dash of the hand-written "7", while the second dimension learns to remove the left part of "0" towards "7" as it is interpolated.
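The colouring protocol for CMNIST described in the dataset list above can be summarised in a few lines. The sketch below is ours: the function name, the two-channel image layout and the random seed are illustrative assumptions, not the authors' released preprocessing.

```python
import numpy as np

def make_cmnist_domain(images, digits, p_e, rng):
    """Colour MNIST digits for one environment.

    images: (N, 28, 28) grayscale digits in [0, 1]
    digits: (N,) original labels 0-9
    p_e:    probability that the colour agrees with the binary label
    """
    # Relabel: digits 0-4 -> y = 0, digits 5-9 -> y = 1.
    y = (digits >= 5).astype(np.int64)
    # With probability p_e the colour agrees with the label (y = 0 -> green,
    # y = 1 -> red); otherwise it is flipped. This creates the spurious
    # colour-label correlation that varies across environments.
    agree = rng.random(len(y)) < p_e
    green = np.where(agree, y == 0, y == 1)
    # Build a two-channel (red, green) image per sample.
    coloured = np.zeros((len(y), 2, 28, 28), dtype=np.float32)
    coloured[~green, 0] = images[~green]   # red channel
    coloured[green, 1] = images[green]     # green channel
    return coloured, y

rng = np.random.default_rng(0)
# Training environments as in the paper: p_e = 0.95 and 0.99; test environment: 0.1.
# images, digits = load_mnist()                              # any MNIST loader
# x_e1, y_e1 = make_cmnist_domain(images, digits, 0.95, rng)
```

With p_e close to 1 in the training domains and 0.1 at test time, a classifier that latches onto colour rather than digit shape will fail on the test domain, which is exactly the spurious correlation the paper targets.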
The dimensions of Z, in contrast, learn non-differentiating factors such as width and color. This result reflects that the learned S and Z correspond to the digit (the causal factor of Y) and to color-related features, respectively. For NICO, Fig. 3b shows that LaCIM identifies more explainable semantic features than ERM, whose learned features can mix in background information. Supplement E.5 provides more results. 6 Conclusions & Discussions We propose to recover a latent causal factor that is robust to distributional shifts caused by a domain variable. We introduce causal and non-causal latent factors that are spuriously correlated with each other and generate the input and the output via invariant mechanisms. Under this invariance, the causal factor is guaranteed to be disentangled from the non-causal one, which induces the ground-truth predictor that holds on all domains. A reformulated generative model is proposed for inferring the causal factor and for prediction. A possible drawback of our model lies in the number of environments required for identifiability; relaxing this requirement is left to future work. Broader Impact We claim that this work does not present any foreseeable negative social impact.
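To make the Section 4.3.1 objective concrete, here is a minimal Monte-Carlo sketch of the per-environment loss in the spirit of Eq. (4). It assumes diagonal-Gaussian posteriors and priors and, for brevity, equal dimensions for S and Z; the callables enc_e, prior_e, decoder and classifier are placeholder names we introduce, not the authors' implementation.

```python
import math
import torch

def gaussian_log_prob(v, mu, log_var):
    """Diagonal-Gaussian log-density, summed over the last dimension."""
    c = math.log(2.0 * math.pi)
    return (-0.5 * (log_var + (v - mu) ** 2 / log_var.exp() + c)).sum(-1)

def lacim_env_loss(x, y, enc_e, prior_e, decoder, classifier, k=4):
    """Monte-Carlo estimate of the per-environment objective in the spirit of Eq. (4).

    x: input batch; y: (batch,) int64 class labels
    enc_e(x)      -> (mu, log_var) of q^e_psi(s, z | x)       (environment-specific)
    prior_e       -> (mu0, log_var0) of p^e_theta(s, z)       (environment-specific)
    decoder(s, z) -> log p_theta(x | s, z), shape (batch,)    (shared across envs)
    classifier(s) -> log p_theta(y | s), shape (batch, n_classes)  (shared across envs)
    """
    mu, log_var = enc_e(x)
    log_py, log_ratio = [], []
    for _ in range(k):
        sz = mu + (0.5 * log_var).exp() * torch.randn_like(mu)     # reparameterised sample
        s, z = sz.chunk(2, dim=-1)                                 # assume dim(s) == dim(z)
        log_py.append(classifier(s).gather(1, y[:, None]).squeeze(1))  # log p(y | s)
        log_ratio.append(decoder(s, z)
                         + gaussian_log_prob(sz, *prior_e)         # + log p^e(s, z)
                         - gaussian_log_prob(sz, mu, log_var))     # - log q^e(s, z | x)
    log_py = torch.stack(log_py)        # (k, batch)
    log_ratio = torch.stack(log_ratio)  # (k, batch)
    # q^e(y | x) ~= (1/k) * sum_k p(y | s_k)
    log_qy_x = torch.logsumexp(log_py, dim=0) - math.log(k)
    # Importance weight p(y | s) / q^e(y | x) applied to the reconstruction/KL term.
    weighted = ((log_py - log_qy_x).exp() * log_ratio).mean(0)
    return (-log_qy_x - weighted).mean()

# Overall loss (Eq. (5)): sum the per-environment losses over training environments, e.g.
# total = sum(lacim_env_loss(x_e, y_e, enc[e], prior[e], decoder, classifier)
#             for e, (x_e, y_e) in enumerate(env_batches))
```

The decoder and classifier are shared across environments while the encoder and prior are domain-indexed, mirroring the parameter sharing described around Eq. (5).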
1. What is the main contribution of the paper regarding identifiable causal models with latent variables? 2. How does the proposed approach differ from prior works in terms of novelty and significance? 3. What are the strengths and weaknesses of the paper regarding its clarity, quality, and reproducibility? 4. Do you have any concerns or questions about the assumptions, notation, and experimental design used in the paper?
Summary Of The Paper Review
Summary Of The Paper This work proposes an identifiable causal model with latent variables to model spurious correlations between labels and irrelevant context (such as background or light setting) in order to construct a classifier which is robust to certain distribution shifts. The authors show under which conditions the irrelevant context and important features can be disentangled from only input observations and labels. They propose a VAE-like approach to learn the model which they use to come up with an invariant classifier. The identifiability of the model is assessed on synthetic data and the robustness to dataset shift is evaluated on multiple real world data sets, drawing a favorable picture of their contribution. Review Originality: I have been following the literature on nonlinear ICA (from which the identifiability result is inspired) and it is the first time I see these ideas directly applied to out-of-distribution generalization. As far as I know, all important related works have been cited. Quality: I believe this paper presents interesting novel ideas with convincing experiments and I enjoyed reading it. However, some algorithmic decisions are not justified and the paper lacks clarity. The focus on identifiability and why it matters for dataset shift generalization is not well explained. I make some of these concerns more specific in what follows, with no specific order: Some assumptions appear to be implicit. For instance, the fact that q_s + q_z <= q_x. Otherwise, f_x cannot be bijective. Moreover, saying that f_x and f_y are bijective without specifying the domain and codomain of the functions is imprecise. For instance, if q_s + q_z < dim(x) and the codomain is R^dim(x), then f_x cannot be bijective. So what is the codomain? L172: The two points are said to “...ensure the robustness of such a two-step invariant prediction, ...”. However, the aforementioned two-step procedure is only very vaguely defined up to this point in the paper, thus it was hard for me to accept that we actually want these desiderata. Maybe presenting the procedure before would help? That being said, even after understanding the procedure, I felt like I had to think a lot to understand why these two points (at line 172) are important to have (and I am still not sure). Could you provide a clear explanation of what would be the consequence of any of these desiderata not holding? Definition 4.3: Desiderata 1 (line 172) does not require M_s and M_z to be permutation-scaling matrices, right? In principle, only having “block” identifiability would be enough, no? Do you believe one could come up with weaker assumptions? If not, please mention that having permutation matrices is “extra” and not really required. Is it possible that [ f x − 1 ] S maps some points of X outside S ? Similarly for Z ? Since the definition supposes only X ⊃ f x ( S × Z ) , it is possible. This definition assumes implicitly that f x is invertible but this is not mentioned until the following theorem. This assumption should be mentionned earlier. Theorem 4.5 introduces a notion of “asymptotic identifiability”, however, I am not sure I understand its interpretation. It would be helpful to give some intuition for its meaning. That would help me understand its value. Also, I suspect the dimensionality of the sufficient statistic is allowed to grow, however, the definition of P exp does not allow for this. Moreover, in order for the assumption of Theorem 4.4 to hold, the number of environments m needs to grow as well, right? 
This should be clearer. I believe an interesting consequence of this Theorem is that even if the assumptions of Theorem 4.4 are not satisfied for some p ∈ P exp , it can still be approximated by an identifiable model. Is that correct (assuming the exponential model is in the appropriate Sobolev space)? Section 4.3.2 presents the invariant prediction procedure which, as far as I know, is not standard (which is not necessarily a bad thing). I had a hard time interpreting the meaning of it and understanding why it makes sense. I believe the author should add a few lines motivating this approach. One possible interpretation for the max in (6) is that it corresponds to a maximum a posteriori (MAP) estimation of z and s in a counterfactual model in which p(s,z|c) is replaced by p(s)p(z) (thus without correlation). But even with this interpretation, I am not completely satisfied. Of course the procedure is invariant to the environment by design, but is it optimal in some sense? Section 5: I found the experiments sufficient and convincing. Identifiability is confirmed on synthetic experiments and out-of-distribution generalization is measured against multiple relevant baselines on three datasets. Since I do not follow closely the literature on OOD generalization, I cannot confirm that no important baseline is missing. I would like to see more than one latent interpolation for the dataset CMNIST. Do they all look as good as the one presented here? Clarity: L96: The definition of [f]_A is imprecise, usually the word “restricted” refers to the domain being restricted. At that point I did not understand it and even after reading Definition 4.3, I had to infer it from context. Please clarify. L139: This sentence is contradictory: “In our scenario, we do not require D to be observed; rather, we only need the domain index d_e...”. This means D is observed, no? Equation (1): I believe some notation is unnecessarily hard to follow. For instance, the | c symbol should be removed from the two left factors in the first line of (1). Also, having a superscript t on T and \Gamma was a bit confusing. It led me to think that these things could depend on the actual value of z or s. Now I understand that this is just to allow different exponential families for Z and S. A solution would be to remove this superscript and replace it with a footnote mentioning that everything works even if we have different sufficient statistics for Z and S. Some notation is implicitly defined. For instance q_s and q_z being the dimensionality of S and Z, respectively. It would help the reader to mention it explicitly in the text. Similarly for k_s and k_q, which are the dimensionality of the sufficient statistic of each s_i and z_i, respectively. What is the dimensionality of x? Again, implicitly defined on L228. Theorem 4.4: The fifth assumption was hard to parse, even if I have some experience with these kinds of assumptions. L237: “... can hold unless the space of \Gamma belong to a zero-(Lebesgue) measure set.” This statement is not clear. It seems \Gamma can always take only finitely many values and thus is contained in a measure zero set, even if the assumptions are satisfied. Am I misunderstanding something? The writing of Section 4.3.1 should be completely revised. I had a very hard time actually understanding many steps of the derivations and even just what the actual inference model (q) versus the generative model (p) are. 
Some conditional density functions are introduced without definitions which might explain why I could not follow. Also, is there a superscript e missing on q_\psi(y | s) at line 271. In Section 5: Please mention explicitly which representations are used to compute the MCC. I suppose they are the ground-truth (z,s) and the one obtained by solving (6)? Significance: I believe using ideas from nonlinear ICA to come up with identifiable latent models for out-of-distribution generalization is an interesting direction and could inspire more work.
NIPS
Title Recovering Latent Causal Factor for Generalization to Distributional Shifts Abstract Distributional shifts between training and target domains may degrade the prediction accuracy of learned models, mainly because these models often learn features that possess only correlation rather than causal relation with the output. Such a correlation, which is known as “spurious correlation” statistically, is domaindependent hence may fail to generalize to unseen domains. To avoid such a spurious correlation, we propose Latent Causal Invariance Models (LaCIM) that specifies the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only causal factor for prediction. Specifically, the LaCIM introduces a pair of correlated latent factors: (a) causal factor and (b) others, while the extent of this correlation is governed by a domain variable that characterizes the distributional shifts. On the basis of this, we prove that the distribution of observed variables conditioning on latent variables is shift-invariant. Equipped with such an invariance, we prove that the causal factor can be recovered without mixing information from others, which induces the ground-truth predicting mechanism. We propose a Variational-Bayesian-based method to learn this invariance for prediction. The utility of our approach is verified by improved generalization to distributional shifts on various real-world data. Our code is freely available at https://github.com/wubotong/LaCIM. 1 Introduction Current data-driven deep learning models, revolutionary in various tasks though, often exploit all types of correlations to fit data well. Among such correlations, there can be spurious ones corresponding to biases (e.g., confounding bias due to the presence of a third unseen factor) inherited from the data provided. Such data-dependent spurious correlations can erode the prediction power on unseen domains with distributional shifts, which can cause serious consequences especially in safety-critical tasks such as healthcare. Recently, there is a Renaissance of causality in machine learning, expected to pursue causal relationships [59] to achieve stable generalization across domains. The so-called area of “causality” is pioneered by Structural Causal Models [51], as a mathematical formulation of this metaphysical concept grasped in the human mind. The incorporation of these human priors about cause and effect endows the model with the ability to identify the causal structure [51] which entails not only the data but also the underlying process of how they are generated. To achieve causal modeling, the old-school methods [52, 10] directly causally related the output label Y to a subset of covariates X , which is however not conceptually reasonable in applications with sensory-level data (e.g. model pixels as causal factors of the output does not make sense in image classification [11]). ∗Corresponding author †Work done during an internship at Microsoft Research Asia. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). For such applications, we rather adopt the manner of human visual perception [8, 9, 80] to causally relate the label Y to unobserved abstractions denoted by S, i.e., Y ← S. We further assume the existence of another non-causal latent factor (of Y ) denoted as Z, that together with S generate the input X: X ← (S,Z). Such an assumption is similarly adopted in the literature [25, 27, 35, 75, 71]. 
To model shifts across domains, we allow Z to be spuriously correlated with S (hence also the output), as marked by the bidirected arrow in Fig. 1 (a). Taking image classification as an example, the S and Z respectively refer to object-related abstractions (e.g., contour, texture) and contextual information (e.g., background, view). Due to this correlation, the model can learn contextual information into prediction, which may fail to generalize to the domain such that this correlation is broken. We encapsulate above assumptions into the skeleton illustrated in Fig. 1 (a), in which the spurious correlation between S and Z varies across domains, as marked by the red bi-directed arrow in Fig. 1 (b). Taking a closer inspection, such a domain-dependent spurious correlation is governed by an auxiliary domain variable D in Fig. 1 (c), which causes the domain shifts. We call the set of causal models augmented with D as Latent Causal Invariance Models (LaCIM). Here, the “Causal Invariance” refers to P (Y |S), which together with P (X|S,Z), can be proved to be stable to the shifts across domains, under the assumptions embedded in the causal structure of LaCIM. Equipped with such an invariance, we prove that the S and the ground-truth predictor: P (Y |s?) for x generated from (s?, z?), are identifiable up to transformations that do not mix the non-causal information. Under such an identifiability guarantee, we propose to learn the P (Y |S) and P (X|S,Z) by reformulating the Variational Auto-encoder (VAE) [37] to fit the joint distribution of the input and output variables from training domains. During the test stage, we first infer the value of S by optimizing the estimated P (X|S,Z) over latent space, followed by the learned P (Y |S) for prediction. We first use simulated data to verify the correctness of the identifiability claim. Then, to demonstrate the utility, we test our approach on real-world data, consistently achieving better generalization to the new distribution; besides, we find that our inferred causal factor can be concentrated in highly explainable semantic regions for the task of image classification. We summarize our contribution as follows: Methodologically (in sec. 4.1), we propose LaCIM in which the causal assumptions of two latent factors and the distributional shifts are incorporated; Theoretically (in theorem 4.4), we prove the identifiability of the causal factor and the ground-truth predicting mechanism; Algorithmically (in sec. 4.3), guided by the identifiability, we reformulate Variational Bayesian method to learn P (X|S,Z), P (Y |S) for prediction; Experimentally (in sec. 5.2), our approach generalizes better to distributional shifts, compared with others. 2 Related Work Causality for Domain Generalization. Due to its stable transferability, the concept of causality has been introduced in many recent works for domain generalization [39, 59, 52, 10, 40, 21, 68]. Most of these works learned the assumed (causal) invariance for generalizing to unseen domains. However, they suffer from either i) lacking explicit causal modeling; or ii) inappropriate causal relations made for the output. Specifically, for i), the [39, 59] are still data-driven methods to learn stable correlation (i.e., invariance) without incorporating causal assumptions [51] beyond data, which may impede its generalization to a broader set of domains; for ii), the [52, 10, 40, 21, 68] causally relate the output with covariates, which is inappropriate for sensory-level data. Our Specification. 
We explicitly incorporate the causal assumptions. Specifically, we introduce i) latent factors and separate them into the causal and the non-causal factor; ii) the domain variable D, as a selecting mechanism to generate the varied S-Z correlation across domains. Such a causal modeling makes it possible to recover the causal factor S for generalization. In independent and concurrent works, [75] and [28] also explore latent variables in causal relation. As comparisons, [75] did not differentiate S from Z. The spurious correlation in [28] is limited in the correlation between domains and the output; while it is allowed in our setting to exist in a single domain, which is more aligned with real scenarios, e.g., the dog is more associated with grass than snow in a domain when most samples are collected in sunny morning. Other Conceptually Related Works: i) transfer learning that leverages invariance in the context of domain adaptation [60, 81, 17] or domain generalization [43, 63]; (ii) causal inference [51, 53] which builds structural causal models and define intervention (a.k.a, “do-calculus”) for cause-effect reasoning and counterfactual learning; and (iii) latent generative model that assumes generation from latent space to observed data [37, 71] but aims at learning generator in the unsupervised scenario. 3 Preliminaries Problem Setting. Let X,Y respectively denote the input and output variables. The training data {De}e∈Etrain are collected from multiple environments e ∈ Etrain, where each e is associated with a distribution Pe(X,Y ) over X × Y and De := {xei , yei }i∈[ne] i.i.d∼ Pe with [k] := {1, ..., k} for any k ∈ Z+. Our goal is to learn a robust predictor f : X → Y that only exploit the causal factor for prediction and generalize well to all domains E ⊃ Etrain. We use respectively upper, lower case letter and Cursive letter to denote the random variable, the instance and the space, e.g., a is an instance in the spaceA of random variable A. ForA := f(X )∩B with B := Rp[i1]×Rp[i2]× ...×Rp[ik], the [f(x)]A denotes the f(x) restricted on dimensions of A, i.e., [f(x)]A := [fi1(x), ..., fik(x)]. The Sobolev space W k,p(A) contains all f such that ∫ A ∣∣∂Afα|A=a∣∣pdµ(a) <∞,∀α ≤ k. Structural Causal Model. The structural causal model (SCM) is defined as a triplet M := 〈G,F , P (ε)〉, in which i) the causal structure G := (V,E) (V,E respectively denote the node and edge set) is described by a directed acyclic graph (DAG); ii) the structural equations F := {fk}Vk∈V are autonomous, i.e., intervening on Vk does not affect others, based on which we can define the dooperator and calculate the causal effect; iii) the P (ε) are probability measure for exogenous variables {εk}k. By assuming independence among {εk}k, we obtain according to Causal Markov Condition that each P that is compatible with G has P({Vk = vk}Vk∈V ) = ΠkP(Vk = vk|Pa(k) = pa(k)). An acyclic directed mixed graph (ADMG) can further allow the existence of bidirectional arrows↔, meaning the spurious correlation between two variables connected. 4 Methodology We first incorporate the causal assumptions into LaCIM in sec. 4.1. Under such assumptions, we identify the invariant distributions P (X|S,Z) and P (Y |S), which are repectively dubbed as generative invariance and causal invariance that are robust to domain shifts. Equipped with these invariances, we in sec. 4.2 show that the causal factor can be identified without mixing information from non-causal one during prediction. Finally, we introduce our learning method in sec. 
4.3 to estimate the P (X|S,Z) and P (Y |S), which are respectively resorted in the inference and prediction that constitute a robust predictor during test stage. 4.1 Latent Causal Invariance Models In this section, we introduce a set of structural causal models dubbed as Latent Causal Invariance Model (LaCIM), which incorporates the causal assumptions mentioned above and also the source of distributional shifts. The corresponding causal structure of LaCIM is illustrated in Fig. 1 (c), which we will introduce step-by-step from the skeleton in Fig. 1 (a). Fig. 1 (a). Specifically, the ADMG in Fig. 1 (a) introduces latent factors V := {S,Z} to model the abstractions/concepts that generate the observed variables (X,Y ), as similarly assumed in unsupervised latent generative models [37] for image generation. Further, we explicitly separate the V into S and Z, with only S causally related to the label Y . In image classification, such a causal factor refers to the (shape,contour) of the object need to be classified; while the image X is additionally affected by contextual factor such as light, view. Fig. 1 (a)→ Fig. 1 (b). In addition, we assume that S is spuriously correlated with Z, as marked by the red “↔” in Fig. 1 (a). Such a spurious correlation corresponds to the bias inherited from data, e.g. the contextual information in image classification. Therefore, the magnitude of this correlation is distribution-dependent and thus can vary across domains. Statistically, the “spurious correlation" implicates the presence of a third unobserved (we use dot circle to represent unobserved variables) confounder, which is denoted as C in Fig. 1 (b). The unblocked path from Z to Y induced by C can lead to learning the non-causal factor during data-fitting, which can degrade the performance on unseen domains if the correlation between this non-causal factor and the output is broken. Fig. 1 (b)→ Fig. 1 (c). Taking a further inspection in Fig. 1 (b), the varying degree of correlation can be either due to the distributional shift of S,Z|C or of the C itself across domains (we use red color to mean varied distributions). As both shifts are domain-dependent, we in Fig. 1 (c) ascribe them to a domain variable D, which causes the mutation of its children nodes’ distribution, i.e., S,Z and C. Such a domain variable has been similarly introduced in [69, 68] to generate mutable variables. In our scenario, we do not require D to be observed; rather, we only need the domain index d̃e (one-hot encoded vector with length m := |Etrain|). The set of SCMs augmented with D, with the SCM Markovian compatible to the DAG of C, S, Z,X, Y in Fig. 1 (c), is dubbed as Latent Causal Invariance Models (LaCIM) that is formally defined as follows: Definition 4.1 (LaCIM). The LaCIM denotes a set of SCMs augmented with the domain variable D, i.e., {〈Me, de〉}e∈E , in which de denotes the value of D and Me := 〈G,Fe, P (ε)〉 for e. The G denotes the DAG restricted on C, S, Z,X, Y . For each environment/domain e, the Fe := {fx, fy, fes , fez , fec } correspond to generating mechanism ofX,Y, S, Z,C, with fec (εc) := gc(εc, de), fes (c, εs) := gs(c, εs, d e) and fez (c, εz) := gz(c, εz, d e) from some gc, gs, gz . Remark 1. Different from scenarios in which X generates [28] nor generated from Y [1], we consider the scenario when the X and Y are generated concurrently, which can widely exist but ignored in the literature. 
For example, the clinicians are recording the disease status while implementing the ultrasound test at the same time, during medical diagnosis. As an illustration, we consider the following example, in which the distributional shifts caused by domain variable D can refer to sampling bias in data. Example 4.1 (Sampling Bias). Consider the cat/dog classification, in which the animal in each image is either associated with the snow or grass. The D refers to the sampler, which generates the C that denotes time and weather to collect each sample. The S,Z respectively refer to the features of animals and context. Since each sampler may have a fixed sampling pattern (e.g. gets used to going out in the sunny morning (or in the snowy evening)), the data one collects may have sampling bias: dogs (cats) more associated with grass (snow) in the sunny morning (or snowy evening). The Def. 4.1 specifies the generating mechanisms across environments and how they differ. Equipped with such a specification, we can identify the invariant mechanisms that are stable to domain shifts: Proposition 4.2 (Causal Invariance & Generative Invariance). For LaCIM in Def. 4.1, the P (Y |S) and P (X|S,Z) are invariant to shifts across E , and are respectively denoted as Causal Invariance (CI) and Generative Invariance (GI). Remark 2. The generating process from latent variables to observed variables follows from physical law, e.g., the shape, contour, color, view, light should satisfy physical constraints to generate a reasonable image. Therefore, it is naturally hold that such generating processes are invariant. The P (X|S,Z) and P (Y |S) can induce an invariant predicting mechanism. Specifically, for a new sample x← fx(s?, z?, εx), y ← fy(s?, εy), we can first infer the causal factor s? from pfx(x|s, z) by maximizing log-likelihood of pfx(x|s, z) over S ×Z and then feed the estimated s into pfy (y|s?) for prediction. To ensure the robustness of such a two-step invariant prediction, we need to answer two following identifiability questions: 1. Can the inferred causal factor S not mix the information of (disentangled from) others? 2. Can such an invariant predictor recover the ground-truth predictor P (Y |s?)? We will answer these questions in the subsequent section, followed by our learning methods to identify the causal factor and the causal/generative invariance for prediction. 4.2 Identifiability Analysis We present the identifiability results regarding (i) the disentanglement of inferred causal factor S from non-causal Z, and (ii) the induced true predicting mechanism P (Y |s?) for x← fx(s?, z?, εx), which respectively echo the two questions imposed in the last section. Our main results are presented in theorem 4.4. To distinguish the causal factor S from others, our results require that the degree of diversity of S-Z correlation across environments is large enough, which has been similarly assumed in the literature of identifiability [52, 1]. Such a diversity condition implies the dramatical change of correlation between Z and Y , thus providing a clue to disentangle the S. Such a disentanglement analysis, is crucial to causal prediction but is ignored in existing literature about identifiability, such as those identifying the discrete latent confounders [32, 62], or those relying on Additive Noise Model (ANM) assumption [31], or linear Independent Component Analysis (ICA) [14, 35, 36, 75] (Please refer to supplement D.1 for more exhaustive reviews). 
More importantly, we will later in theorem 4.5 show the extension of above analysis from exponential family of P (S,Z|C) to Sobelev space; and from ANM for Y to categorical distribution for Y . We assume the ANM for fx(s, z, εx)= f̂x(s, z) + εx (we replace f̂x with fx for simplicity), which has been widely adopted to identify the causal factor [30, 54, 35]. We assume the fx to be bijective and invertible (we will discuss it later). We first narrow our interest to a subset of LaCIM denoted as Pexp in which any model in Pexp satisfies that (i) the S,Z belong to the exponential family; and (ii) the Y is generated from the ANM: Pexp = { LaCIM with any m > 0| y = fy(s) + εy, pe(s, z|c) := Πt=s,zpTt,Γt c,de (t|c),∀e } ,with pTt,Γt c,de (t) = qt∏ i=1 exp ( kt∑ j=1 T ti,j(ti)Γ t c,de,i,j +Bi(ti)−Atc,de,i ) ,∀kt, qt (1) for t = s, z and e ∈ E , with qt, kt respectively denoting the dimension of t = s, z and the number of natural parameters in each dimension. The {T ti,j(ti)}, {Γtc,de,i,j} denote the sufficient statistics and natural parameters, {Bi} and {Atc,de,i} denote the base measures and normalizing constants to ensure the integral of distribution equals to 1. Let Tt(t) := [Tt1(t1), ...,Ttqt(tqt)] ∈ Rkt×qt ( Tti(ti) := [T t i,1(ti), ..., T t i,kt(ti)], ∀i ∈ [qt] ) , Γtc,de := [ Γtc,de,1, ...,Γ t c,de,qt ] ∈ Rkt×qt ( Γtc,de,i := [Γtc,de,i,1, ...,Γ t c,de,i,kt ], ∀i ∈ [qt] ) . We further assume that the P e(C) serves to discrete distributions on the set {c1, ..., cR}, with which the pe(s, z) := ∫ p(s|c)p(z|c)dP e(c) = ∑ r p e(s, z|cr)pe(cr) can be regarded as the mixture of exponential family distributions. Rather than uniquely inference, we target on disentangling the S from Z and also recovering the ground-truth predictor, which is formally defined as ∼exp-identifiability as follows: Definition 4.3 (∼exp-identifiability). Suppose the X ⊇ fx(S × Z). We define a binary relation θ ∼exp θ̃ on the parameter space of X × Y: there exist two sets of permutation matrices and vectors, (Ms, as) and (Mz, az) for s and z respectively, such that for any (x, y) ∈ X ×Y , the following hold: T̃s([f̃−1x ]S(x)) = MsT s([f−1x ]S(x)) + as, T̃ z([f̃−1x ]Z(x)) = MzT z([f−1x ]Z(x)) + az; (2) pf̃y (y|[f̃ −1 x ]S(x)) = pfy (y|[f−1x ]S(x)). (3) We then say that θ is∼exp-identifiable, if for any θ̃, peθ(x, y) = peθ̃(x, y) ∀e ∈ Etrain, implies θ ∼exp θ̃. This definition is inspired by but beyond the scope of unsupervised scenario considered in nonlinear ICA [27, 35] in that, the former further disentangle S from Z (in Eq. (2)) and identify the true predicting mechanism (in Eq. (3)). To see disentanglement, note that for any clean (noise-free) sample x← fx(s?, z?), the Eq. (2) ensures that the inferred causal factor T̃s([f̃−1x ]S(x)) does not mix the information of others, unless the extreme case that there is a deterministic function between S and Z, in which it is impossible for S to be identified. With such an identification of s, the Eq. (3) further guarantees that the learned pf̃y (y|[f̃ −1]S(x)) can recover the ground-truth prediction probability density, i.e., pfy (y|[f−1x ]S(x)) = pfy (y|s?). With noise, the s? can be inferred with some indeterminacy. The formal result is presented in theorem 4.4. Theorem 4.4 (∼exp-identifiability). For θ of Pexp in Def. 4.1 with m := |Etrain|, we have that the θ is ∼exp identifiable under following assumptions: 1. The characteristic functions of εx, εy are almost everywhere nonzero. 2. fx, f ′x, f ′′ x are continuous and fx, fy are bijective; 3. 
The {T ti,j}1≤j≤kt are linearly independent in S or Z for each i ∈ [qt] for any t = s, z; and T ti,j are twice differentiable for any t = s, z, i ∈ [qt], j ∈ [kt]; 4. The { ( Ts([f−1]S(x)),T z([f−1]Z(x)) ) ;B(x) > 0} contains a non-empty open set in Rqs×ks+qz×kz , with B(x) := ∏ is∈[qs]Bis([f −1]is(x)) ∏ iz∈[qz ]Biz ([f −1]iz (x)). 5. The L := [P e1(C)T, ..., P em(C)T]T ∈ Rm×R and [ [Γt=s,zc2,de1 − Γ t=s,z c1,de1 ]T, ..., [Γt=s,zcR,dem − Γt=s,zc1,de1 ] T ]T ∈ R(R×m)×(qt×kt) have full column rank. The assumptions 1-3 are trivial and easy to satisfy. The characteristics functions of εx, εy can be almost everywhere non-zero for most continuous variables, such as Gaussian, exponential, beta, gamma distribution. This assumption can ensure the identifiability of p(f−1(x), as will be shown in the appendix. The bijectivity of fx and fy have been widely assumed in [30, 54, 53, 35, 75] as a basic condition for identifiability. It naturally holds for fx to be bijective since it has been empirically proven in auto-encoder [38] that the low-dimension embeddings (i.e., qs + qz < qx) can recover the original input well and also that the variational auto-encoder can extract meaningful representations from x. For the θ with categorical Y such that p(y = k|s) = [fy]k(s)/ ( ∑ k[fy]k(s)), the fy may not satisfy the bijectivity condition. We will shown identifiability for such a categorical case later in theorem 4.5. The assumption 3 can be uniformly satisfied for all distributions in the strongly exponential family. The containment of an open set in assumption (4) for { ( Ts([f−1]S(x)),T z([f−1]Z(x)) ) ;B(x) > 0} implies that space expanded by sufficient statistics are dense in some open set, as a sufficient condition for the mixture distribution P e(C) and also P e(X,Y |c) to be identified. The diversity assumption (5) implies that i) m ≥ R and m ∗ R ≥ max(kz ∗ qz, ks ∗ qs) + 1; and that ii) different environments are diverse enough in terms of S-Z correlation, as an almost a necessary for the invariant one to be identified (a different version is assumed in [1]). In supplement B.2, we will show that the ii) can hold unless the space of Γ belong to a zero-(Lebesgue) measure set. As indicated by the formulation, a larger m would be easier to satisfy the condition, which agrees with the intuition that more environments can provide more complementary information. Besides, our result can be extended to non-independent case among {s1, ..., sqs} (or {z1, ..., zqz}), i.e., pTt,Γt c,de (t) = exp(〈Tt(t),Γtc,de〉+B(t)−Atc,de) (t = s, z), which will shown in supplement B.2. Extension to the general forms of LaCIM. We extend to general forms of LaCIM in theorem 4.5 as long as its P(S,Z|C = c) ∈W r,2(S × Z) (for some r ≥ 2) and categorical Y , in the following theorem. This is accomplished by proving that any model in LaCIM can be approximated by a sequence of distributions with parameterization in Pexp, motivated by [3] that the exponential family is dense in the set of distributions with bounded support, and in [44] that the continuous variable with multinomial logit model can be approximated by a series of distributions with i.i.d Gumbel noise as the temperature converges to infinity. The proof is left in the supplement. Theorem 4.5 (Asymptotic∼exp-identifiability). Suppose the LaCIM satisfy that p(x|s, z) and p(y|s) are smooth w.r.t s, z and s respectively. 
For each e and c ∈ C, suppose Pe(S,Z|c) ∈W r,2(S×Z) for some r ≥ 2, we have that the LaCIM is asymptotically∼exp-identifiable: ∀ > 0, ∃ ∼exp-identifiable P̃θ ∈ Pexp, s.t. dPok(pe(X,Y ), p̃eθ(X,Y )) < ,∀e ∈ Etrain 3. Our proof is built on [3] that any probability in Sobolev space can be approximated by a sequence of distribution with the number of natural paramters going to infinity, i.e., kt →∞. 4.3 Learning and Inference Guided by the identifiability result, we propose to learn P (X|S,Z) and P (Y |S) via generative modeling following from Fig. 1 (c). Then to predict the label for a new sample x generated from (s?, z?), we first leverage the learned p(x|s, z) to infer s? that is ensured to be able to not mix the non-causal information, followed by learned P (y|s̃?) for prediction. 3The dPok(µ1, µ2) denotes the Pokorov distance between µ1 and µ2, with limn→∞ dPok(µn, µ) → 0 ⇐⇒ µn d→ µ. 4.3.1 Learning Method To learn the P (X|S,Z), P (Y |S) for invariant prediction, we reformulate the objective function of Variational Auto-Encoder (VAE) in the supervised scenario, in order to fit {pe(x, y)}e∈Etrain . As a latent generative model, the VAE was originally proposed for unsupervised generation from latent variables V to high-dimensional input variable X . To make such a generation tractable, the VAE introduced a variational distribution qψ parameterized by ψ to approximate the intractable posterior by maximizing the following Evidence Lower Bound (ELBO):−Lθ,ψ = Ep(x) [ Eqψ(v|x) log pθ(x,v) qψ(v|x) ] ≤ Ep(x)[log pθ(x)], where the equality is achieved when qψ(v|x) = pθ(v|x). Therefore, maximizing the ELBO over pθ and qψ will drive (i) qψ(v|x) to approximate pθ(v|x); (ii) pθ to estimate the ground-truth model p. To adapt the above surrogate loss to our DAG in Fig. 1 (c), we introduce the variational distribution qeψ(s, z|x, y) for each environment e. The corresponding ELBO for e is −Leθ,ψ ∆ =Epe(x,y) [ Eqeψ(s,z|x,y) log peθ(x, y, s, z) qeψ(s, z|x, y) ] , where peθ(x, y, s, z) = pθ(x|s, z)pθ(y|s)pe(s, z). Similarly, minimizing Leθ,ψ can drive pθ(x|s, z), pθ(y|s) to approximate the p(x|s, z), p(y|s), and also qeψ(s, z|x, y) to estimate peθ(s, z|x, y). Therefore, the qψ can inherit the properties of pθ. As peθ(s, z|x, y)= peθ(s,z|x)pθ(y|s) peθ(y|x) for our DAG in Fig. 1 (c), we can similarly reparameterize qeψ(s, z|x, y) as qeψ(s,z|x)pθ(y|s) qeψ(y|x) with qψ(y|s) replaced by pθ(y|s) (since the goal of qψ is to mimic the behavior of pθ). Then, the Leθ,ψ can be rewritten as: Leθ,ψ = Epe(x,y) [ − log qeψ(y|x)− Eqeψ(s,z|x) pθ(y|s) qeψ(y|x) log pθ(x|s, z)peθ(s, z) qeψ(s, z|x) ] , (4) where qeψ(y|x) = ∫ S q e ψ(s|x)pθ(y|s)ds. We correspondingly parameterize the prior model peθ(s, z) and inference model qeψ(s, z|x) as pθ(s, z|d̃e) and qψ(s, z|x, d̃e), in which d̃e (of environment e) denotes the domain index that can be represented by the one-hot encoded vector with length m := |Etrain|. The overall loss function is: Lθ,ψ ∆ = ∑ e∈Etrain Leθ,ψ. (5) The training datasets {De}e∈Etrain are applied to optimize the prior models {p(s, z|d̃e)}e, inference models {qψ(s, z|x, d̃e)}e, generative model pθ(x|s, z) and predictive model pθ(y|s). Particularly, the parameters of pθ(x|s, z) and pθ(y|s) are shared among all environments, motivated by the the invariance property of P (X|S,Z) and P (Y |S) across all domains. 4.3.2 Inference & Prediction. We leverage the learned P (X|S,Z), P (Y |S) for prediction. According to Prop. 4.2 and Eq. 
(3) in theorem 4.4, the induced predictor via P (X|S,Z), P (Y |S) can recover the true predicting mechanism for any distributional shifts from E . Specifically, for any x generated by (s?, z?), we first optimize the following log-likelihood of pθ(x|s, z) over S × Z to infer s?, z?, max s,z log pθ(x|s, z) + λs‖s‖22 + λz‖z‖22, (6) with hyperparameters λs > 0 and λz > 0 in order to control the learned s, z in a reasonable scale. Note that Eq. Eq. (6) is different from the maximum a posterior estimation since the posterior qeψ(s, z|x) is parameterized differently for different e while the pθ(x|s, z) is invariantly parameterized for E (this is because p(x|s, z) is invariant). For optimization, we adopt the strategy in [61] that first sample some candidate points from N (0, I) and select the optimal one in terms of Eq. (6) as initial point; then use Adam to optimize for another T iterations. The implementation details and optimization effect are shown in supplement E.2. Finally, with estimated s̃?, z̃?, we implement the learned pθ(y|s̃?) for prediction: ỹ := arg maxy pθ(y|s̃?). 5 Experiments We first verify the identifiability claims of theorem 4.4 in sec. 5.1. Then we evaluate LaCIM on real-world data in sec. 5.2: Non-I.I.D. Image dataset with Contexts (NICO); Colored MNIST (CMNIST); Alzheimer’s Disease Neuroimaging Initiative (ADNI www.loni.ucla.edu/ADNI for early prediction of Alzheimer’s Disease), to verify the generalization ability of our method on the target domain with distributional shifts. 5.1 Simulation To verify the identifiability claims, we implement LaCIM on synthetic data. We generate C, S, Z,X, Y following Fig. 1 (with details left in supplementary). We choose m = 3, 5 with the same total number of samples. To verify the advantage of learning on multiple diverse domains (m > 1), we compare with pool-LaCIM: minimizing the loss Eq. (4) on the pooled data from all m domains. We compute the mean correlation coefficient (MCC) adopted in [35], which measures the goodness of identifiability under permutation by introducing cost optimization to assign each learned component to the source component. We run all methods for 100 times, with the average recorded in Fig. 2a. The superiority of LaCIM over pool-LaCIM, together with the fact that LaICM with m = 5 performs better than m = 3, verify the benefit of more domains to satisfy the diversity condition. To illustrate the learning effect, we visualize the learned Z (with S left in supplement E.1) in Fig. 2b. 5.2 Real-world Data We verify the generalization ability of LaCIM on three data: NICO, CMNIST and ADNI. Dataset. We describe the datasets as follows (X,Y denotes the input and output; D is unobserved): • NICO. We consider the cat/dog classification in “Animal” dataset in NICO, a benchmark for non-i.i.d problem in [20]. Each animal is associated with “grass”,“snow” contexts. The D denotes the attributes of the sampler. The C denotes the time and weather of sampling, which generates the S,Z that respectively denote the semantic and contextual features. We split the dataset into m training domains and the test domain, in which each domain has different proportions of contexts associated with each animal, i.e., (%cat in grass, %cat in snow, %dog in grass, %dog in snow), due to different sampling strategies determined by D. The proportion vectors of all domains are introduced in Tab. 3. The distributional shift refers to the spurious correlation between the context and the label. 
• CMNIST: We relabel the digits 0-4 and 5-9 as y = 0 and y = 1, based on MNIST. Then we color pe (1 − pe) of images with y = 0 (y = 1) as green and others as red. We set m = 2 with pe1 = 0.95, pe2 = 0.99; while the petest for the test domain is set to 0.1. The D denotes the attributes of the painter. The Z, S respectively represent the features related to the color and the digit. Their confounder C denotes the time and weather for which the painter D draws the number and color, e.g., the painter tends to draw red 0 more often than green 1 in the sunny morning. In this regard, the distributional shift refers to the spurious correlation between the color and the label. • ADNI. The Y := {0, 1, 2}, with 0,1,2 respectively denoting Normal Control, Mild Cognitive Impairment and AD. The X is structural Magnetic resonance imaging. We split the data into m = 2 training domains and the test domain, with different values of D that denotes Age, TAU (a biomarker [24]). The C, S (Z) respectively denote the hormone level that affects the brain structure development and the disease-related (-unrelated) brain regions. The distributional shifts among all domains are due to different values of D. Compared Baselines & Implementation Details. We compare with (i) Empirical Risk Mnimization from X → Y (ERM), (ii) domain-adversarial neural network (DANN) [15], (iii) Maximum Mean Discrepancy with Adversarial Auto-Encoder (MMD-AAE) [43], (iv) Domain Invariant Variational Autoencoders (DIVA) [29], (v) Invariant Risk Mnimization (IRM) [1], (vi) Supervised VAE (sVAE): our LaCIM implemented by VAE without disentangling S,Z. For all methods, the network structures of qeψ(s, z|x), pθ(x|s, z) and pθ(y|s) for CMNIST, NICO and ADNI are shared (details introduced in supplement E.4, E.5, E.6, Tab. 7, 8). We implement SGD as optimizer, with learning rate (lr) 0.5 and weight decay (wd) 1e-5 for CMNIST; lr 0.01 with decaying 0.2× every 60 epochs, wd 5e-5 for NICO and ADNI (wd is 2e-4). The batch-size are set to 256, 30 and 4 for CMNIST, NICO, ADNI. Main Results & Discussions. We report accuracy over 10 runs for each method. As shown in Tab. 1, our LaCIM consistently outperforms others on all data. Specifically, the advantage over IRM and ERM may due to the incorporation of causal assumptions embedded in Fig. 1 (c). Further, the improvement over sVAE is benefited from the separation of S from others to avoid spurious correlation. Besides, a larger m (with the total sample size fixed) can bring further benefit on NICO, which may due to the easier satisfaction of the diversity condition in theorem 4.4. Interpretability. We visualize the learned S andZ on CMNIST and NICO. Specifically, for CMNIST, we visualize the generated image (with only digit “0” among all classes that belong to Y = 0 and digit “7” among all classes that belong to Y = 1) by interpolating S (and Z) with fixed Z (and S); for NICO, we adopt the gradient method [67], which visualizes the derivatives of the S? (i.e., dimension of S that has the highest correlation with Y ) with respect to each image. As shown in Fig. 3a, the generated sequential images in the 1st and 2nd row look more like “7” from “0” as s increases; while the sequential images in the 2nd-row change from red to green as z increases. Besides, different dimensions of S can learn different differentiating semantic information. For example, the first dimension can learn to add the dash in the hand-writing "7"; while the second dimension can learn to remove the left part of "0" to "7" as interpolated. 
The dimensions of Z, in contrast, learn non-differentiating factors such as width and color. This result reflects that the learned S and Z correspond to the digit (the causal factor of Y) and to color-related features, respectively. For NICO, Fig. 3b shows that LaCIM identifies more explainable semantic features than ERM, whose learned features can mix in background information. Supplement E.5 provides more results. 6 Conclusions & Discussions We propose to recover a latent causal factor that is robust to distributional shifts caused by a domain variable. We introduce causal and non-causal latent factors that are spuriously correlated with each other and generate the input and the output via invariant mechanisms. Under this invariance, the causal factor is guaranteed to be disentangled from the non-causal one, which induces the ground-truth predictor that holds on all domains. A reformulated generative model is proposed for inferring the causal factor and for prediction. A possible drawback of our model lies in the number of environments required for identifiability; relaxing this requirement is left to future work. Broader Impact We claim that this work does not present any foreseeable negative social impact.
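As a concrete reading of the test-time procedure of Section 4.3.2 above (candidate sampling from N(0, I), selection under Eq. (6), refinement with Adam, then prediction with p_θ(y|s)), the following is a minimal sketch. All names are ours, and we write the norm terms as penalties, consistent with their stated role of keeping s and z at a reasonable scale.

```python
import torch

def infer_and_predict(x, decoder, classifier, latent_dim,
                      n_candidates=64, steps=50, lr=0.05,
                      lam_s=1e-3, lam_z=1e-3):
    """Infer (s, z) for a test batch x, then predict with the invariant p(y | s).

    decoder(s, z) -> log p_theta(x | s, z) per sample   (invariant across domains)
    classifier(s) -> log p_theta(y | s), shape (batch, n_classes)
    """
    batch = x.shape[0]

    def objective(s, z):
        # Regularised log-likelihood in the spirit of Eq. (6); the norms act as
        # penalties keeping s and z at a reasonable scale.
        return decoder(s, z) - lam_s * (s ** 2).sum(-1) - lam_z * (z ** 2).sum(-1)

    # 1) Sample candidate latents from N(0, I) and keep the best one per sample.
    cand = torch.randn(n_candidates, batch, 2 * latent_dim)
    scores = torch.stack([objective(*c.chunk(2, dim=-1)) for c in cand])
    best = cand[scores.argmax(0), torch.arange(batch)]   # (batch, 2 * latent_dim)

    # 2) Refine the selected candidates with Adam for a fixed number of steps.
    sz = best.clone().requires_grad_(True)
    opt = torch.optim.Adam([sz], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -objective(*sz.chunk(2, dim=-1)).sum()
        loss.backward()
        opt.step()

    # 3) Predict using only the causal part s and the invariant classifier.
    s, _ = sz.detach().chunk(2, dim=-1)
    return classifier(s).argmax(dim=-1)
```

Only the invariant decoder and classifier are used here; the environment-specific posteriors play no role at test time, which is what allows the predictor to be applied under distributional shift.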
1. What is the focus of the paper regarding causal mechanisms? 2. What are the strengths of the proposed method, particularly its theoretical foundation? 3. What are the weaknesses of the paper, especially regarding its writing and clarity? 4. Do you have any suggestions for additional citations or references that could enhance the paper's content?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors propose a method to estimate the invariant causal mechanism. By modeling the process as the causal graph in Fig.1 (c), the authors first provide the condition when disentangling causal factors is feasible. Based on the theoretical result, they present the method to infer causal factor and non-causal factor with X based on VAE. Review The authors address an important problem in this paper. The method is with solid theoretical guarantee. The experiments validate the effectiveness of the proposed method. I like this paper. Hence I give a positive score. I possibly change my score according to the comments of other reviewers. I have two suggestions: The writing could be improved. The theoretical part is quite hard to follow. There are many notations without clear illustration, e.g. k_t,q_t, \theta. Does k_t imply the sample number and q_t imply the domain number? I suggest one additional citation, where the authors propose a method that also takes latent causal factors into account when there are distributional shifts. IJCAI 2017 Causal discovery from nonstationary/heterogeneous data: Skeleton estimation and orientation determination.
NIPS
Title Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects Abstract We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for videos of moving objects. It can reliably discover and track objects throughout the sequence of frames, and can also generate future frames conditioning on the current frame, thereby simulating expected motion of objects. This is achieved by explicitly encoding object presence, locations and appearances in the latent variables of the model. SQAIR retains all strengths of its predecessor, Attend, Infer, Repeat (AIR, Eslami et al., 2016), including learning in an unsupervised manner, and addresses its shortcomings. We use a moving multi-MNIST dataset to show limitations of AIR in detecting overlapping or partially occluded objects, and show how SQAIR overcomes them by leveraging temporal consistency of objects. Finally, we also apply SQAIR to real-world pedestrian CCTV data, where it learns to reliably detect, track and generate walking pedestrians with no supervision. 1 Introduction The ability to identify objects in their environments and to understand relations between them is a cornerstone of human intelligence (Kemp and Tenenbaum, 2008). Arguably, in doing so we rely on a notion of spatial and temporal consistency which gives rise to an expectation that objects do not appear out of thin air, nor do they spontaneously vanish, and that they can be described by properties such as location, appearance and some dynamic behaviour that explains their evolution over time. We argue that this notion of consistency can be seen as an inductive bias that improves the efficiency of our learning. Equally, we posit that introducing such a bias towards spatio-temporal consistency into our models should greatly reduce the amount of supervision required for learning. One way of achieving such inductive biases is through model structure. While recent successes in deep learning demonstrate that progress is possible without explicitly imbuing models with interpretable structure (LeCun, Bengio, et al., 2015), recent works show that introducing such structure into deep models can indeed lead to favourable inductive biases improving performance e.g. in convolutional networks (LeCun, Boser, et al., 1989) or in tasks requiring relational reasoning (Santoro et al., 2017). Structure can also make neural networks useful in new contexts by significantly improving generalization, data efficiency (Jacobsen et al., 2016) or extending their capabilities to unstructured inputs (Graves et al., 2016). Attend, Infer, Repeat (AIR), introduced by Eslami et al., 2016, is a notable example of such a structured probabilistic model that relies on deep learning and admits efficient amortized inference. Trained without any supervision, AIR is able to decompose a visual scene into its constituent components and to generate a (learned) number of latent variables that explicitly encode the location and appearance of each object. While this approach is inspiring, its focus on modelling individual (and thereby inherently static) scenes leads to a number of limitations. For example, it often merges two objects that are close together into one since no temporal context is available to distinguish between them. ∗Corresponding author: [email protected] 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. Similarly, we demonstrate that AIR struggles to identify partially occluded objects, e.g. 
when they extend beyond the boundaries of the scene frame (see Figure 7 in Section 4.1). Our contribution is to mitigate the shortcomings of AIR by introducing a sequential version that models sequences of frames, enabling it to discover and track objects over time as well as to generate convincing extrapolations of frames into the future. We achieve this by leveraging temporal information to learn a richer, more capable generative model. Specifically, we extend AIR into a spatio-temporal state-space model and train it on unlabelled image sequences of dynamic objects. We show that the resulting model, which we name Sequential AIR (SQAIR), retains the strengths of the original AIR formulation while outperforming it on moving MNIST digits. The rest of this work is organised as follows. In Section 2, we describe the generative model and inference of AIR. In Section 3, we discuss its limitations and how it can be improved, thereby introducing Sequential Attend, Infer, Repeat (SQAIR), our extension of AIR to image sequences. In Section 4, we demonstrate the model on a dataset of multiple moving MNIST digits (Section 4.1) and compare it against AIR trained on each frame and the Variational Recurrent Neural Network (VRNN) of Chung et al., 2015 with convolutional architectures, and show the superior performance of SQAIR in terms of log marginal likelihood and interpretability of latent variables. We also investigate the utility of inferred latent variables of SQAIR in downstream tasks. In Section 4.2 we apply SQAIR on real-world pedestrian CCTV data, where SQAIR learns to reliably detect, track and generate walking pedestrians without any supervision. Code for the implementation on the MNIST dataset (github.com/akosiorek/sqair) and the results video (youtu.be/-IUNQgSLE0c) are available online. 2 Attend, Infer, Repeat (AIR) AIR, introduced by Eslami et al., 2016, is a structured variational auto-encoder (VAE) capable of decomposing a static scene $x$ into its constituent objects, where each object is represented as a separate triplet of continuous latent variables $z = \{z^{\mathrm{what},i}, z^{\mathrm{where},i}, z^{\mathrm{pres},i}\}_{i=1}^{n}$, with $n \in \mathbb{N}$ being the (random) number of objects in the scene. Each triplet of latent variables explicitly encodes position, appearance and presence of the respective object, and the model is able to infer the number of objects present in the scene. Hence it is able to count, locate and describe objects in the scene, all learnt in an unsupervised manner, made possible by the inductive bias introduced by the model structure. Generative Model The generative model of AIR is defined as follows
$$p_\theta(n) = \mathrm{Geom}(n \mid \theta), \qquad p_\theta(z^{w} \mid n) = \prod_{i=1}^{n} p_\theta\big(z^{w,i}\big) = \prod_{i=1}^{n} \mathcal{N}\big(z^{w,i} \mid 0, I\big), \qquad p_\theta(x \mid z) = \mathcal{N}\big(x \mid y_t, \sigma_x^2 I\big), \quad \text{with } y_t = \sum_{i=1}^{n} h^{\mathrm{dec}}_\theta\big(z^{\mathrm{what},i}, z^{\mathrm{where},i}\big), \tag{1}$$
where $z^{w,i} := (z^{\mathrm{what},i}, z^{\mathrm{where},i})$, $z^{\mathrm{pres},i} = 1$ for $i = 1, \ldots, n$ and $h^{\mathrm{dec}}_\theta$ is the object decoder with parameters $\theta$. It is composed of a glimpse decoder $f^{\mathrm{dec}}_\theta : g^i_t \mapsto y^i_t$, which constructs an image patch, and a spatial transformer (ST, Jaderberg et al., 2015), which scales and shifts it according to $z^{\mathrm{where}}$; see Figure 1 for details. Inference Eslami et al., 2016 use a sequential inference algorithm, where latent variables are inferred one at a time; see Figure 2. The number of inference steps $n$ is given by $z^{\mathrm{pres},1:n+1}$, a random vector of $n$ ones followed by a zero. The $z^i$ are sampled sequentially from
$$q_\phi(z \mid x) = q_\phi\big(z^{\mathrm{pres},n+1} = 0 \mid z^{w,1:n}, x\big) \prod_{i=1}^{n} q_\phi\big(z^{w,i}, z^{\mathrm{pres},i} = 1 \mid z^{1:i-1}, x\big), \tag{2}$$
where $q_\phi$ is implemented as a neural network with parameters $\phi$. To implement explaining away, e.g.
to avoid encoding the same object twice, it is vital to capture the dependency of $z^{w,i}$ and $z^{\mathrm{pres},i}$ on $z^{1:i-1}$ and $x$. This is done using a recurrent neural network (RNN) $R_\phi$ with hidden state $h^i$, namely $\omega^i, h^i = R_\phi(x, z^{i-1}, h^{i-1})$. The outputs $\omega^i$, which are computed iteratively and depend on the previous latent variables (cf. Algorithm 3), parametrise $q_\phi(z^{w,i}, z^{\mathrm{pres},i} \mid z^{1:i-1}, x)$. For simplicity the latter is assumed to factorise such that
$$q_\phi\big(z^{w}, z^{\mathrm{pres}} \mid z^{1:i-1}, x\big) = q_\phi\big(z^{\mathrm{pres},n+1} = 0 \mid \omega^{n+1}\big) \prod_{i=1}^{n} q_\phi\big(z^{w,i} \mid \omega^i\big)\, q_\phi\big(z^{\mathrm{pres},i} = 1 \mid \omega^i\big).$$
3 Sequential Attend-Infer-Repeat While capable of decomposing a scene into objects, AIR only describes single images. Should we want a similar decomposition of an image sequence, it would be desirable to do so in a temporally consistent manner. For example, we might want to detect objects of the scene as well as infer dynamics and track identities of any persistent objects. Thus, we introduce Sequential Attend, Infer, Repeat (SQAIR), whereby AIR is augmented with a state-space model (SSM) to achieve temporal consistency in the generated images of the sequence. The resulting probabilistic model is composed of two parts: Discovery (DISC), which is responsible for detecting (or introducing, in the case of generation) new objects at every time-step (essentially equivalent to AIR), and Propagation (PROP), responsible for updating (or forgetting) latent variables from the previous time-step given the new observation (image), effectively implementing the temporal SSM. We now formally introduce SQAIR by first describing its generative model and then the inference network. Generative Model The model assumes that at every time-step, objects are first propagated from the previous time-step (PROP). Then, new objects are introduced (DISC). Let $t \in \mathbb{N}$ be the current time-step. Let $P_t$ be the set of objects propagated from the previous time-step, let $D_t$ be the set of objects discovered at the current time-step, and let $O_t = P_t \cup D_t$ be the set of all objects present at time-step $t$. Consequently, at every time step, the model retains a set of latent variables $z_t^{P_t} = \{z_t^i\}_{i \in P_t}$, and generates a set of new latent variables $z_t^{D_t} = \{z_t^i\}_{i \in D_t}$. Together they form $z_t := [z_t^{P_t}, z_t^{D_t}]$, where the representation of the $i$th object $z_t^i := [z_t^{\mathrm{what},i}, z_t^{\mathrm{where},i}, z_t^{\mathrm{pres},i}]$ is composed of three components (as in AIR): $z_t^{\mathrm{what},i}$ and $z_t^{\mathrm{where},i}$ are real vector-valued variables representing appearance and location of the object, respectively, while $z_t^{\mathrm{pres},i}$ is a binary variable representing whether the object is present at the given time-step or not. At the first time-step ($t = 1$) there are no objects to propagate, so we sample $D_1$, the number of objects at $t = 1$, from the discovery prior $p^D(D_1)$. Then for each object $i \in D_1$, we sample latent variables $z_1^{\mathrm{what},i}, z_1^{\mathrm{where},i}$ from $p^D(z_1^i \mid D_1)$. At time $t = 2$, the propagation step models which objects from $t = 1$ are propagated to $t = 2$, and which objects disappear from the frame, using the binary random variables $(z_t^{\mathrm{pres},i})_{i \in P_t}$. The discovery step at $t = 2$ models new objects that enter the frame, with a similar procedure to $t = 1$: sample $D_2$ (which depends on $z_2^{P_2}$) then sample $(z_2^{\mathrm{what},i}, z_2^{\mathrm{where},i})_{i \in D_2}$. This procedure of propagation and discovery recurs for $t = 2, \ldots, T$. Once the $z_t$ have been formed, we may generate images $x_t$ using the exact same generative distribution $p_\theta(x_t \mid z_t)$ as in AIR (cf. Equation (1), Fig. 1, and Algorithm 1).
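The PROP/DISC structure described above can be illustrated with a small, self-contained sketch. This is not the authors' implementation: the learned discovery and propagation priors are replaced by simple hand-written distributions (Poisson arrivals, Bernoulli survival, Gaussian drift), and all names are placeholders. It only shows how the latent object set evolves over a rollout; in SQAIR, each z_t would additionally be decoded into an image x_t as in AIR.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_discovery(rng, mean_new=1.0):
    """Hypothetical discovery prior: draw how many new objects enter the frame
    and sample their 'what'/'where' latents from a unit Gaussian."""
    n_new = rng.poisson(mean_new)
    return [{"what": rng.normal(size=4),    # appearance code
             "where": rng.normal(size=2),   # location
             "pres": 1}
            for _ in range(n_new)]

def sample_propagation(objects, rng, survive_prob=0.9, drift=0.1):
    """Hypothetical propagation prior: each object survives with some probability;
    survivors get slightly perturbed appearance and location (simple dynamics)."""
    kept = []
    for obj in objects:
        if rng.random() < survive_prob:
            kept.append({"what": obj["what"] + drift * rng.normal(size=4),
                         "where": obj["where"] + drift * rng.normal(size=2),
                         "pres": 1})
    return kept

def rollout(T=5, rng=rng):
    """Generate latent object sets z_1..z_T via PROP followed by DISC at each step."""
    objects, trajectory = [], []
    for t in range(1, T + 1):
        objects = sample_propagation(objects, rng) if t > 1 else []
        objects = objects + sample_discovery(rng)
        trajectory.append([dict(o) for o in objects])
        # In SQAIR, x_t would now be rendered from z_t with the AIR decoder
        # (glimpses placed by a spatial transformer); omitted here.
    return trajectory

for t, objs in enumerate(rollout(), start=1):
    print(f"t={t}: {len(objs)} objects present")
```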
In full, the generative model is:
$$p(x_{1:T}, z_{1:T}, D_{1:T}) = p^D\big(D_1, z_1^{D_1}\big) \prod_{t=2}^{T} p^D\big(D_t, z_t^{D_t} \mid z_t^{P_t}\big)\, p^P\big(z_t^{P_t} \mid z_{t-1}\big)\, p_\theta(x_t \mid z_t), \tag{3}$$
The discovery prior $p^D(D_t, z_t^{D_t} \mid z_t^{P_t})$ samples latent variables for new objects that enter the frame. The propagation prior $p^P(z_t^{P_t} \mid z_{t-1})$ samples latent variables for objects that persist in the frame and removes latents of objects that disappear from the frame, thereby modelling dynamics and appearance changes. Both priors are learned during training. The exact forms of the priors are given in Appendix B. Inference Similarly to AIR, inference in SQAIR can capture the number of objects and the representation describing the location and appearance of each object that is necessary to explain every image in a sequence. As with generation, inference is divided into PROP and DISC. During PROP, the inference network achieves two tasks. Firstly, the latent variables from the previous time step are used to infer the current ones, modelling the change in location and appearance of the corresponding objects, thereby attaining temporal consistency. This is implemented by the temporal RNN $R^T_\phi$, with hidden states $h^T_t$ (recurrent over $t$). Crucially, it does not access the current image directly, but uses the output of the relation RNN (cf. Santoro et al., 2017). The relation RNN takes relations between objects into account, thereby implementing the explaining-away phenomenon; it is essential for capturing any interactions between objects as well as occlusion (or overlap, if one object is occluded by another). See Figure 7 for an example. These two RNNs together decide whether to retain or to forget objects that have been propagated from the previous time step. During DISC, the network infers further latent variables that are needed to describe any new objects that have entered the frame. All latent variables remaining after PROP and DISC are passed on to the next time step. See Figures 2 and 3 for the inference network structure. The full variational posterior is defined as
$$q_\phi(D_{1:T}, z_{1:T} \mid x_{1:T}) = \prod_{t=1}^{T} q^D_\phi\big(D_t, z_t^{D_t} \mid x_t, z_t^{P_t}\big) \prod_{i \in O_{t-1}} q^P_\phi\big(z_t^i \mid z_{t-1}^i, h_t^{T,i}, h_t^{R,i}\big). \tag{4}$$
Discovery, described by $q^D_\phi$, is very similar to the full posterior of AIR, cf. Equation (2). The only difference is the conditioning on $z_t^{P_t}$, which allows for a different number of discovered objects at each time-step and also for objects explained by PROP not to be explained again. The second term, $q^P_\phi$, describes propagation. The detailed structures of $q^D_\phi$ and $q^P_\phi$ are shown in Figure 3, while all the pertinent algorithms and equations can be found in Appendices A and C, respectively. Learning We train SQAIR as an importance-weighted auto-encoder (IWAE) of Burda et al., 2016. Specifically, we maximise the importance-weighted evidence lower-bound $\mathcal{L}_{\mathrm{IWAE}}$, namely
$$\mathcal{L}_{\mathrm{IWAE}} = \mathbb{E}_{x_{1:T} \sim p_{\mathrm{data}}(x_{1:T})}\left[ \mathbb{E}_{q}\left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(x_{1:T}, z_{1:T})}{q_\phi(z_{1:T} \mid x_{1:T})} \right] \right]. \tag{5}$$
To optimise the above, we use RMSPROP, $K = 5$ and a batch size of 32. We use the VIMCO gradient estimator of Mnih and Rezende, 2016 to backpropagate through the discrete latent variables $z^{\mathrm{pres}}$, and use reparameterisation for the continuous ones (Kingma and Welling, 2013). We also tried to use NVIL of Mnih and Gregor, 2014 as in the original work on AIR, but found it very sensitive to hyper-parameters, fragile and generally under-performing. 4 Experiments We evaluate SQAIR on two datasets.
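As a side note on the learning objective above, the bound in Eq. (5) is simple to compute once per-particle log-weights log p_theta(x_{1:T}, z^k_{1:T}) - log q_phi(z^k_{1:T} | x_{1:T}) are available. The sketch below is a minimal illustration with placeholder log-weights (in practice they come from evaluating the generative model and the inference network); a logsumexp is used for numerical stability. The same computation with K = 1000 particles corresponds to the evaluation bound reported in the experiments.

```python
import numpy as np
from scipy.special import logsumexp

def iwae_bound(log_weights):
    """Importance-weighted bound (Eq. 5) for a single sequence.
    log_weights: shape (K,); entry k is
    log p_theta(x_{1:T}, z^k_{1:T}) - log q_phi(z^k_{1:T} | x_{1:T})
    for a particle z^k drawn from q_phi."""
    K = log_weights.shape[0]
    return logsumexp(log_weights) - np.log(K)  # = log( (1/K) * sum_k w_k )

# Placeholder log-weights for K = 5 particles (the value used during training);
# real values require the trained model and inference network.
rng = np.random.default_rng(1)
log_w = rng.normal(loc=-200.0, scale=5.0, size=5)
print("L_IWAE estimate:", iwae_bound(log_w))
```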
Firstly, we perform an extensive evaluation on moving MNIST digits, where we show that it can learn to reliably detect, track and generate moving digits (Section 4.1). Moreover, we show that SQAIR can simulate moving objects into the future — an outcome it has not been trained for. We also study the utility of learned representations for a downstream task. Secondly, we apply SQAIR to real-world pedestrian CCTV data from static cameras (DukeMTMC, Ristani et al., 2016), where we perform background subtraction as pre-processing. In this experiment, we show that SQAIR learns to detect, track, predict and generate walking pedestrians without human supervision. 4.1 Moving multi-MNIST The dataset consists of sequences of length 10 of multiple moving MNIST digits. All images are of size 50× 50 and there are zero, one or two digits in every frame (with equal probability). Sequences are generated such that no objects overlap in the first frame, and all objects are present through the sequence; the digits can move out of the frame, but always come back. See Appendix F for an experiment on a harder version of this dataset. There are 60,000 training and 10,000 testing sequences created from the respective MNIST datasets. We train two variants of SQAIR: the MLP-SQAIR uses only fully-connected networks, while the CONV-SQAIR replaces the networks used to encode images and glimpses with convolutional ones; it also uses a subpixel-convolution network as the glimpse decoder (Shi et al., 2016). See Appendix D for details of the model architectures and the training procedure. We use AIR and VRNN (Chung et al., 2015) as baselines for comparison. VRNN can be thought of as a sequential VAE with an RNN as its deterministic backbone. Being similar to a VAE, its latent variables are not structured, nor easily interpretable. For a fair comparison, we control the latent dimensionality of VRNN and the number of learnable parameters. We provide implementation details in Appendix D.3. The quantitative analysis consists of comparing all models in terms of the marginal log-likelihood log pθ(x1:T ) evaluated as the LIWAE bound with K = 1000 particles, reconstruction quality evaluated as a single-sample approximation of Eqφ [log pθ(x1:T | z1:T )] and the KL-divergence between the approximate posterior and the prior (Table 1). Additionally, we measure the accuracy of the number of objects modelled by SQAIR and AIR. SQAIR achieves superior performance across a range of metrics — its convolutional variant outperforms both AIR and the corresponding VRNN in terms of model evidence and reconstruction performance. The KL divergence for SQAIR is almost twice as low as for VRNN and by a yet larger factor for AIR. We can interpret KL values as an indicator of the ability to compress, and we can treat SQAIR/AIR type of scheme as a version of run-length encoding. While VRNN has to use information to explicitly describe every part of the image, even if some parts are empty, SQAIR can explicitly allocate content information (zwhat) to specific parts of the image (indicated by zwhere). AIR exhibits the highest values of KL, but this is due to encoding every frame of the sequence independently — its prior cannot take what and where at the previous time-step into account, hence higher KL. The fifth column of Table 1 details the object counting accuracy, that is indicative of the quality of the approximate posterior. It is measured as the sum of z pres t for a given frame against the true number of objects in that frame. 
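A minimal sketch of this counting metric, assuming the inferred presence variables and ground-truth counts are available as arrays (the values below are toy placeholders, not results from the paper):

```python
import numpy as np

def counting_accuracy(z_pres, true_counts):
    """Fraction of frames where the number of inferred objects
    (sum of the binary presence variables) equals the true object count.
    z_pres: (num_frames, max_objects) binary array of z^pres samples.
    true_counts: (num_frames,) integer array of ground-truth counts."""
    inferred = np.asarray(z_pres).sum(axis=1)
    return float(np.mean(inferred == np.asarray(true_counts)))

# Toy example: four frames with at most three object slots.
z_pres = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [1, 1, 0],   # one of the three true objects is missed here
                   [0, 0, 0]])
true_counts = [1, 2, 3, 0]
print(counting_accuracy(z_pres, true_counts))  # -> 0.75
```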
As there is no $z^{\mathrm{pres}}$ for VRNN, no score is provided. Perhaps surprisingly, this metric is much higher for SQAIR than for AIR. This is because AIR mistakenly infers overlapping objects as a single object. Since SQAIR can incorporate temporal information, it does not exhibit this failure mode (cf. Figure 7). [Figure 7: Inputs, reconstructions with marked glimpse locations and reconstructed glimpses for AIR (left) and SQAIR (right). SQAIR can model partially visible and heavily overlapping objects by aggregating temporal information.] Next, we gauge the utility of the learnt representations by using them to determine the sum of the digits present in the image (Table 1, column six). To do so, we train a 19-way classifier (mapping from any combination of up to two digits in the range [0, 9] to their sum) on the extracted representations and use the summed labels of digits present in the frame as the target. Appendix D contains details of the experiment. SQAIR significantly outperforms AIR and both variants of VRNN on this task. VRNN under-performs due to its inability to disentangle overlapping objects, while both VRNN and AIR suffer from low temporal consistency of learned representations, see Appendix H. Finally, we evaluate SQAIR qualitatively by analyzing reconstructions and samples produced by the model against reconstructions and samples from VRNN. We observe that samples and reconstructions from SQAIR are of better quality and, unlike VRNN, preserve motion and appearance consistently through time. See Appendix H for direct comparison and additional examples. Furthermore, we examine conditional generation, where we look at samples from the generative model of SQAIR conditioned on three images from a real sequence (see Figure 6). We see that the model can preserve appearance over time, and that the simulated objects follow similar trajectories, which hints at good learning of the motion model (see Appendix H for more examples). Figure 7 shows reconstructions and corresponding glimpses of AIR and SQAIR. Unlike SQAIR, AIR is unable to recognize objects from partial observations, nor can it distinguish strongly overlapping objects (it treats them as a single object; columns five and six in the figure). We analyze failure cases of SQAIR in Appendix G. 4.2 Generative Modelling of Walking Pedestrians To evaluate the model in a more challenging, real-world setting, we turn to data from static CCTV cameras of the DukeMTMC dataset (Ristani et al., 2016). As part of pre-processing, we use standard background subtraction algorithms (Itseez, 2015). In this experiment, we use 3150 training and 350 validation sequences of length 5. For details of model architectures, training and data pre-processing, see Appendix E. We evaluate the model qualitatively by examining reconstructions, conditional samples (conditioned on the first four frames) and samples from the prior (Figure 8 and Appendix I). We see that the model learns to reliably detect and track walking pedestrians, even when they are close to each other. There are some spurious detections and re-detections of the same objects, which is mostly caused by imperfections of the background subtraction pipeline — backgrounds are often noisy and there are sudden appearance changes when a part of a person is treated as background in the pre-processing pipeline. The object counting accuracy in this experiment is 0.5712 on the validation dataset, and we noticed that it does increase with the size of the training set.
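For reference, a hedged sketch of how such a background-subtraction pre-processing step could look with OpenCV's MOG2 subtractor is given below. The input path and parameter values are hypothetical; the exact DukeMTMC pipeline used by the authors (frame selection, cropping, thresholds) is described in Appendix E.

```python
import cv2

# Hypothetical input path; replace with the actual CCTV footage to process.
cap = cv2.VideoCapture("duke_camera1.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                      # foreground mask (0/255)
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    frames.append(foreground)                           # background-subtracted frame
cap.release()
print(f"processed {len(frames)} frames")
```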
We also had to use early stopping to prevent overfitting, and the model was trained for only 315k iterations (> 1M for MNIST experiments). Hence, we conjecture that accuracy and marginal likelihood can be further improved by using a bigger dataset. 5 Related Work Object Tracking There have been many approaches to modelling objects in images and videos. Object detection and tracking are typically learned in a supervised manner, where object bounding boxes and often additional labels are part of the training data. Single-object tracking commonly uses Siamese networks, which can be seen as an RNN unrolled over two time-steps (Valmadre et al., 2017). Recently, Kosiorek et al., 2017 used an RNN with an attention mechanism in the HART model to predict bounding boxes for single objects, while robustly modelling their motion and appearance. Multi-object tracking is typically attained by detecting objects and performing data association on bounding-boxes (Bewley et al., 2016). Schulter et al., 2017 used an end-to-end supervised approach that detects objects and performs data association. In the unsupervised setting, where the training data consists of only images or videos, the dominant approach is to distill the inductive bias of spatial consistency into a discriminative model. Cho et al., 2015 detect single objects and their parts in images, and Kwak et al., 2015; Xiao and Jae Lee, 2016 incorporate temporal consistency to better track single objects. SQAIR is unsupervised and hence does not rely on bounding boxes or additional labels for training, while being able to learn arbitrary motion and appearance models similarly to HART (Kosiorek et al., 2017). At the same time, it is inherently multi-object and performs data association implicitly (cf. Appendix A). Unlike the other unsupervised approaches, temporal consistency is baked into the model structure of SQAIR and further enforced by lower KL divergence when an object is tracked. Video Prediction Many works on video prediction learn a deterministic model conditioned on the current frame to predict the future ones (Ranzato et al., 2014; Srivastava et al., 2015). Since these models do not model uncertainty in the prediction, they can suffer from the multiple futures problem — since perfect prediction is impossible, the model produces blurry predictions which are a mean of possible outcomes. This is addressed in stochastic latent variable models trained using variational inference to generate multiple plausible videos given a sequence of images (Babaeizadeh et al., 2017; Denton and Fergus, 2018). Unlike SQAIR, these approaches do not model objects or their positions explicitly, thus the representations they learn are of limited interpretability. Learning Decomposed Representations of Images and Videos Learning decomposed representations of object appearance and position lies at the heart of our model. This problem can also be seen as perceptual grouping, which involves modelling pixels as spatial mixtures of entities. Greff, Rasmus, et al., 2016 and Greff, Steenkiste, et al., 2017 learn to decompose images into separate entities by iterative refinement of spatial clusters using either learned updates or the Expectation Maximization algorithm; Ilin et al., 2017 and Steenkiste et al., 2018 extend these approaches to videos, achieving very similar results to SQAIR. Perhaps the most similar work to ours is the concurrently developed model of Hsieh et al., 2018.
The above approaches rely on iterative inference procedures, but do not exhibit the object-counting behaviour of SQAIR. For this reason, their computational complexities are proportional to the predefined maximum number of objects, while SQAIR can be more computationally efficient by adapting to the number of objects currently present in an image. Another interesting line of work is the GAN-based unsupervised video generation that decomposes motion and content (Tulyakov et al., 2018; Denton and Birodkar, 2017). These methods learn interpretable features of content and motion, but deal only with single objects and do not explicitly model their locations. Nonetheless, adversarial approaches to learning structured probabilistic models of objects offer a plausible alternative direction of research. Bayesian Nonparametric Models To the best of our knowledge, Neiswanger and Wood, 2012 is the only known approach that models pixels belonging to a variable number of objects in a video together with their locations in the generative sense. This work uses a Bayesian nonparametric (BNP) model, which relies on mixtures of Dirichlet processes to cluster pixels belonging to an object. However, the choice of the model necessitates complex inference algorithms involving Gibbs sampling and Sequential Monte Carlo, to the extent that any sensible approximation of the marginal likelihood is infeasible. It also uses a fixed likelihood function, while ours is learnable. The object appearance-persistence-disappearance model in SQAIR is reminiscent of the Markov Indian buffet process (MIBP) of Gael et al., 2009, another BNP model. MIBP was used as a model for blind source separation, where multiple sources contribute toward an audio signal, and can appear, persist, disappear and reappear independently. The prior in SQAIR is similar, but the crucial differences are that SQAIR combines the BNP prior with flexible neural network models for the dynamics and likelihood, as well as variational learning via amortized inference. The interface between deep learning and BNP, and graphical models in general, remains a fertile area of research. 6 Discussion In this paper we proposed SQAIR, a probabilistic model that extends AIR to image sequences, and thereby achieves temporally consistent reconstructions and samples. In doing so, we enhanced AIR’s capability of disentangling overlapping objects and identifying partially observed objects. This work continues the thread of Greff, Steenkiste, et al., 2017, Steenkiste et al., 2018 and, together with Hsieh et al., 2018, presents unsupervised object detection & tracking with learnable likelihoods by the means of generative modelling of objects. In particular, our work is the first one to explicitly model object presence, appearance and location through time. Being a generative model, SQAIR can be used for conditional generation, where it can extrapolate sequences into the future. As such, it would be interesting to use it in a reinforcement learning setting in conjunction with ImaginationAugmented Agents (Weber et al., 2017) or more generally as a world model (Ha and Schmidhuber, 2018), especially for settings with simple backgrounds, e. g., games like Montezuma’s Revenge or Pacman. The framework offers various avenues of further research; SQAIR leads to interpretable representations, but the interpretability of what variables can be further enhanced by using alternative objectives that disentangle factors of variation in the objects (Kim and Mnih, 2018). 
Moreover, in its current state, SQAIR can work only with simple backgrounds and static cameras. In future work, we would like to address this shortcoming, as well as speed up the sequential inference process whose complexity is linear in the number of objects. The generative model, which currently assumes additive image composition, can be further improved by e. g., autoregressive modelling (Oord et al., 2016). It can lead to higher fidelity of the model and improved handling of occluded objects. Finally, the SQAIR model is very complex, and it would be useful to perform a series of ablation studies to further investigate the roles of different components. Acknowledgements We would like to thank Ali Eslami for his help in implementing AIR, Alex Bewley and Martin Engelcke for discussions and valuable insights and anonymous reviewers for their constructive feedback. Additionally, we acknowledge that HK and YWT’s research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) ERC grant agreement no. 617071.
1. How does the paper improve the existing Attend, Infer, Repeat technique for sequential data and moving objects? 2. What are the limitations of the proposed approach regarding sequence length and training data? 3. How would the model perform on real-world datasets such as self-driving cars? 4. Why were certain details of the models and analysis results moved to the appendix instead of being included in the main paper?
Review
Review This paper improves the existing Attend, Infer, Repeat (AIR) approach to handle sequential data and moving objects. This is a much-needed improvement of the AIR technique and the results are very promising. I have a few questions for the authors and suggestions for improvement: 1. The sequence length is fixed and predefined. For multi-MNIST, the length is 10, and thus the model is always trained on sequences of length 10. What if the data has variable lengths? What if the SQAIR model is trained on different sequence lengths? Make the length a hyperparameter and show the performance. 2. A more practical experiment would be on a self-driving-car dataset. It would be great if the authors could add results on that. 3. The details of the models and the analysis results are in the appendix and supplementary material. They are an important part of the paper and should be included in the main paper.
NIPS
1. What is the main contribution of the paper? 2. How does the proposed model (SQAIR) extend the basic Attend, Infer, Repeat (AIR) framework to handle image sequences? 3. Can you explain how the two phases of the generative and inference networks in SQAIR work? 4. How does SQAIR perform compared to AIR and VRNN? 5. What are some of the strengths and weaknesses of the paper?
Review
Review I have read the other reviews and the author rebuttal. I am still very much in favor of accepting this paper, but I have revised my score down from a 9 to an 8; some of the issues pointed out by the other reviewers, while well-addressed in the rebuttal, made me realize that my initial view of the paper was a bit too rosy. ------------------------ This paper presents a deep generative model for unsupervised modeling of moving objects in image sequences. The model starts with the basic Attend, Infer, Repeat (AIR) framework and extends it to handle image sequences (SQAIR). This extension requires taking into account the fact that objects may enter or leave the frame over the course of a motion sequence. To support this behavior, SQAIR's generative and inference networks for each frame have two phases. First, a *propagation* network extrapolates the positions of existing objects forward in time, then decides whether any of those objects are no longer in the frame and should be 'forgotten.' Second, a *discovery* network proposes new objects that have entered the frame, conditioned on the output of the propagation network. SQAIR is quantitatively evaluated on a moving MNIST dataset by its test-set NLL, image reconstruction accuracy, divergence between the learned prior and approximate posterior, accuracy at counting the number of objects, and accuracy on a supervised task involving predicting the sums of digits present in the image using the learned latent representation. It is also qualitatively evaluated for its ability to reconstruct, complete, and generate image sequences both on the moving MNIST dataset and on the DukeMTMC pedestrian CCTV dataset. SQAIR outperforms AIR and VRNN, and is qualitatively better able to handle occlusions and objects entering/leaving the frame. This is very good work. Even a few years ago, a model that can perform unsupervised object detection and tracking in video sequences would have been unthinkable--this work points toward a future where that is very possible. Now, given the existence of AIR, perhaps a sequential extension of AIR might seem like an obvious idea. However, getting the implementation of that idea right is far from trivial, and the architecture proposed here (propagation + discovery) is well-motivated and seems to work well. The paper is written clearly and provides sufficient detail and evaluation. I am very much in favor of accepting this paper.
NIPS
Title Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects Abstract We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for videos of moving objects. It can reliably discover and track objects throughout the sequence of frames, and can also generate future frames conditioning on the current frame, thereby simulating expected motion of objects. This is achieved by explicitly encoding object presence, locations and appearances in the latent variables of the model. SQAIR retains all strengths of its predecessor, Attend, Infer, Repeat (AIR, Eslami et al., 2016), including learning in an unsupervised manner, and addresses its shortcomings. We use a moving multi-MNIST dataset to show limitations of AIR in detecting overlapping or partially occluded objects, and show how SQAIR overcomes them by leveraging temporal consistency of objects. Finally, we also apply SQAIR to real-world pedestrian CCTV data, where it learns to reliably detect, track and generate walking pedestrians with no supervision. 1 Introduction The ability to identify objects in their environments and to understand relations between them is a cornerstone of human intelligence (Kemp and Tenenbaum, 2008). Arguably, in doing so we rely on a notion of spatial and temporal consistency which gives rise to an expectation that objects do not appear out of thin air, nor do they spontaneously vanish, and that they can be described by properties such as location, appearance and some dynamic behaviour that explains their evolution over time. We argue that this notion of consistency can be seen as an inductive bias that improves the efficiency of our learning. Equally, we posit that introducing such a bias towards spatio-temporal consistency into our models should greatly reduce the amount of supervision required for learning. One way of achieving such inductive biases is through model structure. While recent successes in deep learning demonstrate that progress is possible without explicitly imbuing models with interpretable structure (LeCun, Bengio, et al., 2015), recent works show that introducing such structure into deep models can indeed lead to favourable inductive biases improving performance e.g. in convolutional networks (LeCun, Boser, et al., 1989) or in tasks requiring relational reasoning (Santoro et al., 2017). Structure can also make neural networks useful in new contexts by significantly improving generalization, data efficiency (Jacobsen et al., 2016) or extending their capabilities to unstructured inputs (Graves et al., 2016). Attend, Infer, Repeat (AIR), introduced by Eslami et al., 2016, is a notable example of such a structured probabilistic model that relies on deep learning and admits efficient amortized inference. Trained without any supervision, AIR is able to decompose a visual scene into its constituent components and to generate a (learned) number of latent variables that explicitly encode the location and appearance of each object. While this approach is inspiring, its focus on modelling individual (and thereby inherently static) scenes leads to a number of limitations. For example, it often merges two objects that are close together into one since no temporal context is available to distinguish between them. ∗Corresponding author: [email protected] 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. Similarly, we demonstrate that AIR struggles to identify partially occluded objects, e.g. 
when they extend beyond the boundaries of the scene frame (see Figure 7 in Section 4.1). Our contribution is to mitigate the shortcomings of AIR by introducing a sequential version that models sequences of frames, enabling it to discover and track objects over time as well as to generate convincing extrapolations of frames into the future. We achieve this by leveraging temporal information to learn a richer, more capable generative model. Specifically, we extend AIR into a spatio-temporal state-space model and train it on unlabelled image sequences of dynamic objects. We show that the resulting model, which we name Sequential AIR (SQAIR), retains the strengths of the original AIR formulation while outperforming it on moving MNIST digits. The rest of this work is organised as follows. In Section 2, we describe the generative model and inference of AIR. In Section 3, we discuss its limitations and how it can be improved, thereby introducing Sequential Attend, Infer, Repeat (SQAIR), our extension of AIR to image sequences. In Section 4, we demonstrate the model on a dataset of multiple moving MNIST digits (Section 4.1) and compare it against AIR trained on each frame and Variational Recurrent Neural Network (VRNN) of Chung et al., 2015 with convolutional architectures, and show the superior performance of SQAIR in terms of log marginal likelihood and interpretability of latent variables. We also investigate the utility of inferred latent variables of SQAIR in downstream tasks. In Section 4.2 we apply SQAIR on real-world pedestrian CCTV data, where SQAIR learns to reliably detect, track and generate walking pedestrians without any supervision. Code for the implementation on the MNIST dataset2 and the results video3 are available online. 2 Attend, Infer, Repeat (AIR) AIR, introduced by Eslami et al., 2016, is a structured variational auto-encoder (VAE) capable of decomposing a static scene x into its constituent objects, where each object is represented as a separate triplet of continuous latent variables z = {zwhat,i, zwhere,i, zpres,i}ni=1, n ∈ N being the (random) number of objects in the scene. Each triplet of latent variables explicitly encodes position, appearance and presence of the respective object, and the model is able to infer the number of objects present in the scene. Hence it is able to count, locate and describe objects in the scene, all learnt in an unsupervised manner, made possible by the inductive bias introduced by the model structure. Generative Model The generative model of AIR is defined as follows pθ(n) = Geom(n | θ), pθ(z w | n) = n ∏ i=1 pθ ( zw,i ) = n ∏ i=1 N ( zw,i|0, I ) , pθ(x | z) = N ( x | yt, σ 2 xI ) , with yt = n ∑ i=1 hdecθ (z what,i, zwhere,i), (1) where zw,i ..= (zwhat,i, zwhere,i), zpres,i = 1 for i = 1 . . . n and hdecθ is the object decoder with parameters θ. It is composed of a glimpse decoder fdecθ : g i t 7→ y i t, which constructs an image patch and a spatial transformer (ST, Jaderberg et al., 2015), which scales and shifts it according to zwhere; see Figure 1 for details. Inference Eslami et al., 2016 use a sequential inference algorithm, where latent variables are inferred one at a time; see Figure 2. The number of inference steps n is given by zpres,1:n+1, a random vector of n ones followed by a zero. The zi are sampled sequentially from qφ(z | x) = qφ ( zpres,n+1 = 0 | zw,1:n,x ) n ∏ i=1 qφ ( zw,i, zpres,i = 1 | z1:i−1,x ) , (2) where qφ is implemented as a neural network with parameters φ. To implement explaining away, e.g. 
to avoid encoding the same object twice, it is vital to capture the dependency of zw,i and zpres,i on z1:i−1 and x. This is done using a recurrent neural network (RNN) Rφ with hidden state h i, namely: ωi,hi = Rφ(x, z i−1,hi−1). The outputs ωi, which are computed iteratively and depend on the previous latent variables (cf. Algorithm 3), parametrise qφ ( zw,i, zpres,i | z1:i−1,x ) . For simplicity the latter is assumed to factorise such that qφ ( zw, zpres | z1:i−1,x ) = qφ ( zpres,n+1 = 0 | ωn+1 ) ∏n i=1 qφ ( zw,i | ωi ) qφ ( zpres,i = 1 | ωi ) . 2code: github.com/akosiorek/sqair 3video: youtu.be/-IUNQgSLE0c 3 Sequential Attend-Infer-Repeat While capable of decomposing a scene into objects, AIR only describes single images. Should we want a similar decomposition of an image sequence, it would be desirable to do so in a temporally consistent manner. For example, we might want to detect objects of the scene as well as infer dynamics and track identities of any persistent objects. Thus, we introduce Sequential Attend, Infer, Repeat (SQAIR), whereby AIR is augmented with a state-space model (SSM) to achieve temporal consistency in the generated images of the sequence. The resulting probabilistic model is composed of two parts: Discovery (DISC), which is responsible for detecting (or introducing, in the case of the generation) new objects at every time-step (essentially equivalent to AIR), and Propagation (PROP), responsible for updating (or forgetting) latent variables from the previous time-step given the new observation (image), effectively implementing the temporal SSM. We now formally introduce SQAIR by first describing its generative model and then the inference network. Generative Model The model assumes that at every-time step, objects are first propagated from the previous time-step (PROP). Then, new objects are introduced (DISC). Let t ∈ N be the current timestep. Let Pt be the set of objects propagated from the previous time-step and let Dt be the set of objects discovered at the current time-step, and let Ot = Pt∪Dt be the set of all objects present at time-step t. Consequently, at every time step, the model retains a set of latent variables zPtt = {z i t}i∈Pt , and generates a set of new latent variables zDtt = {z i t}i∈Dt . Together they form zt ..= [zPtt , z Dt t ], where the representation of the ith object zit ..= [zwhat,it , z where,i t , z pres,i t ] is composed of three components (as in AIR): z what,i t and z where,i t are real vector-valued variables representing appearance and location of the object, respectively. z pres,i t is a binary variable representing whether the object is present at the given time-step or not. At the first time-step (t = 1) there are no objects to propagate, so we sample D1, the number of objects at t = 1, from the discovery prior pD(D1). Then for each object i ∈ Dt, we sample latent variables z what,i t , z where,i t from p D ( zi1 | D1 ) . At time t = 2, the propagation step models which objects from t = 1 are propagated to t = 2, and which objects disappear from the frame, using the binary random variable (zpres,it )i∈Pt . The discovery step at t = 2 models new objects that enter the frame, with a similar procedure to t = 1: sample D2 (which depends on z P2 2 ) then sample (zwhat,i2 , z where,i 2 )i∈D2 . This procedure of propagation and discovery recurs for t = 2, . . . T . Once the zt have been formed, we may generate images xt using the exact same generative distribution pθ(xt | zt) as in AIR (cf. Equation (1), Fig. 1, and Algorithm 1). 
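To make the propagate-then-discover generative process above more concrete, the following is a minimal, illustrative sketch of the loop over time-steps. It is not the authors' implementation (available at github.com/akosiorek/sqair): the propagate, discover and render functions are toy stand-ins with hypothetical names, using random draws in place of the learned priors and a crude square patch in place of the glimpse decoder.

```python
# Toy sketch of SQAIR-style generation: propagate old objects, discover new ones, render.
import numpy as np

rng = np.random.default_rng(0)

def propagate(prev_objects):
    """PROP: update each object from the previous step; drop it if z_pres samples to 0."""
    kept = []
    for obj in prev_objects:
        z_pres = rng.binomial(1, 0.9)                      # stand-in persistence prior
        if z_pres:
            kept.append({"z_where": obj["z_where"] + rng.normal(0.0, 0.05, size=2),
                         "z_what": obj["z_what"]})         # appearance persists over time
    return kept

def discover(num_new):
    """DISC: introduce new objects entering the frame (stand-in discovery prior)."""
    return [{"z_where": rng.uniform(-1.0, 1.0, size=2),
             "z_what": rng.normal(0.0, 1.0, size=8)} for _ in range(num_new)]

def render(objects, size=50):
    """Additive image composition with a crude square 'glimpse' per object."""
    frame = np.zeros((size, size))
    for obj in objects:
        r, c = np.clip(((obj["z_where"] + 1.0) / 2.0 * (size - 1)).astype(int), 0, size - 1)
        frame[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3] += 1.0
    return frame

objects, frames, counts = [], [], []
for t in range(10):
    objects = propagate(objects)                            # update or forget old objects
    objects += discover(rng.poisson(1.5 if t == 0 else 0.3))  # then add newly entering ones
    frames.append(render(objects))
    counts.append(len(objects))
print("objects per frame:", counts)
```

The structure mirrors the text: existing objects are updated or forgotten first, new objects are then introduced conditioned on the current step, and the frame is composed additively.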
In full, the generative model is: p(x1:T , z1:T , D1:T ) = p D(D1, z D1 1 ) T ∏ t=2 pD(Dt, z Dt t |z Pt t )p P (zPtt |zt−1)pθ(xt|zt), (3) The discovery prior pD(Dt, z Dt t |z Pt t ) samples latent variables for new objects that enter the frame. The propagation prior pP (zPtt |zt−1) samples latent variables for objects that persist in the frame and removes latents of objects that disappear from the frame, thereby modelling dynamics and appearance changes. Both priors are learned during training. The exact forms of the priors are given in Appendix B. Inference Similarly to AIR, inference in SQAIR can capture the number of objects and the representation describing the location and appearance of each object that is necessary to explain every image in a sequence. As with generation, inference is divided into PROP and DISC. During PROP, the inference network achieves two tasks. Firstly, the latent variables from the previous time step are used to infer the current ones, modelling the change in location and appearance of the corresponding objects, thereby attaining temporal consistency. This is implemented by the temporal RNN RTφ , with hidden states hTt (recurs in t). Crucially, it does not access the current image directly, but uses the output of the relation RNN (cf. Santoro et al., 2017). The relation RNN takes relations between objects into account, thereby implementing the explaining away phenomenon; it is essential for capturing any interactions between objects as well as occlusion (or overlap, if one object is occluded by another). See Figure 7 for an example. These two RNNs together decide whether to retain or to forget objects that have been propagated from the previous time step. During DISC, the network infers further latent variables that are needed to describe any new objects that have entered the frame. All latent variables remaining after PROP and DISC are passed on to the next time step. See Figures 2 and 3 for the inference network structure . The full variational posterior is defined as qφ(D1:t, z1:T | x1:T ) = T ∏ t=1 qDφ ( Dt, z Dt t | xt, z Pt t ) ∏ i∈Ot−1 qPφ ( zit | z i t−1,h T,i t ,h R,i t ) . (4) Discovery, described by qDφ , is very similar to the full posterior of AIR, cf. Equation (2). The only difference is the conditioning on zPtt , which allows for a different number of discovered objects at each time-step and also for objects explained by PROP not to be explained again. The second term, or qPφ , describes propagation. The detailed structures of q D φ and q P φ are shown in Figure 3, while all the pertinent algorithms and equations can be found in Appendices A and C, respectively. Learning We train SQAIR as an importance-weighted auto-encoder (IWAE) of Burda et al., 2016. Specifically, we maximise the importance-weighted evidence lower-bound LIWAE, namely LIWAE = Ex1:T∼pdata(x1:T ) [ Eq [ log 1 K K ∑ k=1 pθ(x1:T , z1:T ) qφ(z1:T | x1:T ) ]] . (5) To optimise the above, we use RMSPROP, K = 5 and batch size of 32. We use the VIMCO gradient estimator of Mnih and Rezende, 2016 to backpropagate through the discrete latent variables zpres, and use reparameterisation for the continuous ones (Kingma and Welling, 2013). We also tried to use NVIL of Mnih and Gregor, 2014 as in the original work on AIR, but found it very sensitive to hyper-parameters, fragile and generally under-performing. 4 Experiments We evaluate SQAIR on two datasets. 
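Before turning to the experiments, the importance-weighted bound in Equation (5) can be illustrated with a short sketch. This is only a toy computation of the K-sample bound from precomputed log-densities; it omits the VIMCO and reparameterisation gradient estimators used for training, and the tensor names are illustrative.

```python
# Minimal sketch of the K-sample importance-weighted bound (Equation 5) for one sequence.
import math
import torch

def iwae_bound(log_p_xz, log_q_zx):
    """Bound from per-particle log p(x, z_k) and log q(z_k | x), each a tensor of shape [K]."""
    log_w = log_p_xz - log_q_zx                     # unnormalised importance log-weights
    k = log_w.shape[0]
    return torch.logsumexp(log_w, dim=0) - math.log(k)

# Toy usage with K = 5 particles, as in the paper's training setup.
log_p = torch.randn(5) - 100.0                      # placeholder values of log p(x, z_k)
log_q = torch.randn(5) - 10.0                       # placeholder values of log q(z_k | x)
print(iwae_bound(log_p, log_q))
```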
Firstly, we perform an extensive evaluation on moving MNIST digits, where we show that it can learn to reliably detect, track and generate moving digits (Section 4.1). Moreover, we show that SQAIR can simulate moving objects into the future — an outcome it has not been trained for. We also study the utility of learned representations for a downstream task. Secondly, we apply SQAIR to real-world pedestrian CCTV data from static cameras (DukeMTMC, Ristani et al., 2016), where we perform background subtraction as pre-processing. In this experiment, we show that SQAIR learns to detect, track, predict and generate walking pedestrians without human supervision. 4.1 Moving multi-MNIST The dataset consists of sequences of length 10 of multiple moving MNIST digits. All images are of size 50× 50 and there are zero, one or two digits in every frame (with equal probability). Sequences are generated such that no objects overlap in the first frame, and all objects are present through the sequence; the digits can move out of the frame, but always come back. See Appendix F for an experiment on a harder version of this dataset. There are 60,000 training and 10,000 testing sequences created from the respective MNIST datasets. We train two variants of SQAIR: the MLP-SQAIR uses only fully-connected networks, while the CONV-SQAIR replaces the networks used to encode images and glimpses with convolutional ones; it also uses a subpixel-convolution network as the glimpse decoder (Shi et al., 2016). See Appendix D for details of the model architectures and the training procedure. We use AIR and VRNN (Chung et al., 2015) as baselines for comparison. VRNN can be thought of as a sequential VAE with an RNN as its deterministic backbone. Being similar to a VAE, its latent variables are not structured, nor easily interpretable. For a fair comparison, we control the latent dimensionality of VRNN and the number of learnable parameters. We provide implementation details in Appendix D.3. The quantitative analysis consists of comparing all models in terms of the marginal log-likelihood log pθ(x1:T ) evaluated as the LIWAE bound with K = 1000 particles, reconstruction quality evaluated as a single-sample approximation of Eqφ [log pθ(x1:T | z1:T )] and the KL-divergence between the approximate posterior and the prior (Table 1). Additionally, we measure the accuracy of the number of objects modelled by SQAIR and AIR. SQAIR achieves superior performance across a range of metrics — its convolutional variant outperforms both AIR and the corresponding VRNN in terms of model evidence and reconstruction performance. The KL divergence for SQAIR is almost twice as low as for VRNN and by a yet larger factor for AIR. We can interpret KL values as an indicator of the ability to compress, and we can treat SQAIR/AIR type of scheme as a version of run-length encoding. While VRNN has to use information to explicitly describe every part of the image, even if some parts are empty, SQAIR can explicitly allocate content information (zwhat) to specific parts of the image (indicated by zwhere). AIR exhibits the highest values of KL, but this is due to encoding every frame of the sequence independently — its prior cannot take what and where at the previous time-step into account, hence higher KL. The fifth column of Table 1 details the object counting accuracy, that is indicative of the quality of the approximate posterior. It is measured as the sum of z pres t for a given frame against the true number of objects in that frame. 
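For concreteness, this counting metric amounts to comparing the per-frame sum of the presence variables with the ground-truth object count; a toy sketch with illustrative variable names is given below.

```python
# Toy sketch of the object-counting metric: summed z_pres per frame vs. true count.
import numpy as np

z_pres = np.array([[1, 1, 0],
                   [1, 0, 0],
                   [1, 1, 0]])                      # [frames, max objects], sampled presence
true_counts = np.array([2, 1, 2])                   # ground-truth number of objects per frame

predicted_counts = z_pres.sum(axis=1)               # summed presence variables per frame
counting_accuracy = (predicted_counts == true_counts).mean()
print(predicted_counts, counting_accuracy)          # [2 1 2] 1.0
```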
As there is no zpres for VRNN, no score is provided. Perhaps surprisingly, this metric is much higher for SQAIR than for AIR. This is because AIR mistakenly infers overlapping objects as a single object. Since SQAIR can incorporate temporal information, it does not exhibit this failure mode (cf. Figure 7). Figure 7: Inputs, reconstructions with marked glimpse locations and reconstructed glimpses for AIR (left) and SQAIR (right). SQAIR can model partially visible and heavily overlapping objects by aggregating temporal information. Next, we gauge the utility of the learnt representations by using them to determine the sum of the digits present in the image (Table 1, column six). To do so, we train a 19-way classifier (mapping from any combination of up to two digits in the range [0, 9] to their sum) on the extracted representations and use the summed labels of digits present in the frame as the target. Appendix D contains details of the experiment. SQAIR significantly outperforms AIR and both variants of VRNN on this task. VRNN under-performs due to its inability to disentangle overlapping objects, while both VRNN and AIR suffer from low temporal consistency of learned representations; see Appendix H. Finally, we evaluate SQAIR qualitatively by analyzing reconstructions and samples produced by the model against reconstructions and samples from VRNN. We observe that samples and reconstructions from SQAIR are of better quality and, unlike VRNN, preserve motion and appearance consistently through time. See Appendix H for a direct comparison and additional examples. Furthermore, we examine conditional generation, where we look at samples from the generative model of SQAIR conditioned on three images from a real sequence (see Figure 6). We see that the model can preserve appearance over time, and that the simulated objects follow similar trajectories, which hints at good learning of the motion model (see Appendix H for more examples). Figure 7 shows reconstructions and corresponding glimpses of AIR and SQAIR. Unlike SQAIR, AIR is unable to recognize objects from partial observations, nor can it distinguish strongly overlapping objects (it treats them as a single object; columns five and six in the figure). We analyze failure cases of SQAIR in Appendix G. 4.2 Generative Modelling of Walking Pedestrians To evaluate the model in a more challenging, real-world setting, we turn to data from static CCTV cameras of the DukeMTMC dataset (Ristani et al., 2016). As part of pre-processing, we use standard background subtraction algorithms (Itseez, 2015). In this experiment, we use 3150 training and 350 validation sequences of length 5. For details of model architectures, training and data pre-processing, see Appendix E. We evaluate the model qualitatively by examining reconstructions, conditional samples (conditioned on the first four frames) and samples from the prior (Figure 8 and Appendix I). We see that the model learns to reliably detect and track walking pedestrians, even when they are close to each other. There are some spurious detections and re-detections of the same objects, which is mostly caused by imperfections of the background subtraction pipeline — backgrounds are often noisy and there are sudden appearance changes when a part of a person is treated as background in the pre-processing pipeline. The object counting accuracy in this experiment is 0.5712 on the validation dataset, and we noticed that it does increase with the size of the training set.
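Although the paper only states that standard OpenCV background subtraction was used, a typical pipeline of this kind looks roughly as follows. The MOG2 subtractor, its parameters and the input path below are assumptions for illustration, not the authors' exact pre-processing code.

```python
# Hedged sketch of CCTV pre-processing via background subtraction (OpenCV MOG2).
# "duke_camera1.mp4" is a placeholder path; parameters are common defaults, not the paper's.
import cv2

capture = cv2.VideoCapture("duke_camera1.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

frames = []
while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                      # foreground mask (0 = background)
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    frames.append(cv2.resize(foreground, (64, 48)))     # downscale before feeding the model
capture.release()
```

As the paper notes, imperfections of such a pipeline (noisy backgrounds, body parts absorbed into the background model) propagate into spurious detections downstream.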
We also had to use early stopping to prevent overfitting, and the model was trained for only 315k iterations (> 1M for MNIST experiments). Hence, we conjecture that accuracy and marginal likelihood can be further improved by using a bigger dataset. 5 Related Work Object Tracking There have been many approaches to modelling objects in images and videos. Object detection and tracking are typically learned in a supervised manner, where object bounding boxes and often additional labels are part of the training data. Single-object tracking commonly use Siamese networks, which can be seen as an RNN unrolled over two time-steps (Valmadre et al., 2017). Recently, Kosiorek et al., 2017 used an RNN with an attention mechanism in the HART model to predict bounding boxes for single objects, while robustly modelling their motion and appearance. Multi-object tracking is typically attained by detecting objects and performing data association on bounding-boxes (Bewley et al., 2016). Schulter et al., 2017 used an end-to-end supervised approach that detects objects and performs data association. In the unsupervised setting, where the training data consists of only images or videos, the dominant approach is to distill the inductive bias of spatial consistency into a discriminative model. Cho et al., 2015 detect single objects and their parts in images, and Kwak et al., 2015; Xiao and Jae Lee, 2016 incorporate temporal consistency to better track single objects. SQAIR is unsupervised and hence it does not rely on bounding boxes nor additional labels for training, while being able to learn arbitrary motion and appearance models similarly to HART (Kosiorek et al., 2017). At the same time, is inherently multi-object and performs data association implicitly (cf. Appendix A). Unlike the other unsupervised approaches, temporal consistency is baked into the model structure of SQAIR and further enforced by lower KL divergence when an object is tracked. Video Prediction Many works on video prediction learn a deterministic model conditioned on the current frame to predict the future ones (Ranzato et al., 2014; Srivastava et al., 2015). Since these models do not model uncertainty in the prediction, they can suffer from the multiple futures problem — since perfect prediction is impossible, the model produces blurry predictions which are a mean of possible outcomes. This is addressed in stochastic latent variable models trained using variational inference to generate multiple plausible videos given a sequence of images (Babaeizadeh et al., 2017; Denton and Fergus, 2018). Unlike SQAIR, these approaches do not model objects or their positions explicitly, thus the representations they learn are of limited interpretability. Learning Decomposed Representations of Images and Videos Learning decomposed representations of object appearance and position lies at the heart of our model. This problem can be also seen as perceptual grouping, which involves modelling pixels as spatial mixtures of entities. Greff, Rasmus, et al., 2016 and Greff, Steenkiste, et al., 2017 learn to decompose images into separate entities by iterative refinement of spatial clusters using either learned updates or the Expectation Maximization algorithm; Ilin et al., 2017 and Steenkiste et al., 2018 extend these approaches to videos, achieving very similar results to SQAIR. Perhaps the most similar work to ours is the concurrently developed model of Hsieh et al., 2018. 
The above approaches rely on iterative inference procedures, but do not exhibit the object-counting behaviour of SQAIR. For this reason, their computational complexities are proportional to the predefined maximum number of objects, while SQAIR can be more computationally efficient by adapting to the number of objects currently present in an image. Another interesting line of work is the GAN-based unsupervised video generation that decomposes motion and content (Tulyakov et al., 2018; Denton and Birodkar, 2017). These methods learn interpretable features of content and motion, but deal only with single objects and do not explicitly model their locations. Nonetheless, adversarial approaches to learning structured probabilistic models of objects offer a plausible alternative direction of research. Bayesian Nonparametric Models To the best of our knowledge, Neiswanger and Wood, 2012 is the only known approach that models pixels belonging to a variable number of objects in a video together with their locations in the generative sense. This work uses a Bayesian nonparametric (BNP) model, which relies on mixtures of Dirichlet processes to cluster pixels belonging to an object. However, the choice of the model necessitates complex inference algorithms involving Gibbs sampling and Sequential Monte Carlo, to the extent that any sensible approximation of the marginal likelihood is infeasible. It also uses a fixed likelihood function, while ours is learnable. The object appearance-persistence-disappearance model in SQAIR is reminiscent of the Markov Indian buffet process (MIBP) of Gael et al., 2009, another BNP model. MIBP was used as a model for blind source separation, where multiple sources contribute toward an audio signal, and can appear, persist, disappear and reappear independently. The prior in SQAIR is similar, but the crucial differences are that SQAIR combines the BNP prior with flexible neural network models for the dynamics and likelihood, as well as variational learning via amortized inference. The interface between deep learning and BNP, and graphical models in general, remains a fertile area of research. 6 Discussion In this paper we proposed SQAIR, a probabilistic model that extends AIR to image sequences, and thereby achieves temporally consistent reconstructions and samples. In doing so, we enhanced AIR’s capability of disentangling overlapping objects and identifying partially observed objects. This work continues the thread of Greff, Steenkiste, et al., 2017, Steenkiste et al., 2018 and, together with Hsieh et al., 2018, presents unsupervised object detection & tracking with learnable likelihoods by the means of generative modelling of objects. In particular, our work is the first one to explicitly model object presence, appearance and location through time. Being a generative model, SQAIR can be used for conditional generation, where it can extrapolate sequences into the future. As such, it would be interesting to use it in a reinforcement learning setting in conjunction with ImaginationAugmented Agents (Weber et al., 2017) or more generally as a world model (Ha and Schmidhuber, 2018), especially for settings with simple backgrounds, e. g., games like Montezuma’s Revenge or Pacman. The framework offers various avenues of further research; SQAIR leads to interpretable representations, but the interpretability of what variables can be further enhanced by using alternative objectives that disentangle factors of variation in the objects (Kim and Mnih, 2018). 
Moreover, in its current state, SQAIR can work only with simple backgrounds and static cameras. In future work, we would like to address this shortcoming, as well as speed up the sequential inference process, whose complexity is linear in the number of objects. The generative model, which currently assumes additive image composition, can be further improved by, e.g., autoregressive modelling (Oord et al., 2016). This can lead to higher fidelity of the model and improved handling of occluded objects. Finally, the SQAIR model is very complex, and it would be useful to perform a series of ablation studies to further investigate the roles of different components. Acknowledgements We would like to thank Ali Eslami for his help in implementing AIR, Alex Bewley and Martin Engelcke for discussions and valuable insights, and anonymous reviewers for their constructive feedback. Additionally, we acknowledge that HK and YWT’s research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) ERC grant agreement no. 617071.
1. What is the main contribution of the paper regarding structured image generative models? 2. What are the strengths and weaknesses of the proposed approach compared to prior works in video generation? 3. How does the reviewer assess the significance of the experiments conducted, especially on real-world datasets? 4. What are some minor issues or typos pointed out by the reviewer?
Review
Review The authors propose a temporal extension of the structured image generative model AIR. The model is composed of two parts: the first one, DISC, detects new objects at each time step, and is very similar to AIR. The only difference is that its predictions are conditioned on the updated latent variables of previous frames. The second one, PROP, is responsible for updating or forgetting latent variables given a new image. It does so by employing two RNNs: the first one (temporal) updates the previous latent states. The second one (relational) models relations between objects, and is recurrent over the objects in the frame. The authors show extensive experimentation on a stochastic moving MNIST dataset and qualitative results on a dataset collected from static CCTV cameras. The structure of their model allows them to learn to detect and track objects in an unsupervised fashion, perform video prediction, and use the latent variables for solving simple tasks (addition, counting of objects). The authors propose a compelling and ambitious approach that tackles the extremely important problem of structured generative modeling, where the structure can be used for unsupervised learning of high-level recognition tasks. Incorporating structure in the model to reflect the prior that a scene is composed of objects that should be mainly described by their location and their appearance, and showing that the learned latent variables are interpretable and thus useful in downstream tasks, is an extremely appealing direction. At this point, the approach does not model the background, and has not been shown to be applicable in contexts where the camera is non-static (as background subtraction is necessary), but it constitutes a nice step in this important direction. It is nice that the authors show a real-world application of their method. Unfortunately, the experimental evaluation is rather lacking, and I have several questions in regard to this matter: - the authors should compare to other VAE-flavoured state-of-the-art video generative modelling methods, such as Denton et al. ICML 2018. In particular, the qualitative results of Figure 12 are clearly not state of the art. - how important is the relational RNN? The authors transparently acknowledge that the method is complex and an ablation study would be very useful. It would also be helpful to show in a more tangible manner how this RNN is "implementing the explaining away phenomenon" (l.122). - "All objects are present through the sequence; the digits can move out of the frame, but always come back." l.152, 153 Why impose this constraint on the dataset? What severely limits the model from dealing with cases where the digit does not come back? Also, can the digits appear during the sequence and if not, why not? If this showed failures of the model, it would be interesting to analyse why. - I don't agree with the justifications for not filling in Table 1: a classifier could be learned for VRNN for counting, just like in the addition experiment. And for completion, why not perform these experiments for the MLP-VRNN; it is better in terms of KL divergence than its CONV counterpart. - "We will show that the resulting model, which we name Sequential AIR (Sequential Attend, Infer, Repeat (SQAIR)), retains the strengths of the original AIR formulation while outperforming it in both synthetic and real-world scenarios." I did not find the results for AIR on the CCTV dataset.
- can you give a qualitative interpretation of the fact that the reconstruction metric of MLP-AIR is better than MLP-SQAIR? - Figure 10: why are there two lines of entirely black inputs? - "which signifies that the latent representation can be allocated to regions of the frames that need it, as opposed to describing the whole scene" l.170 This is not clear to me. Cumulatively, this leaves the reader a little bit unsatisfied (to summarize: missing results, no ablation study, missing comparison to an important state-of-the-art paper, main results on a toyish dataset that could be slightly less toy). Also, it would strengthen the paper a lot to show a more thorough and quantitative evaluation on the real-world dataset. This is what leads me to the current decision of "Marginally below the acceptance threshold." Regarding clarity of the method, the paper is generally well written, but I would like to have the following clarifications: - l.118 you specify that the temporal RNN has no access to the image. Then in appendix D, equation (14), as well as Figure 3, imply the opposite. Can you clarify? - Figure 6 is ambiguous: are the lower lines predictions or reconstructions? Finally, a note that for this type of problem, accompanying videos are much appreciated to ease the qualitative evaluation of the samples and reconstructions, especially to appreciate temporal consistency. Typos / errors: - then should be than l.53 and l.188 - l.59 in the AIR model, n is at most N (a fixed hyperparameter of the method) - l.65: according to Figure 1, f_theta^dec's input is z_t^what,i and output is g_t^i. Calling the decoder a "glimpse decoder" seems to imply some form of attention mechanism on the generated image, which is not the case. Also, z^where should be indexed with i. - l.97 vector valued should be -valued vector - l.103 logical flow of text would require that it is also explained that z^what and z^where are sampled during the propagation step - images l.151 should be digits - l.160 Being similar to a VAE. -------- Assessment after the rebuttal -------- The authors have addressed most of my concerns. The experimental protocol is still slightly lacking, as they do not experimentally validate the impact of using a relational RNN on top of the temporal RNN; and I still think that the toy dataset should have had certain properties to test the strengths of the proposed model (like appearing and disappearing digits). However, like the authors and the other reviewers, I am convinced of the importance of the problem addressed, and I acknowledge that the authors have proposed a compelling and ambitious approach to tackle this problem. Therefore, with the clarifications and improvements that the authors have promised to bring, I now think that this paper is good and should be accepted.
NIPS
Title Generalized Laplacian Eigenmaps Abstract Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. COLES, a recent graph contrastive method combines traditional graph embedding and negative sampling into one framework. COLES in fact minimizes the trace difference between the within-class scatter matrix encapsulating the graph connectivity and the total scatter matrix encapsulating negative sampling. In this paper, we propose a more essential framework for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), which learns a graph representation by maximizing the rank difference between the total scatter matrix and the within-class scatter matrix, resulting in the minimum class separation guarantee. However, the rank difference minimization is an NP-hard problem. Thus, we replace the trace difference that corresponds to the difference of nuclear norms by the difference of LogDet expressions, which we argue is a more accurate surrogate for the NP-hard rank difference than the trace difference. While enjoying a lesser computational cost, the difference of LogDet terms is lower-bounded by the Affine-invariant Riemannian metric (AIRM) and upper-bounded by AIRM scaled by the factor of √ m. We show on popular benchmarks/backbones that GLEN offers favourable accuracy/scalability compared to state-of-the-art baselines. N/A accuracy/scalability compared to state-of-the-art baselines. 1 Introduction Laplacian Eigenmaps [3] and IsoMap [36] are graph embedding methods that reduce the dimensionality of data by assuming the data exists on a low-dimensional manifold. The objective function in such models encourages node embeddings to lie near each other in the embedding space if nodes are close to each other in the original space. While the classical methods capture the related node pairs, they neglect modeling unrelated node pairs. In contrast, modern graph embedding models such as [35, 10, 44] and Graph Contrastive Learning (GCL) [37, 56, 11, 57, 55] are unified under the (Sampled) Noise Contrastive Estimation framework, called (Sampled)NCE [27, 23]. Most of GCL methods do not incorporate the graph information into the loss but follow the setting from computer vision, i.e., they assume that randomly drawn pairs should be dissimilar, whereas the original sample and its augmentations should be similar [39]. In contrast, COntrastive Laplacian EigenmapS (COLES) [55] is a framework which combines a (graph) neural network with Laplacian eigenmaps utilizing the graph Laplacian matrix within a contrastive loss. Based on the NCE framework, COLES minimizes the trace difference of Laplacians. In this paper, we analyze the relation among within-class, between-class and total scatter matrices under the rank inequality, and prove that, under a simple assumption, the distance between any dissimilar (negative) samples would be greater/equal than the inter-class distance between their corresponding class centers. Based on such a condition, we derive GLEN, a reformulation of graph embedding into a rank difference problem, which is a more general framework than other graph *The corresponding author. Code: https://github.com/allenhaozhu/GLEN. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
embedding frameworks, i.e., under specific relaxations of the rank difference problem, we can recover different frameworks.To that end, we demonstrate how to optimize the rank difference problem with a difference of LogDet expressions, a differentiable relaxation suitable for use with (graph) neural networks. We consider other surrogates of the rank difference problem, based on the Nuclear norm, γ-nuclear norm, Schatten norm, and the Geman norm. Moreover, we provide theoretical considerations regarding the low-rank optimization and connection to the Riemannian manifold in order to interpret our approach. In summary, our contributions are threefold: i. We propose a rank-based condition connecting within-class, between-class and total scatter matrices under which we provide the minimum class separation guarantee. We propose a loss function, Generalized Laplacian EigenNaps (GLEN), that realizes this condition. ii. As the rank difference problem is NP-hard, we consider a difference of LogDet surrogate to learn node embeddings, as opposed to the trace difference (an upper bound of the difference of LogDet terms) used by other graph embedding models. We also consider other surrogates. iii. We study the distance between symmetric positive (semi-)definite matrices and the LogDet-based relaxation of GLEN. While enjoying fewer computations, the difference of LogDet terms of GLEN enjoys the Affine-invariant Riemannian metric (AIRM) for a lower bound and AIRM scaled by √ m as an upper bound. We explain how GLEN connects to other graph embeddings. 2 Related Works Graph Embeddings. By assuming that the data lies on a low-dimensional manifold, graph embedding methods such as Laplacian Eigenmaps [3] and IsoMap [36] optimize low-dimensional data embeddings. These methods [5] construct a similarity graph by measuring the similarity of high-dimensional feature vectors and embed the nodes into a low-dimensional space. DeepWalk [31] uses truncated random walks to explore the graph structure, and the skip-gram model for word embedding to determine the embedding vectors of nodes. By setting the walk length to one and using negative sampling [26], LINE [35] explores a similar idea with an explicit objective function while REFINE [52] imposes additional orthogonality constraints which deem REFINE extremely fast. Node2Vec [9] interpolates between breadth- and depth-first sampling. COLES [55] unifies traditional graph embedding and negative sampling by introducing a positive contrastive term that captures the graph structure, and a negative contrastive random sampling. COLES solves the trace difference problem akin to traditional graph embedding models [43]. In this paper, we propose a more general loss for graph embedding, i.e., COLES solves the trace difference (Nuclear norms difference) relaxation of GLEN. Graph embedding techniques [43] provide a general framework for dimensionality reduction such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projections (LPP) [12]. All methods within this category can be considered as solving the same problem under varying assumptions, i.e., maximising the intra- and inter-class separation by optimizing the trace difference, also used in metric learning [22]. However, such a family of objective functions is not motivated by the guarantee on the minimum class separation between feature vectors from different categories. 
GLEN, in its purest NP-hard form, provides the minimum class separation guarantee and can be realised by several formulations depending on chosen trade-offs. Unsupervised Representation Learning for Graph Neural Networks (GNN). Unsupervised GNN training can be reconstruction-, contrastive- or diffusion-based. To train a graph encoder in an unsupervised manner, GCN [17] minimizes a reconstruction error which only considers the similarity matrix and ignores the dissimilarity information. At various scales of the graph, contrastive methods determine the positive and negative sets. For example, local-local CL and global-local CL strategies are highly popular. GraphSAGE [10], inspired by DeepWalk [31], uses the contrastive loss which encourages neighbor nodes to have similar representations, while preserving dissimilarity between representations of disparate nodes. DGI [37], inspired by Deep InfoMax (DIM) [13], uses an objective with global-local sampling strategy to maximize the Mutual Information (MI) between global and local graph embeddings. Augmented Multiscale Deep InfoMax (AMDIM) [2] maximizes MI between multiple data views. MVRLG [11] contrasts encodings from first-order neighbors and a graph diffusion. Fisher-Bures Adversary GCN [34] treats the graph as generated w.r.t. some observation noise. COSTA [50] constructs the views by injecting features with the random noise. However, such contrastive approaches often require thousands of epochs to converge and perform well. In addition, many contrastive losses have an exponential increase in memory overhead w.r.t. the number of nodes. In contrast, our method does not explicitly use the local-local setting but the total scatter matrix, and thus saves computational and storage cost. Linear GNNs, i.e., SGC [42] and S2GC [53], capture the neighborhood and increasingly larger neighborhoods of each node due to the diffusion, respectively. SGC and S2GC have no projection layer, thus the size of embeddings is equal to the input dimension. GLEN can learn a projection layer in an unsupervised manner with a linear function or Multi-Layer Perceptron (MLP) applied to linear GNNs or any other GNN models [18, 34, 49], etc. 3 Preliminaries Notations. Let G=(V,E) be a simple, connected and undirected graph with n= |V | nodes and m= |E| edges. Let i ∈ {1, · · · , n} be the node index of G, and dj be the degree of node j of G. Let W be the adjacency matrix, and D be the diagonal matrix containing degrees of nodes. Let X ∈ Rn×d denote the node feature matrix where each node v is associated with a feature vector xv ∈ Rd. Let the normalized graph Laplacian matrix be defined as L = I− W̃ ∈ Sn+, a symmetric positive semi-definite matrix and W̃ = D−1/2WD−1/2. Sm+(+) is a set of symmetric positive (semi)definite matrices. Let Z = fΘ(X) ∈ Rn×m be a generalized node embedding, i.e., X could be identity matrix (e.g., no node attributes), fΘ(X) could be GNN or a linear function with parameters Θ. Scalars/vectors/matrices are denoted by lowercase regular/lowercase bold/uppercase bold fonts. 3.1 Scatter Matrices Below are given standard definitions of scatter matrices, including the total scatter matrix St ∈ Sm+(+), the within-class matrix Sw ∈ Sm+(+), and between-class matrix Sb ∈ Sm+(+): St = n∑ i=1 (zi − z̄) (zi − z̄)⊤ = Z⊤ ( I− W̃t ) Z where W̃t = 1 n ee⊤, Sw = n∑ i=1 (zi − µyi) (zi − µyi) ⊤ = Z⊤ ( I− W̃w ) Z where W̃w = C∑ c=1 1 nc ecec⊤, Sb = C∑ c=1 nc (µc − z̄) (µc − z̄)⊤ . 
(1) Let e be an n-dimensional vector with all coefficients equal one, I be an identity matrix, St be the total scatter (covariance) matrix, and z̄ ∈ Rm be the mean of all samples. Let µyi ∈ Rm be the class center of the i-th sample and µc ∈ Rm be the c-th class center. Let the total number of categories be given by C, whereas nc be the number of samples for the c-th category. Let ec ∈ Rn be a vector where a given coefficient indexed by node is equal one if its node is of class c, otherwise it is equal zero. We note that both St and Sw can take a form akin to Laplacian eigenmaps such that W̃t and W̃w are the corresponding normalized adjacent matrices. Let us also define graph Laplacian matrices Lt = I− W̃t ∈ Sn+ and Lw = I− W̃w ∈ Sn+ which will be used in the sequel. Importantly, let us assume that a graph Laplacian matrix L containing graph links could be seen as a noisy version of Lw in which all nodes of a given class c connect under the weight equal 1/nc. Observe that St = Sw + Sb. Thus, Rank(St) ≤ Rank(Sw) + Rank(Sb) due to the rank inequality. Below we highlight the condition underpinning the subsequent motivation: Condition 1. Rank(St) = Rank(Sw) + Rank(Sb). 3.2 Motivation Figure 1 shows some three optimal solutions for Condition 1. The rank of between-class scatter matrix Sb for the whole dataset is at most C − 1 (where C is the number of classes). Since Rank(AB) ≤ min(Rank(A),Rank(B)), we have† Rank(S−1w Sb) ≤ Rank(Sb) ≤ C − 1. The rank is the number of non-zero eigenvalues of a matrix so S−1w Sb has at most C − 1 non-zero eigenvalues. Condition 1 implies that Rank(S−1w Sb) = 0 results in the minimum class separation guarantee under that condition. Theorem 1. Let the feature dimension be larger than the class number (i.e., m > C) and Condition 1 hold. Then, the minimum class separation is equal to the distance between class centers. In other words, the distance between any two vectors zi and zj with labels yi ̸= yj is greater/equal the distance between class centers µyi and µyj : ∥µyi − µyj∥2 ≤ ∥zi − zj∥2, ∀yi ̸= yj , i, j ∈ {1, · · · , C}. (2) Proof. As Sw is the orthogonal complement of Sb, i.e., S−1w Sb = 0, Sw + Sb = UΣU ⊤, Sw = U1:kΣ1:kU ⊤ 1:k and Sb = Uk+1:mΣk+1:mU ⊤ k+1:m where 1 ≤ k < m. Let zi = µyi +U⊤ϵi where ϵi is the representation under the basis U and ϵ(k+1:m),i = 0 because only top k components 1 :k represent Sw. Thus, the orthogonal projection Uk+1:m fulfills ∥Uk+1:m(zi − zj)∥2 ≤ ∥zi − zj∥2. Moreover, Uk+1:m(zi − µyi) = Uk+1:m(U⊤ϵi) = 0. That is, all {zi : yi = c} are projected onto the mean µc. Thus, the inequality in Eq. 2 holds. Theorem 1 guarantees the worst inter-class distance§. Figure 1 shows some cases that meet Condition 1. Figure 1a shows the case for which the class centers collapse to a single point and thus the inter-class distance equals zero (collapse of the feature space). Figures 1b and 1c show other cases. 4 Methodology Condition 1 points to a promising research direction in learning discriminative feature spaces. However, optimizing over the rank is NP-hard and non-differentiable. In what follows, we provide the formulation of Generalized Laplacian EigeNmaps (GLEN) and its relaxation, which is differentiable. 4.1 Generalized Laplacian Eigenmaps As solving Condition 1 is NP-hard, we propose a relaxation where Rank(St) is encouraged to be as large as possible (bounded by the feature dimension m). On the contrary, if Rank(St) ≈ Rank(Sb) then the small Rank(Sw) limits the feature diversity. 
In the extreme case, if Rank(Sw) = 0, the feature representation collapses. Larger $0 < \mathrm{Rank}(\mathbf{S}_b) \le C-1$ improves the inter-class diversity. We propose a new Generalized Laplacian EigeNmaps (GLEN) framework for unsupervised network embedding. In the most general form, GLEN maximizes the difference of rank terms: $\Theta^* = \arg\max_{\Theta}\; \mathrm{Rank}\big(\mathbf{S}_t(f_{\Theta}(\mathbf{X}))\big) - \mathrm{Rank}\big(\mathbf{S}_w(f_{\Theta}(\mathbf{X}))\big)$. (3) As the general matrix Rank Minimization Problem (RMP) [7] is NP-hard and so is the difference of rank terms in Eq. 3, we relax this problem by the difference of LogDet terms that serve as a surrogate of the NP-hard problem. Appendix I derives GLEN from the SampledNCE framework. †We write $\mathbf{S}_w^{-1}$ but if $\mathbf{S}_w$ is rank-deficient, the inverse is replaced with the Moore–Penrose inverse (pseudo-inverse). §Other graph embedding models that maximize/minimize inter-/intra-class distances have no such guarantees. GLEN (LogDet relaxation). I. Define: $\delta(\mathbf{S}_t, \mathbf{S}_w; \alpha, \lambda) = \log\det(\mathbf{I} + \alpha\mathbf{S}_t) - \lambda \log\det(\mathbf{I} + \alpha\mathbf{S}_w)$, (4) where $\lambda \ge 0$ controls the impact of $\log\det(\mathbf{S}_w)$. If $\lambda = 0$, $\delta(\cdot)$ encourages $\mathrm{Rank}(f_{\Theta}(\mathbf{X})) = m$. II. Let $\mathbf{S}_t = f_{\Theta}(\mathbf{X})^{\top}\mathbf{L}_t f_{\Theta}(\mathbf{X})$ and $\mathbf{S}_w = f_{\Theta}(\mathbf{X})^{\top}\mathbf{L}_w f_{\Theta}(\mathbf{X})$. Then the LogDet relaxation becomes: $\Theta^* = \arg\max_{\Theta}\; \log\det\big(\mathbf{I} + \alpha f_{\Theta}(\mathbf{X})^{\top}\mathbf{L}_t f_{\Theta}(\mathbf{X})\big) - \log\det\big(\mathbf{I} + \alpha f_{\Theta}(\mathbf{X})^{\top}\mathbf{L}_w f_{\Theta}(\mathbf{X})\big)$, (5) where $\mathbf{I}$ ensures $\mathbf{I} + \alpha f_{\Theta}(\mathbf{X})^{\top}\mathbf{L} f_{\Theta}(\mathbf{X}) \succ 0$, as $f_{\Theta}(\mathbf{X})^{\top}\mathbf{L} f_{\Theta}(\mathbf{X})$ may only be positive semi-definite, leading to $\det(f_{\Theta}(\mathbf{X})^{\top}\mathbf{L} f_{\Theta}(\mathbf{X})) = 0$. Thus, we use $\log\det(\mathbf{I} + \alpha\mathbf{S})$ as a smooth surrogate for $\mathrm{Rank}(\mathbf{S})$. Proposition 1. Let $\sigma(\mathbf{S})$ be the vector of eigenvalues of matrix $\mathbf{S} \in \mathcal{S}^m_{+(+)}$, and $\mathrm{Eig}(\mathbf{S})$ be a diagonal matrix with $\sigma(\mathbf{S})$ as its diagonal. Let $\mathbf{S}, \mathbf{S}' \in \mathcal{S}^m_{+(+)}$ and $\alpha > 0$. Then, $\delta(\mathbf{S}, \mathbf{S}'; \alpha, \lambda) = \delta(\mathrm{Eig}(\mathbf{S}), \mathrm{Eig}(\mathbf{S}'); \alpha, \lambda)$, i.e., $\delta(\cdot)$ depends on eigenvalues rather than eigenvectors of $\mathbf{S}$ and $\mathbf{S}'$. Proof. The proof follows from the equality $\det(\mathbf{I} + \alpha\mathbf{S}) = \prod_i \sigma_i(\mathbf{I} + \alpha\mathbf{S}) = \prod_i (1 + \alpha\sigma_i(\mathbf{S})) = \det(\mathbf{I} + \alpha\,\mathrm{Eig}(\mathbf{S}))$. Thus $\delta(\mathbf{S}, \mathbf{S}'; \alpha, \lambda) = \log\det(\mathbf{I} + \alpha\mathbf{S}) - \lambda\log\det(\mathbf{I} + \alpha\mathbf{S}') = \log\det(\mathbf{I} + \alpha\,\mathrm{Eig}(\mathbf{S})) - \lambda\log\det(\mathbf{I} + \alpha\,\mathrm{Eig}(\mathbf{S}')) = \delta(\mathrm{Eig}(\mathbf{S}), \mathrm{Eig}(\mathbf{S}'); \alpha, \lambda)$.
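A minimal sketch of the LogDet relaxation in Equations (4)-(5) is given below, assuming an embedding matrix Z and the Laplacians Lt and Lw defined in Section 3.1 (with the graph Laplacian used as a noisy stand-in for Lw, as discussed earlier). The released GLEN code (github.com/allenhaozhu/GLEN) may organise the computation differently; this is an illustration of the loss only.

```python
# Minimal PyTorch sketch of the GLEN LogDet loss: -(logdet(I + a*St) - lam*logdet(I + a*Sw)).
import torch

def glen_logdet_loss(Z, L_t, L_w, alpha=1.0, lam=1.0):
    """Negative of the GLEN objective, suitable for minimisation with a standard optimiser."""
    m = Z.shape[1]
    eye = torch.eye(m, device=Z.device)
    S_t = Z.T @ L_t @ Z                              # total scatter in Laplacian form
    S_w = Z.T @ L_w @ Z                              # within-class (graph) scatter
    delta = torch.logdet(eye + alpha * S_t) - lam * torch.logdet(eye + alpha * S_w)
    return -delta                                    # maximising delta == minimising -delta

# Toy usage: n = 6 nodes, m = 3 embedding dimensions, a random undirected graph.
n, m = 6, 3
Z = torch.randn(n, m, requires_grad=True)
e = torch.ones(n, 1)
L_t = torch.eye(n) - (e @ e.T) / n                   # Lt = I - ee^T / n
A = (torch.rand(n, n) > 0.5).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(0)
d = A.sum(1).clamp(min=1.0)
L_w = torch.eye(n) - A / torch.sqrt(d[:, None] * d[None, :])   # I - D^{-1/2} A D^{-1/2}
loss = glen_logdet_loss(Z, L_t, L_w, alpha=0.1)
loss.backward()
print(float(loss), Z.grad.shape)
```

In practice Z would be the output of a (graph) neural network or a linear projection of diffused features, and the gradient of the loss would be backpropagated into its parameters.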
Given an embedding matrix fΘ(X) ∈ Rn×m, a fixed small constant α > 0, we have the following inequality: log det (I+ αSt)− log det (I+ αSw) < αTr(St − Sw). (7) Proof. log det (I+αSt)−log det (I+ αSw) = log det (I+αEig(St))−log det (I+ αEig(Sw)) = Tr (log(I+ αEig(St)− log(I+ αEig(Sw)) < αTr(St − Sw). (8) Proposition 2 is also related to the inequality Rank(S) ≤ log det(I+ S) ≤ Tr(S) [7]. 5.2 Distance between Symmetric Positive (Semi-)Definite Matrices. Below, we provide a perspective on non-Euclidean distances between matrices from Sm+(+) to compare the proposed method with other graph embeddings, e.g., Laplacian Eigenmaps [3] and COLES [55]. For clarity, we also reformulate the Laplacian eigenmaps and COLES into forms in Prop. 3 and 4. Proposition 3. Laplacian Eigenmaps [3] method equals to maximizing the Frobenius norm: Θ∗ = argmax Θ ∥fΘ(X)fΘ(X)⊤ − Lw∥2F , s.t. fΘ(X)⊤fΘ(X) = I. (9) Proposition 4. Contrastive Laplacian Eigenmaps [55] equals to maximizing the difference of Frobenius norm terms: Θ∗=argmax Θ ∥fΘ(X)fΘ(X)⊤−Lw∥2F−∥fΘ(X)fΘ(X)⊤−Lt∥2F , s.t. fΘ(X)⊤fΘ(X) = I. (10) Proof. ∥fΘ(X)fΘ(X)⊤− L∥2F = Tr(fΘ(X)fΘ(X)⊤fΘ(X)fΘ(X)⊤− 2fΘ(X)LfΘ(X)⊤ + L⊤L) = constant− 2Tr(fΘ(X)LfΘ(X)⊤) ≥ 0. (11) Note that Eq. 9 encourages the linear kernel matrix fΘ(X)fΘ(X)⊤ to be close to W̃w while Eq. 10 encourage the linear kernel matrix to be far from the W̃w at the same time. Our loss follows the non-Euclidean geometry. Below, we demonstrate the relation of Eq. 4 to the Affine-invariant Riemannian metric (AIRM). Indeed, our loss function is bounded from both sides by AIRM and AIRM scaled by √ m respectively. Proposition 5. Let σ(S) be the vector of eigenvalues of S, for any matrix St,Sw ∈ Sm+(+), we have: ∥ log((I+ St)−1/2(I+ Sw)(I+ St)−1/2)∥F ≤ log det(I+ St)− log det(I+ Sw) ≤ √ m∥ log((I+ St)−1/2(I+ Sw)(I+ St)−1/2)∥F . (12) Proof. Given A = I+ St and B = I+ Sw, we have: log det(A)− log det(B) = log(det(A) det(B−1)) = log(det(A) det(B−1/2) det(B−1/2)) = Tr log(B−1/2AB−1/2). (13) We have Tr(A) = ∥σ(A)∥1, ∥A∥F = ∥σ(A)∥2 and ∥x∥2 ≤ ∥x∥1 ≤ √ m∥x∥2. Thus, Eq. 4 is trying to find a mapping function maximizing an approximation of AIRM distance between the total scatter matrix and the within-class matrix. 5.3 Relationship of the LogDet model to the Schatten norm Below we demonstrate the relationship between the LogDet, Trace and Rank operators, respectively, under the Schatten norm [28] framework. Essential is the following family of objective functions: fα,γ(S) = 1 c m∑ i=1 log (ασi(S) + γ) = log det (αS+ γI) , α, γ ≥ 0, (14) where σi(S), i = 1, . . . ,m, are the eigenvalues of either St ∈ Sm+(+) or Sw ∈ Sm+(+), which are the total scatter matrix and the within scatter matrix from our experiments, respectively. Moreover, we define a normalization constant c where c = 1 or c = log(α+ γ) as detailed below. Given c = 1, we have: lim p→0 Spγ,p(S)−m p = f1,γ(S) where Sγ,p(S) = ( m∑ i=1 (σi(S) + γ) p ) )1/p . (15) From the asymptotic analysis, we conclude that the LogDet is an arbitrarily accurate rational approximation of ℓ0 (the so-called pseudo-norm counting non-zero elements) over the eigenvalues of S. The case p = 1 yields the nuclear norm (trace) which makes the ‘smoothed’ rank difference of GLEN become equivalent of COLES. The opposing limit case, denoted as p = 0 recovers the LogDet formula. One can also recover the exact Rank from the LogDet formulation by: lim α→∞ fα,1(S) = Rank(S) if c = log(1 + α). 
(16) This is apparent because: lim α→∞ log(1 + ασi) log(1 + α) = 1 if σi > 0 and lim α→∞ log(1 + ασi) log(1 + α) = 0 if σi = 0. (17) 6 Experiments We evaluate GLEN (its relaxation) on transductive and inductive node classification tasks and node clustering. GLEN is compared to popular unsupervised, contrastive, and (semi-)supervised approaches. Except for the classifier, unsupervised models do not use labels. To learn similarity/dissimilarity, contrastive models employ the contrastive setting. Labels are used to train the projection layer and classifier in semi-supervised models. A fraction of nodes (i.e., 5 or 20 per class) used for training are labeled for semi-supervised setting. A SoftMax classifier is used for (semi-)supervised models, while a logistic regression classifier is used for unsupervised and contrastive approaches. See Appendix E for implementation details. Datasets. GLEN is evaluated on four citation networks: Cora, Citeseer, Pubmed, Cora Full [17, 4] for transductive setting. We also employ the large scale Ogbn-arxiv from OGB [14]. See Appendix D for details of datasets. Metrics. As fixed data splits [45] often on transductive models benefit models that overfit, we average results over 50 random splits for each dataset. We evaluate performance for 5 and 20 samples per class. Nonetheless, we also evaluate our model on the standard splits. Baseline models. We group baseline models into unsupervised, contrastive and (semi-)supervised methods, and implement them in the same framework/testbed. Contrastive methods include DeepWalk [31], GCN+SampledNCE developed as an alternative to GraphSAGE+SampledNCE [10], Graph2Gauss [4], SCE [47], DGI [37], GRACE [56], GCA [57], GraphCL [46] and COLES [55], which are our main competitors. Note that GRACE, GCA and GraphCL are based on multi-view and data augmentation, and GraphCL is mainly intended for graph classification. We do not study graph classification as it requires advanced node pooling with mixed- or high-order statistics [40, 19, 20]. We compare results with representative (semi-)supervised GCN [17], GAT [37] and MixHop [1] models. SGC and S2GC are unsupervised spectral filter networks. They do not have any learnable parameters. COLES and GLEN could be regarded as dimension reduction techniques for SGC and S2GC, thus we compare them to PCA-S2GC and RP-S2GC, which use PCA and random projections to obtain the projection layer. We set hyperparameters based on the settings described in prior papers. 6.1 Transductive Learning In this section, we consider transductive learning where all nodes are available in the training process. COLES vs. GLEN. Table 1 shows the performance of GLEN vs. COLES on two different backbones, i.e., GCN and S2GC. On both backbones, GLEN shows non-trivial improvements on all four datasets. GLEN-S2GC outperforms the COLES by up to 4.6%. Table 2 evaluates GLEN on Cora, Citeseer, PubMed on the standard splits instead of the random splits. See Appendix G for comparisons to additional contrastive learning frameworks. Contrastive Embedding Baselines vs. GLEN. Table 1 shows that GLEN-GCN and GLEN-S2GC outperform unsupervised models. In particular, GLEN-GCN outperforms GCN+SampledNCE on all four datasets, which shows that GLEN has an advantage over the SampledNCE framework. In addition, GLEN-S2GC outperforms the best contrastive baseline DGI by up to 3.4%. On Cora with 5 training samples, GLEN-S2GC outperforms S2GC by 6.8%. 
Semi-supervised GNNs vs. GLEN. Table 1 shows that the contrastive GCN baselines perform worse than their semi-supervised variants, especially when 20 labeled samples per class are available. In contrast, GLEN-GCN outperforms the semi-supervised GCN on Cora by 10% and 3.4% given 5 and 20 labeled samples per class, respectively. GLEN-GCN also outperforms GCN on Citeseer and Pubmed by 9.9% and 5.2% given 5 labeled samples per class. These results show the superiority of GLEN on four datasets when the number of samples per class is 5. Even for 20 labeled samples per class, GLEN-S2GC outperforms the best semi-supervised baselines on all four datasets, e.g., by 3.3% on Cora. Semi-supervised models (e.g., GAT and MixHop) are affected by the low number of labeled samples, which is consistent with [25]. The accuracy of GLEN-GCN and GLEN-S2GC is unaffected.

Unsupervised GNNs vs. GLEN. SGC and S2GC are unsupervised linear networks based on spectral filters which do not use labels (except for the classifier). As a dimension reduction method, GLEN helps both methods reduce the dimension and achieve discriminative features. Table 1 shows that GLEN-S2GC outperforms RP-S2GC and PCA-S2GC under the same projection size. GLEN-S2GC also outperforms the unsupervised S2GC baseline (high-dimensional representation).

6.2 Node Clustering

We compare GLEN-GCN and GLEN-S2GC with three types of clustering methods: i. Methods that use only node features, e.g., k-means and spectral clustering (spectral-f), which construct a similarity matrix from the node features with a linear kernel. ii. Structural clustering methods that use only the graph structure: spectral clustering (spectral-g), which takes the graph adjacency matrix as the similarity matrix, and DeepWalk [31]. iii. Attributed graph clustering methods that use both node features and the graph: Graph Autoencoder (GAE), Variational Graph Autoencoder (VGAE) [17], Adversarially Regularized Graph Autoencoder (ARGE), Adversarially Regularized Variational Graph Autoencoder (ARVGE) [30], SGC [42], S2GC [53] and COLES [55]. We measure and report the clustering Accuracy (Acc), Normalized Mutual Information (NMI) and macro F1-score (F1). We run each method 10 times on Cora, CiteSeer and PubMed. We set the number of propagation steps to 8 for SGC, S2GC and COLES-S2GC, following [48]. Table 4 shows that GLEN-S2GC outperforms other methods in all cases, whereas GLEN-GCN outperforms COLES-GCN, COLES-GCN (Stiefel) and the contrastive GCN on all datasets.

6.3 Comparison of Surrogates of Rank

Table 5 shows results for four additional surrogates of Rank(S):
• Nuclear norm: R_{NN}(S) = \sum_i \sigma_i(S).
• γ-nuclear norm [16]: R_{\gamma\text{-NN}}(S) = \sum_i \frac{(1+\gamma)\sigma_i(S)}{\gamma+\sigma_i(S)}.
• S_p norm [28]: R_{S_p}(S) = \sum_i \sigma_i(S)^p.
• Geman norm [8]: R_{\text{Geman}}(S) = \sum_i \frac{\sigma_i(S)}{\gamma+\sigma_i(S)}.

6.4 Transductive One-shot Learning on Image Classification Datasets

The most common setting in FSL is the inductive setting. In such a scenario, only samples in the support set can be used to fine-tune the model or learn a function for the inference of query labels. In contrast, in the transductive scenario, the model has access to all the query data (unlabeled) that needs to be classified. EASE [54] is a transductive few-shot learner for so-called episodic image classification. Given a feature matrix Z ∈ R^{n×m} from a CNN backbone (ResNet-12), EASE minimizes \operatorname{Tr}(U Z^\top L_w Z U^\top) - \operatorname{Tr}(U Z^\top L_t Z U^\top) (subject to U U^\top = I) in order to learn a linear projection U. We extend GLEN to EASE and learn the linear projection U by minimizing \log\det(U Z^\top L_w Z U^\top) - \log\det(U Z^\top L_t Z U^\top) (subject to U U^\top = I). We also apply the S_p norm instead of LogDet.
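The GLEN-EASE projection step just described can be sketched with a simple projected-gradient loop. This is our own illustrative heuristic (Adam updates followed by a QR retraction onto the orthogonality constraint, plus a small ridge eps*I to keep the log-determinants finite), not necessarily the optimizer used by the paper or by EASE.

```python
import torch

def learn_projection(Z, L_w, L_t, k=16, lr=1e-2, steps=200, eps=1e-4):
    """Learn U with U U^T = I minimizing logdet(U S_w U^T) - logdet(U S_t U^T)."""
    S_w = Z.T @ L_w @ Z                          # (m, m) within-class scatter
    S_t = Z.T @ L_t @ Z                          # (m, m) total scatter
    # Initialize U of shape (k, m) with orthonormal rows.
    U0 = torch.linalg.qr(torch.randn(Z.shape[1], k, dtype=Z.dtype, device=Z.device))[0].T
    U = U0.clone().requires_grad_(True)
    opt = torch.optim.Adam([U], lr=lr)
    ridge = eps * torch.eye(k, dtype=Z.dtype, device=Z.device)  # keeps logdet finite
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.logdet(U @ S_w @ U.T + ridge) - torch.logdet(U @ S_t @ U.T + ridge)
        loss.backward()
        opt.step()
        with torch.no_grad():                    # retract back onto U U^T = I via QR
            Q, _ = torch.linalg.qr(U.T)
            U.copy_(Q.T)
    return U.detach()
```

In the few-shot episode, Z would hold the ResNet-12 features of the support and query samples, and the subsequent clustering (soft k-means in our variant) would run on the projected features Z U^⊤.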
Table 6 shows the results of EASE based on the LogDet and the S_p-norm based relaxations of GLEN. For simplicity, we use soft k-means rather than the Sinkhorn k-means of the EASE pipeline. Please refer to EASE [54] for the experimental setup of one-shot learning. We evaluate our approach on four few-shot classification benchmarks: mini-ImageNet [38], tieredImageNet [32], CUB [41], and CIFAR-FS [21]. The performance numbers are given as accuracy (%), and the 0.95 confidence intervals are reported. We use a publicly available ResNet-12 [29] pre-trained on the base-class training set.

Scalability. GraphSAGE and DGI require neighbor sampling with redundant forward/backward steps (long runtime). In contrast, GLEN-S2GC enjoys a simple implementation with low memory usage and low runtime. For graphs with over 100 thousand nodes and 10 million edges (Reddit), GLEN runs fast on an NVIDIA 1080 GPU. Even on larger graph benchmarks, GLEN is fast, as it optimizes the total scatter and within-class matrices, whose size depends on the embedding size rather than the number of nodes. The runtime of GLEN-S2GC is also favourable in comparison to the multi-view augmentation-based GraphCL. Specifically, GLEN-S2GC took 0.54s, 0.3s, 5.3s and 15.4s on Cora, Citeseer, Pubmed and Cora Full, respectively, whereas GraphCL took 110.19s, 101.0s, ≥ 8h and ≥ 8h, respectively. Although the LogDet difference is somewhat slower than the trace difference in forward/backward propagation, it converges faster, thus enjoying a similarly low runtime.

7 Conclusions

In this paper, we model contrastive learning as a rank difference problem that approximates the condition that the rank of the total scatter matrix should equal the sum of the ranks of the within-class and between-class scatter matrices. We relax this NP-hard problem with a differentiable difference of LogDet terms. We also present two perspectives on GLEN and existing methods, based on low-rank optimization and on distances between symmetric positive (semi-)definite matrices. In the low-rank optimization view, we explain why the LogDet difference is a better surrogate for the rank difference than the trace difference. We also show that our solution encourages the linear kernel of the embeddings to become the geometric mean of the total scatter matrix and the within-class matrix. GLEN works well with many backbones, outperforming many unsupervised, contrastive and (semi-)supervised methods.

Acknowledgments and Disclosure of Funding

We thank the reviewers for stimulating questions that helped us improve several aspects of our analysis. Hao Zhu is supported by an Australian Government Research Training Program (RTP) Scholarship. Piotr Koniusz is supported by CSIRO's Machine Learning and Artificial Intelligence Future Science Platform (MLAI FSP).
1. What is the focus and contribution of the paper on graph embedding?
2. What are the strengths of the proposed approach, particularly in its theoretical analysis and performance?
3. What are the weaknesses and limitations of the paper, especially regarding its claims and comparisons with other works?
4. Do you have any questions or concerns about the relationship between the proposed model and contrastive learning or other methods that can solve the rank difference problem?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper proposes a novel graph-embedding framework formulated as a rank-difference model. Since the rank model is NP-hard to solve, the authors optimize the loss by means of LogDet terms. The transformed model is then solvable and is shown to be theoretically effective. The paper also draws connections between the proposed model and other graph-embedding methods, and derives upper and lower bounds on the proposed loss function. Overall, the paper is well written and theoretically innovative.

Strengths And Weaknesses
Strengths:
- COLES can be seen as a special case of the proposed GLEN framework, and GLEN outperforms COLES.
- The theoretical analysis gives upper and lower bounds for the proposed loss, which gives the model a good interpretation.
- The experiments effectively demonstrate that the proposed framework performs well.
Weaknesses:
- The relationship between the proposed model and contrastive learning is not made clear, although contrastive learning is introduced in the related work.
- Compared with the trace model in COLES, the generalization of the proposed rank-difference framework is not illustrated clearly. The authors should prove how the trace model can be recovered from the rank model as a special case.
- Although GLEN is called a generalized Laplacian Eigenmaps framework, the paper shows no cases other than COLES that can also be generalized to GLEN.
- The paper does not compare the LogDet model with other methods that can serve as surrogates of the rank problem. Many such methods exist, so the authors should list and compare them and explain why LogDet is chosen.
- The LogDet terms transform the rank model into a LogDet model. Does the LogDet model still maintain the generalization property? If so, please give the proof.

Questions
- Can the authors give a proof of Proposition 1? This proposition lacks the necessary proof.
- Are there any cases other than COLES that can be generalized to GLEN? If so, please supply examples of these cases. Otherwise, what is the meaning of the 'generalization' of GLEN?
- Is using LogDet to solve the rank problem original to this paper? If not, listing the relevant references is necessary.
- Does the LogDet framework still hold the generalization property? If so, please give the proof.
- Are there any other effective methods to solve the rank-difference problem? The authors should compare these methods and explain why LogDet is chosen.

Limitations
The generalization of the proposed framework is not clearly described, and cases other than COLES need to be supplied. Besides, there are methods other than LogDet to solve the rank-difference problem; the paper does not list and compare these methods.
1. What is the main contribution of the paper, and how does it differ from popular contrastive learning methods?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its application to general contrastive learning and the use of surrogates?
3. Do you have any concerns about the novelty of the paper, especially regarding its similarity to existing literature such as Calibrated Multi-Task Learning?
4. How does the reviewer assess the reproducibility of the paper's results without access to the source code?
5. Are there any typos or unclear aspects in the paper that need to be addressed?
Summary Of The Paper
This paper proposes a new unsupervised representation learning method, mainly based on GNNs. The idea is motivated by the scatter matrices usually used in LDA. Based on the fact that the features would be discriminative provided that Condition 1 holds, the model aims to maximize Rank(S_w) and minimize Rank(S_b) simultaneously, which differs from the losses of popular contrastive learning. In Section 3, the authors show the equivalence between the specific two-layer (featureless) GAE and the linear (featureless) GAE. In Section 4, the authors try to investigate the real impact of ReLU on the hidden layer. Then, as the original goal is NP-hard, a surrogate that approximates the rank better than the classical nuclear norm is introduced. Finally, sufficient experiments are conducted to verify the idea.

Strengths And Weaknesses
Pros:
- The idea of using the scatter matrices to learn discriminative features seems novel, and it is different from the popular contrastive models. The motivation is convincing and interesting to me.
- The experimental results, especially on semi-supervised node classification when labels are pretty rare, seem to show effectiveness.
Cons:
- An important question that confuses me is why the idea is not tested in the setting of general contrastive learning. If I do not misunderstand the model, the graph (i.e., adjacency) seems to be used only in the implementation of f_Θ, which indicates that f_Θ could be any neural network (or other projection technique). So why do you constrain the model to GNNs? If similar ideas have already been proposed in general contrastive learning (I am not familiar with the newest publications), it would severely affect the novelty.
- A major concern is that the surrogate may not be novel. The idea of using log(·) to replace the ℓ_p-norm (which is equivalent to the Schatten-p norm for the rank) has been well studied. Is there a difference between the following literature and this paper? It limits the novelty of the paper. [1] Calibrated Multi-Task Learning, SIGKDD, 2018.
- Could the authors also provide some experiments under the common settings of Cora/Citeseer/PubMed, instead of the random split? It is also an important comparison with the existing GNN models.
- No source code is provided, which may limit reproducibility.
- There are some typos, including but not limited to: the meaning of letters in boldface is confusing; for example, in Figure 1, the matrix is denoted by S_b while in Section 3.1 all matrices are highlighted in boldface (e.g., S_w); in Line 135, C is also bold; in Line 145, Theorem 2 -> Theorem 1?
Overall, I would like to update my score after reading the other reviews and the response.

Questions
(More details can be found in the previous part.)
- Why do you constrain the model to GNNs? In other words, why not conduct experiments on general datasets?
- Is there a difference between the following literature [1] and this paper? It limits the novelty of the paper. [1] Calibrated Multi-Task Learning, SIGKDD, 2018.
- Could the authors also provide some experiments under the common settings of Cora/Citeseer/PubMed, instead of the random split?

Limitations
N/A
Title Generalized Laplacian Eigenmaps Abstract Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. COLES, a recent graph contrastive method combines traditional graph embedding and negative sampling into one framework. COLES in fact minimizes the trace difference between the within-class scatter matrix encapsulating the graph connectivity and the total scatter matrix encapsulating negative sampling. In this paper, we propose a more essential framework for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), which learns a graph representation by maximizing the rank difference between the total scatter matrix and the within-class scatter matrix, resulting in the minimum class separation guarantee. However, the rank difference minimization is an NP-hard problem. Thus, we replace the trace difference that corresponds to the difference of nuclear norms by the difference of LogDet expressions, which we argue is a more accurate surrogate for the NP-hard rank difference than the trace difference. While enjoying a lesser computational cost, the difference of LogDet terms is lower-bounded by the Affine-invariant Riemannian metric (AIRM) and upper-bounded by AIRM scaled by the factor of √ m. We show on popular benchmarks/backbones that GLEN offers favourable accuracy/scalability compared to state-of-the-art baselines. N/A accuracy/scalability compared to state-of-the-art baselines. 1 Introduction Laplacian Eigenmaps [3] and IsoMap [36] are graph embedding methods that reduce the dimensionality of data by assuming the data exists on a low-dimensional manifold. The objective function in such models encourages node embeddings to lie near each other in the embedding space if nodes are close to each other in the original space. While the classical methods capture the related node pairs, they neglect modeling unrelated node pairs. In contrast, modern graph embedding models such as [35, 10, 44] and Graph Contrastive Learning (GCL) [37, 56, 11, 57, 55] are unified under the (Sampled) Noise Contrastive Estimation framework, called (Sampled)NCE [27, 23]. Most of GCL methods do not incorporate the graph information into the loss but follow the setting from computer vision, i.e., they assume that randomly drawn pairs should be dissimilar, whereas the original sample and its augmentations should be similar [39]. In contrast, COntrastive Laplacian EigenmapS (COLES) [55] is a framework which combines a (graph) neural network with Laplacian eigenmaps utilizing the graph Laplacian matrix within a contrastive loss. Based on the NCE framework, COLES minimizes the trace difference of Laplacians. In this paper, we analyze the relation among within-class, between-class and total scatter matrices under the rank inequality, and prove that, under a simple assumption, the distance between any dissimilar (negative) samples would be greater/equal than the inter-class distance between their corresponding class centers. Based on such a condition, we derive GLEN, a reformulation of graph embedding into a rank difference problem, which is a more general framework than other graph *The corresponding author. Code: https://github.com/allenhaozhu/GLEN. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
embedding frameworks, i.e., under specific relaxations of the rank difference problem, we can recover different frameworks.To that end, we demonstrate how to optimize the rank difference problem with a difference of LogDet expressions, a differentiable relaxation suitable for use with (graph) neural networks. We consider other surrogates of the rank difference problem, based on the Nuclear norm, γ-nuclear norm, Schatten norm, and the Geman norm. Moreover, we provide theoretical considerations regarding the low-rank optimization and connection to the Riemannian manifold in order to interpret our approach. In summary, our contributions are threefold: i. We propose a rank-based condition connecting within-class, between-class and total scatter matrices under which we provide the minimum class separation guarantee. We propose a loss function, Generalized Laplacian EigenNaps (GLEN), that realizes this condition. ii. As the rank difference problem is NP-hard, we consider a difference of LogDet surrogate to learn node embeddings, as opposed to the trace difference (an upper bound of the difference of LogDet terms) used by other graph embedding models. We also consider other surrogates. iii. We study the distance between symmetric positive (semi-)definite matrices and the LogDet-based relaxation of GLEN. While enjoying fewer computations, the difference of LogDet terms of GLEN enjoys the Affine-invariant Riemannian metric (AIRM) for a lower bound and AIRM scaled by √ m as an upper bound. We explain how GLEN connects to other graph embeddings. 2 Related Works Graph Embeddings. By assuming that the data lies on a low-dimensional manifold, graph embedding methods such as Laplacian Eigenmaps [3] and IsoMap [36] optimize low-dimensional data embeddings. These methods [5] construct a similarity graph by measuring the similarity of high-dimensional feature vectors and embed the nodes into a low-dimensional space. DeepWalk [31] uses truncated random walks to explore the graph structure, and the skip-gram model for word embedding to determine the embedding vectors of nodes. By setting the walk length to one and using negative sampling [26], LINE [35] explores a similar idea with an explicit objective function while REFINE [52] imposes additional orthogonality constraints which deem REFINE extremely fast. Node2Vec [9] interpolates between breadth- and depth-first sampling. COLES [55] unifies traditional graph embedding and negative sampling by introducing a positive contrastive term that captures the graph structure, and a negative contrastive random sampling. COLES solves the trace difference problem akin to traditional graph embedding models [43]. In this paper, we propose a more general loss for graph embedding, i.e., COLES solves the trace difference (Nuclear norms difference) relaxation of GLEN. Graph embedding techniques [43] provide a general framework for dimensionality reduction such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projections (LPP) [12]. All methods within this category can be considered as solving the same problem under varying assumptions, i.e., maximising the intra- and inter-class separation by optimizing the trace difference, also used in metric learning [22]. However, such a family of objective functions is not motivated by the guarantee on the minimum class separation between feature vectors from different categories. 
GLEN, in its purest NP-hard form, provides the minimum class separation guarantee and can be realised by several formulations depending on chosen trade-offs. Unsupervised Representation Learning for Graph Neural Networks (GNN). Unsupervised GNN training can be reconstruction-, contrastive- or diffusion-based. To train a graph encoder in an unsupervised manner, GCN [17] minimizes a reconstruction error which only considers the similarity matrix and ignores the dissimilarity information. At various scales of the graph, contrastive methods determine the positive and negative sets. For example, local-local CL and global-local CL strategies are highly popular. GraphSAGE [10], inspired by DeepWalk [31], uses the contrastive loss which encourages neighbor nodes to have similar representations, while preserving dissimilarity between representations of disparate nodes. DGI [37], inspired by Deep InfoMax (DIM) [13], uses an objective with global-local sampling strategy to maximize the Mutual Information (MI) between global and local graph embeddings. Augmented Multiscale Deep InfoMax (AMDIM) [2] maximizes MI between multiple data views. MVRLG [11] contrasts encodings from first-order neighbors and a graph diffusion. Fisher-Bures Adversary GCN [34] treats the graph as generated w.r.t. some observation noise. COSTA [50] constructs the views by injecting features with the random noise. However, such contrastive approaches often require thousands of epochs to converge and perform well. In addition, many contrastive losses have an exponential increase in memory overhead w.r.t. the number of nodes. In contrast, our method does not explicitly use the local-local setting but the total scatter matrix, and thus saves computational and storage cost. Linear GNNs, i.e., SGC [42] and S2GC [53], capture the neighborhood and increasingly larger neighborhoods of each node due to the diffusion, respectively. SGC and S2GC have no projection layer, thus the size of embeddings is equal to the input dimension. GLEN can learn a projection layer in an unsupervised manner with a linear function or Multi-Layer Perceptron (MLP) applied to linear GNNs or any other GNN models [18, 34, 49], etc. 3 Preliminaries Notations. Let G=(V,E) be a simple, connected and undirected graph with n= |V | nodes and m= |E| edges. Let i ∈ {1, · · · , n} be the node index of G, and dj be the degree of node j of G. Let W be the adjacency matrix, and D be the diagonal matrix containing degrees of nodes. Let X ∈ Rn×d denote the node feature matrix where each node v is associated with a feature vector xv ∈ Rd. Let the normalized graph Laplacian matrix be defined as L = I− W̃ ∈ Sn+, a symmetric positive semi-definite matrix and W̃ = D−1/2WD−1/2. Sm+(+) is a set of symmetric positive (semi)definite matrices. Let Z = fΘ(X) ∈ Rn×m be a generalized node embedding, i.e., X could be identity matrix (e.g., no node attributes), fΘ(X) could be GNN or a linear function with parameters Θ. Scalars/vectors/matrices are denoted by lowercase regular/lowercase bold/uppercase bold fonts. 3.1 Scatter Matrices Below are given standard definitions of scatter matrices, including the total scatter matrix St ∈ Sm+(+), the within-class matrix Sw ∈ Sm+(+), and between-class matrix Sb ∈ Sm+(+): St = n∑ i=1 (zi − z̄) (zi − z̄)⊤ = Z⊤ ( I− W̃t ) Z where W̃t = 1 n ee⊤, Sw = n∑ i=1 (zi − µyi) (zi − µyi) ⊤ = Z⊤ ( I− W̃w ) Z where W̃w = C∑ c=1 1 nc ecec⊤, Sb = C∑ c=1 nc (µc − z̄) (µc − z̄)⊤ . 
(1) Let e be an n-dimensional vector with all coefficients equal one, I be an identity matrix, St be the total scatter (covariance) matrix, and z̄ ∈ Rm be the mean of all samples. Let µyi ∈ Rm be the class center of the i-th sample and µc ∈ Rm be the c-th class center. Let the total number of categories be given by C, whereas nc be the number of samples for the c-th category. Let ec ∈ Rn be a vector where a given coefficient indexed by node is equal one if its node is of class c, otherwise it is equal zero. We note that both St and Sw can take a form akin to Laplacian eigenmaps such that W̃t and W̃w are the corresponding normalized adjacent matrices. Let us also define graph Laplacian matrices Lt = I− W̃t ∈ Sn+ and Lw = I− W̃w ∈ Sn+ which will be used in the sequel. Importantly, let us assume that a graph Laplacian matrix L containing graph links could be seen as a noisy version of Lw in which all nodes of a given class c connect under the weight equal 1/nc. Observe that St = Sw + Sb. Thus, Rank(St) ≤ Rank(Sw) + Rank(Sb) due to the rank inequality. Below we highlight the condition underpinning the subsequent motivation: Condition 1. Rank(St) = Rank(Sw) + Rank(Sb). 3.2 Motivation Figure 1 shows some three optimal solutions for Condition 1. The rank of between-class scatter matrix Sb for the whole dataset is at most C − 1 (where C is the number of classes). Since Rank(AB) ≤ min(Rank(A),Rank(B)), we have† Rank(S−1w Sb) ≤ Rank(Sb) ≤ C − 1. The rank is the number of non-zero eigenvalues of a matrix so S−1w Sb has at most C − 1 non-zero eigenvalues. Condition 1 implies that Rank(S−1w Sb) = 0 results in the minimum class separation guarantee under that condition. Theorem 1. Let the feature dimension be larger than the class number (i.e., m > C) and Condition 1 hold. Then, the minimum class separation is equal to the distance between class centers. In other words, the distance between any two vectors zi and zj with labels yi ̸= yj is greater/equal the distance between class centers µyi and µyj : ∥µyi − µyj∥2 ≤ ∥zi − zj∥2, ∀yi ̸= yj , i, j ∈ {1, · · · , C}. (2) Proof. As Sw is the orthogonal complement of Sb, i.e., S−1w Sb = 0, Sw + Sb = UΣU ⊤, Sw = U1:kΣ1:kU ⊤ 1:k and Sb = Uk+1:mΣk+1:mU ⊤ k+1:m where 1 ≤ k < m. Let zi = µyi +U⊤ϵi where ϵi is the representation under the basis U and ϵ(k+1:m),i = 0 because only top k components 1 :k represent Sw. Thus, the orthogonal projection Uk+1:m fulfills ∥Uk+1:m(zi − zj)∥2 ≤ ∥zi − zj∥2. Moreover, Uk+1:m(zi − µyi) = Uk+1:m(U⊤ϵi) = 0. That is, all {zi : yi = c} are projected onto the mean µc. Thus, the inequality in Eq. 2 holds. Theorem 1 guarantees the worst inter-class distance§. Figure 1 shows some cases that meet Condition 1. Figure 1a shows the case for which the class centers collapse to a single point and thus the inter-class distance equals zero (collapse of the feature space). Figures 1b and 1c show other cases. 4 Methodology Condition 1 points to a promising research direction in learning discriminative feature spaces. However, optimizing over the rank is NP-hard and non-differentiable. In what follows, we provide the formulation of Generalized Laplacian EigeNmaps (GLEN) and its relaxation, which is differentiable. 4.1 Generalized Laplacian Eigenmaps As solving Condition 1 is NP-hard, we propose a relaxation where Rank(St) is encouraged to be as large as possible (bounded by the feature dimension m). On the contrary, if Rank(St) ≈ Rank(Sb) then the small Rank(Sw) limits the feature diversity. 
In the extreme case, if Rank(Sw) = 0, the feature representation collapses. Larger 0 < Rank(Sb) ≤ C − 1 improves the inter-class diversity. We propose a new Generalized Laplacian EigeNmaps (GLEN) framework for unsupervised network embedding. In its most general form, GLEN maximizes the difference of rank terms: Θ∗ = argmax_Θ Rank(St(fΘ(X))) − Rank(Sw(fΘ(X))). (3) As the general matrix Rank Minimization Problem (RMP) [7] is NP-hard, and so is the difference of rank terms in Eq. 3, we relax this problem by a difference of LogDet terms that serves as a surrogate of the NP-hard problem. Appendix I derives GLEN from the SampledNCE framework. †We write S−1w but if Sw is rank-deficient, the inverse is replaced with the Moore–Penrose (pseudo-)inverse. §Other graph embedding models that maximize/minimize inter-/intra-class distances have no such guarantees. GLEN (LogDet relaxation). I. Define: δ(St, Sw; α, λ) = log det(I + αSt) − λ log det(I + αSw), (4) where λ ≥ 0 controls the impact of log det(Sw). If λ = 0, δ(·) encourages Rank(fΘ(X)) = m. II. Let St = fΘ(X)⊤LtfΘ(X) and Sw = fΘ(X)⊤LwfΘ(X). Then the LogDet relaxation becomes: Θ∗ = argmax_Θ log det(I + αfΘ(X)⊤LtfΘ(X)) − log det(I + αfΘ(X)⊤LwfΘ(X)), (5) where the identity I ensures that I + αfΘ(X)⊤LfΘ(X) ≻ 0, as fΘ(X)⊤LfΘ(X) may only be positive semi-definite, leading to det(fΘ(X)⊤LfΘ(X)) = 0. Thus, we use log det(I + αS) as a smooth surrogate for Rank(S). Proposition 1. Let σ(S) be the vector of eigenvalues of matrix S ∈ Sm+(+), and Eig(S) be a diagonal matrix with σ(S) as its diagonal. Let S, S′ ∈ Sm+(+) and α > 0. Then δ(S, S′; α, λ) = δ(Eig(S), Eig(S′); α, λ), i.e., δ(·) depends on the eigenvalues rather than the eigenvectors of S and S′. Proof. The proof follows from the equality det(I + αS) = ∏i σi(I + αS) = ∏i(1 + ασi(S)) = det(I + αEig(S)). Thus δ(S, S′; α, λ) = log det(I + αS) − λ log det(I + αS′) = log det(I + αEig(S)) − λ log det(I + αEig(S′)) = δ(Eig(S), Eig(S′); α, λ). 5 Theoretical Analysis Below, we compare our approach with other methods by looking at (i) low-rank optimization and (ii) non-Euclidean distances between symmetric positive (semi-)definite matrices. 5.1 Nuclear Norm vs. LogDet for Rank Minimization Claim 1. COLES [55] is a convex relaxation (using the nuclear norm) of the rank difference in Eq. 3: Θ∗ = argmax_Θ Tr(fΘ(X)⊤LtfΘ(X)) − λTr(fΘ(X)⊤LwfΘ(X)) s.t. Ω(fΘ(X)) = B, (6) where Tr(fΘ(X)⊤LtfΘ(X)) = ∥St∥∗ and Tr(fΘ(X)⊤LwfΘ(X)) = ∥Sw∥∗. The nuclear norm ∥·∥∗ can be regarded as the ℓ1 norm over singular values. As the ℓ1 norm induces sparsity, the nuclear norm encourages sparse singular values, leading to low-rank solutions. If fΘ(X)⊤fΘ(X) is restricted to be diagonal, ∥fΘ(X)⊤fΘ(X)∥∗ = ∥Diag(fΘ(X)⊤fΘ(X))∥1 and the nuclear norm surrogate for rank minimization reduces to the ℓ1 norm surrogate for cardinality (rank) minimization. However, for the m-dimensional embedding, the solution of the trace difference lies on a subspace of dimension less than m − 1 [3]. Thus, the constraint Ω(fΘ(X)) = B prevents the dimensional collapse, i.e., fΘ(X)⊤fΘ(X) = I. Compared with the trace-based relaxation, LogDet is more suitable for cardinality minimization as it is less sensitive to large singular values. Also, the difference of LogDet terms does not require decorrelation of features to prevent the dimensional collapse. We discuss this matter in Appendix A.
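For concreteness, the sketch below shows one way the LogDet objective of Eq. (5) could be computed and optimized with automatic differentiation. It is only an illustrative sketch, not the authors' released implementation: the function name glen_logdet_loss, the toy random graph, the linear projection standing in for fΘ, and the hyperparameter values (α = λ = 1, Adam with learning rate 1e-2) are all our own assumptions.

```python
import torch

def glen_logdet_loss(Z, L_t, L_w, alpha=1.0, lam=1.0):
    # Negative of the objective in Eq. (5):
    #   log det(I + alpha * Z^T L_t Z) - lam * log det(I + alpha * Z^T L_w Z),
    # returned with a minus sign so that a standard optimiser can minimise it.
    m = Z.shape[1]
    I = torch.eye(m, dtype=Z.dtype, device=Z.device)
    S_t = Z.T @ (L_t @ Z)   # total scatter (m x m)
    S_w = Z.T @ (L_w @ Z)   # within-class scatter, approximated via graph links (m x m)
    return -(torch.logdet(I + alpha * S_t) - lam * torch.logdet(I + alpha * S_w))

# Toy usage with a linear projection in place of f_Theta (illustrative only).
n, d, m = 100, 32, 16
X = torch.randn(n, d)
W_proj = torch.randn(d, m, requires_grad=True)
L_t = torch.eye(n) - torch.ones(n, n) / n                  # L_t = I - (1/n) e e^T
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.T) > 0).float()                                # symmetric toy adjacency
deg = A.sum(dim=1).clamp(min=1.0)
L_w = torch.eye(n) - A / torch.sqrt(deg[:, None] * deg[None, :])  # I - D^{-1/2} W D^{-1/2}

opt = torch.optim.Adam([W_proj], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = glen_logdet_loss(X @ W_proj, L_t, L_w)
    loss.backward()
    opt.step()
```

Because both scatter matrices are m × m, the cost of each step is governed by the embedding size rather than the number of nodes, which is consistent with the scalability discussion in Section 6.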
In our case, the difference of LogDet terms is always bounded by the difference of trace terms as follows. Proposition 2. Given an embedding matrix fΘ(X) ∈ Rn×m and a fixed small constant α > 0, we have the following inequality: log det(I + αSt) − log det(I + αSw) < αTr(St − Sw). (7) Proof. log det(I + αSt) − log det(I + αSw) = log det(I + αEig(St)) − log det(I + αEig(Sw)) = Tr(log(I + αEig(St)) − log(I + αEig(Sw))) < αTr(St − Sw). (8) Proposition 2 is also related to the inequality Rank(S) ≤ log det(I + S) ≤ Tr(S) [7]. 5.2 Distance between Symmetric Positive (Semi-)Definite Matrices. Below, we provide a perspective on non-Euclidean distances between matrices from Sm+(+) to compare the proposed method with other graph embeddings, e.g., Laplacian Eigenmaps [3] and COLES [55]. For clarity, we also reformulate Laplacian Eigenmaps and COLES into the forms given in Prop. 3 and 4. Proposition 3. The Laplacian Eigenmaps [3] method is equivalent to maximizing the Frobenius norm: Θ∗ = argmax_Θ ∥fΘ(X)fΘ(X)⊤ − Lw∥2F, s.t. fΘ(X)⊤fΘ(X) = I. (9) Proposition 4. Contrastive Laplacian Eigenmaps [55] is equivalent to maximizing the difference of Frobenius norm terms: Θ∗ = argmax_Θ ∥fΘ(X)fΘ(X)⊤ − Lw∥2F − ∥fΘ(X)fΘ(X)⊤ − Lt∥2F, s.t. fΘ(X)⊤fΘ(X) = I. (10) Proof. ∥fΘ(X)fΘ(X)⊤ − L∥2F = Tr(fΘ(X)fΘ(X)⊤fΘ(X)fΘ(X)⊤ − 2fΘ(X)LfΘ(X)⊤ + L⊤L) = constant − 2Tr(fΘ(X)LfΘ(X)⊤) ≥ 0. (11) Note that Eq. 9 encourages the linear kernel matrix fΘ(X)fΘ(X)⊤ to be close to W̃w, while Eq. 10 encourages the linear kernel matrix to be far from W̃w at the same time. Our loss follows a non-Euclidean geometry. Below, we demonstrate the relation of Eq. 4 to the Affine-Invariant Riemannian Metric (AIRM). Indeed, our loss function is bounded from both sides by AIRM and AIRM scaled by √m, respectively. Proposition 5. Let σ(S) be the vector of eigenvalues of S. For any matrices St, Sw ∈ Sm+(+), we have: ∥log((I + St)−1/2(I + Sw)(I + St)−1/2)∥F ≤ log det(I + St) − log det(I + Sw) ≤ √m ∥log((I + St)−1/2(I + Sw)(I + St)−1/2)∥F. (12) Proof. Given A = I + St and B = I + Sw, we have: log det(A) − log det(B) = log(det(A) det(B−1)) = log(det(A) det(B−1/2) det(B−1/2)) = Tr log(B−1/2AB−1/2). (13) We have Tr(A) = ∥σ(A)∥1, ∥A∥F = ∥σ(A)∥2 and ∥x∥2 ≤ ∥x∥1 ≤ √m∥x∥2. Thus, Eq. 4 seeks a mapping function maximizing an approximation of the AIRM distance between the total scatter matrix and the within-class matrix. 5.3 Relationship of the LogDet model to the Schatten norm Below we demonstrate the relationship between the LogDet, Trace and Rank operators, respectively, under the Schatten norm [28] framework. Essential is the following family of objective functions: fα,γ(S) = (1/c) Σ_{i=1}^m log(ασi(S) + γ) = (1/c) log det(αS + γI), α, γ ≥ 0, (14) where σi(S), i = 1, . . . , m, are the eigenvalues of either St ∈ Sm+(+) or Sw ∈ Sm+(+), which are the total scatter matrix and the within scatter matrix in our experiments, respectively. Moreover, we define a normalization constant c, where c = 1 or c = log(α + γ) as detailed below. Given c = 1, we have: lim_{p→0} (S^p_{γ,p}(S) − m)/p = f1,γ(S), where Sγ,p(S) = (Σ_{i=1}^m (σi(S) + γ)^p)^{1/p}. (15) From the asymptotic analysis, we conclude that the LogDet is an arbitrarily accurate rational approximation of ℓ0 (the so-called pseudo-norm counting non-zero elements) over the eigenvalues of S. The case p = 1 yields the nuclear norm (trace), which makes the 'smoothed' rank difference of GLEN equivalent to COLES. The opposing limit case, p → 0, recovers the LogDet formula. One can also recover the exact Rank from the LogDet formulation by: lim_{α→∞} fα,1(S) = Rank(S) if c = log(1 + α).
(16) This is apparent because: lim α→∞ log(1 + ασi) log(1 + α) = 1 if σi > 0 and lim α→∞ log(1 + ασi) log(1 + α) = 0 if σi = 0. (17) 6 Experiments We evaluate GLEN (its relaxation) on transductive and inductive node classification tasks and node clustering. GLEN is compared to popular unsupervised, contrastive, and (semi-)supervised approaches. Except for the classifier, unsupervised models do not use labels. To learn similarity/dissimilarity, contrastive models employ the contrastive setting. Labels are used to train the projection layer and classifier in semi-supervised models. A fraction of nodes (i.e., 5 or 20 per class) used for training are labeled for semi-supervised setting. A SoftMax classifier is used for (semi-)supervised models, while a logistic regression classifier is used for unsupervised and contrastive approaches. See Appendix E for implementation details. Datasets. GLEN is evaluated on four citation networks: Cora, Citeseer, Pubmed, Cora Full [17, 4] for transductive setting. We also employ the large scale Ogbn-arxiv from OGB [14]. See Appendix D for details of datasets. Metrics. As fixed data splits [45] often on transductive models benefit models that overfit, we average results over 50 random splits for each dataset. We evaluate performance for 5 and 20 samples per class. Nonetheless, we also evaluate our model on the standard splits. Baseline models. We group baseline models into unsupervised, contrastive and (semi-)supervised methods, and implement them in the same framework/testbed. Contrastive methods include DeepWalk [31], GCN+SampledNCE developed as an alternative to GraphSAGE+SampledNCE [10], Graph2Gauss [4], SCE [47], DGI [37], GRACE [56], GCA [57], GraphCL [46] and COLES [55], which are our main competitors. Note that GRACE, GCA and GraphCL are based on multi-view and data augmentation, and GraphCL is mainly intended for graph classification. We do not study graph classification as it requires advanced node pooling with mixed- or high-order statistics [40, 19, 20]. We compare results with representative (semi-)supervised GCN [17], GAT [37] and MixHop [1] models. SGC and S2GC are unsupervised spectral filter networks. They do not have any learnable parameters. COLES and GLEN could be regarded as dimension reduction techniques for SGC and S2GC, thus we compare them to PCA-S2GC and RP-S2GC, which use PCA and random projections to obtain the projection layer. We set hyperparameters based on the settings described in prior papers. 6.1 Transductive Learning In this section, we consider transductive learning where all nodes are available in the training process. COLES vs. GLEN. Table 1 shows the performance of GLEN vs. COLES on two different backbones, i.e., GCN and S2GC. On both backbones, GLEN shows non-trivial improvements on all four datasets. GLEN-S2GC outperforms the COLES by up to 4.6%. Table 2 evaluates GLEN on Cora, Citeseer, PubMed on the standard splits instead of the random splits. See Appendix G for comparisons to additional contrastive learning frameworks. Contrastive Embedding Baselines vs. GLEN. Table 1 shows that GLEN-GCN and GLEN-S2GC outperform unsupervised models. In particular, GLEN-GCN outperforms GCN+SampledNCE on all four datasets, which shows that GLEN has an advantage over the SampledNCE framework. In addition, GLEN-S2GC outperforms the best contrastive baseline DGI by up to 3.4%. On Cora with 5 training samples, GLEN-S2GC outperforms S2GC by 6.8%. 
Finally, Table 3 shows that GLEN-S2GC (small number of trainable parameters) outperforms other methods on the challenging Ogbn-arxiv. Semi-supervised GNNs vs. GLEN. Table 1 shows that the contrastive GCN baselines perform worse than semi-supervised variants, especially when 20 labeled samples per class are available. In contrast, GLEN-GCN outperformed the semi-supervised GCN on Cora by 10% and 3.4% given 5 and 20 labeled samples per class. GLEN-GCN also outperforms GCN on Citeseer and Pubmed by 9.9% and 5.2% given 5 labeled samples per class. These results show the superiority of GLEN on four datasets when the number of samples per class is 5. Even for 20 labeled samples per class, GLEN-S2GC outperforms the best semi-supervised baselines on all four datasets e.g., by 3.3% on Cora. Semi-supervised models (e.g., GAT and MixHop) are affected by the low number of labeled samples, which is consistent with [25]. The accuracy of GLEN-GCN and GLEN-S2GC is unaffected. Unsupervised GNNs vs. GLEN. SGC and S2GC are unsupervised linear networks based on spectral filters which do not use labels (except for the classifier). As a dimension reduction method, GLEN helps both methods reduce the dimension and achieve discriminative features. Table 1 shows that GLEN-S2GC outperforms RP-S2GC and PCA-S2GC under the same projection size. GLEN-S2GC also outperforms the unsupervised S2GC baseline (high-dimensional representation). 6.2 Node Clustering We compare GLEN-GCN and GLEN-S2GC with three types of clustering methods: i. Methods that use only node features e.g., k-means and spectral clustering (spectral-f) construct a similarity matrix with the node features by a linear kernel. ii. Structural clustering methods that only use the graph structure: spectral clustering (spectral-g) that takes the graph adjacency matrix as the similarity matrix, and DeepWalk [31]. iii. Attributed graph clustering methods that use node features and the graph: Graph Autoencoder (GAE), Graph Variational Autoencoder (VGAE) [17], Adversarially Regularized Graph Autoencoder (ARGE), Var. Graph Autoencoder (ARVGE) [30], SGC [42] , S2GC [53], COLES [55]. We measure and report the clustering Accuracy (Acc), Normalized Mutual Information (NMI) and macro F1-score (F1). We run each method 10 times on Cora, CiteSeer and PubMed. We set the number of propagation steps to 8 for SGC, S2GC, COLES-S2GC and COLES-S2GC following [48]. Table 4 shows that GLEN-S2GC outperforms other methods in all cases, whereas GLEN-GCN outperforms COLES-GCN, COLES-GCN (Stiefel) and contrastive GCN on all datasets. 6.3 Comparison of Surrogates of Rank Table 5 above shows results on four additional surrogates of Rank(S): • Nuclear norm: RNN(S) = ∑ i σi(S). • γ-nuclear norm [16]: Rγ-NN = ∑ i (1+γ)σi(S) γ+σi(S) . • Sp norm [28]: RSp = ∑ i σi(S) p. • Geman norm [8]: RGeman = ∑ i σi(S) γ+σi(S) . 6.4 Transductive One-shot Learning on Image Classification Datasets The most common setting in FSL is the inductive setting. In such a scenario, only samples in the support set can be used to fine-tune the model or learn a function for the inference of query labels. In contrast, in the transductive scenario, the model has access to all the query data (unlabeled) that needs to be classified. EASE [54] is a transductive few-shot learner for so-called episodic image classification. Given feature matrix Z ∈ Rn×m from a CNN backbone (ResNet-12), EASE minimizes Tr(UZ⊤LwZU⊤) − Tr(UZ⊤LtZU⊤) (subject to UU⊤ = I) in order to learn a linear projection U. 
We extend GLEN to EASE to learn the linear projection U by minimizing log det(UZ⊤LwZU⊤) − log det(UZ⊤LtZU⊤) (subject to UU⊤ = I). We also apply the Sp norm instead of log det. Table 6 shows the results of EASE based on the LogDet and the Sp-norm based relaxations of GLEN. For simplicity of the experiment, we use soft k-means rather than Sinkhorn k-means as in the EASE pipeline. Please refer to EASE [54] for the experimental setup of one-shot learning. We evaluate our approach on four few-shot classification benchmarks: mini-ImageNet [38], tieredImageNet [32], CUB [41], and CIFAR-FS [21]. The performance numbers are given as accuracy (%) and the 0.95 confidence intervals are reported. We use a publicly available pre-trained ResNet-12 [29] trained on the base-class training set. Scalability. GraphSAGE and DGI require neighbor sampling with redundant forward/backward steps (long runtime). In contrast, GLEN-S2GC enjoys a simple implementation with low memory usage and low runtime. For graphs with over 100 thousand nodes and 10 million edges (Reddit), GLEN runs fast on an NVIDIA 1080 GPU. Even on larger graph benchmarks, GLEN is fast as it optimizes the total scatter and the within-class matrices, whose size depends on the embedding size rather than the node number. The runtime of GLEN-S2GC is also favourable in comparison to the multi-view augmentation-based GraphCL. Specifically, GLEN-S2GC took 0.54s, 0.3s, 5.3s and 15.4s on Cora, Citeseer, Pubmed and Cora Full, respectively. GraphCL took 110.19s, 101.0s, ≥ 8h and ≥ 8h, respectively. Although the LogDet difference is somewhat slower than the trace difference in forward/backward propagation, it converges faster, thus enjoying a similarly low runtime. 7 Conclusions In this paper, we model contrastive learning as a rank difference problem to approximate the condition that the rank of the total scatter matrix should equal the sum of the ranks of the within-scatter and between-scatter matrices. We relax this NP-hard assumption with a differentiable difference of LogDet terms. We also provide two perspectives on GLEN and the existing methods, based on low-rank optimization and on distances between symmetric positive (semi-)definite matrices. In low-rank optimization, we explain why the LogDet difference is a better surrogate function for optimizing the rank difference compared to the trace difference. We also show that our solution encourages the linear kernel of embeddings to become the geometric mean between the total scatter matrix and the within-class matrix. GLEN works well with many backbones, outperforming many unsupervised, contrastive and (semi-)supervised methods. Acknowledgments and Disclosure of Funding We thank the reviewers for stimulating questions that helped us improve several aspects of our analysis. Hao Zhu is supported by an Australian Government Research Training Program (RTP) Scholarship. Piotr Koniusz is supported by CSIRO's Machine Learning and Artificial Intelligence Future Science Platform (MLAI FSP).
1. What is the focus and contribution of the paper on graph embedding? 2. What are the strengths of the proposed approach, particularly in terms of scalability and experimental results? 3. What are the weaknesses of the paper, especially regarding the objective function and notation confusion? 4. Do you have any questions regarding the motivation and interpretation of the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a novel objective for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), to learn graph representation by maximizing the difference of logdet between the total scatter matrix and the within-class scatter matrix. The authors interpret this as a surrogate of rank difference maximization and give some theoretical results. Experiments show that GLEN offers good accuracy and scalability against state-of-the-art baselines on various benchmarks. ** Post Rebuttal Update ** I've read the rebuttal and the other reviewers' comments. I appreciate the update the authors have made, for example, the (supposedly) new experimental updates regarding Reviewer bffF's comment. I appreciate the experimental results against the prior art. My general concern is whether the theorem indeed shows a difference from the prior art. It seems to me that the rank difference formulation or the minimum class separation has been identified in the literature. What's more interesting is to explain why the logdet can be a better objective, which seems quite possible given the new experimental results. Strengths And Weaknesses ** Strengths ** S1. The proposed algorithm is based on the scatter matrices, which are of the size of d × d , not n × n . Note that d and n are the embedding dimension and the node number, respectively. Thus, the method is quite scalable to large graphs. S2. Experiments show strong results in various settings and datasets. ** Weaknesses ** W1. The authors approximate the rank difference with logdet difference. However, it is unclear why optimizing the rank difference or the logdet difference leads to good results. W2. The notations are confusing and may contain errors. For example, in section 3.1, it seems that Z is d × n instead of n × m . Also, the S matrices should be d × d instead of n × n . If Z is n × m and S is n × n , then the algorithm should not be scalable as n is the number of nodes. Questions Q1. The motivation of Condition 1 is unclear. In particular, why Rank ( S t ) = Rank ( S w ) + Rank ( S b ) yields good embedding? Q2. Why formulate the main problem as rank difference? Why not directly analyze the logdet difference? Limitations The authors didn't discuss the limitation and potential social impact.
NIPS
Title Never Go Full Batch (in Stochastic Convex Optimization) Abstract We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: these are first-order methods that only access the exact gradient of the empirical risk (rather than gradients with respect to individual data points), and include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants. We provide a new separation result showing that, while algorithms such as stochastic gradient descent can generalize and optimize the population risk to within ε after O(1/ε²) iterations, full-batch methods either need at least Ω(1/ε⁴) iterations or exhibit a dimension-dependent sample complexity. 1 Introduction Stochastic Convex Optimization (SCO) is a fundamental problem that received considerable attention from the machine learning community in recent years [28, 15, 4, 11, 2]. In this problem, we assume a learner that is provided with a finite sample of convex functions drawn i.i.d. from an unknown distribution. The learner's goal is to minimize the expected function. Owing to its simplicity, it serves as an almost ideal theoretical model for studying generalization properties of optimization algorithms ubiquitous in practice, particularly first-order methods which utilize only first derivatives of the loss rather than higher-order ones. One prominent approach for SCO—and learning more broadly—is to consider the empirical risk (the average objective over the sample) and apply a first-order optimization algorithm to minimize it. The problem of learning is then decoupled into controlling the optimization error over the empirical risk (training error) and bounding the difference between the empirical error and the expected error (generalization error). In convex optimization, the convergence of different first-order methods has been researched extensively for many years (e.g., [26, 25, 5]), and we currently have a very good understanding of this setting in terms of upper as well as lower bounds on worst-case complexity. However, in SCO, where the generalization error must also be taken into account, our understanding is still lacking. In fact, this is one of the few theoretical learning models where the optimization method affects not only the optimization error but also the generalization error (distinctively from models such as PAC learning and generalized linear models). In particular, it has been shown [28, 15] that some minima of the empirical risk may obtain large generalization error, while other minima have a vanishingly small generalization error. To put it differently, learning in SCO is not only a question of minimizing the empirical risk, but also a question of how one minimizes it. However, the results of [28, 15] leave open the question of whether concrete optimization algorithms also have different generalization properties. Towards a better understanding, Amir et al. [2] recently studied the generalization properties of full-batch gradient descent (GD), where each step is taken with respect to the gradient of the empirical risk. For GD (and a regularized variant thereof), they gave a lower bound on the generalization error as a function of the iteration number, which is strictly larger than the well-known optimal rate obtained by stochastic gradient descent (SGD), where each step is taken with respect to the gradient at a sampled example.
Notably, the lower bound of [2] precisely matches the dimension-independent stability-based upper bound recently shown for full-batch GD by Bassily et al. [4]. The separation between full-batch GD and SGD is the first evidence that not only abstract Empirical Risk Minimizers may fail to generalize in SCO, but in fact also basic methods such as GD could be prone to such overfitting. A natural question is, then, whether overfitting is inherent to full-batch algorithms, which minimize the objective only through access to the exact empirical risk, or whether this suboptimality can be remedied by adding regularization, noise, smoothing, or any other mechanism for improving the generalization of GD. In this work we present and analyze a model of full-batch optimization algorithms for SCO. Namely, we focus on algorithms that access the empirical risk only via a first-order oracle that computes the exact (full-batch) gradient of the empirical loss, rather than directly accessing gradients with respect to individual samples. Our main result provides a negative answer to the question above by significantly generalizing and extending the result of Amir et al. [2]: we show that any optimization method that uses full-batch gradients needs at least Ω(1/ε⁴) iterations to minimize the expected loss to within ε error. This is in contrast with the empirical loss, which can be minimized with only O(1/ε²) steps. Comparing SGD and GD in terms of the sample size n, we see that SGD converges to an optimal generalization error of O(1/√n) after O(n) iterations, whereas a full-batch method must perform Ω(n²) iterations to achieve the same O(1/√n) test error. We emphasize that we account here for the oracle complexity, which coincides with the iteration complexity in the case of gradient methods. In terms of individual gradient calculations, while SGD uses at most O(n) gradient calculations (one sample per iteration), a full-batch method will perform Ω(n³) calculations (n samples per iteration). The above result is applicable to a wide family of full-batch learning algorithms: regularized GD (with any data-independent regularization function), noisy GD, GD with line-search or adaptive step sizes, GD with momentum, proximal methods, coordinate methods, and many more. Taken together with the upper bound of Bassily et al. [4], we obtain a sharp rate of Θ(1/ε⁴) for the generalization complexity of full-batch methods. Surprisingly, this rate is achieved by standard GD (with an unusual step-size choice of η = Θ(ε³)), and it cannot be improved by adding regularization of any sort, nor by adding noise or any other form of implicit/explicit bias. 1.1 Related work This work extends and generalizes the results of Amir et al. [2], who proved generalization lower bounds for GD (and a specific instance of regularized GD). Our work shows that in fact any full-batch method will suffer from similar lower bounds. Our construction builds upon the one used in [2], which in turn builds upon previous constructions [4, 28]. However, our arguments and proofs here are more challenging, as we need to reason about a general family of algorithms, and not about a specific algorithm whose trajectory can be analyzed directly. Our developments also build on ideas from the literature on oracle complexity lower bounds in optimization [25, 26, 30, 8, 12, 9].
In particular, we first prove our result in the simplified setting of algorithms constrained to the span of observed gradients [25, 26] and subsequently lift it to general algorithms using a random high-dimensional embedding technique proposed byWoodworth and Srebro [30] and later refined in [8, 12]. However, while these works lower bound what we call the empirical risk, we lower bound the generalization error. This requires us to develop a somewhat different argument for how the span of the gradients evolve during the optimization: in prior work, the algorithm learns the component of the solution coordinate by coordinate, whereas in our work the true (generalizing) solution is present in the observed gradients from the first query, but spurious sampling artifacts drown it out. Empirical studies (outside of the scope of SCO) support the claim that generalization capabilities degrade with the increase of the batch size. Specifically, Zhu et al. [33] indicates that SGD outperforms GD in terms of generalization. The works of Keskar et al. [22] and Hoffer et al. [20] exhibit a similar phenomenon in which small-batch SGD generalizes better than large-batch SGD with the same iteration budget. We provide the first theoretical evidence for this phenomenon for convex losses. Several theoretical studies explore the convergence of stochastic methods that use mini-batches [10, 23, 31]. Note that this setting differs from ours, as they assume access to minibatches sampled without replacement whereas full-batch means we reuse the same (full) batch with each gradient step. There has also been recent progress in improving the generalization capabilities of GD. Wu et al. [32] interprets mini-batch SGD as a noisy version of GD. They propose a modified algorithm with noise injected to the full-batch gradients. Geiping et al. [16] propose a GD-based training scheme that achieves CIFAR-10 generalization performance comparable to standard SGD training. Interestingly, both proposed algorithms require access to sample-points and are therefore not “fullbatch” by our definition: The scheme [32] requires sample-point data for computing the noise, while the GD variant [16] uses mini-batch statistics to compute a regularization term (as well as batch normalization). Our work shows that (in SCO) this is unavoidable: namely, no data-independent noise or full-batch regularization can be used to improve generalization at a reasonable computational budget. Several other works study the generalization performance of GD [29, 17, 21, 24]. The work of Soudry et al. [29], for example, examines GD on unregularized logistic regression problems. They show that, in the limit, GD converges to a well-generalizing solution by arguing about the bias of the algorithm. Interestingly, both our and their results require slow-training, beyond what is required for empirical error optimization. Another work that highlights the slow convergence of GD is that of Bassily et al. [4]. They were the first to address uniform stability of (non-smooth) GD and SGD, and provided tight bounds. Stability entails generalization, hence our results lead to stability lower bounds for any full-batch method. Consequently, we extend the lower bounds for GD in the work of Bassily et al. [4] to a wider class. It might be thought that the instability argument of Bassily et al. [4] can be used to obtain similar generalization lower bounds—however, we note that their techniques also prove instability of SGD (which does generalize). 
Hence, instability does not immediately imply, in this setting, lack of generalization. Finally, we note that under smoothness and strong convexity, it is well known that improved rates can be obtained. Specifically, using the stability bound of Bousquet and Elisseeff [6] one can show that we can achieve a generalization error of O(1/√n) after O(n) iterations if the population risk is O(1)-strongly convex. The arguments of Hardt et al. [19] imply a generalization bound for instances where every sample risk is O(√n)-smooth. Our result implies that, even though these special families of functions enjoy appealing learning rates, in general it is impossible to obtain better rates by strong-convexifying or smoothing problem instances via first-order full-batch oracle queries. 2 Problem Setup and Main Results We study the standard setting of stochastic convex optimization. In this setting, a learning problem is specified by a fixed domain W ⊆ ℝ^d in d-dimensional Euclidean space, and a loss function f : W × Z → ℝ, which is both convex and L-Lipschitz with respect to its first argument (that is, for any z ∈ Z the function f(w; z) is L-Lipschitz and convex with respect to w). In particular, throughout the paper, our construction consists of 1-Lipschitz functions and we will focus on a fixed domain W defined to be the unit Euclidean ball in ℝ^d, namely W = {w : ‖w‖₂ ≤ 1}. We also assume that there exists an unknown distribution D over parameters z, and the goal of the learner is to optimize the true risk (or true loss, or population risk) defined as follows: F(w) := E_{z∼D}[f(w; z)].   (1) We assume that a sample S = {z₁, . . . , z_n} is drawn from the distribution D, and the learner has to output w_S ∈ W (the exact access the learner has to the sample, and how w_S may depend on S, is discussed below). We require the solution to be ε-optimal in expectation for some parameter ε > 0, i.e., E_{S∼D^n}[F(w_S)] − min_{w*∈W} F(w*) ≤ ε. As discussed, the standard setting assumes that the learner has direct access to the i.i.d. sample, as well as to the gradients of the loss function (i.e., a first-order oracle). In this work, though, we focus on a specific family of full-batch methods. Hence, the optimization process is described as follows: First, an i.i.d. sample S = (z₁, . . . , z_n) is drawn from D. Then, the learner is provided with access only to the empirical risk via a full-batch first-order oracle, which we define next. Full-batch first-order oracle. Consider a fixed sample S = (z₁, . . . , z_n) of size n, drawn i.i.d. from D. The empirical risk over the sample S is F_S(w) = (1/n) Σ_{i=1}^n f(w; z_i). Then, a full-batch first-order oracle is a procedure that, given input w ∈ W, outputs O(w) := (∇F_S(w); F_S(w)), where ∇F_S(w) is an empirical risk sub-gradient of the form ∇F_S(w) = (1/n) Σ_{i=1}^n ∇f(w; z_i),   (2) and each sub-gradient ∇f(w, z_i) is computed by the oracle as a function of w and z_i (that is, independently of z_j for j ≠ i). We emphasize that the sample is fixed throughout the optimization, so that the oracle computes the gradient of the same empirical risk function at every call, hence the name full-batch. Note that the subgradient with respect to a single data point, i.e., ∇f(w; z_i), is not accessible through this oracle, which only returns the average gradient over the sample S.
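To make the access model concrete, here is a minimal sketch of what such a full-batch first-order oracle could look like in code. It is only an illustration under our own naming (make_full_batch_oracle, the toy absolute-value loss, the roughly unit-norm data); the formal object is the oracle O defined above, and nothing here is part of the paper's construction.

```python
import numpy as np

def make_full_batch_oracle(f, grad_f, sample):
    """Full-batch first-order oracle for a fixed sample S = (z_1, ..., z_n).

    A query w returns (sub-gradient of F_S at w, F_S(w)), where
    F_S(w) = (1/n) * sum_i f(w; z_i).  Per-example sub-gradients are averaged
    internally and never exposed to the caller, matching the access model above.
    """
    n = len(sample)

    def oracle(w):
        grad = sum(grad_f(w, z) for z in sample) / n
        value = sum(f(w, z) for z in sample) / n
        return grad, value

    return oracle

# Illustrative instance: f(w; z) = |w . z|, which is 1-Lipschitz whenever ||z|| <= 1.
f = lambda w, z: abs(w @ z)
grad_f = lambda w, z: np.sign(w @ z) * z        # a valid sub-gradient of |w . z|
sample = [np.random.randn(5) / np.sqrt(5) for _ in range(10)]   # roughly unit-norm toy data
oracle = make_full_batch_oracle(f, grad_f, sample)
g, value = oracle(np.zeros(5))
```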
Notice that our definition above is slightly narrower than a general sub-gradient oracle for the empirical risk due to the requirement that the sub-gradients ∇f(w, z_i) are chosen independently of z_j for j ≠ i – since we provide here a lower bound, this restriction strengthens our result. We make this restriction to avoid some degenerate constructions (that in fact can even be used to fail SGD if the gradient at z_i may depend on the whole sample), which have no practical implications. Full-batch first-order algorithm. A full-batch (first-order) method is naturally defined as any algorithm that has access to the optimization objective—namely the empirical risk F_S—only via the full-batch first-order oracle. In particular, if w_t is the t'th query of the algorithm to the full-batch oracle then w_t has to be of the form w_t = Q_t(O(w_0), . . . , O(w_{t−1})),   (3) where Q_t : (ℝ^{d+1})^t → W is a fixed (possibly randomized) mapping. At the end of the process the algorithm outputs w_S. We study the algorithm's oracle complexity, which is the number of iterations T the algorithm performs before halting. Therefore, we assume without loss of generality that w_S = w_T, i.e., the algorithm's output is its T'th query. 2.1 Main result In this section we establish our main result, which provides a generalization lower bound for full-batch first-order algorithms. The complete proof is provided in the full version of the paper [1]. Theorem 2.1. Let ε > 0 and n, T ∈ ℕ; there exists d = poly(2^n, T, 1/ε) such that the following holds. For any full-batch first-order algorithm with oracle complexity at most T, there exists a 1-Lipschitz convex function f(w; z) in W, the unit ball in ℝ^d, and a distribution D over Z such that, for some universal constant c > 0: E_{S∼D^n}[F(w_S)] ≥ min_{w*∈W} F(w*) + ε + Ω(min{1 − cε²√T, 0}).   (4) An immediate consequence of Theorem 2.1 is that in order to obtain less than ε true risk we need at least T = Ω(1/ε⁴) iterations. For simplicity, we state and prove the lower bound in Theorem 2.1 for the class of first-order full-batch algorithms defined above. However, our constructions readily generalize to local full-batch oracles that provide a complete description of F_S in an arbitrarily small neighborhood of the query point [25, 18]. Such oracles subsume second-order oracles, and consequently our generalization lower bounds hold also for second-order full-batch algorithms. 2.2 Discussion Theorem 2.1 suggests that full-batch first-order algorithms are inferior to other types of first-order algorithms that operate with access to individual examples, such as SGD. Importantly, this separation is achieved not in terms of the optimization performance but in terms of the generalization performance. In light of this result, we next discuss and revisit the role of the optimization algorithm in the context of SCO. In particular, we wish to discuss the implications for what are perhaps the two most prominent full-batch optimization methods, GD and regularized GD, and in turn compare them. Gradient descent. Perhaps the simplest example of a full-batch method is (projected) GD: GD is an iterative algorithm that at each iteration performs an update step w_t = Π_W[w_{t−1} − η∇F_S(w_{t−1})], where W is a convex set onto which we project the iterated step. The output of GD is normally taken to be w_S = (1/T) Σ_t w_t (or a randomly chosen w_t). Notice that each step requires one call to a full-batch oracle, and a single projection operation.
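The projected GD variant just described can be written directly on top of such an oracle. The sketch below is again only illustrative: the helper names, the zero initialization, and returning the average iterate are our own choices, consistent with the description in the text but not taken from the authors' code.

```python
import numpy as np

def project_unit_ball(w):
    # Euclidean projection onto W = {w : ||w||_2 <= 1}.
    norm = np.linalg.norm(w)
    return w if norm <= 1.0 else w / norm

def full_batch_gd(oracle, dim, eta, T):
    """Projected full-batch GD: w_t = Pi_W[w_{t-1} - eta * grad F_S(w_{t-1})].

    `oracle` is a full-batch first-order oracle as sketched earlier; only the
    averaged gradient it returns is used, never per-example information.
    Returns the average iterate, a common choice for the output w_S.
    """
    w = np.zeros(dim)
    iterates = []
    for _ in range(T):
        grad, _ = oracle(w)
        w = project_unit_ball(w - eta * grad)
        iterates.append(w)
    return np.mean(iterates, axis=0)
```

Each loop iteration issues exactly one oracle call, so the oracle complexity T coincides with the iteration count, as noted in the text.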
The convergence analysis of GD to the optimal solution of the empirical risk has been widely studied. Specifically, if n is the sample size, it is known that with η = O(1/√n) and T = O(n), GD converges to a minimizer of F_S that is O(1/√n)-suboptimal. For the exact variant of GD depicted above, the generalization performance was analyzed in the work of Amir et al. [2], which showed that with T = O(n) steps, GD will suffer Ω(1/n^{1/4}) generalization error. Theorem 2.1 extends the above result to any variant of GD (dynamic learning rate, noisy GD, normalized GD, etc.). Regularized gradient descent. We would also like to discuss the implication of Theorem 2.1 with respect to regularized variants of GD that operate on the regularized empirical risk F̂_S(w) = λr(w) + F_S(w). The main motivation for introducing the regularization term r is to avoid overfitting, and a popular choice for r is the Euclidean norm r(w) = ‖w‖₂². This choice leads to the following update rule for GD: w_{t+1} = Π_W[(1 − 2λη_t)w_t − η_t∇F_S(w_t)]. Again, this update can be implemented using a single first-order full-batch oracle call that computes the quantity ∇F_S(w_t). More generally, for any data-independent r, GD on F̂_S is a full-batch algorithm (note that we are not concerned with the computational cost of computing ∇r(w_t), since it does not factor into the oracle complexity). When r is the Euclidean norm, the minimizer of F̂_S is known to enjoy (with the choice λ = O(1/√n)) an optimal generalization error of O(1/√n) [6, 28]. This demonstrates the power of regularization and how it can provably induce generalization. Nevertheless, Theorem 2.1 still applies to any optimization method over F̂_S. Since optimization of F̂_S (the regularized empirical risk) to O(1/√n)-precision can be done via a full-batch method, and with less than O(n) calls, we observe that there are methods that minimize the regularized empirical risk but, due to Theorem 2.1, do not reach the optimal generalization error. The role of regularization. Finally, in light of Theorem 2.1, let us compare the different variants of GD and regularized GD that do generalize well, in order to sharpen our understanding of the role of regularization in generalization. The conclusion of Theorem 2.1 is that any full-batch method that generalizes well performs at least Ω(n²) steps. For regularized GD with ℓ2 regularization, O(n²) steps are indeed sufficient. In particular, with O(n²) iterations we can find a solution that has O(1/n) empirical error. Any such solution would enjoy a generalization error of O(1/√n) [28]. For GD, Bassily et al. [4] showed that O(n²) iterations would also suffice to achieve O(1/√n) error. This is achieved by tuning the learning rate to η = O(1/n^{3/2}). Notice that this improvement does not require any type of added regularization. To summarize, both GD and regularized GD with optimal parameters require Θ(n²) iterations to attain the optimal O(1/√n) generalization error. Overall then, explicitly adding regularization is not necessary, nor does it improve the convergence rate. One might be tempted to believe that tuning the learning rate in GD implicitly induces some sort of regularization. For example, one might imagine that GD can be biased towards the minimal norm solution, which might explain the redundancy of regularizing by this norm. However, this turns out also to be false: Dauber et al. [11] showed how GD (with any reasonable choice of learning rate) can diverge from the minimal norm solution.
In fact, for any regularization term r, one can find examples where GD does not converge to the regularized solution. Thus, even though GD and regularized GD are comparable algorithms in terms of generalization and oracle complexity, they are distinct in terms of the solutions they select. 3 Technical Overview In this section we give an overview of our construction and approach towards proving Theorem 2.1. For the sake of exposition, we will describe here a slightly simpler construction which proves the main result only for algorithms that remain in the span of the gradients. In more detail, let us examine the family of iterative algorithms of the form w_t ∈ span{∇F_S(w_0), ∇F_S(w_1), . . . , ∇F_S(w_{t−1})} ∩ W,   (5) where W is the unit ball and ∇F_S(w_t) is the full-batch oracle response to query w_t as defined in (2) above. Well-studied algorithms such as GD and GD with standard ℓ2 norm regularization fall into this category of algorithms. To extend the lower bound to algorithms not restricted to the gradient span, we refine the simpler construction and apply well-established techniques of random embedding in a high-dimensional space. We discuss these modifications briefly at the end of this section and provide the full details in Section 4 and the full version of the paper [1]. 3.1 A simpler construction Let us fix n, d ≥ 1 and parameters z = (α, ε, γ) ∈ {0, 1}^d × ℝ × ℝ² = Z, such that α ∈ {0, 1}^d, ε > 0 and γ₁, γ₂ > 0. Define the hard instance f_(6) : ℝ^{d+2} × Z → ℝ as follows: f_(6)(w; (α, ε, γ)) = g_γ(w; α) + γ₁v_α · w + εw · e_{d+2} + r(w),   (6) where g_γ, v_α and r are • g_γ(w; α) := √(Σ_{i∈[d]} α(i)h_γ²(w(i))) with h_γ(a) := 0 if a ≥ −γ₂, and a + γ₂ if a < −γ₂, • r(w) := max{0, max_{i∈[d+1]}{w(i)}}, • v_α(i) := −1/(2n) if α(i) = 0; +1 if α(i) = 1; 0 if i ∈ {d+1, d+2}, and e_{d+2} is the (d+2)'th standard basis vector. The distribution D we will consider is uniform over α. That is, we draw α ∈ {0, 1}^d uniformly at random and pick the function f_(6)(w; (α, ε, γ)). The parameters γ₁ and γ₂ of the construction should be thought of as arbitrarily small. In particular, the term γ₁v_α · w in Eq. (6) should be thought of as negligible, and the first term, g_γ, is roughly g_γ(w; α) ≈ √(Σ_{i∈[d]} α(i)(max{−w(i), 0})²). Another useful property of the construction is that the population risk F(w) = E_{z∼D} f_(6)(w; z) is minimized at w* ≈ −e_{d+2}, with expected loss F(w*) ≈ −ε. However, as we will see, the choice of the perturbation vector v_α and the term r(w) hinder the learner from observing this coordinate; the first Ω(ε⁻⁴) queries are constrained to a linear subspace where all the points have a high generalization error due to the expectation of the first term g_γ. 3.2 Analysis We next state the main lemmas we use, with proofs deferred to the full version of the paper [1]. Given a sample S, let us denote v̄ = (1/n) Σ_{α∈S} v_α, and span₁{u₁, u₂, . . .} := span{u₁, u₂, . . .} ∩ W. Additionally, given a fixed sample we write I(S) = {i : α(i) = 0 ∀α ∈ S} ∪ {d+1} for the set of coordinates i ∈ [d] such that α(i) = 0 for every α in the sample S, plus the coordinate d+1. Lemma 3.1. Let γ₁ ≤ 1/(2T), γ₂ = 2γ₁/ε, and suppose that the sample S satisfies |I(S)| > T. Then there exists a first-order full-batch oracle such that for any algorithm that adheres to w_t ∈ span₁{∇F_S(w_0), ∇F_S(w_1), . . . , ∇F_S(w_{t−1})},   (7) with respect to f(w; (α, ε, γ)) defined in Eq. (6), we have w_t ∈ span₁_{i∈I_t(S)}{γ₁v̄ + εe_{d+2} + e_i} for all t ∈ [T], where I_t(S) is the set of the t+1 largest coordinates in I(S).
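For readers who prefer code, the following is a rough numerical sketch of the simplified hard instance of Eq. (6). The indexing convention (0-based), the helper names, and the choice to evaluate only the function value (not its sub-gradient) are ours; it is purely illustrative and plays no role in the formal argument.

```python
import numpy as np

def f_hard(w, alpha, eps, gamma1, gamma2, n):
    """Value of the simplified hard instance f(w; (alpha, eps, gamma)) of Eq. (6).

    w is a vector in R^{d+2}; alpha is a 0/1 vector of length d; n is the sample size.
    """
    d = len(alpha)
    # hinge h_gamma(a): 0 for a >= -gamma2, otherwise a + gamma2
    h = np.minimum(w[:d] + gamma2, 0.0)
    g = np.sqrt(np.sum(alpha * h ** 2))
    # perturbation vector v_alpha: -1/(2n) where alpha(i) = 0, +1 where alpha(i) = 1,
    # and 0 on the last two coordinates
    v = np.where(alpha == 1, 1.0, -1.0 / (2 * n))
    linear = gamma1 * (v @ w[:d]) + eps * w[d + 1]
    r = max(0.0, float(np.max(w[:d + 1])))   # Nemirovski-style max term over the first d+1 coords
    return g + linear + r

# Example: at w = -e_{d+2} the value is -eps, matching the population minimizer described above.
d, n, eps = 8, 4, 0.1
alpha = np.random.randint(0, 2, size=d)
w_star = np.zeros(d + 2)
w_star[d + 1] = -1.0
print(f_hard(w_star, alpha, eps, gamma1=1e-4, gamma2=1e-3, n=n))  # prints approximately -eps
```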
We next observe that in any span of the form {γ₁v̄ + εe_{d+2} + e_i}_{i∈I_T(S)} such that |I_T(S)| ≤ T, we cannot find a solution with better risk than 0. On the other hand, note that for w̄ = −e_{d+2}, we have that f_(6)(w̄; (α, ε, γ)) = −ε. In other words, our lower bound stems from the following result: Lemma 3.2. For sufficiently small γ₁ ≤ 2nεγ₂, γ₂ ≤ ε/√(4T), and any vector ‖v̄‖ ≤ √d, any output w_S ∈ span₁_{i∈I_T(S)}{γ₁v̄ + εe_{d+2} + e_i} satisfies (1/2)√(Σ_{i∈[d]} h_γ²(w_S(i))) + εw_S(d+2) ≥ min{1 − 2ε²√T, 0} − (1/2)ε.   (8) Lower bound proof sketch for span-restricted algorithms of the form (5). First, observe that the probability of an arbitrary index i to satisfy α(i) = 0 for all α ∈ S is (1/2)^n. Therefore, |I(S)| − 1, the number of indexes out of the possible d for which this holds, is distributed as a binomial with d experiments and success probability p = 2^{−n}. Using elementary probability arguments one can show that for sufficiently large d we have |I(S)| > T with high probability; see Claim B.2 in the appendix. This implies that the conditions of Lemmas 3.1 and 3.2 hold w.h.p. To conclude, we relate the LHS of Eq. (8) to the expected risk F(w) = E_{α∼D}[f_(6)(w; (α, ε, γ))] = E_{α∼D}[g_γ(w; α)] + γ₁ · E_{α∼D}[v_α] · w + εw · e_{d+2} + r(w). As g_γ(w; α) is convex w.r.t. α (since α(i) = α²(i)) we can apply Jensen's inequality with E_{α∼D}[α(i)] = 1/2 to obtain: E_{α∼D}[g_γ(w_S; α)] ≥ (1/2)√(Σ_{i∈[d]} h_γ²(w_S(i))). Applying the Cauchy–Schwarz inequality to the second term, while also using the facts that ‖v_α‖ ≤ √d and that w_S is in the unit ball, we get: γ₁ E_{α∼D}[v_α] · w ≥ −γ₁ E_{α∼D}[‖v_α‖ · ‖w‖] ≥ −γ₁√d. For sufficiently small γ₁ this term is negligible, and since r(w) ≥ 0 we get that the expected risk is approximately the LHS term in Eq. (8). Lastly, recalling that F(−e_{d+2}) = −ε, we get that F(w_S) − min_{w∈W} F(w) ≥ (1/2)ε + min{1 − 2ε²√T, 0} w.h.p. The same lower bound (up to a constant) also holds in expectation by the law of total expectation. Our distribution is supported on 5-Lipschitz convex functions, so that re-parametrizing (1/10)ε → ε as well as f_(6) yields the claimed lower bound (4) for the case of span-restricted algorithms. 3.3 Handling general full-batch algorithms The above construction establishes an Ω(1/ε⁴) oracle complexity lower bound on any algorithm whose iterates lie in the span of the previous gradients. While this covers a large class of algorithms, techniques like preconditioning [13], coordinate methods [27] and randomized smoothing [14] do not satisfy this assumption. In fact, a trivial algorithm that always outputs −e_{d+2} will solve the hard instance (6) in a single iteration. To address general algorithms, we employ a well-established technique in optimization lower bounds [30, 8, 12] wherein we embed a hard instance f(w; z) for span-constrained algorithms in a random high-dimensional space. More concretely, we draw a random orthogonal matrix U ∈ ℝ^{d′×d} (with U⊤U = I_{d×d}) and consider the d′ > d dimensional instance f_U(w; z) = f(U⊤w; z) along with its corresponding empirical objective F_{S,U}(w) = (1/n) Σ_{i∈[n]} f_U(w; z_i). Roughly speaking, we show that for a general algorithm operating with the appropriate subgradient oracle for F_{S,U}, the iterate w_t is approximately in the span of {∇F_{S,U}(w_0), . . . , ∇F_{S,U}(w_{t−1})}, in the sense that the component of w_t outside that span is nearly orthogonal to the columns of U.
Consequently, the response of the oracle to the query w_t at iteration t is, with high probability, identical to the information it would return if queried with the projection of w_t onto the span of the previously observed gradients. This reduces, in a sense, the problem back to the span-restricted setting described above. For the embedding technique to work, we must robustify the hard instance construction so that small perturbations around points in the span of previous gradients do not "leak" additional information about the embedding U. To do that we make a fairly standard modification to the component r(w) in (6) (known as Nemirovski's function [12, 7]), replacing it with max{0, max_{i∈[d]}{w(i) + iγ′}, w(d+1) + γ′′}, where γ′, γ′′ are small offset coefficients that go to zero as the embedding dimension d′ tends to infinity. We provide the full construction and the proof of Theorem 2.1 in Section 4 and the full version of the paper [1]. 4 The Full Construction As explained above, the key difference between the simplified construction f_(6) and the full construction with which we prove Theorem 2.1 is that we modify the Nemirovski function term r(w) in order to make it robust to queries that are nearly within a certain linear subspace. In particular, we bias the different terms in the maximization defining r(w) so as to control the index of the coordinate attaining the maximum. For ease of reference, we now provide a self-contained definition of our full construction with the modified Nemirovski function. Fix n, d ≥ 1 and parameters z = (α, ε, γ) ∈ {0, 1}^d × ℝ × ℝ³ = Z such that α ∈ {0, 1}^d, ε > 0 and γ₁, γ₂, γ₃ > 0. Define the hard instance f_(9) : ℝ^{d+2} × Z → ℝ as follows: f_(9)(w; (α, ε, γ)) = g_γ(w; α) + γ₁v_α · w + εw · e_{d+2} + r(w),   (9) where g_γ, v_α and r are • g_γ(w; α) := √(Σ_{i∈[d]} α(i)h_γ²(w(i))) with h_γ(a) := 0 if a ≥ −γ₂, and a + γ₂ if a < −γ₂, • r(w) := max{0, max_{i∈[d+1]}{w(i) + σ_i}} with σ_i := i·γ₁γ₃/(4dn) if i ∈ [d], and σ_i := 2γ₃ if i = d+1, • v_α(i) := −1/(2n) if α(i) = 0; +1 if α(i) = 1; 0 if i ∈ {d+1, d+2}, and e_i is the i'th standard basis vector in ℝ^{d+2}. We consider a distribution over α that is uniform over {0, 1}^d; that is, we draw α ∈ {0, 1}^d uniformly at random and pick the function f_(9)(w; (α, ε, γ)). The rest of the parameters are set throughout the proof as follows: γ₁ = εγ₂/4, γ₂ = ε/(T√d), γ₃ = ε/16.   (10) With this choice of distribution as well as our choice of parameters we obtain, since ‖v_α‖ ≤ √d and by our choice of γ₁ (as well as Jensen's inequality and r(·) ≥ 0): F(w) = E_{α∼D}[f_(9)(w; (α, ε, γ))] ≥ (1/2)√(Σ_{i∈[d]} h_γ²(w(i))) + εw(d+2) − ε/4.   (11) Notice also that, for the choice w* = −e_{d+2}, since r(w*) = 2γ₃: F(w*) = −ε + ε/8 = −7ε/8.   (12) Our development makes frequent use of the following notation from Section 3: I(S) = {i : α(i) = 0 ∀α ∈ S} ∪ {d+1}, I_t(S) = the t largest elements in I(S), and v̄ = (1/n) Σ_{α∈S} v_α. We begin with the following lemma, which is a robust version of Lemma 3.1 in Section 3. The proof is provided in the full version of the paper [1]. Lemma 4.1. Suppose that w_0 = 0. Consider f_(9)(w; (α, ε, γ)) with parameters as in Eq. (10). Suppose S is a sample such that |I(S)| > t + 1. Assume that w is such that w = w_t + q, where w_t ∈ span₁_{i∈I_t(S)}{γ₁v̄ + εe_{d+2} + e_i}, and ‖q‖∞ ≤ min{γ₂/3, γ₁γ₃/(16dn)}.   (13) Then, ∇F_S(w) = γ₁v̄ + εe_{d+2} + e_i for some i ∈ I_{t+1}(S), where I_t(S) is the set of the t+1 largest coordinates in I(S).
The following corollary states that the gradient oracle's answers are resilient to small perturbations of the query (as long as they are in the vicinity of the "right" subspace); the proof is provided in the full version of the paper [1]. Corollary 4.2. Assume that w is such that w = w_t + q, where w_t ∈ span₁_{i∈I_t(S)}{γ₁v̄ + εe_{d+2} + e_i}, and ‖q‖∞ ≤ (1/(4√d)) min{γ₂/3, γ₁γ₃/(16dn)}.   (14) Then, ∇F_S(w) = ∇F_S(Π_{t+1}(w)) and F_S(w) = F_S(Π_{t+1}(w)), where Π_t is the projection onto span_{i∈I_t(S)}{γ₁v̄ + εe_{d+2} + e_i}. Acknowledgements and Disclosure of Funding This work has received support from the Israeli Science Foundation (ISF) grant no. 2549/19 and grant no. 2188/20, from the Len Blavatnik and the Blavatnik Family foundation, from the Yandex Initiative in Machine Learning, and from an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google.
1. What is the focus of the paper regarding stochastic convex optimization? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. Do you have any concerns or questions about the proof technique used in the paper? 4. How does the reviewer assess the significance and relevance of the lower bound construction in the paper? 5. Are there any limitations or potential improvements suggested by the reviewer for future research?
Summary Of The Paper Review
Summary Of The Paper The paper provides a lower bound on the iteration complexity of any full batch algorithm in stochastic convex optimization. In particular, they show that for every T, there exists a function (which depends on T) for which the suboptimality in terms of excess risk for any full batch algorithm run for T steps is lower bounded by 1/T^{1/4}. The proof technique extends upon the lower bound construction of Amir et. al. 2021 [1] for the iteration complexity of GD algorithm, and is based on a span based argument. Review I support accepting the paper. The main reasons for a low score are: (1. ) SGD vs ERM: We already know from earlier works that Empirical risk minimizing (ERM) solutions may fail to have low suboptimality at the population level (for eg. in [14], [25]). Full batch descent algorithms find the ERM solution. I do not see a lot of motivation in proving lower bounds for algorithms which go towards solutions which we know may not generalize. (2.) Iteration complexity vs work: We know that SGD is statistically optimal for stochastic convex optimization. Furthermore, it is also optimal work wise. It is easy to see that running event 1 iteration of GD is as expensive as running SGD to convergence. Why should we care about lower bounds for full batch algorithms for which we already know that even 1 iteration is worse than the optimal and easy to implement algorithm. (3.) GD with projection vs without projection: In the paper, the authors run GD with projection and compare the suboptimality of the returned solution with the minimizer in the unit norm ball. In practice, we generally run SGD / GD without projection? Is there any hope to extend the lower bounds for full batch methods run without the projection step. (4.) Statement of the results: (a) The authors should add in the main body that the function depends on T (as the parameters and the problem dimension are chosen depending on the value of T). (b) I am confused by the way the result is presented in Theorem 3.1. If we look at the proof at the end of page 9, we see that there is an extra \beta-dependent term multiplying (1/4 - \eps^2 T) which is data dependent. This term can be arbitrarily small and has been hidden in the proof of Theorem 3.1. One can simply get rid of this term by setting \eps = 1/T^{1/4} which the authors should do. This would just give a lower bound of 1/T^{1/4} in Theorem 3.1. The paper is well written otherwise. It might be useful to explicitly state the upper / lower bounds on \gamma_1 and \gamma_2 in various lemmas in the main body.
NIPS
Title Never Go Full Batch (in Stochastic Convex Optimization) Abstract We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: these are first-order methods that only access the exact gradient of the empirical risk (rather than gradients with respect to individual data points), that include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants. We provide a new separation result showing that, while algorithms such as stochastic gradient descent can generalize and optimize the population risk to within ε after $ (1/ε2) iterations, full-batch methods either need at least Ω(1/ε4) iterations or exhibit a dimension-dependent sample complexity. 1 Introduction Stochastic ConvexOptimization (SCO) is a fundamental problem that received considerable attention from the machine learning community in recent years [28, 15, 4, 11, 2]. In this problem, we assume a learner that is provided with a finite sample of convex functions drawn i.i.d. from an unknown distribution. The learner’s goal is to minimize the expected function. Owing to its simplicity, it serves as an almost ideal theoretical model for studying generalization properties of optimization algorithms ubiquitous in practice, particularly first-order methods which utilize only first derivatives of the loss rather than higher-order ones. One prominent approach for SCO—and learning more broadly—is to consider the empirical risk (the average objective over the sample) and apply a first-order optimization algorithm to minimize it. The problem of learning is then decoupled into controlling the optimization error over the empirical risk (training error) and bounding the difference between the empirical error and the expected error (generalization error). In convex optimization, the convergence of different first-order methods has been researched extensively for many years (e.g., [26, 25, 5]), and we currently have a very good understanding of this setting in terms of upper as well lower bounds on worst-case complexity. However, in SCO where the generalization error must also be taken into account, our understanding is still lacking. In fact, this is one of the few theoretical learning models where the optimization method affects not only the optimization error but also the generalization error (distinctively from models such as PAC learning and generalized linear models). In particular, it has been shown [28, 15] that some minima of the empirical risk may obtain large generalization error, while other minima have a vanishingly small 35th Conference on Neural Information Processing Systems (NeurIPS 2021). generalization error. To put differently, learning in SCO is not only a question of minimizing the empirical risk, but also a question of how one minimizes it. However, the results of [28, 15] leave open the question of whether concrete optimization also have different generalization properties. Towards better understanding, Amir et al. [2] recently studied the generalization properties of fullbatch gradient descent (GD), where each step is taken with respect to the gradient of the empirical risk. For GD (and a regularized variant thereof), they gave a lower bound on the generalization error as a function of iteration number, which is strictly larger than the well-known optimal rate obtained by stochastic gradient descent (SGD), where each step is taken with respect to the gradient at a sampled example. 
Notably, the lower bound of [2] precisely matches the dimension-independent stability-based upper bound recently shown for full-batch GD by Bassily et al. [4]. The separation between full-batch GD and SGD is the first evidence that not only abstract Empirical Risk Minimizers may fail to generalize in SCO, but in fact also basic methods such as GD could be prone to such overfitting. A natural question is, then, whether overfitting is inherent to full-batch algorithms, which minimize the objective only through access to the exact empirical risk, or whether this suboptimality can be remedied by adding regularization, noise, smoothing, or any other mechanism for improving the generalization of GD. In this work we present and analyze a model of full-batch optimization algorithms for SCO. Namely, we focus on algorithms that access the empirical risk only via a first-order oracle that computes the exact (full-batch) gradient of the empirical loss, rather than directly accessing gradients with respect to individual samples. Our main result provides a negative answer to the question above by significantly generalizing and extending the result of Amir et al. [2]: we show that any optimization method that uses full-batch gradients needs at least Ω(1/ε^4) iterations to minimize the expected loss to within ε error. This is in contrast with the empirical loss, which can be minimized with only O(1/ε^2) steps. Comparing SGD and GD in terms of the sample size n, we see that SGD converges to an optimal generalization error of O(1/√n) after O(n) iterations, whereas a full-batch method must perform Ω(n^2) iterations to achieve the same O(1/√n) test error. We emphasize that we account here for the oracle complexity, which coincides with the iteration complexity in the case of gradient methods. In terms of individual gradient calculations, while SGD uses at most O(n) gradient calculations (one sample per iteration), a full-batch method will perform Ω(n^3) calculations (n samples per iteration). The above result is applicable to a wide family of full-batch learning algorithms: regularized GD (with any data-independent regularization function), noisy GD, GD with line-search or adaptive step sizes, GD with momentum, proximal methods, coordinate methods, and many more. Taken together with the upper bound of Bassily et al. [4], we obtain a sharp rate of Θ(1/ε^4) for the generalization complexity of full-batch methods. Surprisingly, this rate is achieved by standard GD (with an unusual step-size choice of η = Θ(ε^3)), and it cannot be improved by adding regularization of any sort, nor by adding noise or any other form of implicit/explicit bias.

1.1 Related work

This work extends and generalizes the results of Amir et al. [2], who proved generalization lower bounds for GD (and a specific instance of regularized GD). Our work shows that in fact any full-batch method will suffer from similar lower bounds. Our construction builds upon the one used in [2], which in turn builds upon previous constructions [4, 28]. However, our arguments and proofs here are more challenging, as we need to reason about a general family of algorithms, and not about a specific algorithm whose trajectory can be analyzed directly. Our developments also build on ideas from the literature on oracle complexity lower bounds in optimization [25, 26, 30, 8, 12, 9].
In particular, we first prove our result in the simplified setting of algorithms constrained to the span of observed gradients [25, 26] and subsequently lift it to general algorithms using a random high-dimensional embedding technique proposed by Woodworth and Srebro [30] and later refined in [8, 12]. However, while these works lower bound what we call the empirical risk, we lower bound the generalization error. This requires us to develop a somewhat different argument for how the span of the gradients evolves during the optimization: in prior work, the algorithm learns the components of the solution coordinate by coordinate, whereas in our work the true (generalizing) solution is present in the observed gradients from the first query, but spurious sampling artifacts drown it out. Empirical studies (outside the scope of SCO) support the claim that generalization capabilities degrade as the batch size increases. Specifically, Zhu et al. [33] indicate that SGD outperforms GD in terms of generalization. The works of Keskar et al. [22] and Hoffer et al. [20] exhibit a similar phenomenon, in which small-batch SGD generalizes better than large-batch SGD with the same iteration budget. We provide the first theoretical evidence for this phenomenon for convex losses. Several theoretical studies explore the convergence of stochastic methods that use mini-batches [10, 23, 31]. Note that this setting differs from ours, as they assume access to mini-batches sampled without replacement, whereas full-batch means we reuse the same (full) batch with each gradient step. There has also been recent progress in improving the generalization capabilities of GD. Wu et al. [32] interpret mini-batch SGD as a noisy version of GD. They propose a modified algorithm with noise injected into the full-batch gradients. Geiping et al. [16] propose a GD-based training scheme that achieves CIFAR-10 generalization performance comparable to standard SGD training. Interestingly, both proposed algorithms require access to sample points and are therefore not "full-batch" by our definition: the scheme of [32] requires sample-point data for computing the noise, while the GD variant of [16] uses mini-batch statistics to compute a regularization term (as well as batch normalization). Our work shows that (in SCO) this is unavoidable: namely, no data-independent noise or full-batch regularization can be used to improve generalization at a reasonable computational budget. Several other works study the generalization performance of GD [29, 17, 21, 24]. The work of Soudry et al. [29], for example, examines GD on unregularized logistic regression problems. They show that, in the limit, GD converges to a well-generalizing solution by arguing about the bias of the algorithm. Interestingly, both our and their results require slow training, beyond what is required for empirical error optimization. Another work that highlights the slow convergence of GD is that of Bassily et al. [4]. They were the first to address uniform stability of (non-smooth) GD and SGD, and provided tight bounds. Stability entails generalization, hence our results lead to stability lower bounds for any full-batch method. Consequently, we extend the lower bounds for GD in the work of Bassily et al. [4] to a wider class. It might be thought that the instability argument of Bassily et al. [4] can be used to obtain similar generalization lower bounds; however, we note that their techniques also prove instability of SGD (which does generalize).
Hence, instability does not immediately imply, in this setting, lack of generalization. Finally, we note that under smoothness and strong convexity, it is well known that improved rates can be obtained. Specifically, using the stability bound of Bousquet and Elisseeff [6] one can show that we can achieve generalization error of O(1/√n) after O(n) iterations if the population risk is O(1)-strongly convex. The arguments of Hardt et al. [19] imply a generalization bound for instances where every sample risk is O(√n)-smooth. Our result implies that, even though these special families of functions enjoy appealing learning rates, in general it is impossible to obtain better rates by strong-convexifying or smoothing problem instances via first-order full-batch oracle queries.

2 Problem Setup and Main Results

We study the standard setting of stochastic convex optimization. In this setting, a learning problem is specified by a fixed domain W ⊆ ℝ^d in d-dimensional Euclidean space, and a loss function f : W × Z → ℝ which is both convex and L-Lipschitz with respect to its first argument (that is, for any z ∈ Z the function f(w; z) is L-Lipschitz and convex with respect to w). In particular, throughout the paper our construction consists of 1-Lipschitz functions, and we will focus on a fixed domain W defined to be the unit Euclidean ball in ℝ^d, namely W = {w : ‖w‖_2 ≤ 1}. We also assume that there exists an unknown distribution D over parameters z, and the goal of the learner is to optimize the true risk (or true loss, or population risk) defined as follows:

F(w) := E_{z∼D}[f(w; z)].  (1)

We assume that a sample S = {z_1, . . . , z_n} is drawn from the distribution D, and the learner has to output w_S ∈ W (the exact access the learner has to the sample, and how w_S may depend on S, is discussed below). We require the solution to be ε-optimal in expectation for some parameter ε > 0, i.e.,

E_{S∼D^n}[F(w_S)] − min_{w★∈W} F(w★) ≤ ε.

As discussed, the standard setting assumes that the learner has direct access to the i.i.d. sample, as well as to the gradients of the loss function (i.e., a first-order oracle). In this work, though, we focus on a specific family of full-batch methods. Hence, the optimization process is described as follows: first, an i.i.d. sample S = (z_1, . . . , z_n) is drawn from D. Then, the learner is provided with access only to the empirical risk via a full-batch first-order oracle, which we define next.

Full-batch first-order oracle. Consider a fixed sample S = (z_1, . . . , z_n) of size n, drawn i.i.d. from D. The empirical risk over the sample S is

F_S(w) = (1/n) Σ_{i=1}^{n} f(w; z_i).

Then, a full-batch first-order oracle is a procedure that, given input w ∈ W, outputs

O(w) := (∇F_S(w); F_S(w)),

where ∇F_S(w) is an empirical risk sub-gradient of the form

∇F_S(w) = (1/n) Σ_{i=1}^{n} ∇f(w; z_i),  (2)

and each sub-gradient ∇f(w; z_i) is computed by the oracle as a function of w and z_i (that is, independently of z_j for j ≠ i). We emphasize that the sample is fixed throughout the optimization, so that the oracle computes the gradient of the same empirical risk function at every call, hence the name full-batch. Note that the sub-gradient with respect to a single data point, i.e., ∇f(w; z_i), is not accessible through this oracle, which only returns the average gradient over the sample S.
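To make the oracle model concrete, here is a minimal NumPy sketch of such an oracle. It is only an illustration of the access model above: the per-example functions `loss` and `subgrad` are hypothetical placeholders and not part of the paper.

```python
import numpy as np

def full_batch_oracle(w, sample, loss, subgrad):
    """Full-batch first-order oracle O(w) = (grad F_S(w), F_S(w)).

    `loss(w, z)` and `subgrad(w, z)` are per-example functions; each sub-gradient
    is computed from (w, z_i) alone, and only their averages over the fixed
    sample S are returned to the learner.
    """
    grads = np.stack([subgrad(w, z) for z in sample])
    losses = np.array([loss(w, z) for z in sample])
    return grads.mean(axis=0), losses.mean()
```

The learner never observes an individual term ∇f(w; z_i), only the average over S, which is exactly the restriction the lower bound exploits.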
Notice that our definition above is slightly narrower than a general sub-gradient oracle for the empirical risk, due to the requirement that the sub-gradients ∇f(w; z_i) are chosen independently of z_j for j ≠ i. Since we prove a lower bound here, this restriction strengthens our result. We make this restriction to avoid some degenerate constructions (which in fact can even be used to fail SGD if the gradient at z_i may depend on the whole sample) that have no practical implications.

Full-batch first-order algorithm. A full-batch (first-order) method is naturally defined as any algorithm that has access to the optimization objective, namely the empirical risk F_S, only via the full-batch first-order oracle. In particular, if w_t is the t'th query of the algorithm to the full-batch oracle, then w_t has to be of the form

w_t = Q_t(O(w_0), . . . , O(w_{t−1})),  (3)

where Q_t : (ℝ^{d+1})^t → W is a fixed (possibly randomized) mapping. At the end of the process the algorithm outputs w_S. We study the algorithm's oracle complexity, which is the number of iterations T the algorithm performs before halting. Therefore, we assume without loss of generality that w_S = w_T, i.e., the algorithm's output is its T'th query.

2.1 Main result

In this section we establish our main result, which provides a generalization lower bound for full-batch first-order algorithms. The complete proof is provided in the full version of the paper [1].

Theorem 2.1. Let ε > 0 and n, T ∈ ℕ; there exists d = poly(2^n, T, 1/ε) such that the following holds. For any full-batch first-order algorithm with oracle complexity at most T, there exists a 1-Lipschitz convex function f(w; z) over W, the unit ball in ℝ^d, and a distribution D over Z such that, for some universal constant c > 0:

E_{S∼D^n}[F(w_S)] ≥ min_{w★∈W} F(w★) + ε + Ω(min{1 − cε^2 √T, 0}).  (4)

An immediate consequence of Theorem 2.1 is that in order to obtain less than ε true risk we need at least T = Ω(1/ε^4) iterations. For simplicity, we state and prove the lower bound in Theorem 2.1 for the class of first-order full-batch algorithms defined above. However, our constructions readily generalize to local full-batch oracles that provide a complete description of F_S in an arbitrarily small neighborhood of the query point [25, 18]. Such oracles subsume second-order oracles, and consequently our generalization lower bounds hold also for second-order full-batch algorithms.

2.2 Discussion

Theorem 2.1 suggests that full-batch first-order algorithms are inferior to other types of first-order algorithms that operate with access to individual examples, such as SGD. Importantly, this separation is achieved not in terms of the optimization performance but in terms of the generalization performance. In light of this result, we next discuss and revisit the role of the optimization algorithm in the context of SCO. In particular, we wish to discuss the implications for what are perhaps the two most prominent full-batch optimization methods, GD and regularized GD, and in turn compare them.

Gradient descent. Perhaps the simplest example of a full-batch method is (projected) GD: GD is an iterative algorithm that at each iteration performs the update step

w_t = Π_W[w_{t−1} − η∇F_S(w_{t−1})],

where W is a convex set onto which we project the iterated step. The output of GD is normally taken to be w_S = (1/T) Σ_t w_t (or a randomly chosen w_t). Notice that each step requires one call to a full-batch oracle and a single projection operation.
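For concreteness, a minimal sketch of projected GD driven purely by such an oracle follows. It reuses the `full_batch_oracle` sketch above; the step size `eta`, horizon `T`, and returning the average iterate are illustrative choices, not claims about the paper's exact variant.

```python
def project_unit_ball(w):
    """Euclidean projection onto W = {w : ||w||_2 <= 1}."""
    norm = np.linalg.norm(w)
    return w if norm <= 1.0 else w / norm

def projected_gd(oracle, dim, eta, T):
    """Projected GD: w_t = Pi_W[w_{t-1} - eta * grad F_S(w_{t-1})].

    `oracle(w)` returns (grad F_S(w), F_S(w)); one oracle call per step.
    Returns the average iterate, a usual choice of output w_S.
    """
    w = np.zeros(dim)
    iterates = []
    for _ in range(T):
        grad, _ = oracle(w)                     # single full-batch oracle call
        w = project_unit_ball(w - eta * grad)   # gradient step + projection
        iterates.append(w)
    return np.mean(iterates, axis=0)
```

A closure such as `lambda w: full_batch_oracle(w, sample, loss, subgrad)` plugs the earlier oracle sketch into this loop.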
The convergence of GD to the optimal solution of the empirical risk has been widely studied. Specifically, if n is the sample size, it is known that with η = O(1/√n) and T = O(n), GD converges to a minimizer of F_S that is O(1/√n)-suboptimal. For the exact variant of GD depicted above, the generalization performance was analyzed in the work of Amir et al. [2], who showed that with T = O(n) steps, GD will suffer Ω(1/n^{1/4}) generalization error. Theorem 2.1 extends the above result to any variant of GD (dynamic learning rate, noisy GD, normalized GD, etc.).

Regularized gradient descent. We would also like to discuss the implications of Theorem 2.1 with respect to regularized variants of GD that operate on the regularized empirical risk F̂_S(w) = λ r(w) + F_S(w). The main motivation for introducing the regularization term r is to avoid overfitting, and a popular choice for r is the squared Euclidean norm r(w) = ‖w‖_2^2. This choice leads to the following update rule for GD:

w_{t+1} = Π_W[(1 − 2η_t λ) w_t − η_t ∇F_S(w_t)].

Again, this update can be implemented using a single first-order full-batch oracle call that computes the quantity ∇F_S(w_t). More generally, for any data-independent r, GD on F̂_S is a full-batch algorithm.¹ When r is the squared Euclidean norm, the minimizer of F̂_S is known to enjoy (with the choice λ = O(1/√n)) an optimal generalization error of O(1/√n) [6, 28]. This demonstrates the power of regularization and how it can provably induce generalization. Nevertheless, Theorem 2.1 still applies to any optimization method over F̂_S. Since optimization of F̂_S (the regularized empirical risk) to O(1/√n)-precision can be done via a full-batch method with fewer than O(n) calls, we observe that there are methods that minimize the regularized empirical risk but, due to Theorem 2.1, do not reach the optimal generalization error.

¹Note that we are not concerned with the computational cost of computing ∇r(w_t), since it does not factor into the oracle complexity.

The role of regularization. Finally, in light of Theorem 2.1, let us compare the different variants of GD and regularized GD that do generalize well, in order to sharpen our understanding of the role of regularization in generalization. The conclusion of Theorem 2.1 is that any full-batch method that generalizes well performs at least Ω(n^2) steps. For regularized GD with ℓ_2 regularization, O(n^2) iterations are indeed sufficient. In particular, with O(n^2) iterations we can find a solution that has O(1/n) empirical error. Any such solution would enjoy a generalization error of O(1/√n) [28]. For GD, Bassily et al. [4] showed that O(n^2) iterations also suffice to achieve O(1/√n) error. This is achieved by tuning the learning rate to η = O(1/n^{3/2}). Notice that this improvement does not require any type of added regularization. To summarize, both GD and regularized GD with optimal parameters require Θ(n^2) iterations to attain the optimal O(1/√n) generalization error. Overall, then, explicitly adding regularization is not necessary, nor does it improve the convergence rate. One might be tempted to believe that tuning the learning rate in GD implicitly induces some sort of regularization. For example, one might imagine that GD is biased towards the minimal-norm solution, which might explain the redundancy of regularizing by this norm. However, this too turns out to be false: Dauber et al. [11] showed how GD (with any reasonable choice of learning rate) can diverge from the minimal-norm solution.
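As a brief aside, the ridge-regularized update discussed above fits the same oracle interface; a minimal sketch, reusing `project_unit_ball` from the previous sketch, with `eta` and `lam` as illustrative parameters:

```python
def regularized_gd_step(w, oracle, eta, lam):
    """One GD step on the regularized empirical risk lam * ||w||^2 + F_S(w).

    The regularizer's gradient 2*lam*w is data independent, so a single
    full-batch oracle call per step still suffices.
    """
    grad, _ = oracle(w)
    return project_unit_ball((1.0 - 2.0 * eta * lam) * w - eta * grad)
```

This makes explicit why regularized GD falls inside the full-batch model: the only data-dependent quantity in the step is the oracle's average gradient.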
In fact, for any regularization term r, one can find examples where GD does not converge to the regularized solution. Thus, even though GD and regularized GD are comparable algorithms in terms of generalization and oracle complexity, they are distinct in terms of the solutions they select.

3 Technical Overview

In this section we give an overview of our construction and approach towards proving Theorem 2.1. For the sake of exposition, we describe here a slightly simpler construction, which proves the main result only for algorithms that remain in the span of the gradients. In more detail, let us examine the family of iterative algorithms of the form

w_t ∈ span{∇F_S(w_0), ∇F_S(w_1), . . . , ∇F_S(w_{t−1})} ∩ W,  (5)

where W is the unit ball and ∇F_S(w_t) is the full-batch oracle's response to the query w_t, as defined in (2) above. Well-studied algorithms such as GD and GD with standard ℓ_2-norm regularization fall into this category of algorithms. To extend the lower bound to algorithms not restricted to the gradient span, we refine the simpler construction and apply well-established techniques of random embedding in a high-dimensional space. We discuss these modifications briefly at the end of this section and provide the full details in Section 4 and the full version of the paper [1].

3.1 A simpler construction

Let us fix n, d ≥ 1 and parameters z = (α, ε, γ) ∈ {0, 1}^d × ℝ × ℝ^2 = Z, such that α ∈ {0, 1}^d, ε > 0 and γ_1, γ_2 > 0. Define the hard instance f_(6) : ℝ^{d+2} × Z → ℝ as follows:

f_(6)(w; (α, ε, γ)) = g_γ(w; α) + γ_1 v_α · w + ε w · e_{d+2} + r(w),  (6)

where g_γ, v_α and r are

• g_γ(w; α) := √(Σ_{i∈[d]} α(i) h_γ^2(w(i))), with h_γ(a) := 0 if a ≥ −γ_2, and h_γ(a) := a + γ_2 if a < −γ_2;
• r(w) := max{0, max_{i∈[d+1]} w(i)};
• v_α(i) := −1/(2n) if α(i) = 0, +1 if α(i) = 1, and 0 if i ∈ {d+1, d+2};

and e_{d+2} is the (d+2)'th standard basis vector. The distribution we consider is uniform over α: we draw α ∈ {0, 1}^d uniformly at random and pick the function f_(6)(w; (α, ε, γ)). The parameters γ_1 and γ_2 of the construction should be thought of as arbitrarily small. In particular, the term γ_1 v_α · w in Eq. (6) should be thought of as negligible, and the first term, g_γ, is roughly

g_γ(w; α) ≈ √(Σ_{i∈[d]} α(i) (max{−w(i), 0})^2).

Another useful property of the construction is that the population risk F(w) = E_{z∼D}[f_(6)(w; z)] is minimized at w★ ≈ −e_{d+2}, with expected loss F(w★) ≈ −ε. However, as we will see, the choice of the perturbation vector v_α and the term r(w) hinders the learner from observing this coordinate; the first Ω(ε^{−4}) queries are constrained to a linear subspace in which every point has a high generalization error, due to the expectation of the first term g_γ.

3.2 Analysis

We next state the main lemmas we use, with proofs deferred to the full version of the paper [1]. Given a sample S, let us denote v̄ = (1/n) Σ_{α∈S} v_α, and span_1{u_1, u_2, . . .} := span{u_1, u_2, . . .} ∩ W. Additionally, given a fixed sample we write I(S) = {i : α(i) = 0 ∀α ∈ S} ∪ {d+1} for the set of coordinates i ∈ [d] such that α(i) = 0 for every α in the sample S, plus the coordinate d+1.

Lemma 3.1. Let γ_1 ≤ 1/(2T), γ_2 = 2γ_1/ε, and suppose that the sample S satisfies |I(S)| > T. Then there exists a first-order full-batch oracle such that, for any algorithm that adheres to

w_t ∈ span_1{∇F_S(w_0), ∇F_S(w_1), . . . , ∇F_S(w_{t−1})},  (7)

with respect to f(w; (α, ε, γ)) defined in Eq. (6), we have

w_t ∈ span_1{γ_1 v̄ + ε e_{d+2} + e_i : i ∈ I_t(S)}  for all t ∈ [T],

where I_t(S) is the set of the t+1 largest coordinates in I(S).
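To make the hard instance of Eq. (6) tangible, here is a direct, 0-indexed NumPy transcription; this is a sketch under the notation above (the function and argument names are illustrative, and the sample size n enters only through v_α):

```python
def hard_instance_f6(w, alpha, eps, gamma1, gamma2, n):
    """Evaluate f_(6)(w; (alpha, eps, gamma)) from Eq. (6).

    `w` lives in R^{d+2}; `alpha` is a {0,1}^d vector. Coordinates are
    0-indexed, so e_{d+2} corresponds to w[d+1].
    """
    d = alpha.shape[0]
    # g_gamma(w; alpha) = sqrt(sum_i alpha(i) * h_gamma(w(i))^2)
    h = np.where(w[:d] >= -gamma2, 0.0, w[:d] + gamma2)
    g = np.sqrt(np.sum(alpha * h ** 2))
    # v_alpha: +1 where alpha(i) = 1, -1/(2n) where alpha(i) = 0, 0 on the last two coords
    v = np.concatenate([np.where(alpha == 1, 1.0, -1.0 / (2 * n)), np.zeros(2)])
    # r(w) = max{0, max_{i in [d+1]} w(i)}
    r = max(0.0, float(np.max(w[:d + 1])))
    return g + gamma1 * float(v @ w) + eps * w[d + 1] + r
```

Evaluating this at w = −e_{d+2} returns approximately −ε, matching the claimed near-minimizer of the population risk.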
We next observe that within any span of the form span_1{γ_1 v̄ + ε e_{d+2} + e_i : i ∈ I_T(S)} with |I_T(S)| ≤ T, we cannot find a solution with risk better than 0. On the other hand, note that for w̄ = −e_{d+2} we have f_(6)(w̄; (α, ε, γ)) = −ε. In other words, our lower bound stems from the following result:

Lemma 3.2. For sufficiently small γ_1 ≤ 2nεγ_2 and γ_2 ≤ ε/√(4T), and any vector ‖v̄‖ ≤ √d, any output w_S ∈ span_1{γ_1 v̄ + ε e_{d+2} + e_i : i ∈ I_T(S)} satisfies

(1/2)√(Σ_{i∈[d]} h_γ^2(w_S(i))) + ε w_S(d+2) ≥ min{1 − 2ε^2 √T, 0} − ε/2.  (8)

Lower bound proof sketch for span-restricted algorithms of the form (5). First, observe that the probability that an arbitrary index i satisfies α(i) = 0 for all α ∈ S is (1/2)^n. Therefore |I(S)| − 1, the number of indices out of the possible d for which this holds, is distributed as a binomial with d trials and success probability p = 2^{−n}. Using elementary probability arguments one can show that for sufficiently large d we have |I(S)| > T with high probability; see Claim B.2 in the appendix. This implies that the conditions of Lemmas 3.1 and 3.2 hold w.h.p. To conclude, we relate the LHS of Eq. (8) to the expected risk

F(w) = E_{α∼D}[f_(6)(w; (α, ε, γ))] = E_{α∼D}[g_γ(w; α)] + γ_1 E_{α∼D}[v_α] · w + ε w · e_{d+2} + r(w).

As g_γ(w; α) is convex w.r.t. α (since α(i) = α^2(i)), we can apply Jensen's inequality with E_{α∼D}[α(i)] = 1/2 to obtain

E_{α∼D}[g_γ(w_S; α)] ≥ (1/2)√(Σ_{i∈[d]} h_γ^2(w_S(i))).

Applying the Cauchy–Schwarz inequality to the second term, while also using the facts that ‖v_α‖ ≤ √d and that w_S is in the unit ball, we get

γ_1 E_{α∼D}[v_α] · w ≥ −γ_1 E_{α∼D}[‖v_α‖ · ‖w‖] ≥ −γ_1 √d.

For sufficiently small γ_1 this term is negligible, and since r(w) ≥ 0 we get that the expected risk is approximately the LHS of Eq. (8). Lastly, recalling that F(−e_{d+2}) = −ε, we get that

F(w_S) − min_{w∈W} F(w) ≥ ε/2 + min{1 − 2ε^2 √T, 0}  w.h.p.

The same lower bound (up to a constant) also holds in expectation, by the law of total expectation. Our distribution is supported on 5-Lipschitz convex functions, so re-parametrizing ε/10 → ε (and rescaling f_(6) accordingly) yields the claimed lower bound (4) for the case of span-restricted algorithms.

3.3 Handling general full-batch algorithms

The above construction establishes an Ω(1/ε^4) oracle complexity lower bound on any algorithm whose iterates lie in the span of the previous gradients. While this covers a large class of algorithms, techniques like preconditioning [13], coordinate methods [27] and randomized smoothing [14] do not satisfy this assumption. In fact, a trivial algorithm that always outputs −e_{d+2} will solve the hard instance (6) in a single iteration. To address general algorithms, we employ a well-established technique in optimization lower bounds [30, 8, 12], wherein we embed a hard instance f(w; z) for span-constrained algorithms in a random high-dimensional space. More concretely, we draw a random orthogonal matrix U ∈ ℝ^{d'×d} (with U^⊤U = I_{d×d}) and consider the d'-dimensional instance (d' > d) f_U(w; z) = f(U^⊤ w; z), along with its corresponding empirical objective F_{S,U}(w) = (1/n) Σ_{i∈[n]} f_U(w; z_i). Roughly speaking, we show that for a general algorithm operating with the appropriate subgradient oracle for F_{S,U}, the iterate w_t is approximately in the span of {∇F_{S,U}(w_0), . . . , ∇F_{S,U}(w_{t−1})}, in the sense that the component of w_t outside that span is nearly orthogonal to the columns of U.
Consequently, the response of the oracle to the query w_t at iteration t is, with high probability, identical to the information it would return if queried with the projection of w_t onto the span of the previously observed gradients. This reduces, in a sense, the problem back to the span-restricted setting described above. For the embedding technique to work, we must robustify the hard instance construction so that small perturbations around points in the span of previous gradients do not "leak" additional information about the embedding U. To do that, we make a fairly standard modification to the component r(w) in (6) (known as Nemirovski's function [12, 7]), replacing it with

max{0, max_{i∈[d]}{w(i) + iγ'}, w(d+1) + γ''},

where γ', γ'' are small offset coefficients that go to zero as the embedding dimension d' tends to infinity. We provide the full construction and the proof of Theorem 2.1 in Section 4 and the full version of the paper [1].

4 The Full Construction

As explained above, the key difference between the simplified construction f_(6) and the full construction with which we prove Theorem 2.1 is that we modify the Nemirovski function term r(w) in order to make it robust to queries that are nearly within a certain linear subspace. In particular, we bias the different terms in the maximization defining r(w) so as to control the index of the coordinate attaining the maximum. For ease of reference, we now provide a self-contained definition of our full construction with the modified Nemirovski function. Fix n, d ≥ 1 and parameters z = (α, ε, γ) ∈ {0, 1}^d × ℝ × ℝ^3 = Z such that α ∈ {0, 1}^d, ε > 0 and γ_1, γ_2, γ_3 > 0. Define the hard instance f_(9) : ℝ^{d+2} × Z → ℝ as follows:

f_(9)(w; (α, ε, γ)) = g_γ(w; α) + γ_1 v_α · w + ε w · e_{d+2} + r(w),  (9)

where g_γ, v_α and r are

• g_γ(w; α) := √(Σ_{i∈[d]} α(i) h_γ^2(w(i))), with h_γ(a) := 0 if a ≥ −γ_2, and h_γ(a) := a + γ_2 if a < −γ_2;
• r(w) := max{0, max_{i∈[d+1]}{w(i) + σ_i}}, with σ_i := i · γ_1γ_3/(4dn) if i ∈ [d], and σ_{d+1} := 2γ_3;
• v_α(i) := −1/(2n) if α(i) = 0, +1 if α(i) = 1, and 0 if i ∈ {d+1, d+2};

and e_i is the i'th standard basis vector in ℝ^{d+2}. We consider a distribution over α that is uniform over {0, 1}^d; that is, we draw α ∈ {0, 1}^d uniformly at random and pick the function f_(9)(w; (α, ε, γ)). The rest of the parameters are set throughout the proof as follows:

γ_1 = εγ_2/4,  γ_2 = ε/(T√d),  γ_3 = ε/16.  (10)

With this choice of distribution, as well as our choice of parameters, we obtain, since ‖v_α‖ ≤ √d and by our choice of γ_1 (as well as Jensen's inequality and r(·) ≥ 0):

F(w) = E_{α∼D}[f_(9)(w; (α, ε, γ))] ≥ (1/2)√(Σ_{i∈[d]} h_γ^2(w(i))) + ε w(d+2) − ε/4.  (11)

Notice that we also have, for the choice w★ = −e_{d+2}, since r(w★) = 2γ_3:

F(w★) = −ε + ε/8 = −7ε/8.  (12)

Our development makes frequent use of the following notation from Section 3: I(S) = {i : α(i) = 0 ∀α ∈ S} ∪ {d+1}, I_t(S) = the t largest elements of I(S), and v̄ = (1/n) Σ_{α∈S} v_α. We begin with the following lemma, which is a robust version of Lemma 3.1 in Section 3. The proof is provided in the full version of the paper [1].

Lemma 4.1. Suppose that w_0 = 0. Consider f_(9)(w; (α, ε, γ)) with parameters as in Eq. (10). Suppose S is a sample such that |I(S)| > t + 1. Assume that w is such that w = w_t + q, where

w_t ∈ span_1{γ_1 v̄ + ε e_{d+2} + e_i : i ∈ I_t(S)},  and  ‖q‖_∞ ≤ min{γ_2/3, γ_1γ_3/(16dn)}.  (13)

Then ∇F_S(w) = γ_1 v̄ + ε e_{d+2} + e_i for some i ∈ I_{t+1}(S), where I_t(S) is the set of the t+1 largest coordinates in I(S).
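The random-embedding step described in Section 3.3 (drawing U with orthonormal columns and lifting f to f_U(w; z) = f(U^⊤w; z)) can be sketched as follows. This is only an illustration; the QR-based sampler is one standard way to draw such a U and is an assumption here, not a claim about the paper's exact procedure.

```python
def random_orthogonal_embedding(d, d_prime, rng):
    """Draw U in R^{d' x d} with orthonormal columns (U^T U = I_d) via reduced QR."""
    U, _ = np.linalg.qr(rng.standard_normal((d_prime, d)))
    return U

def embedded_instance(f, U):
    """Lift a hard instance f(w; z) to d' dimensions: f_U(w; z) = f(U^T w; z)."""
    return lambda w, z: f(U.T @ w, z)
```

The lifted instance is queried in ℝ^{d'}, while its hard structure lives in the hidden d-dimensional subspace spanned by the columns of U, which is what forces general algorithms back into the span-restricted regime analyzed above.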
The following corollary states that the gradient oracle's answers are resilient to small perturbations of the query (as long as the query is in the vicinity of the "right" subspace); the proof is provided in the full version of the paper [1].

Corollary 4.2. Assume that w is such that w = w_t + q, where

w_t ∈ span_1{γ_1 v̄ + ε e_{d+2} + e_i : i ∈ I_t(S)},  and  ‖q‖_∞ ≤ (1/(4√d)) · min{γ_2/3, γ_1γ_3/(16dn)}.  (14)

Then ∇F_S(w) = ∇F_S(Π_{t+1}(w)) and F_S(w) = F_S(Π_{t+1}(w)), where Π_t is the projection onto span{γ_1 v̄ + ε e_{d+2} + e_i : i ∈ I_t(S)}.

Acknowledgements and Disclosure of Funding

This work has received support from the Israeli Science Foundation (ISF) grant no. 2549/19 and grant no. 2188/20, from the Len Blavatnik and the Blavatnik Family Foundation, from the Yandex Initiative in Machine Learning, and from an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google.
1. What is the focus of the paper regarding generalization performance in stochastic convex optimization? 2. What are the strengths and weaknesses of the proposed method compared to prior works? 3. How does the reviewer assess the significance and novelty of the result? 4. Are there any concerns or suggestions regarding the presentation, proof, and contribution of the paper?
Summary Of The Paper Review
Summary Of The Paper This work studies the generalization performance of algorithms in stochastic convex optimization. Specifically, the authors study full-batch algorithms, where only the full gradient and function value of the empirical risk are accessible. By constructing a hard instance of the objective function, the authors show that any full-batch algorithm needs at least Ω(1/ε^4) iterations to obtain ε excess risk, while stochastic algorithms such as SGD only need O(1/ε^2). Review This paper shows that there exists a separation between full-batch algorithms and stochastic algorithms in the generalization performance of empirical risk minimization. Such a separation has been well studied in the pure optimization setting, and this work is the first to study such a problem in the generalization setting. The presentation is clear and easy to understand. The proof is technically sound. Some of my comments are as follows. It is a bit hard to tell the significance of the proposed result. On the one hand, the lower bound result reveals that full-batch algorithms are inefficient compared with SGD, which has not been studied in previous works. On the other hand, such a phenomenon is not surprising, since similar separation results have already been established in previous works studying optimization errors. The tool the authors use is standard (the hard instance of Nemirovski et al.), and it is not clear how the authors contribute beyond existing works. For the use of random rotation matrices, there are some additional works studying optimization lower bounds which also use them. The authors may want to highlight these works in the main text. [1] Fang, C., Li, C. J., Lin, Z., & Zhang, T. (2018). Spider: Near-optimal non-convex optimization via stochastic path integrated differential estimator. arXiv preprint arXiv:1807.01695. [2] Zhou, D., & Gu, Q. (2019). Lower bounds for smooth nonconvex finite-sum optimization. In International Conference on Machine Learning (pp. 7574-7583). PMLR. [3] Arjevani, Y., Carmon, Y., Duchi, J. C., Foster, D. J., Srebro, N., & Woodworth, B. (2019). Lower bounds for non-convex stochastic optimization. arXiv preprint arXiv:1912.02365.
NIPS
Title Never Go Full Batch (in Stochastic Convex Optimization) Abstract We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: these are first-order methods that only access the exact gradient of the empirical risk (rather than gradients with respect to individual data points), that include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants. We provide a new separation result showing that, while algorithms such as stochastic gradient descent can generalize and optimize the population risk to within ε after $ (1/ε2) iterations, full-batch methods either need at least Ω(1/ε4) iterations or exhibit a dimension-dependent sample complexity. 1 Introduction Stochastic ConvexOptimization (SCO) is a fundamental problem that received considerable attention from the machine learning community in recent years [28, 15, 4, 11, 2]. In this problem, we assume a learner that is provided with a finite sample of convex functions drawn i.i.d. from an unknown distribution. The learner’s goal is to minimize the expected function. Owing to its simplicity, it serves as an almost ideal theoretical model for studying generalization properties of optimization algorithms ubiquitous in practice, particularly first-order methods which utilize only first derivatives of the loss rather than higher-order ones. One prominent approach for SCO—and learning more broadly—is to consider the empirical risk (the average objective over the sample) and apply a first-order optimization algorithm to minimize it. The problem of learning is then decoupled into controlling the optimization error over the empirical risk (training error) and bounding the difference between the empirical error and the expected error (generalization error). In convex optimization, the convergence of different first-order methods has been researched extensively for many years (e.g., [26, 25, 5]), and we currently have a very good understanding of this setting in terms of upper as well lower bounds on worst-case complexity. However, in SCO where the generalization error must also be taken into account, our understanding is still lacking. In fact, this is one of the few theoretical learning models where the optimization method affects not only the optimization error but also the generalization error (distinctively from models such as PAC learning and generalized linear models). In particular, it has been shown [28, 15] that some minima of the empirical risk may obtain large generalization error, while other minima have a vanishingly small 35th Conference on Neural Information Processing Systems (NeurIPS 2021). generalization error. To put differently, learning in SCO is not only a question of minimizing the empirical risk, but also a question of how one minimizes it. However, the results of [28, 15] leave open the question of whether concrete optimization also have different generalization properties. Towards better understanding, Amir et al. [2] recently studied the generalization properties of fullbatch gradient descent (GD), where each step is taken with respect to the gradient of the empirical risk. For GD (and a regularized variant thereof), they gave a lower bound on the generalization error as a function of iteration number, which is strictly larger than the well-known optimal rate obtained by stochastic gradient descent (SGD), where each step is taken with respect to the gradient at a sampled example. 
Notably, the lower bound of [2] precisely matches the dimension-independent stability-based upper bound recently shown for full-batch GD by Bassily et al. [4]. The separation between full-batch GD and SGD is the first evidence that not only abstract Empirical RiskMinimizers may fail to generalize in SCO, but in fact also basic methods such as GD could be prone to such overfitting. A natural question is, then, whether overfitting is inherent to full-batch algorithms, that minimize the objective only through access to the exact empirical risk, or whether this suboptimality can be remedied by adding regularization, noise, smoothing, or any other mechanism for improving the generalization of GD. In this work we present and analyze a model of full-batch optimization algorithms for SCO. Namely, we focus on algorithms that access the empirical risk only via a first-order oracle that computes the exact (full-batch) gradient of the empirical loss, rather than directly accessing gradients with respect to individual samples. Our main result provides a negative answer to the question above by significantly generalizing and extending the result of Amir et al. [2]: we show that any optimization method that uses full-batch gradients needs at least Ω(1/ε4) iterations to minimize the expected loss to within ε error. This is in contrast with the empirical loss, which can be minimized with only $ (1/ε2) steps. Comparing SGD and GD in terms of the sample size =, we see that SGD converges to an optimal generalization error of $ (1/ √ =) after $ (=) iterations, whereas a full-batch method must perform Ω(=2) iterations to achieve the same $ (1/ √ =) test error. We emphasize that we account here for the oracle complexity, which coincides with the iteration complexity in the case of gradient methods. In terms of individual gradients calculations, while SGD uses at most $ (=) gradient calculations (one sample per iteration), a full-batch method will perform Ω(=3) calculations (= samples per iteration). The above result is applicable to a wide family of full-batch learning algorithms: regularized GD (with any data-independent regularization function), noisy GD, GDwith line-search or adaptive step sizes, GD with momentum, proximal methods, coordinate methods, and many more. Taken together with upper bound of Bassily et al. [4], we obtain a sharp rate of Θ(1/ε4) for the generalizationcomplexity of full-batch methods. Surprisingly, this rate is achieved by standard GD (with an unusual step-size choice of η = Θ(ε3)), and it cannot be improved by adding regularization of any sort, nor by adding noise or any other form of implicit/explicit bias. 1.1 Related work This work extends and generalizes the results of Amir et al. [2] who proved generalization lower bounds for GD (and a specific instance of regularized GD). Our work shows that in fact any full-batch method will suffer from similar lower bounds. Our construction builds upon the one used in [2], which in turn builds upon previous constructions [4, 28]. However, our arguments and proofs here are more challenging, as we need to reason about a general family of algorithms, and not about a specific algorithm whose trajectory can be analyzed directly. Our developments also build on ideas from the literature on oracle complexity lower bounds in optimization [25, 26, 30, 8, 12, 9]. 
In particular, we first prove our result in the simplified setting of algorithms constrained to the span of observed gradients [25, 26] and subsequently lift it to general algorithms using a random high-dimensional embedding technique proposed byWoodworth and Srebro [30] and later refined in [8, 12]. However, while these works lower bound what we call the empirical risk, we lower bound the generalization error. This requires us to develop a somewhat different argument for how the span of the gradients evolve during the optimization: in prior work, the algorithm learns the component of the solution coordinate by coordinate, whereas in our work the true (generalizing) solution is present in the observed gradients from the first query, but spurious sampling artifacts drown it out. Empirical studies (outside of the scope of SCO) support the claim that generalization capabilities degrade with the increase of the batch size. Specifically, Zhu et al. [33] indicates that SGD outperforms GD in terms of generalization. The works of Keskar et al. [22] and Hoffer et al. [20] exhibit a similar phenomenon in which small-batch SGD generalizes better than large-batch SGD with the same iteration budget. We provide the first theoretical evidence for this phenomenon for convex losses. Several theoretical studies explore the convergence of stochastic methods that use mini-batches [10, 23, 31]. Note that this setting differs from ours, as they assume access to minibatches sampled without replacement whereas full-batch means we reuse the same (full) batch with each gradient step. There has also been recent progress in improving the generalization capabilities of GD. Wu et al. [32] interprets mini-batch SGD as a noisy version of GD. They propose a modified algorithm with noise injected to the full-batch gradients. Geiping et al. [16] propose a GD-based training scheme that achieves CIFAR-10 generalization performance comparable to standard SGD training. Interestingly, both proposed algorithms require access to sample-points and are therefore not “fullbatch” by our definition: The scheme [32] requires sample-point data for computing the noise, while the GD variant [16] uses mini-batch statistics to compute a regularization term (as well as batch normalization). Our work shows that (in SCO) this is unavoidable: namely, no data-independent noise or full-batch regularization can be used to improve generalization at a reasonable computational budget. Several other works study the generalization performance of GD [29, 17, 21, 24]. The work of Soudry et al. [29], for example, examines GD on unregularized logistic regression problems. They show that, in the limit, GD converges to a well-generalizing solution by arguing about the bias of the algorithm. Interestingly, both our and their results require slow-training, beyond what is required for empirical error optimization. Another work that highlights the slow convergence of GD is that of Bassily et al. [4]. They were the first to address uniform stability of (non-smooth) GD and SGD, and provided tight bounds. Stability entails generalization, hence our results lead to stability lower bounds for any full-batch method. Consequently, we extend the lower bounds for GD in the work of Bassily et al. [4] to a wider class. It might be thought that the instability argument of Bassily et al. [4] can be used to obtain similar generalization lower bounds—however, we note that their techniques also prove instability of SGD (which does generalize). 
Hence, instability does not immediately imply, in this setting, lack of generalization. Finally, we note that under smoothness and strong convexity, it is well known that improved rates can be obtained. Specifically, using the stability bound of Bousquet and Elisseeff [6] one can show that we can achieve generalization error of $ (1/ √ =) after $ (=) iterations if the population risk is $ (1)-strongly convex. The arguments of Hardt et al. [19] imply generalization bound to instances where every sample risk is $ ( √ =) smooth. Our result implies that, even though these special families of functions enjoy appealing learning rates, in general it is impossible to obtain better rates by strong-convexifying or smoothing problem instances via first-order full-batch oracle queries. 2 Problem Setup and Main Results We study the standard setting of stochastic convex optimization. In this setting, a learning problem is specified by a fixed domain W ⊆ ℝ3 in 3-dimensional Euclidean space, and a loss function 5 : W × Z → ℝ, which is both convex and !-Lipschitz with respect to its first argument (that is, for any I ∈ Z the function 5 (F; I) is !-Lipschitz and convex with respect to F). In particular, throughout the paper, our construction consists of 1-Lipschitz functions and we will focus on a fixed domain W defined to be the unit Euclidean ball in ℝ3 , namely W = {F : ‖F‖2 ≤ 1}. We also assume that there exists an unknown distribution over parameters I and the goal of the learner is to optimize the true risk (or true loss, or population risk) defined as follows: (F) B E I∼ [ 5 (F; I)], (1) We assume that a sample ( = {I1, . . . , I=} is drawn from the distribution , and the learner has to output F( ∈ W (the exact access the learner has to the sample, and how F( may depend on ( is discussed below). We require the solution to be ε-optimal in expectation for some parameter ε > 0, i.e., E (∼ = [ (F()] − min F★∈W (F★) ≤ ε. As discussed, the standard setting assumes that the learner has direct access to the i.i.d. sample, as well as to the gradients of the loss function (i.e., a first-order oracle). In this work, though, we focus on a specific family full-batch methods. Hence, the optimization process is described as follows: First, an i.i.d. sample ( = (I1, . . . , I=) is drawn from . Then, the learner is provided with access only to the empirical risk via a full-batch first-order oracle which we define next. Full-batch first-order oracle. Consider a fixed sample ( = (I1, . . . , I=) of size =, drawn i.i.d. from . The empirical risk over the sample ( is ( (F) = 1 = =∑ 8=1 5 (F; I8). Then, a full-batch first-order oracle is a procedure that, given input F ∈W, outputs O(F) := (∇ ( (F); ( (F)). where ∇ ( (F) is an empirical risk sub-gradient of the form ∇ ( (F) = 1 = =∑ 8=1 ∇ 5 (F; I8), (2) and each sub-gradient ∇ 5 (F, I8) is computed by the oracle as a function of F and I8 (that is, independently of I 9 for 9 ≠ 8). We emphasize that the sample is fixed throughout the optimization, so that the oracle computes the gradient of the same empirical risk function at every call, hence the name full-batch. Note that the subgradient with respect to a single data point, i.e., ∇ 5 (F; I8), is not accessible through this oracle, which only returns the average gradient over the sample (. 
Notice that our definition above is slightly narrower than a general sub-gradient oracle for the empirical risk due to the requirement that the sub-gradients ∇ 5 (F, I8) are chosen independently of I 9 for 9 ≠ 8 – since we provide here with a lower bound, this restriction strengthens our result. We make this restriction to avoid some degenerate constructions (that in fact can even be used to fail SGD if the gradient at I8 may depend on the whole sample), which are of no practical implications. Full-batch first-order algorithm. A full-batch (first-order) method is naturally defined as any algorithm that has access to the optimization objective—namely the empirical risk (—only via the full-batch first order oracle. In particular, if FC is the C’th query of the algorithm to the full-batch oracle then FC has to be of the form FC = &C (O(F0), . . . ,O(FC−1)), (3) where &C : (ℝ3+1)C → W is a fixed (possibly randomized) mapping. At the end of the process the algorithm outputs F( . We study the algorithm’s oracle complexity, which is the number of iterations ) the algorithm performs before halting. Therefore, we assume without loss of generality that F( = F) , i.e., the algorithm’s output is its )’th query. 2.1 Main result In this sectionwe establish ourmain result, which provides a generalization lower-bound for full-batch first order algorithms. The complete proof is provided in the full version of the paper [1]. Theorem 2.1. Let ε > 0 and =, ) ∈ ℕ; there exists 3 = poly(2=, ), 1/ε) such that the following holds. For any full-batch first-order algorithm with oracle complexity at most ) , there exists a 1-Lipschitz convex function 5 (F; I) inW, the unit-ball inℝ3 , and a distribution overZ such that, for some universal constant 2 > 0: E (∼ = [ (F()] ≥ min F★∈W (F★) + ε + Ω ( min { 1 − 2ε2 √ ), 0 }) . (4) An immediate consequence of Theorem 2.1 is that in order to obtain less than ε true risk we need at least ) = Ω(1/ε4) iterations. For simplicity, we state and prove the lower bound in Theorem 2.1 for the class of first-order fullbatch algorithms defined above. However, our constructions readily generalize to local full-batch oracles that provide a complete description of ( in an arbitrarily small neighborhood of the query point [25, 18]. Such oracles subsume second-order oracles, and consequently our generalization lower bounds hold also for second-order full-batch algorithms. 2.2 Discussion Theorem 2.1 suggests that full-batch first-order algorithms are inferior to other types of first-order algorithms that operate with access to individual examples, such as SGD. Importantly, this separation is achieved not in terms of the optimization performance but in terms of the generalization performance. In light of this result, we next discuss and revisit the role of the optimization algorithm in the context of SCO. In particular, we wish to discuss the implications to what are perhaps the two most prominent full-batch optimization methods, GD and regularzied-GD, and in turn compare them. Gradient descent. Perhaps the simplest example of a full-batch method is (projected) GD: GD is an iterative algorithm that at each iteration performs an update step FC = ΠW [FC−1 − η∇ ( (FC )], where W is a convex set on which we project the iterated step. The output of GD is normally taken to be F( = 1) ∑ FC (or a randomly chosen FC ). Notice, that each step requires one call to a full batch oracle, and a single projection operation. 
The convergence analysis of GD to the optimal solution of the empirical risk has been widely studied. Specifically, if = is the sample-size, it is known that with η = $ (1/ √ =) and ) = $ (=), GD converges to a minimizer of ( that is $ (1/ √ =)-sub optimal. For the exact variant of GD depicted above, the generalization performance was analyzed in the work of Amir et al. [2] that showed that with ) = $ (=) steps, GD will suffer Ω(1/ 4 √ =) generalization error. Theorem 2.1 extends the above result to any variant of GD (dynamic learning-rate, noisy GD, normalized GD, etc.). Regularized gradient descent. We would also like to discuss the implication of Theorem 2.1 with respect to regularized variants of GD that operate on the regularized empirical risk ̂ (F) = λA (F) + ( (F). The main motivation of introducing the regularization term A is to avoid overfitting, and a popular choice for A is the Euclidean norm A (F) = ‖F‖22. This choice leads to the following update rule for GD: FC+1 = ΠW [(1 − ηC ) · (2λFC ) − ηC∇ ( (FC )] , Again, this update can be implemented using a single first-order full-batch oracle call that computes the quantity∇ ( (FC ). More generally, for any data-independent A , GDon ̂ is a full-batch algorithm1. When A is the Euclidean norm, the minimizer of ̂ is known to enjoy (with choice λ = $ (1/ √ =)), an optimal generalization error of $ (1/ √ =) [6, 28]. This demonstrates the power of regularization and how it can provably induce generalization. Nevertheless, Theorem 2.1 still applies to any optimization method over ̂. Since optimization of ̂ (the regularized empirical risk) to $ (1/ √ =)- precision can be done via a full-batch method, and with less than $ (=) calls, we observe that there are methods that minimize the regularized-empirical risk but, due to Theorem 2.1 do not reach the optimal generalization error. The role of regularization. Finally, in light of Theorem 2.1 let us compare the different variants of GD and regularized GD that do generalize well, in order to sharpen our understanding of the role of regularization in generalization. The conclusion of Theorem 2.1 is that any full-batch method that generalizes well performs at least $ (=2) steps. For regularized GD, with `2 regularization, $ (=2) are indeed sufficient. In particular, with $ (=2) iterations we can find a solution that has $ (1/=) empirical error. Any such solution would enjoy a generalization error of $ (1/ √ =) [28]. For GD, Bassily et al. [4] showed that $ (=2) iterations would also suffice to achieve $ (1/ √ =) error. This is achieved by tuning the learning rate to η = $ (1/=3/2). Notice that this improvement does not require any type of added regularization. To summarize, both GD and regularized GD with optimal parameters require Θ(=2) iterations to attain the optimal$ (1/ √ =) generalization error. Overall then, explicitly adding regularization is not necessary nor does it improve the convergence rate. One might be tempted to believe that tuning the learning rate in GD induces implicitly some sort of regularization. For example, one might 1Note that we are not concerned with the computational cost of computing ∇A (FC ) since it does not factor into oracle complexity. imagine that GD can be biased towards minimal norm solution, which might explain redundancy of regularizing by this norm. However, this turns out also to be false: Dauber et al. [11] showed how GD (with any reasonable choice of learning rate) can diverge from the minimal norm solution. 
In fact, for any regularization term A, one can find examples where GD does not converge to the regularized solution. Thus, even though GD and regularized-GD are comparable algorithms in terms of generalization and oracle complexity, they are distinct in terms of the solutions they select. 3 Technical Overview In this section we give an overview of our construction and approach towards proving Theorem 2.1. For the sake of exposition, we will describe here a slightly simpler construction which proves the main result only for algorithms that remain in the span of the gradients. In more detail, let us examine the family of iterative algorithms of the form FC ∈ span{∇ ( (F0),∇ ( (F1), . . . ,∇ ( (FC−1)} ∩W, (5) where W is the unit ball and ∇ ( (FC ) is full-batch oracle response to query FC as defined in (2) above. Well-studied algorithms such as GD and GD with standard `2 norm regularization fall into this category of algorithms. To extend the lower bound to algorithms not restricted to the gradient span we refine the simpler construction and apply well-established techniques of random embedding in high-dimensional space. We discuss thesemodifications briefly in the end of this section and provide the full details in Section 4 and the full version of the paper [1]. 3.1 A simpler construction Let us fix =, 3 ≥ 1 and parameters I = (α, ε, γ) ∈ {0, 1}3 × ℝ × ℝ2 = Z, such that α ∈ {0, 1}3 , ε > 0 and γ1, γ2 > 0. Define the hard instance 5(6) : ℝ3+2 ×Z→ ℝ as follows: 5(6) (F; (α, ε, γ)) = 6γ(F;α) + γ1Eα · F + εF · 43+2 + A (F), (6) where 6γ, Eα and A are • 6γ(F;α) B √∑ 8∈[3 ] α(8)ℎ2γ(F(8)) with ℎγ(0) B { 0 0 ≥ −γ2; 0 + γ2 0 < −γ2, • A (F) B max{0,max8∈[3+1]{F(8)}}, • Eα (8) B − 12= if α(8) = 0; +1 if α(8) = 1; 0 if 8 ∈ {3 + 1, 3 + 2}, and 43+2 is the (3 + 2)’th standard basis vector. The distribution we will consider is uniform over α. That is, we draw α ∈ {0, 1}3 uniformly at random and pick the function 5(6) (F; (α, ε, γ)). The parameters γ1 and γ2 of the construction should be thought of as arbitrarily small. In particular, the term γ1Eα · F in Eq. (6) should be thought of as negligible, and the first term, 6γ, is roughly 6γ(F;α) ≈ √∑ 8∈3 α(8) (max{−F(8), 0})2. Another useful property of the construction is the population risk (F) = EI∼ 5(6) (F; I) is minimized at F★ ≈ −43+2, with expected loss (F★) ≈ −ε. However, as we will see, the choice of the perturbation vector Eα and the term A (F) hinder the learner from observing this coordinate and; the first Ω(ε−4) queries are constrained to a linear subspace where all the points have a high generalization error due to the expectation of the first term 6γ. 3.2 Analysis We next state the main lemmas we use, with proofs deferred to the full version of the paper [1]. Given a sample (, let us denote Ē = 1 = ∑ α∈( Eα, and span1{D1, D2, . . .} B span{D1, D2, . . .} ∩W. Additionally, given a fixed sample we write I(() = {8 : α(8) = 0 ∀α ∈ (} ∪ {3 + 1} for the set of coordinates 8 ∈ [3] such that α(8) = 0 for every α in the sample (, plus the coordinate 3 + 1. Lemma 3.1. Let γ1 ≤ 12) , γ2 = 2γ1 ε , and suppose that the sample ( satisfies |I(() | > ) . Then there exists a first-order full-batch oracle such that for any algorithm that adheres to FC ∈ span1 { ∇ ( (F0),∇ ( (F1), . . . ,∇ ( (FC−1) } , (7) with respect to 5 (F; (α, ε, γ)) defined in Eq. (6), we have FC ∈ span1 8∈IC (() { γ1Ē + ε43+2 + 48 } for all C ∈ [)], where IC (() is the set of the C + 1 largest coordinates in I((). 
We next observe that in any span of the form {γ1Ē + ε43+2 + 48}8∈I) (() such that |I) (() | ≤ ) , we cannot find a solution with better risk than 0. On the other hand, note that for F̄ = −43+2, we have that 5(6) (F̄; (α, ε, γ)) = −ε. In other words, our lower bound stems from the following result: Lemma 3.2. For sufficiently small γ1 ≤ 2=εγ2, γ2 ≤ ε/ √ 4) , and any vector ‖Ē‖ ≤ √ 3, any output F( ∈ span1 8∈I) (() {γ1Ē + ε43+2 + 48}, satisfies 1 2 √∑ 8∈[3 ] ℎ2γ(F( (8)) + εF( (43+2) ≥ min { 1 − 2ε2 √ ), 0 } − 1 2 ε. (8) Lower bound proof sketch for span-restricted algorithms of the form (5). First, observe that the probability of an arbitrary index 8 to satisfy α(8) = 0 for all α ∈ ( is (1/2)=. Therefore, |I(() | −1, the number of indexes that hold this from the possible 3, is distributed as a binomial with 3 experiments and success probability ? = 2−=. Using elementary probability arguments one can show that for sufficiently large 3 we have |I(() | > ) with high probability; see Claim B.2 in the appendix. This implies that the conditions of Lemmas 3.1 and 3.2 hold w.h.p. To conclude, we relate the LHS of Eq. (8) to the expected risk (F) = E α∼ [ 5(6) (F; (α, ε, γ))] = E α∼ [6γ(F;α)] + γ1 · E α∼ [Eα] · F + εF · 43+2 + A (F). As 6γ(F;α) is convex w.r.t. α (since α(8) = α2 (8)) we can apply Jensen’s inequality with Eα∼ [α(8)] = 12 to obtain: E α∼ [6γ(F(;α)] ≥ 1 2 √∑ 8∈[3 ] ℎ2γ(F( (8)). Applying theCauchy-Schwarz inequality to the second termwhile also using the facts that ‖Eα‖ ≤ √ 3 and that F( is in the unit ball, we get: γ1 E α∼ [Eα] · F ≥ −γ1 E α∼ [‖Eα‖ · ‖F‖] ≥ −γ1 √ 3. For sufficiently small γ1 this term is negligible, and since A (F) ≥ 0 we get that the expected risk is approximately the LHS term in Eq. (8). Lastly, recalling that (−43+2) = −ε we get that (F() − min F ∈W (F) ≥ 1 2 ε +min { 1 − 2ε2 √ ), 0 } w.h.p. The same lower bound (up to a constant) also holds in expectation by the the law of total expectation. Our distribution is supported on 5-Lipschitz convex functions, so that re-parametrizing 110ε→ ε as well as 5(6) yields the claimed lower bound (4) for the case of span-restricted algorithms. 3.3 Handling general full-batch algorithms The above construction establishes an Ω(1/ε4) oracle complexity lower bound on any algorithm whose iterates lie in the span of the previous gradients. While this covers a large class of algorithms, techniques like preconditioning [13], coordinate methods [27] and randomized smoothing [14] do not satisfy this assumption. In fact, a trivial algorithm that always outputs −43+2 will solve the hard instance (6) in a single iteration. To address general algorithms, we employ a well-established technique in optimization lower bounds [30, 8, 12] wherein we embed a hard instance 5 (F; I) for span-constrained algorithms in a random high-dimensional space. More concretely, we draw a random orthogonal matrix * ∈ ℝ3′×3 (*>* = 3×3) and consider the 3 ′ > 3-dimensional instance 5* (F; I) = 5 (*>F; I) along with its corresponding empirical objective (,* (F) = 1= ∑ 8∈[=] 5* (F; I8). Roughly speaking, we show that for a general algorithm operating with the appropriate subgradient oracle for (,* the iterate FC is approximately in the span of {∇ (,* (F0), . . . ,∇ (,* (FC−1)} in the sense that the component of FC outside that span is nearly orthogonal to the columns of*. 
Consequently, the response of the oracle to the query w_t at iteration t is, with high probability, identical to the information it would return if queried with the projection of w_t onto the span of the previously observed gradients. This reduces, in a sense, the problem back to the span-restricted setting described above. For the embedding technique to work, we must robustify the hard instance construction so that small perturbations around points in the span of previous gradients do not “leak” additional information about the embedding U. To do that we make a fairly standard modification to the component r(w) in (6) (known as Nemirovski's function [12, 7]), replacing it with max{0, max_{i∈[d]}{w(i) + iγ′}, w(d+1) + γ′′}, where γ′, γ′′ are small offset coefficients that go to zero as the embedding dimension d′ tends to infinity. We provide the full construction and the proof of Theorem 2.1 in Section 4 and the full version of the paper [1].

4 The Full Construction

As explained above, the key difference between the simplified construction f_{(6)} and the full construction with which we prove Theorem 2.1 is that we modify the Nemirovski function term r(w) in order to make it robust to queries that are nearly within a certain linear subspace. In particular, we bias the different terms in the maximization defining r(w) so as to control the index of the coordinate attaining the maximum. For ease of reference, we now provide a self-contained definition of our full construction with the modified Nemirovski function.

Fix n, d ≥ 1 and parameters z = (α, ε, γ) ∈ {0, 1}^d × ℝ × ℝ^3 = Z such that α ∈ {0, 1}^d, ε > 0 and γ_1, γ_2, γ_3 > 0. Define the hard instance f_{(9)} : ℝ^{d+2} × Z → ℝ as follows:

f_{(9)}(w; (α, ε, γ)) = g_γ(w; α) + γ_1 v_α · w + ε w · e_{d+2} + r(w), (9)

where g_γ, v_α and r are
• g_γ(w; α) := √( Σ_{i∈[d]} α(i) h_γ^2(w(i)) ) with h_γ(a) := 0 if a ≥ −γ_2, and a + γ_2 if a < −γ_2,
• r(w) := max{0, max_{i∈[d+1]}{w(i) + σ_i}} with σ_i := i · γ_1γ_3/(4dn) if i ∈ [d], and σ_i := 2γ_3 if i = d+1,
• v_α(i) := −1/(2n) if α(i) = 0; +1 if α(i) = 1; 0 if i ∈ {d+1, d+2},
and e_i is the i'th standard basis vector in ℝ^{d+2}. We consider a distribution over α that is uniform over {0, 1}^d; that is, we draw α ∈ {0, 1}^d uniformly at random and pick the function f_{(9)}(w; (α, ε, γ)). The rest of the parameters are set throughout the proof as follows:

γ_1 = εγ_2/4,  γ_2 = ε/(T√d),  γ_3 = ε/16. (10)

With this choice of distribution as well as our choice of parameters we obtain, since ‖v_α‖ ≤ √d and by our choice of γ_1 (as well as Jensen's inequality and r(·) ≥ 0):

F(w) = E_{α∼D}[ f_{(9)}(w; (α, ε, γ)) ] ≥ (1/2) √( Σ_{i∈[d]} h_γ^2(w(i)) ) + ε w(d+2) − ε/4. (11)

Notice that we also have, for the choice w★ = −e_{d+2}, since r(w★) = 2γ_3:

F(w★) = −ε + ε/8 = −7ε/8. (12)

Our development makes frequent use of the following notation from Section 3: I(S) = {i : α(i) = 0 for all α ∈ S} ∪ {d+1}, I_t(S) = the t largest elements in I(S), and v̄ = (1/n) Σ_{α∈S} v_α. We begin with the following lemma, which is a robust version of Lemma 3.1 in Section 3. The proof is provided in the full version of the paper [1].

Lemma 4.1. Suppose that w_0 = 0. Consider f_{(9)}(w; (α, ε, γ)) with parameters as in Eq. (10). Suppose S is a sample such that |I(S)| > t + 1. Assume that w is such that w = w_t + q, where

w_t ∈ span^1_{i∈I_t(S)} { γ_1 v̄ + ε e_{d+2} + e_i }, and ‖q‖_∞ ≤ min{ γ_2/d, γ_1γ_3/(16dn) }. (13)

Then, ∇F_S(w) = γ_1 v̄ + ε e_{d+2} + e_i for some i ∈ I_{t+1}(S), where I_t(S) is the set of the t+1 largest coordinates in I(S).
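To illustrate how the offsets σ_i bias the maximization in r(w), here is a small numpy sketch of the modified Nemirovski term from Eq. (9). The exact scaling of the offsets follows our reading of the (garbled) source and the helper name is ours; treat it as an illustration, not the authors' code.

```python
import numpy as np

def r_robust(w, gamma1, gamma3, n):
    """Sketch of the modified Nemirovski term r(w) from Eq. (9).

    The per-coordinate offsets sigma_i bias the inner maximization so that, among
    nearly-tied coordinates, a fixed coordinate attains the maximum; this keeps the
    returned subgradient stable under small perturbations of the query.
    """
    d = w.shape[0] - 2
    i = np.arange(1, d + 1)
    sigma = i * gamma1 * gamma3 / (4 * d * n)                       # offsets for i in [d]
    terms = np.concatenate([w[:d] + sigma, [w[d] + 2 * gamma3]])    # coordinate d+1 gets 2*gamma3
    return max(0.0, terms.max())
```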
The following corollary states that the gradient oracle's answers are resilient to small perturbations of the query (as long as they are in the vicinity of the “right” subspace); the proof is provided in the full version of the paper [1]:

Corollary 4.2. Assume that w is such that w = w_t + q, where

w_t ∈ span^1_{i∈I_t(S)} { γ_1 v̄ + ε e_{d+2} + e_i }, and ‖q‖_∞ ≤ (1/(4√d)) min{ γ_2/d, γ_1γ_3/(16dn) }. (14)

Then, ∇F_S(w) = ∇F_S(Π_{t+1}(w)) and F_S(w) = F_S(Π_{t+1}(w)), where Π_t is the projection onto span_{i∈I_t(S)} {γ_1 v̄ + ε e_{d+2} + e_i}.

Acknowledgements and Disclosure of Funding

This work has received support from the Israeli Science Foundation (ISF) grant no. 2549/19 and grant no. 2188/20, from the Len Blavatnik and the Blavatnik Family foundation, from the Yandex Initiative in Machine Learning, and from an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google.
1. What is the main contribution of the paper regarding the optimization problem? 2. What are the weaknesses of the paper's proof regarding the lower bound on queries? 3. How does the reviewer assess the novelty and interest of the results without the contrast? 4. Are there any questions or concerns regarding the organization and presentation of the paper?
Summary Of The Paper Review
Summary Of The Paper This paper considers a problem where one is trying to find an input that minimizes the expected value of a function that one has sample values of. Its goal is to show that algorithms that only have the ability to query the gradient of the average of the samples at specified points may need more queries than a function that can check the gradient on individual samples. In order to do that, the paper finds a class of probability distributions over functions for which any algorithm that gains information on the functions only by querying the gradient of the average of the sampled function needs Ω ( 1 / ϵ 4 ) queries to get within ϵ of the minimum expected loss. Review As far as I can tell, this never actually proves that one can get a loss within ϵ of the optimum in o ( 1 / ϵ 4 ) queries if one is not restricted to working with the averaged gradient, which means it does not actually prove the separation result in the abstract. The lower bound on the number of queries needed to get near the minimum is much less interesting without the contrast than it would be with it. This appears to mostly use known techniques to prove the bound. The idea of bounding what one can learn from the gradient reminds me of statistical query bounds, although those would be weaker in that they would use a distorted form of the gradient. The organization is mostly fine, although separating the lemmas in section 4 from their proofs is confusing. ----------------------------------edit-------------------------- The textbook the authors mention in their rebuttal contains the positive half of the separation. So, I am now recommending that this paper be accepted.
NIPS
Title Never Go Full Batch (in Stochastic Convex Optimization) Abstract We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: these are first-order methods that only access the exact gradient of the empirical risk (rather than gradients with respect to individual data points), that include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants. We provide a new separation result showing that, while algorithms such as stochastic gradient descent can generalize and optimize the population risk to within ε after $ (1/ε2) iterations, full-batch methods either need at least Ω(1/ε4) iterations or exhibit a dimension-dependent sample complexity. 1 Introduction Stochastic ConvexOptimization (SCO) is a fundamental problem that received considerable attention from the machine learning community in recent years [28, 15, 4, 11, 2]. In this problem, we assume a learner that is provided with a finite sample of convex functions drawn i.i.d. from an unknown distribution. The learner’s goal is to minimize the expected function. Owing to its simplicity, it serves as an almost ideal theoretical model for studying generalization properties of optimization algorithms ubiquitous in practice, particularly first-order methods which utilize only first derivatives of the loss rather than higher-order ones. One prominent approach for SCO—and learning more broadly—is to consider the empirical risk (the average objective over the sample) and apply a first-order optimization algorithm to minimize it. The problem of learning is then decoupled into controlling the optimization error over the empirical risk (training error) and bounding the difference between the empirical error and the expected error (generalization error). In convex optimization, the convergence of different first-order methods has been researched extensively for many years (e.g., [26, 25, 5]), and we currently have a very good understanding of this setting in terms of upper as well lower bounds on worst-case complexity. However, in SCO where the generalization error must also be taken into account, our understanding is still lacking. In fact, this is one of the few theoretical learning models where the optimization method affects not only the optimization error but also the generalization error (distinctively from models such as PAC learning and generalized linear models). In particular, it has been shown [28, 15] that some minima of the empirical risk may obtain large generalization error, while other minima have a vanishingly small 35th Conference on Neural Information Processing Systems (NeurIPS 2021). generalization error. To put differently, learning in SCO is not only a question of minimizing the empirical risk, but also a question of how one minimizes it. However, the results of [28, 15] leave open the question of whether concrete optimization also have different generalization properties. Towards better understanding, Amir et al. [2] recently studied the generalization properties of fullbatch gradient descent (GD), where each step is taken with respect to the gradient of the empirical risk. For GD (and a regularized variant thereof), they gave a lower bound on the generalization error as a function of iteration number, which is strictly larger than the well-known optimal rate obtained by stochastic gradient descent (SGD), where each step is taken with respect to the gradient at a sampled example. 
Notably, the lower bound of [2] precisely matches the dimension-independent stability-based upper bound recently shown for full-batch GD by Bassily et al. [4]. The separation between full-batch GD and SGD is the first evidence that not only abstract Empirical Risk Minimizers may fail to generalize in SCO, but in fact also basic methods such as GD could be prone to such overfitting. A natural question is, then, whether overfitting is inherent to full-batch algorithms, which minimize the objective only through access to the exact empirical risk, or whether this suboptimality can be remedied by adding regularization, noise, smoothing, or any other mechanism for improving the generalization of GD.

In this work we present and analyze a model of full-batch optimization algorithms for SCO. Namely, we focus on algorithms that access the empirical risk only via a first-order oracle that computes the exact (full-batch) gradient of the empirical loss, rather than directly accessing gradients with respect to individual samples. Our main result provides a negative answer to the question above by significantly generalizing and extending the result of Amir et al. [2]: we show that any optimization method that uses full-batch gradients needs at least Ω(1/ε^4) iterations to minimize the expected loss to within ε error. This is in contrast with the empirical loss, which can be minimized with only O(1/ε^2) steps.

Comparing SGD and GD in terms of the sample size n, we see that SGD converges to an optimal generalization error of O(1/√n) after O(n) iterations, whereas a full-batch method must perform Ω(n^2) iterations to achieve the same O(1/√n) test error. We emphasize that we account here for the oracle complexity, which coincides with the iteration complexity in the case of gradient methods. In terms of individual gradient calculations, while SGD uses at most O(n) gradient calculations (one sample per iteration), a full-batch method will perform Ω(n^3) calculations (n samples per iteration).

The above result is applicable to a wide family of full-batch learning algorithms: regularized GD (with any data-independent regularization function), noisy GD, GD with line-search or adaptive step sizes, GD with momentum, proximal methods, coordinate methods, and many more. Taken together with the upper bound of Bassily et al. [4], we obtain a sharp rate of Θ(1/ε^4) for the generalization complexity of full-batch methods. Surprisingly, this rate is achieved by standard GD (with an unusual step-size choice of η = Θ(ε^3)), and it cannot be improved by adding regularization of any sort, nor by adding noise or any other form of implicit/explicit bias.

1.1 Related work

This work extends and generalizes the results of Amir et al. [2] who proved generalization lower bounds for GD (and a specific instance of regularized GD). Our work shows that in fact any full-batch method will suffer from similar lower bounds. Our construction builds upon the one used in [2], which in turn builds upon previous constructions [4, 28]. However, our arguments and proofs here are more challenging, as we need to reason about a general family of algorithms, and not about a specific algorithm whose trajectory can be analyzed directly. Our developments also build on ideas from the literature on oracle complexity lower bounds in optimization [25, 26, 30, 8, 12, 9].
In particular, we first prove our result in the simplified setting of algorithms constrained to the span of observed gradients [25, 26] and subsequently lift it to general algorithms using a random high-dimensional embedding technique proposed byWoodworth and Srebro [30] and later refined in [8, 12]. However, while these works lower bound what we call the empirical risk, we lower bound the generalization error. This requires us to develop a somewhat different argument for how the span of the gradients evolve during the optimization: in prior work, the algorithm learns the component of the solution coordinate by coordinate, whereas in our work the true (generalizing) solution is present in the observed gradients from the first query, but spurious sampling artifacts drown it out. Empirical studies (outside of the scope of SCO) support the claim that generalization capabilities degrade with the increase of the batch size. Specifically, Zhu et al. [33] indicates that SGD outperforms GD in terms of generalization. The works of Keskar et al. [22] and Hoffer et al. [20] exhibit a similar phenomenon in which small-batch SGD generalizes better than large-batch SGD with the same iteration budget. We provide the first theoretical evidence for this phenomenon for convex losses. Several theoretical studies explore the convergence of stochastic methods that use mini-batches [10, 23, 31]. Note that this setting differs from ours, as they assume access to minibatches sampled without replacement whereas full-batch means we reuse the same (full) batch with each gradient step. There has also been recent progress in improving the generalization capabilities of GD. Wu et al. [32] interprets mini-batch SGD as a noisy version of GD. They propose a modified algorithm with noise injected to the full-batch gradients. Geiping et al. [16] propose a GD-based training scheme that achieves CIFAR-10 generalization performance comparable to standard SGD training. Interestingly, both proposed algorithms require access to sample-points and are therefore not “fullbatch” by our definition: The scheme [32] requires sample-point data for computing the noise, while the GD variant [16] uses mini-batch statistics to compute a regularization term (as well as batch normalization). Our work shows that (in SCO) this is unavoidable: namely, no data-independent noise or full-batch regularization can be used to improve generalization at a reasonable computational budget. Several other works study the generalization performance of GD [29, 17, 21, 24]. The work of Soudry et al. [29], for example, examines GD on unregularized logistic regression problems. They show that, in the limit, GD converges to a well-generalizing solution by arguing about the bias of the algorithm. Interestingly, both our and their results require slow-training, beyond what is required for empirical error optimization. Another work that highlights the slow convergence of GD is that of Bassily et al. [4]. They were the first to address uniform stability of (non-smooth) GD and SGD, and provided tight bounds. Stability entails generalization, hence our results lead to stability lower bounds for any full-batch method. Consequently, we extend the lower bounds for GD in the work of Bassily et al. [4] to a wider class. It might be thought that the instability argument of Bassily et al. [4] can be used to obtain similar generalization lower bounds—however, we note that their techniques also prove instability of SGD (which does generalize). 
Hence, instability does not immediately imply, in this setting, lack of generalization. Finally, we note that under smoothness and strong convexity, it is well known that improved rates can be obtained. Specifically, using the stability bound of Bousquet and Elisseeff [6] one can show that we can achieve generalization error of O(1/√n) after O(n) iterations if the population risk is O(1)-strongly convex. The arguments of Hardt et al. [19] imply a generalization bound for instances where every sample risk is O(√n)-smooth. Our result implies that, even though these special families of functions enjoy appealing learning rates, in general it is impossible to obtain better rates by strong-convexifying or smoothing problem instances via first-order full-batch oracle queries.

2 Problem Setup and Main Results

We study the standard setting of stochastic convex optimization. In this setting, a learning problem is specified by a fixed domain W ⊆ ℝ^d in d-dimensional Euclidean space, and a loss function f : W × Z → ℝ, which is both convex and L-Lipschitz with respect to its first argument (that is, for any z ∈ Z the function f(w; z) is L-Lipschitz and convex with respect to w). In particular, throughout the paper, our construction consists of 1-Lipschitz functions and we focus on a fixed domain W defined to be the unit Euclidean ball in ℝ^d, namely W = {w : ‖w‖_2 ≤ 1}. We also assume that there exists an unknown distribution D over parameters z, and the goal of the learner is to optimize the true risk (or true loss, or population risk) defined as follows:

F(w) := E_{z∼D}[ f(w; z) ], (1)

We assume that a sample S = {z_1, . . . , z_n} is drawn from the distribution D, and the learner has to output w_S ∈ W (the exact access the learner has to the sample, and how w_S may depend on S, is discussed below). We require the solution to be ε-optimal in expectation for some parameter ε > 0, i.e.,

E_{S∼D^n}[ F(w_S) ] − min_{w★∈W} F(w★) ≤ ε.

As discussed, the standard setting assumes that the learner has direct access to the i.i.d. sample, as well as to the gradients of the loss function (i.e., a first-order oracle). In this work, though, we focus on a specific family of full-batch methods. Hence, the optimization process is described as follows: First, an i.i.d. sample S = (z_1, . . . , z_n) is drawn from D. Then, the learner is provided with access only to the empirical risk via a full-batch first-order oracle, which we define next.

Full-batch first-order oracle. Consider a fixed sample S = (z_1, . . . , z_n) of size n, drawn i.i.d. from D. The empirical risk over the sample S is

F_S(w) = (1/n) Σ_{i=1}^{n} f(w; z_i).

Then, a full-batch first-order oracle is a procedure that, given input w ∈ W, outputs

O(w) := (∇F_S(w); F_S(w)),

where ∇F_S(w) is an empirical risk sub-gradient of the form

∇F_S(w) = (1/n) Σ_{i=1}^{n} ∇f(w; z_i), (2)

and each sub-gradient ∇f(w; z_i) is computed by the oracle as a function of w and z_i (that is, independently of z_j for j ≠ i). We emphasize that the sample is fixed throughout the optimization, so that the oracle computes the gradient of the same empirical risk function at every call, hence the name full-batch. Note that the subgradient with respect to a single data point, i.e., ∇f(w; z_i), is not accessible through this oracle, which only returns the average gradient over the sample S.
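As a rough illustration of this access model, the following Python sketch wraps per-sample loss and subgradient routines behind an interface that only ever exposes their averages over the fixed sample S; the class and argument names are ours, not the paper's.

```python
import numpy as np

class FullBatchOracle:
    """Sketch of the full-batch first-order oracle O(w) = (grad F_S(w), F_S(w)).

    `loss(w, z)` and `grad(w, z)` compute f(w; z) and a subgradient of f(w; z) for a
    single z; the oracle returns only their averages over the fixed sample S.
    """
    def __init__(self, loss, grad, sample):
        self.loss, self.grad, self.sample = loss, grad, sample

    def __call__(self, w):
        n = len(self.sample)
        value = sum(self.loss(w, z) for z in self.sample) / n
        gradient = sum(self.grad(w, z) for z in self.sample) / n
        return gradient, value   # per-sample gradients are never revealed
```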
Notice that our definition above is slightly narrower than a general sub-gradient oracle for the empirical risk, due to the requirement that the sub-gradients ∇f(w; z_i) are chosen independently of z_j for j ≠ i; since we provide here a lower bound, this restriction strengthens our result. We make this restriction to avoid some degenerate constructions (which in fact can even be used to fail SGD if the gradient at z_i may depend on the whole sample) that have no practical implications.

Full-batch first-order algorithm. A full-batch (first-order) method is naturally defined as any algorithm that has access to the optimization objective—namely the empirical risk F_S—only via the full-batch first-order oracle. In particular, if w_t is the t'th query of the algorithm to the full-batch oracle, then w_t has to be of the form

w_t = Q_t(O(w_0), . . . , O(w_{t−1})), (3)

where Q_t : (ℝ^{d+1})^t → W is a fixed (possibly randomized) mapping. At the end of the process the algorithm outputs w_S. We study the algorithm's oracle complexity, which is the number of iterations T the algorithm performs before halting. Therefore, we assume without loss of generality that w_S = w_T, i.e., the algorithm's output is its T'th query.

2.1 Main result

In this section we establish our main result, which provides a generalization lower bound for full-batch first-order algorithms. The complete proof is provided in the full version of the paper [1].

Theorem 2.1. Let ε > 0 and n, T ∈ ℕ; there exists d = poly(2^n, T, 1/ε) such that the following holds. For any full-batch first-order algorithm with oracle complexity at most T, there exists a 1-Lipschitz convex function f(w; z) in W, the unit ball in ℝ^d, and a distribution D over Z such that, for some universal constant c > 0:

E_{S∼D^n}[ F(w_S) ] ≥ min_{w★∈W} F(w★) + ε + Ω( min{ 1 − cε^2 √T, 0 } ). (4)

An immediate consequence of Theorem 2.1 is that in order to obtain less than ε true risk we need at least T = Ω(1/ε^4) iterations. For simplicity, we state and prove the lower bound in Theorem 2.1 for the class of first-order full-batch algorithms defined above. However, our constructions readily generalize to local full-batch oracles that provide a complete description of F_S in an arbitrarily small neighborhood of the query point [25, 18]. Such oracles subsume second-order oracles, and consequently our generalization lower bounds hold also for second-order full-batch algorithms.

2.2 Discussion

Theorem 2.1 suggests that full-batch first-order algorithms are inferior to other types of first-order algorithms that operate with access to individual examples, such as SGD. Importantly, this separation is achieved not in terms of the optimization performance but in terms of the generalization performance. In light of this result, we next discuss and revisit the role of the optimization algorithm in the context of SCO. In particular, we wish to discuss the implications for what are perhaps the two most prominent full-batch optimization methods, GD and regularized GD, and in turn compare them.

Gradient descent. Perhaps the simplest example of a full-batch method is (projected) GD: GD is an iterative algorithm that at each iteration performs the update step w_t = Π_W[w_{t−1} − η∇F_S(w_{t−1})], where W is a convex set onto which we project the iterate. The output of GD is normally taken to be w_S = (1/T) Σ_t w_t (or a randomly chosen w_t). Notice that each step requires one call to the full-batch oracle and a single projection operation.
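A minimal sketch of this update, reusing the oracle wrapper above, is given below; the step size, horizon, and averaging convention are placeholders rather than the paper's recommended settings.

```python
import numpy as np

def project_unit_ball(w):
    norm = np.linalg.norm(w)
    return w if norm <= 1.0 else w / norm

def full_batch_gd(oracle, d, eta, T):
    """Projected full-batch GD driven only by the oracle O(w).

    Each iterate depends on the sample solely through the averaged gradients, so the
    method is of the form (3); the averaged iterate is returned as the output w_S.
    """
    w = np.zeros(d)
    iterates = []
    for _ in range(T):
        g, _ = oracle(w)                      # one full-batch oracle call per step
        w = project_unit_ball(w - eta * g)    # gradient step + projection onto W
        iterates.append(w)
    return np.mean(iterates, axis=0)
```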
The convergence analysis of GD to the optimal solution of the empirical risk has been widely studied. Specifically, if n is the sample size, it is known that with η = O(1/√n) and T = O(n), GD converges to a minimizer of F_S that is O(1/√n)-suboptimal. For the exact variant of GD depicted above, the generalization performance was analyzed in the work of Amir et al. [2], which showed that with T = O(n) steps, GD will suffer Ω(1/n^{1/4}) generalization error. Theorem 2.1 extends the above result to any variant of GD (dynamic learning rate, noisy GD, normalized GD, etc.).

Regularized gradient descent. We would also like to discuss the implication of Theorem 2.1 with respect to regularized variants of GD that operate on the regularized empirical risk F̂_S(w) = λr(w) + F_S(w). The main motivation for introducing the regularization term r is to avoid overfitting, and a popular choice for r is the squared Euclidean norm r(w) = ‖w‖_2^2. This choice leads to the following update rule for GD:

w_{t+1} = Π_W[ (1 − 2λη_t) w_t − η_t ∇F_S(w_t) ].

Again, this update can be implemented using a single first-order full-batch oracle call that computes the quantity ∇F_S(w_t). More generally, for any data-independent r, GD on F̂_S is a full-batch algorithm.¹ When r is the Euclidean norm, the minimizer of F̂_S is known to enjoy (with the choice λ = O(1/√n)) an optimal generalization error of O(1/√n) [6, 28]. This demonstrates the power of regularization and how it can provably induce generalization. Nevertheless, Theorem 2.1 still applies to any optimization method over F̂_S. Since optimization of F̂_S (the regularized empirical risk) to O(1/√n)-precision can be done via a full-batch method with fewer than O(n) calls, we observe that there are methods that minimize the regularized empirical risk but, due to Theorem 2.1, do not reach the optimal generalization error.

¹Note that we are not concerned with the computational cost of computing ∇r(w_t) since it does not factor into oracle complexity.

The role of regularization. Finally, in light of Theorem 2.1, let us compare the different variants of GD and regularized GD that do generalize well, in order to sharpen our understanding of the role of regularization in generalization. The conclusion of Theorem 2.1 is that any full-batch method that generalizes well performs at least Ω(n^2) steps. For regularized GD, with ℓ2 regularization, O(n^2) steps are indeed sufficient. In particular, with O(n^2) iterations we can find a solution that has O(1/n) empirical error. Any such solution would enjoy a generalization error of O(1/√n) [28]. For GD, Bassily et al. [4] showed that O(n^2) iterations would also suffice to achieve O(1/√n) error. This is achieved by tuning the learning rate to η = O(1/n^{3/2}). Notice that this improvement does not require any type of added regularization. To summarize, both GD and regularized GD with optimal parameters require Θ(n^2) iterations to attain the optimal O(1/√n) generalization error. Overall then, explicitly adding regularization is neither necessary nor does it improve the convergence rate. One might be tempted to believe that tuning the learning rate in GD implicitly induces some sort of regularization. For example, one might imagine that GD is biased towards the minimal norm solution, which would explain the redundancy of regularizing by this norm. However, this also turns out to be false: Dauber et al. [11] showed how GD (with any reasonable choice of learning rate) can diverge from the minimal norm solution.
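For completeness, the ℓ2-regularized update above also fits the oracle model with a one-line change to the GD sketch; again, this is an illustrative sketch (reusing project_unit_ball from above), not the authors' code.

```python
def regularized_gd_step(w, grad_FS, eta, lam):
    """One step of GD on the l2-regularized empirical risk
    hat{F}_S(w) = lam * ||w||^2 + F_S(w); only the full-batch gradient of F_S is needed,
    since the gradient of the regularizer, 2*lam*w, is data-independent."""
    return project_unit_ball((1.0 - 2.0 * lam * eta) * w - eta * grad_FS)
```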
1. What is the main contribution of the paper in terms of error lower-bound for optimization algorithms? 2. How does the paper extend the bound given in Bassily et al. to a wider class of functions? 3. What are the advantages of stochastic methods like SGD according to the review? 4. How does the paper apply a classical technique to extend the result to general full-batch algorithms? 5. What other first-order methods can the bound be applied to, as mentioned in the review?
Summary Of The Paper Review
Summary Of The Paper In this paper the authors prove an error lower-bound for full-batch first-order algorithms. This is a generalization of the bound given in Bassily et al. [2]. This new bound of the authors is true for a wider class of functions : GD and SGD, non-smooth and convex implies that to get an ϵ -risk, full-batch methods require at least Ω ( 1 / ϵ 4 ) iterations This shows the advantage of stochastic methods like SGD which only require Ω ( 1 / ϵ 2 ) iterations to reach the same risk. Review The authors clearly explain the background of Stochastic Convex Optimization (SCO) and present existing results and bounds. The authors also underline the fact that their bound is valid for a large family of algorithms (GD, projected GD, with smooth or non-smooth regularization, etc). To extend their results to general full-batch algorithms, they apply a classical technique described in section 4.3 where they use a random high-dimensional embedding and apply the previous bound on a modified objective. Comment after rebuttal In view of the reviews and the discussions, the authors clarified their technical contribution compared to [1] and presented how this generalization could be applied to many 1st order methods including Nesterov's acceleration, CG etc. Thus, I will increase my score to 7.
NIPS
Title Score-based Generative Modeling in Latent Space Abstract Score-based generative models (SGMs) have recently demonstrated impressive results in terms of both sample quality and distribution coverage. However, they are usually applied directly in data space and often require thousands of network evaluations for sampling. Here, we propose the Latent Score-based Generative Model (LSGM), a novel approach that trains SGMs in a latent space, relying on the variational autoencoder framework. Moving from data to latent space allows us to train more expressive generative models, apply SGMs to non-continuous data, and learn smoother SGMs in a smaller space, resulting in fewer network evaluations and faster sampling. To enable training LSGMs end-to-end in a scalable and stable manner, we (i) introduce a new score-matching objective suitable to the LSGM setting, (ii) propose a novel parameterization of the score function that allows SGM to focus on the mismatch of the target distribution with respect to a simple Normal one, and (iii) analytically derive multiple techniques for variance reduction of the training objective. LSGM obtains a state-of-the-art FID score of 2.10 on CIFAR-10, outperforming all existing generative results on this dataset. On CelebA-HQ-256, LSGM is on a par with previous SGMs in sample quality while outperforming them in sampling time by two orders of magnitude. In modeling binary images, LSGM achieves state-of-the-art likelihood on the binarized OMNIGLOT dataset. Our implementation is available at https://github.com/NVlabs/LSGM. 1 Introduction The long-standing goal of likelihood-based generative learning is to faithfully learn a data distribution, while also generating high-quality samples. Achieving these two goals simultaneously is a tremendous challenge, which has led to the development of a plethora of different generative models. Recently, score-based generative models (SGMs) demonstrated astonishing results in terms of both high sample quality and likelihood [1, 2]. These models define a forward diffusion process that maps data to noise by gradually perturbing the input data. Generation corresponds to a reverse process that synthesizes novel data via iterative denoising, starting from random noise. The problem then reduces to learning the score function—the gradient of the log-density—of the perturbed data [3]. In a seminal work, Song et al. [2] show how this modeling approach is described with a stochastic differential equation (SDE) framework which can be converted to maximum likelihood training [4]. Variants of SGMs have been applied to images [1, 2, 5, 6], audio [7, 8, 9, 10], graphs [11] and point clouds [12, 13]. Albeit high quality, sampling from SGMs is computationally expensive. This is because generation amounts to solving a complex SDE, or equivalently ordinary differential equation (ODE) (denoted as the probability flow ODE in [2]), that maps a simple base distribution to the complex data distribution. The resulting differential equations are typically complex and solving them accurately requires numerical integration with very small step sizes, which results in thousands of neural network evaluations [1, 2, 6]. Furthermore, generation complexity is uniquely defined by the underlying data distribution and the forward SDE for data perturbation, implying that synthesis speed cannot be ∗Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). increased easily without sacrifices. 
Moreover, SDE-based generative models are currently defined for continuous data and cannot be applied effortlessly to binary, categorical, or graph-structured data. Here, we propose the Latent Score-based Generative Model (LSGM), a new approach for learning SGMs in latent space, leveraging a variational autoencoder (VAE) framework [14, 15]. We map the input data to latent space and apply the score-based generative model there. The score-based model is then tasked with modeling the distribution over the embeddings of the data set. Novel data synthesis is achieved by first generating embeddings via drawing from a simple base distribution followed by iterative denoising, and then transforming this embedding via a decoder to data space (see Fig. 1). We can consider this model a VAE with an SGM prior. Our approach has several key advantages: Synthesis Speed: By pretraining the VAE with a Normal prior first, we can bring the marginal distribution over encodings (the aggregate posterior) close to the Normal prior, which is also the SGM’s base distribution. Consequently, the SGM only needs to model the remaining mismatch, resulting in a less complex model from which sampling becomes easier. Furthermore, we can tailor the latent space according to our needs. For example, we can use hierarchical latent variables and apply the diffusion model only over a subset of them, further improving synthesis speed. Expressivity: Training a regular SGM can be considered as training a neural ODE directly on the data [2]. However, previous works found that augmenting neural ODEs [16, 17] and more generally generative models [18, 19, 20, 21] with latent variables improves their expressivity. Consequently, we expect similar performance gains from combining SGMs with a latent variable framework. Tailored Encoders and Decoders: Since we use the SGM in latent space, we can utilize carefully designed encoders and decoders mapping between latent and data space, further improving expressivity. Additionally, the LSGM method can therefore be naturally applied to non-continuous data. LSGMs can be trained end-to-end by maximizing the variational lower bound on the data likelihood. Compared to regular score matching, our approach comes with additional challenges, since both the score-based denoising model and its target distribution, formed by the latent space encodings, are learnt simultaneously. To this end, we make the following technical contributions: (i) We derive a new denoising score matching objective that allows us to efficiently learn the VAE model and the latent SGM prior at the same time. (ii) We introduce a new parameterization of the latent space score function, which mixes a Normal distribution with a learnable SGM, allowing the SGM to model only the mismatch between the distribution of latent variables and the Normal prior. (iii) We propose techniques for variance reduction of the training objective by designing a new SDE and by analytically deriving importance sampling schemes, allowing us to stably train deep LSGMs. Experimentally, we achieve state-of-the-art 2.10 FID on CIFAR-10 and 7.22 FID on CelebA-HQ-256, and significantly improve upon likelihoods of previous SGMs. On CelebA-HQ-256, we outperform previous SGMs in synthesis speed by two orders of magnitude. We also model binarized images, MNIST and OMNIGLOT, achieving state-of-the-art likelihood on the latter. 2 Background Here, we review continuous-time score-based generative models (see [2] for an in-depth discussion). 
Consider a forward diffusion process {zt}t=1t=0 for continuous time variable t ∈ [0, 1], where z0 is the starting variable and zt its perturbation at time t. The diffusion process is defined by an Itô SDE: dz = f(t)z dt+ g(t) dw (1) where f : R→ R and g : R→ R are scalar drift and diffusion coefficients, respectively, and w is the standard Wiener process. f(t) and g(t) can be designed such that z1 ∼ N (z1;0, I) follows a Normal distribution at the end of the diffusion process.2 Song et al. [2] show that the SDE in Eq. 1 can be converted to a generative model by first sampling from z1 ∼ N (z1;0, I) and then running the reverse-time SDE dz = [f(t)z−g(t)2∇z log qt(z)] dt+g(t) dw̄, where w̄ is a reverse-time standard Wiener process and dt is an infinitesimal negative time step. The reverse SDE requires knowledge of ∇zt log qt(zt), the score function of the marginal distribution under the forward diffusion at time t. One approach for estimating it is via the score matching objective3: min θ Et∼U [0,1] [ λ(t)Eq(z0)Eq(zt|z0)[||∇zt log q(zt)−∇zt log pθ(zt)|| 2 2] ] (2) that trains the parameteric score function ∇zt log pθ(zt) at time t ∼ U [0, 1] for a given weighting coefficient λ(t). q(z0) is the z0-generating distribution and q(zt|z0) is the diffusion kernel, which is available in closed form for certain f(t) and g(t). Since ∇zt log q(zt) is not analytically available, Song et al. [2] rely on denoising score matching [22] that converts the objective in Eq. 2 to: min θ Et∼U [0,1] [ λ(t)Eq(z0)Eq(zt|z0)[||∇zt log q(zt|z0)−∇zt log pθ(zt)|| 2 2] ] + C (3) Vincent [22] shows C = Et∼U [0,1][λ(t)Eq(z0)Eq(zt|z0)[||∇zt log q(zt)||22 − ||∇zt log q(zt|z0)||22]] is independent of θ, making the minimizations in Eq. 3 and Eq. 2 equivalent. Song et al. [4] show that for λ(t) = g(t)2/2, the minimizations correspond to approximate maximum likelihood training based on an upper on the Kullback-Leibler (KL) divergence between the target distribution and the distribution defined by the reverse-time generative SDE with the learnt score function. In particular, the objective of Eq. 2 can then be written: KL ( q(z0)||pθ(z0) ) ≤ Et∼U[0,1] [ g(t)2 2 Eq(z0)Eq(zt|z0) [ ||∇zt log q(zt)−∇zt log pθ(zt)|| 2 2 ]] (4) which can again be transformed into denoising score matching (Eq. 3) following Vincent [22]. 3 Score-based Generative Modeling in Latent Space The LSGM framework in Fig. 1 consists of the encoder qφ(z0|x), SGM prior pθ(z0), and decoder pψ(x|z0). The SGM prior leverages a diffusion process as defined in Eq. 1 and diffuses z0 ∼ qφ(z0|x) samples in latent space to the standard Normal distribution p(z1) = N (z1;0, I). Generation uses the reverse SDE to sample from pθ(z0) with time-dependent score function∇zt log pθ(zt), and the decoder pψ(x|z0) to map the synthesized encodings z0 to data space. Formally, the generative process is written as p(z0,x) = pθ(z0)pψ(x|z0). The goal of training is to learn {φ,θ,ψ}, the parameters of the encoder qφ(z0|x), score function∇zt log pθ(zt), and decoder pψ(x|z0), respectively. We train LSGM by minimizing the variational upper bound on negative data log-likelihood log p(x): L(x,φ,θ,ψ) = Eqφ(z0|x) [ − log pψ(x|z0) ] +KL ( qφ(z0|x)||pθ(z0) ) (5) = Eqφ(z0|x) [ − log pψ(x|z0) ]︸ ︷︷ ︸ reconstruction term +Eqφ(z0|x) [ log qφ(z0|x) ]︸ ︷︷ ︸ negative encoder entropy +Eqφ(z0|x) [ − log pθ(z0) ]︸ ︷︷ ︸ cross entropy (6) following a VAE approach [14, 15], where qφ(z0|x) approximates the true posterior p(z0|x). In this paper, we use Eq. 
6 with decomposed KL divergence into its entropy and cross entropy terms. The reconstruction and entropy terms are estimated easily for any explicit encoder as long as the reparameterization trick is available [14]. The challenging part in training LSGM is to train the cross entropy term that involves the SGM prior. We motivate and present our expression for the cross-entropy term in Sec. 3.1, the parameterization of the SGM prior in Sec. 3.2, different weighting mechanisms for the training objective in Sec. 3.3, and variance reduction techniques in Sec. 3.4. 3.1 The Cross Entropy Term One may ask, why not train LSGM with Eq. 5 and rely on the KL in Eq. 4. Directly using the KL expression in Eq. 4 is not possible, as it involves the marginal score ∇zt log q(zt), which is unavailable analytically for common non-Normal distributions q(z0) such as Normalizing flows. 2Other distributions at t = 1 are possible; for instance, see the “variance-exploding” SDE in [2]. In this paper, however, we use only SDEs converging towardsN (z1;0, I) at t = 1. 3We omit the t-subscript of the diffused distributions qt in all score functions of the form∇zt log qt(zt). Transforming into denoising score matching does not help either, since in that case the problematic ∇zt log q(zt) term appears in the C term (see Eq. 3). In contrast to previous works [2, 22], we cannot simply drop C, since it is, in fact, not constant but depends on q(zt), which is trainable in our setup. To circumvent this problem, we instead decompose the KL in Eq. 5 and rather work directly with the cross entropy between the encoder distribution q(z0|x) and the SGM prior p(z0). We show: Theorem 1. Given two distributions q(z0|x) and p(z0), defined in the continuous space RD, denote the marginal distributions of diffused samples under the SDE in Eq. 1 at time t with q(zt|x) and p(zt). Assuming mild smoothness conditions on log q(zt|x) and log p(zt), the cross entropy is: CE(q(z0|x)||p(z0)) = Et∼U[0,1] [ g(t)2 2 Eq(zt,z0|x) [ ||∇zt log q(zt|z0)−∇zt log p(zt)|| 2 2 ]] + D 2 log ( 2πeσ20 ) , with q(zt, z0|x) = q(zt|z0)q(z0|x) and a Normal transition kernel q(zt|z0) = N (zt;µt(z0), σ2t I), where µt and σ 2 t are obtained from f(t) and g(t) for a fixed initial variance σ 2 0 at t = 0. A proof with generic expressions for µt and σ 2 t as well as an intuitive interpretation are in App. A. Importantly, unlike for the KL objective of Eq. 4, no problematic terms depending on the marginal score ∇zt log q(zt|x) arise. This allows us to use this denoising score matching objective for the cross entropy term in Theorem 1 not only for optimizing p(z0) (which is commonly done in the score matching literature), but also for the q(z0|x) encoding distribution. It can be used even with complex q(z0|x) distributions, defined, for example, in a hierarchical fashion [20, 21] or via Normalizing flows [23, 24]. Our novel analysis shows that, for diffusion SDEs following Eq. 1, only the cross entropy can be expressed purely with ∇zt log q(zt|z0). Neither KL nor entropy in [4] can be expressed without the problematic term∇zt log q(zt|x) (details in the Appendix). Note that in Theorem 1, the term∇zt log p(zt) in the score matching expression corresponds to the score that originates from diffusing an initial p(z0) distribution. In practice, we use the expression to learn an SGM prior pθ(z0), which models∇zt log p(zt) by a neural network. 
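To make the estimator behind Theorem 1 concrete, here is a rough PyTorch-style sketch of a one-sample Monte Carlo estimate of the score-matching integrand, assuming the standard closed-form diffusion kernel of the linear VPSDE with negligible initial variance; `score_model`, the schedule constants, and all names are placeholders on our part, not the authors' implementation, and the additive constant D/2·log(2πeσ₀²) from the theorem is omitted.

```python
import torch

def vpsde_kernel(z0, t, beta0=0.1, beta1=20.0):
    """Mean and std of q(z_t | z_0) for the linear variance-preserving SDE,
    assuming a negligible initial variance at t = 0."""
    log_coef = -0.5 * (beta0 * t + 0.5 * (beta1 - beta0) * t ** 2)  # -0.5 * int_0^t beta(s) ds
    mean = torch.exp(log_coef) * z0
    std = torch.sqrt(1.0 - torch.exp(2.0 * log_coef))
    return mean, std

def cross_entropy_term(score_model, z0, beta0=0.1, beta1=20.0):
    """One-sample estimate of (g(t)^2 / 2) * ||grad log q(z_t|z_0) - s_theta(z_t, t)||^2
    with t ~ U[0,1] and z_t ~ q(z_t | z_0), as in Theorem 1."""
    t = torch.rand(z0.shape[0], device=z0.device)
    t_b = t.view(-1, *([1] * (z0.dim() - 1)))
    mean, std = vpsde_kernel(z0, t_b, beta0, beta1)
    eps = torch.randn_like(z0)
    zt = mean + std * eps
    target = -eps / std                       # grad_{z_t} log q(z_t | z_0)
    g2 = beta0 + (beta1 - beta0) * t          # g(t)^2 = beta(t) for the VPSDE
    err = ((target - score_model(zt, t)) ** 2).flatten(1).sum(dim=1)
    return 0.5 * g2 * err
```

In LSGM this quantity is what the weightings of Sec. 3.3 multiply, with the score parameterized through the mixed Normal/neural form of Sec. 3.2.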
With the learnt score ∇zt log pθ(zt) (here we explicitly indicate the parameters θ to clarify that this is the learnt model), the actual SGM prior is defined via the generative reverse-time SDE (or, alternatively, a closely-connected ODE, see Sec. 2 and App. D), which generally defines its own, separate marginal distribution pθ(z0) at t = 0. Importantly, the learnt, approximate score∇zt log pθ(zt) is not necessarily the same as one would obtain when diffusing pθ(z0). Hence, when considering the learnt score∇zt log pθ(zt), the score matching expression in our Theorem only corresponds to an upper bound on the cross entropy between q(z0|x) and pθ(z0) defined by the generative reverse-time SDE. This is discussed in detail in concurrent works [4, 25]. Hence, from the perspective of the learnt SGM prior, we are training with an upper bound on the cross entropy (similar to the bound on the KL in Eq. 4), which can also be considered as the continuous version of the discretized variational objective derived by Ho et al. [1]. 3.2 Mixing Normal and Neural Score Functions In VAEs [14], p(z0) is often chosen as a standard Normal N (z0;0, I). For recent hierarchical VAEs [20, 21], using the reparameterization trick, the prior can be converted to N (z0;0, I) (App. E). Considering a single dimensional latent space, we can assume that the prior at time t is in the form of a geometric mixture p(zt) ∝ N (zt; 0, 1)1−αp′θ(zt)α where p′θ(zt) is a trainable SGM prior and α ∈ [0, 1] is a learnable scalar mixing coefficient. Formulating the prior this way has crucial advantages: (i) We can pretrain LSGM’s autoencoder networks assuming α=0, which corresponds to training the VAE with a standard Normal prior. This pretraining step will bring the distribution of latent variable close to N (z0; 0, 1), allowing the SGM prior to learn a much simpler distribution in the following end-to-end training stage. (ii) The score function for this mixture is of the form ∇zt log p(zt) = −(1− α)zt + α∇zt log p′θ(zt). When the score function is dominated by the linear term, we expect that the reverse SDE can be solved faster, as its drift is dominated by this linear term. For our multivariate latent space, we obtain diffused samples at time t by sampling zt ∼ q(zt|z0) with zt = µt(z0) + σt , where ∼ N ( ;0, I). Since we have ∇zt log q(zt|z0) = − /σt, similar to [1], we parameterize the score function by ∇zt log p(zt) := − θ(zt, t)/σt, where θ(zt, t) := σt(1 − α) zt + α ′θ(zt, t) is defined by our mixed score parameterization that is applied elementwise to the components of the score. With this, we simplify the cross entropy expression to: CE(qφ(z0|x)||pθ(z0)) = Et∼U[0,1] [ w(t) 2 Eqφ(zt,z0|x), [ || − θ(zt, t)||22 ]] + D 2 log ( 2πeσ20 ) , (7) where w(t) = g(t)2/σ2t is a time-dependent weighting scalar. 3.3 Training with Different Weighting Mechanisms Table 1: Weighting mechanisms Mechanism Weights Weighted wll(t) = g(t)2/σ2t Unweighted wun(t) = 1 Reweighted wre(t) = g(t)2 The weighting term w(t) in Eq. 7 trains the prior with maximum likelihood. Similar to [1, 2], we observe that when w(t) is dropped while training the SGM prior (i.e., w(t) = 1), LSGM often yields higher quality samples at a small cost in likelihood. However, in our case, we can only drop the weighting when training the prior. When updating the encoder parameters, we still need to use the maximum likelihood weighting to ensure that the encoder q(z0|x) is brought closer to the true posterior p(z0|x)4. Tab. 
Tab. 1 summarizes the three weighting mechanisms we consider in this paper: w_ll(t) corresponds to maximum likelihood, w_un(t) is the unweighted objective used by [1, 2], and w_re(t) is a variant obtained by dropping only 1/σ_t². This weighting mechanism has a similar effect on the sample quality as w_un(t) = 1; however, in Sec. 3.4, we show that it is easier to define a variance reduction scheme for this weighting mechanism. The following summarizes our training objectives (with t ∼ U[0, 1] and ε ∼ N(ε; 0, I)):

min_{φ,ψ} E_{q_φ(z_0|x)}[ −log p_ψ(x|z_0) ] + E_{q_φ(z_0|x)}[ log q_φ(z_0|x) ] + E_{t, ε, q(z_t|z_0), q_φ(z_0|x)}[ (w_ll(t)/2) ||ε − ε_θ(z_t, t)||₂² ]     (8)

min_θ E_{t, ε, q(z_t|z_0), q_φ(z_0|x)}[ (w_{ll/un/re}(t)/2) ||ε − ε_θ(z_t, t)||₂² ]  with  q(z_t|z_0) = N(z_t; µ_t(z_0), σ_t² I),     (9)

where Eq. 8 trains the VAE encoder and decoder parameters {φ, ψ} using the variational bound L(x, φ, θ, ψ) from Eq. 6, and Eq. 9 trains the prior with one of the three weighting mechanisms. Since the SGM prior participates in the objective only through the cross entropy term, we only consider this term when training the prior. Efficient algorithms for training with these objectives are presented in App. G.

3.4 Variance Reduction

The objectives in Eqs. 8 and 9 involve sampling of the time variable t, which has high variance [26]. We introduce several techniques for reducing this variance for all three objective weightings. We focus on the “variance preserving” SDEs (VPSDEs) [2, 1, 27], defined by dz = −½ β(t) z dt + √β(t) dw, where β(t) = β_0 + (β_1 − β_0) t linearly interpolates in [β_0, β_1] (other SDEs are discussed in App. B). We denote the marginal distribution of latent variables by q(z_0) := E_{p_data(x)}[q(z_0|x)]. Here, we derive variance reduction techniques for CE(q(z_0) || p(z_0)), assuming that both q(z_0) and p(z_0) are N(z_0; 0, I). This is a reasonable simplification for our analysis because pretraining our LSGM model with a N(z_0; 0, I) prior brings q(z_0) close to N(z_0; 0, I), and our SGM prior is often dominated by the fixed Normal mixture component. We empirically observe that the variance reduction techniques developed with this assumption still work well when q(z_0) and p(z_0) are not exactly N(z_0; 0, I).

Variance reduction for likelihood weighting: In App. B, for q(z_0) = p(z_0) = N(z_0; 0, I), we show that CE(q(z_0) || p(z_0)) is given by (D/2) E_{t∼U[0,1]}[ d log σ_t²/dt ] + const. We consider two approaches:

(1) Geometric VPSDE: To reduce the variance while sampling t uniformly, we can design the SDE such that d log σ_t²/dt is constant for t ∈ [0, 1]. We show in App. B that β(t) = log(σ_max²/σ_min²) · σ_t²/(1 − σ_t²) with geometric variance σ_t² = σ_min² (σ_max²/σ_min²)^t satisfies this condition. We call a VPSDE with this β(t) a geometric VPSDE. σ_min² and σ_max² are the hyperparameters of the SDE, with 0 < σ_min² < σ_max² < 1. Although our geometric VPSDE has a geometric variance progression similar to the “variance exploding” SDE (VESDE) [2], it still enjoys the “variance preserving” property of the VPSDE. In App. B, we show that the VESDE does not come with a reduced variance for t-sampling by default.

(2) Importance sampling (IS): We can keep β(t) and σ_t² unchanged for the original linear VPSDE, and instead use IS to minimize variance. The theory of IS shows that the proposal r(t) ∝ d log σ_t²/dt has minimum variance [28]. In App. B, we show that we can sample from r(t) using inverse transform sampling, t = var⁻¹((σ_1²)^ρ (σ_0²)^{1−ρ}), where var⁻¹ is the inverse of σ_t² and ρ ∼ U[0, 1]. This variance reduction technique is available for any VPSDE with arbitrary β(t).
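As a concrete illustration of Sec. 3.4, the sketch below implements the three weights of Tab. 1 and the inverse-transform samplers t = var⁻¹((σ_1²)^ρ (σ_0²)^{1−ρ}) for the likelihood weighting and t = var⁻¹((1 − ρ)σ_0² + ρσ_1²) for the reweighted objective (discussed in the following paragraph), specialized to a linear VPSDE. The closed-form inversion of σ_t² here is our own algebra under the assumed kernel variance σ_t² = 1 − (1 − σ_0²)e^{−∫β}; the paper's App. B is the authoritative derivation, and the constants are illustrative.

```python
import numpy as np

BETA0, BETA1, SIGMA0_SQ = 0.1, 20.0, 1e-2

def int_beta(t):
    # \int_0^t beta(s) ds for beta(t) = BETA0 + (BETA1 - BETA0) * t
    return BETA0 * t + 0.5 * (BETA1 - BETA0) * t ** 2

def var(t):
    # Assumed VPSDE kernel variance: sigma_t^2 = 1 - (1 - sigma_0^2) * exp(-\int beta)
    return 1.0 - (1.0 - SIGMA0_SQ) * np.exp(-int_beta(t))

def weights(t):
    """Tab. 1: likelihood, unweighted and reweighted objective weights."""
    g2 = BETA0 + (BETA1 - BETA0) * t                  # g(t)^2 = beta(t)
    return {"ll": g2 / var(t), "un": np.ones_like(t), "re": g2}

def inv_var(target_var):
    """Invert sigma_t^2 = target_var for the linear VPSDE (quadratic in t)."""
    c = -np.log((1.0 - target_var) / (1.0 - SIGMA0_SQ))   # \int_0^t beta(s) ds = c
    a = 0.5 * (BETA1 - BETA0)
    return (-BETA0 + np.sqrt(BETA0 ** 2 + 4.0 * a * c)) / (2.0 * a)

def sample_t_likelihood(n, rng):
    # r(t) proportional to d log sigma_t^2 / dt -> geometric interpolation of the variance
    rho = rng.uniform(size=n)
    return inv_var(var(1.0) ** rho * SIGMA0_SQ ** (1.0 - rho))

def sample_t_reweighted(n, rng):
    # r(t) proportional to d sigma_t^2 / dt -> linear interpolation of the variance
    rho = rng.uniform(size=n)
    return inv_var((1.0 - rho) * SIGMA0_SQ + rho * var(1.0))

rng = np.random.default_rng(0)
t_ll = sample_t_likelihood(100000, rng)
print("likelihood-IS t: mean %.3f, fraction below 0.1: %.3f" % (t_ll.mean(), (t_ll < 0.1).mean()))
print({k: float(v) for k, v in weights(np.array(0.5)).items()})
```

Consistent with Fig. 3, the likelihood-weighting proposal places much of its mass at small t, where 1/σ_t² is large.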
In Fig. 2, we train a small LSGM on CIFAR-10 with w_ll weighting using (i) the original VPSDE with uniform t sampling, (ii) the same SDE but with our IS from t, and (iii) the proposed geometric VPSDE. Note how both (ii) and (iii) significantly reduce the variance and allow us to monitor the progress of the training objective. In this case, (i) has difficulty minimizing the objective due to the high variance. In App. B, we show how IS proposals can be formed for other SDEs, including the VESDE and Sub-VPSDE from [2].

⁴Minimizing L(x, φ, θ, ψ) w.r.t. φ is equivalent to minimizing KL(q(z_0|x) || p(z_0|x)) w.r.t. q(z_0|x).

Variance reduction for unweighted and reweighted objectives: When training with w_un, analytically deriving IS proposal distributions for arbitrary β(t) is challenging. For linear VPSDEs, we provide a derivation in App. B to obtain the optimal IS distribution. In contrast, defining IS proposal distributions is easier when training with w_re. In App. B, we show that the optimal distribution is in the form r(t) ∝ dσ_t²/dt, which is sampled by t = var⁻¹((1 − ρ)σ_0² + ρσ_1²) with ρ ∼ U[0, 1]. In Fig. 3, we visualize the IS distributions for the three weighting mechanisms for the linear VPSDE with the original [β_0, β_1] parameters from [2]. r(t) for the likelihood weighting is more tilted towards t = 0 due to the 1/σ_t² term in w_ll. When using differently weighted objectives for training, we can either sample separate t with different IS distributions for each objective, or use IS for the SGM objective (Eq. 9) and reweight the samples according to the likelihood objective for encoder training (Eq. 8). See App. G for details.

4 Related Work

Our work builds on score-matching [29, 30, 31, 32, 33, 34, 35, 36, 37], specifically denoising score matching [22], which makes our work related to recent generative models using denoising score matching- and denoising diffusion-based objectives [3, 38, 1, 2, 6]. Among those, [1, 6] use a discretized diffusion process with many noise scales, building on [27], while Song et al. [2] introduce the continuous time framework using SDEs. Experimentally, these works focus on image modeling and, contrary to us, work directly in pixel space. Various works recently tried to address the slow sampling of these types of models and further improve output quality. [39] add an adversarial objective, [5] introduce non-Markovian diffusion processes that allow to trade off synthesis speed, quality, and sample diversity, [40] learn a sequence of conditional energy-based models for denoising, [41] distill the iterative sampling process into single shot synthesis, and [42] learn an adaptive noise schedule, which is adjusted during synthesis to accelerate sampling. Further, [26] propose empirical variance reduction techniques for discretized diffusions and introduce a new, heuristically motivated, noise schedule. In contrast, our proposed noise schedule and our variance reduction techniques are analytically derived and directly tailored to our learning setting in the continuous time setup. Recently, [11] presented a method to generate graphs using score-based models, relaxing the entries of adjacency matrices to continuous values. LSGM would allow to model graph data more naturally using encoders and decoders tailored to graphs [43, 44, 45, 46]. Since our model can be considered a VAE [14, 15] with score-based prior, it is related to approaches that improve VAE priors.
For example, Normalizing flows and hierarchical distributions [23, 24, 47, 48, 20, 21], as well as energy-based models [49, 50, 51, 52, 53] have been proposed as VAE priors. Furthermore, classifiers [54, 55, 56], adversarial methods [57], and other techniques [58, 59] have been used to define prior distributions implicitly. In two-stage training, a separate generative model is trained in latent space as a new prior after training the VAE itself [60, 61, 62, 63, 64, 10]. Our work also bears a resemblance to recent methods on improving the sampling quality in generative adversarial networks using gradient flows in the latent space [65, 66, 67, 68], with the main difference that these prior works use a discriminator to update the latent variables, whereas we train an SGM. Concurrent works: [10] proposed to learn a denoising diffusion model in the latent space of a VAE for symbolic music generation. This work does not introduce an end-to-end training framework of the combined VAE and denoising diffusion model and instead trains them in two separate stages. In contrast, concurrently with us [69] proposed an end-to-end training approach, and [70] combines contrastive learning with diffusion models in the latent space of VAEs for controllable generation. However, [10, 69, 70] consider the discretized diffusion objective [1], while we build on the continuous time framework. Also, these models are not equipped with the mixed score parameterization and variance reduction techniques, which we found crucial for the successful training of SGM priors. Additionally, [71, 4, 25] concurrently with us proposed likelihood-based training of SGMs in data space5. [4] developed a bound for the data likelihood in their Theorem 3 of their second version, using a denoising score matching objective, closely related to our cross entropy expression. However, our cross entropy expression is much simpler as we show how several terms can be marginalized out analytically for the diffusion SDEs employed by us (see our proof in App. A). The same marginalization can be applied to Theorem 3 in [4] when the drift coefficient takes a special affine form (i.e., f(z, t) = f(t)z). Moreover, [25] discusses the likelihood-based training of SGMs from a fundamental perspective and shows how several score matching objectives become a variational bound on the data likelihood. [71] introduced a notion of signal-to-noise ratio (SNR) that results in a noise-invariant parameterization of time that depends only on the initial and final noise. Interestingly, our importance sampling distribution in Sec. 3.4 has a similar noise-invariant parameterization of time via t = var−1((σ21) ρ(σ20) 1−ρ), which also depends only on the initial and final diffusion process variances. We additionally show that this time parameterization results in the optimal minimumvariance objective, if the distribution of latent variables follows a standard Normal distribution. Finally, [72] proposed a modified time parameterization that allows modeling unbounded data scores. 5 Experiments Here, we examine the efficacy of LSGM in learning generative models for images. Implementation details: We implement LSGM using the NVAE [20] architecture as VAE backbone and NCSN++ [2] as SGM backbone. NVAE has a hierarchical latent structure. The diffusion process input z0 is constructed by concatenating the latent variables from all groups in the channel dimension. 
For NVAEs with multiple spatial resolutions in latent groups, we only feed the smallest resolution groups to the SGM prior and assume that the remaining groups have a standard Normal distribution.

Sampling: To generate samples from LSGM at test time, we use a black-box ODE solver [73] to sample from the prior. Prior samples are then passed to the decoder to generate samples in data space.

Evaluation: We measure NELBO, an upper bound on negative log-likelihood (NLL), using Eq. 6. For estimating log p(z_0), we rely on the probability flow ODE [2], which provides an unbiased but stochastic estimation of log p(z_0). This stochasticity prevents us from performing an importance weighted estimation of NLL [74] (see App. F for details). For measuring sample quality, Fréchet inception distance (FID) [75] is evaluated with 50K samples. Implementation details in App. G.

5.1 Main Results

Unconditional color image generation: Here, we present our main results for unconditional image generation on CIFAR-10 [89] (Tab. 2) and CelebA-HQ-256 (5-bit quantized) [88] (Tab. 3). For CIFAR-10, we train 3 different models: LSGM (FID) and LSGM (balanced) both use the VPSDE with linear β(t) and w_un-weighting for the SGM prior in Eq. 9, while performing IS as derived in Sec. 3.4. They only differ in how the backbone VAE is trained. LSGM (NLL) is a model that is trained with our novel geometric VPSDE, using w_ll-weighting in the prior objective (further details in App. G). When set up for high image quality, LSGM achieves a new state-of-the-art FID of 2.10. When tuned towards NLL, we achieve a NELBO of 2.87, which is significantly better than previous score-based models. Only autoregressive models, which come with very slow synthesis, and VDVAE [21] reach similar or higher likelihoods, but they usually have much poorer image quality. For CelebA-HQ-256, we observe that when LSGM is trained with different SDE types and weighting mechanisms, it often obtains similar NELBO, potentially due to applying the SGM prior only to small latent variable groups and using Normal priors at the larger groups. With w_re-weighting and linear VPSDE, LSGM obtains the state-of-the-art FID score of 7.22, on a par with the original SGM [2]. For both datasets, we also report results for the VAE backbone used in our LSGM. Although this baseline achieves competitive NLL, its sample quality is behind our LSGM and the original SGM.

Modeling binarized images: Next, we examine LSGM on dynamically binarized MNIST [93] and OMNIGLOT [74]. We apply LSGM to binary images using a decoder with pixel-wise independent Bernoulli distributions. For these datasets, we report both NELBO and NLL in nats in Tab. 4 and Tab. 5. On OMNIGLOT, LSGM achieves state-of-the-art likelihood of ≤87.79 nat, outperforming previous models including VAEs with autoregressive decoders, and even when comparing its NELBO against importance weighted estimation of NLL for other methods. On MNIST, LSGM outperforms previous VAEs in NELBO, reaching a NELBO 1.09 nat lower than the state-of-the-art NVAE.

⁵We build on the V1 version of [4], which was substantially updated after the NeurIPS submission deadline.

Table 5: Dynamically binarized MNIST results.
  Method                          NELBO↓   NLL↓
  Ours       LSGM                  78.47   ≤78.47
  VAEs       NVAE [20]             79.56    78.01
             BIVA [48]             80.06    78.41
             IAF-VAE [24]          80.80    79.10
             DVAE++ [51]             -      78.49
  Aut. Reg.  PixelVAE++ [91]         -      78.00
             VampPrior [59]          -      78.45
             MAE [92]                -      77.98

Qualitative results: We visualize qualitative results for all datasets in Fig. 5.
On the complex multimodal CIFAR-10 dataset, LSGM generates sharp and high-quality images. On CelebA-HQ-256, LSGM generates diverse samples from different ethnicity and age groups with varying head poses and facial expressions. On MNIST and OMNIGLOT, the generated characters are sharp and high-contrast.

Sampling time: We compare LSGM against the original SGM [2] trained on the CelebA-HQ-256 dataset in terms of sampling time and number of function evaluations (NFEs) of the ODE solver. Song et al. [2] propose two main sampling techniques, predictor-corrector (PC) and probability flow ODE sampling. PC sampling involves 4000 NFEs and takes 44.6 min. on a Titan V for a batch of 16 images. It yields a 7.23 FID score (see Tab. 3). ODE-based sampling from the SGM takes 3.91 min. with 335 NFEs, but it obtains a poor FID score of 128.13 with an ODE solver error tolerance of 10⁻⁵.⁶ In stark contrast, ODE-based sampling from our LSGM takes 0.07 min. with an average of 23 NFEs, yielding a 7.22 FID score. LSGM is thus 637× and 56× faster than the original SGM’s [2] PC and ODE sampling, respectively. In Fig. 4, we visualize FID scores and NFEs for different ODE solver error tolerances. Our LSGM achieves low FID scores for relatively large error tolerances.

⁶We use the VESDE checkpoint at https://github.com/yang-song/score_sde_pytorch. Song et al. [2] report that ODE-based sampling yields worse FID scores for their models (see D.4 in [2]). The problem is more severe for VESDEs. Unfortunately, at submission time only a VESDE model was released.

We identify three main reasons for this significantly faster sampling from LSGM: (i) The SGM prior in our LSGM models latent variables with 32×32 spatial dimensions, whereas the original SGM [2] directly models 256×256 images. The larger spatial dimensions require a deeper network to achieve a large receptive field. (ii) Inspecting the SGM prior in our model suggests that the score function is heavily dominated by the linear term at the end of training, as the mixing coefficients α are all < 0.02. This makes our SGM prior smooth and numerically faster to solve. (iii) Since the SGM is formed in latent space in our model, errors from solving the ODE can be corrected to some degree by the VAE decoder, while in the original SGM [2] errors directly translate to artifacts in pixel space.

5.2 Ablation Studies

SDEs, objective weighting mechanisms and variance reduction. In Tab. 6, we analyze the different weighting mechanisms and variance reduction techniques and compare the geometric VPSDE with the regular VPSDE with linear β(t) [1, 2]. In the table, SGM-obj.-weighting denotes the weighting mechanism used when training the SGM prior (via Eq. 9). t-sampling (SGM-obj.) indicates the sampling approach for t, where r_ll(t), r_un(t) and r_re(t) denote the IS distributions for the weighted (likelihood), the unweighted, and the reweighted objective, respectively. For training the VAE encoder q_φ(z_0|x) (last term in Eq. 8), we either sample a separate batch of t with importance sampling following r_ll(t) (only necessary when the SGM prior is not trained with w_ll itself), or we reweight the samples drawn for training the prior according to the likelihood objective (denoted by rew.). n/a indicates fields that do not apply: the geometric VPSDE has optimal variance for the weighted (likelihood) objective already with uniform sampling, so there is no additional IS distribution. Also, we did not derive IS distributions for the geometric VPSDE for w_un. NaN indicates experiments that failed due to training instabilities.
Previous work [20, 21] have reported instability in training large VAEs. We find that our method inherits similar instabilities from VAEs; however, importance sampling often stabilizes training our LSGM. As expected, we obtain the best NELBOs (red) when training with the weighted, maximum likelihood objective (wll). Importantly, our new geometric VPSDE achieves the best NELBO. Furthermore, the best FIDs (blue) are obtained either by unweighted (wun) or reweighted (wre) SGM prior training, with only slightly worse NELBOs. These experiments were run on the CIFAR10 dataset, using a smaller model than for our main results above (details in App. G). End-to-end training. We proposed to train LSGM end-to-end, in contrast to [10]. Using a similar setup as above we compare end-to-end training of LSGM during the second stage with freezing the VAE encoder and decoder and only training the SGM prior in latent space during the second stage. When training the model end-to-end, we achieve an FID of 5.19 and NELBO of 2.98; when freezing the VAE networks during the second stage, we only get an FID of 9.00 and NELBO of 3.03. These results clearly motivate our end-to-end training strategy. Mixing Normal and neural score functions. We generally found training LSGM without our proposed “mixed score” formulation (Sec. 3.2) to be unstable during end-to-end training, highlighting its importance. To quantify the contribution of the mixed score parametrization for a stable model, we train a small LSGM with only one latent variable group. In this case, without the mixed score, we reached an FID of 34.71 and NELBO of 3.39; with it, we got an FID of 7.60 and NELBO of 3.29. Without the inductive bias provided by the mixed score, learning that the marginal distribution is close to a Normal one for large t purely from samples can be very hard in the high-dimensional latent space, where our diffusion is run. Furthermore, due to our importance sampling schemes, we tend to oversample small, rather than large t. However, synthesizing high-quality images requires an accurate score function estimate for all t. On the other hand, the log-likelihood of samples is highly sensitive to local image statistics and primarily determined at small t. It is plausible that we are still able to learn a reasonable estimate of the score function for these small t even without the mixed score formulation. That may explain why log-likelihood suffers much less than sample quality, as estimated by FID, when we remove the mixed score parameterization. Additional experiments and model samples are presented in App. H. 6 Conclusions We proposed the Latent Score-based Generative Model, a novel framework for end-to-end training of score-based generative models in the latent space of a variational autoencoder. Moving from data to latent space allows us to form more expressive generative models, model non-continuous data, and reduce sampling time using smoother SGMs. To enable training latent SGMs, we made three core contributions: (i) we derived a simple expression for the cross entropy term in the variational objective, (ii) we parameterized the SGM prior by mixing Normal and neural score functions, and (iii) we proposed several techniques for variance reduction in the estimation of the training objective. Experimental results show that latent SGMs outperform recent pixel-space SGMs in terms of both data likelihood and sample quality, and they can also be applied to binary datasets. 
In large image generation, LSGM generates data several orders of magnitude faster than recent SGMs. Nevertheless, LSGM’s synthesis speed does not yet permit sampling at interactive rates, and our implementation of LSGM is currently limited to image generation. Therefore, future work includes further accelerating sampling, applying LSGMs to other data types, and designing efficient networks for LSGMs. 7 Broader Impact Generating high-quality samples while fully covering the data distribution has been a long-standing challenge in generative learning. A solution to this problem will likely help reduce biases in generative models and lead to improving overall representation of minorities in the data distribution. SGMs are perhaps one of the first deep models that excel at both sample quality and distribution coverage. However, the high computational cost of sampling limits their widespread use. Our proposed LSGM reduces the sampling complexity of SGMs by a large margin and improves their expressivity further. Thus, in the long term, it can enable the usage of SGMs in practical applications. Here, LSGM is examined on the image generation task which has potential benefits and risks discussed in [94, 95]. However, LSGM can be considered a generic framework that extends SGMs to non-continuous data types. In principle LSGM could be used to model, for example, language [96, 97], music [98, 10], or molecules [99, 100]. Furthermore, like other deep generative models, it can potentially be used also for non-generative tasks such as semi-supervised and representation learning [101, 102, 103]. This makes the long-term social impacts of LSGM dependent on the downstream applications. Funding Statement All authors were funded by NVIDIA through full-time employment.
1. What is the focus and contribution of the paper on generative models? 2. What are the strengths of the proposed approach, particularly in terms of prior/inference scheme and adaptation of score-based generation? 3. Do you have any concerns or suggestions regarding the writing clarity and mathematical details? 4. How does the reviewer assess the novelty, significance, and impact of the paper's content? 5. Are there any questions or suggestions regarding the evaluation and illustration of the model's effectiveness, such as measuring KL divergence or visualizing latent space evolution?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes a generative model based on the VAE framework with a sophisticated prior/inference scheme based on denoising score matching. The authors develop extensive machinery to adapt related ideas from score-based generation in the observed space to the latent space. This greatly improves sampling time while preserving, and sometimes improving, sample quality and/or data likelihood. I believe the paper is an interesting addition to the existing collection of generative modelling techniques.

Review
Novelty: To my knowledge, the method is novel. There is a certain intersection with some concurrent NeurIPS submissions, though, which can be reflected by the authors when preparing the final version of the paper.

Clarity: The paper is generally well-written, but due to the large amount of mathematical material the writing is dense, and there are some details that I would like to see discussed more. As I understand it, the exceptionally good results on modelling binary images are explained by the fact that score matching in the latent space results in a very tight variational approximation and, hence, good gradients for training the decoder. Would it be possible to quantify this directly by measuring the KL divergence for the LSGM and for the same model trained with standard amortized inference techniques? Or are there even better ways to further illustrate this? I couldn't find detailed information on the structure of the latent space. As I understand it, it is a lower-resolution, image-like structure. If so, can the authors visualize the evolution of the latent variable under the SDE and the reconstructions it produces? Clearly, the effectiveness of LSGMs depends on the choice of the latent space, and I would like to see, again, a more detailed discussion around this. How do the performance metrics depend on its size and structure (image-like vs. flat vector)?

Quality: I didn't read the proofs for the presented theorems, but I followed the equations in the main text and they made sense to me.

Significance: I believe LSGMs are a valuable contribution because they can enable faster generation and tighter variational approximations. I think the potential impact can be further improved if similarly better results are obtained with decoder architectures other than NVAE, which is arguably very specially structured in its handling of latent variables.
NIPS
Title
Score-based Generative Modeling in Latent Space

Abstract
Score-based generative models (SGMs) have recently demonstrated impressive results in terms of both sample quality and distribution coverage. However, they are usually applied directly in data space and often require thousands of network evaluations for sampling. Here, we propose the Latent Score-based Generative Model (LSGM), a novel approach that trains SGMs in a latent space, relying on the variational autoencoder framework. Moving from data to latent space allows us to train more expressive generative models, apply SGMs to non-continuous data, and learn smoother SGMs in a smaller space, resulting in fewer network evaluations and faster sampling. To enable training LSGMs end-to-end in a scalable and stable manner, we (i) introduce a new score-matching objective suitable to the LSGM setting, (ii) propose a novel parameterization of the score function that allows SGM to focus on the mismatch of the target distribution with respect to a simple Normal one, and (iii) analytically derive multiple techniques for variance reduction of the training objective. LSGM obtains a state-of-the-art FID score of 2.10 on CIFAR-10, outperforming all existing generative results on this dataset. On CelebA-HQ-256, LSGM is on a par with previous SGMs in sample quality while outperforming them in sampling time by two orders of magnitude. In modeling binary images, LSGM achieves state-of-the-art likelihood on the binarized OMNIGLOT dataset. Our implementation is available at https://github.com/NVlabs/LSGM.

∗Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021).

1 Introduction

The long-standing goal of likelihood-based generative learning is to faithfully learn a data distribution, while also generating high-quality samples. Achieving these two goals simultaneously is a tremendous challenge, which has led to the development of a plethora of different generative models. Recently, score-based generative models (SGMs) demonstrated astonishing results in terms of both high sample quality and likelihood [1, 2]. These models define a forward diffusion process that maps data to noise by gradually perturbing the input data. Generation corresponds to a reverse process that synthesizes novel data via iterative denoising, starting from random noise. The problem then reduces to learning the score function—the gradient of the log-density—of the perturbed data [3]. In a seminal work, Song et al. [2] show how this modeling approach is described with a stochastic differential equation (SDE) framework which can be converted to maximum likelihood training [4]. Variants of SGMs have been applied to images [1, 2, 5, 6], audio [7, 8, 9, 10], graphs [11] and point clouds [12, 13]. Albeit high quality, sampling from SGMs is computationally expensive. This is because generation amounts to solving a complex SDE, or equivalently ordinary differential equation (ODE) (denoted as the probability flow ODE in [2]), that maps a simple base distribution to the complex data distribution. The resulting differential equations are typically complex and solving them accurately requires numerical integration with very small step sizes, which results in thousands of neural network evaluations [1, 2, 6]. Furthermore, generation complexity is uniquely defined by the underlying data distribution and the forward SDE for data perturbation, implying that synthesis speed cannot be
Moreover, SDE-based generative models are currently defined for continuous data and cannot be applied effortlessly to binary, categorical, or graph-structured data. Here, we propose the Latent Score-based Generative Model (LSGM), a new approach for learning SGMs in latent space, leveraging a variational autoencoder (VAE) framework [14, 15]. We map the input data to latent space and apply the score-based generative model there. The score-based model is then tasked with modeling the distribution over the embeddings of the data set. Novel data synthesis is achieved by first generating embeddings via drawing from a simple base distribution followed by iterative denoising, and then transforming this embedding via a decoder to data space (see Fig. 1). We can consider this model a VAE with an SGM prior. Our approach has several key advantages: Synthesis Speed: By pretraining the VAE with a Normal prior first, we can bring the marginal distribution over encodings (the aggregate posterior) close to the Normal prior, which is also the SGM’s base distribution. Consequently, the SGM only needs to model the remaining mismatch, resulting in a less complex model from which sampling becomes easier. Furthermore, we can tailor the latent space according to our needs. For example, we can use hierarchical latent variables and apply the diffusion model only over a subset of them, further improving synthesis speed. Expressivity: Training a regular SGM can be considered as training a neural ODE directly on the data [2]. However, previous works found that augmenting neural ODEs [16, 17] and more generally generative models [18, 19, 20, 21] with latent variables improves their expressivity. Consequently, we expect similar performance gains from combining SGMs with a latent variable framework. Tailored Encoders and Decoders: Since we use the SGM in latent space, we can utilize carefully designed encoders and decoders mapping between latent and data space, further improving expressivity. Additionally, the LSGM method can therefore be naturally applied to non-continuous data. LSGMs can be trained end-to-end by maximizing the variational lower bound on the data likelihood. Compared to regular score matching, our approach comes with additional challenges, since both the score-based denoising model and its target distribution, formed by the latent space encodings, are learnt simultaneously. To this end, we make the following technical contributions: (i) We derive a new denoising score matching objective that allows us to efficiently learn the VAE model and the latent SGM prior at the same time. (ii) We introduce a new parameterization of the latent space score function, which mixes a Normal distribution with a learnable SGM, allowing the SGM to model only the mismatch between the distribution of latent variables and the Normal prior. (iii) We propose techniques for variance reduction of the training objective by designing a new SDE and by analytically deriving importance sampling schemes, allowing us to stably train deep LSGMs. Experimentally, we achieve state-of-the-art 2.10 FID on CIFAR-10 and 7.22 FID on CelebA-HQ-256, and significantly improve upon likelihoods of previous SGMs. On CelebA-HQ-256, we outperform previous SGMs in synthesis speed by two orders of magnitude. We also model binarized images, MNIST and OMNIGLOT, achieving state-of-the-art likelihood on the latter. 2 Background Here, we review continuous-time score-based generative models (see [2] for an in-depth discussion). 
Consider a forward diffusion process {z_t}_{t=0}^{t=1} for a continuous time variable t ∈ [0, 1], where z_0 is the starting variable and z_t its perturbation at time t. The diffusion process is defined by an Itô SDE:

dz = f(t) z dt + g(t) dw     (1)

where f : R → R and g : R → R are scalar drift and diffusion coefficients, respectively, and w is the standard Wiener process. f(t) and g(t) can be designed such that z_1 ∼ N(z_1; 0, I) follows a Normal distribution at the end of the diffusion process.² Song et al. [2] show that the SDE in Eq. 1 can be converted to a generative model by first sampling z_1 ∼ N(z_1; 0, I) and then running the reverse-time SDE dz = [f(t) z − g(t)² ∇_z log q_t(z)] dt + g(t) dw̄, where w̄ is a reverse-time standard Wiener process and dt is an infinitesimal negative time step. The reverse SDE requires knowledge of ∇_{z_t} log q_t(z_t), the score function of the marginal distribution under the forward diffusion at time t. One approach for estimating it is via the score matching objective³:

min_θ E_{t∼U[0,1]} [ λ(t) E_{q(z_0)} E_{q(z_t|z_0)} [ ||∇_{z_t} log q(z_t) − ∇_{z_t} log p_θ(z_t)||₂² ] ]     (2)

that trains the parametric score function ∇_{z_t} log p_θ(z_t) at time t ∼ U[0, 1] for a given weighting coefficient λ(t). q(z_0) is the z_0-generating distribution and q(z_t|z_0) is the diffusion kernel, which is available in closed form for certain f(t) and g(t). Since ∇_{z_t} log q(z_t) is not analytically available, Song et al. [2] rely on denoising score matching [22], which converts the objective in Eq. 2 to:

min_θ E_{t∼U[0,1]} [ λ(t) E_{q(z_0)} E_{q(z_t|z_0)} [ ||∇_{z_t} log q(z_t|z_0) − ∇_{z_t} log p_θ(z_t)||₂² ] ] + C     (3)

Vincent [22] shows that C = E_{t∼U[0,1]} [ λ(t) E_{q(z_0)} E_{q(z_t|z_0)} [ ||∇_{z_t} log q(z_t)||₂² − ||∇_{z_t} log q(z_t|z_0)||₂² ] ] is independent of θ, making the minimizations in Eq. 3 and Eq. 2 equivalent. Song et al. [4] show that for λ(t) = g(t)²/2, the minimizations correspond to approximate maximum likelihood training based on an upper bound on the Kullback-Leibler (KL) divergence between the target distribution and the distribution defined by the reverse-time generative SDE with the learnt score function. In particular, the objective of Eq. 2 can then be written as:

KL(q(z_0) || p_θ(z_0)) ≤ E_{t∼U[0,1]} [ (g(t)²/2) E_{q(z_0)} E_{q(z_t|z_0)} [ ||∇_{z_t} log q(z_t) − ∇_{z_t} log p_θ(z_t)||₂² ] ]     (4)

which can again be transformed into denoising score matching (Eq. 3) following Vincent [22].

²Other distributions at t = 1 are possible; for instance, see the “variance-exploding” SDE in [2]. In this paper, however, we use only SDEs converging towards N(z_1; 0, I) at t = 1.
³We omit the t-subscript of the diffused distributions q_t in all score functions of the form ∇_{z_t} log q_t(z_t).

3 Score-based Generative Modeling in Latent Space

The LSGM framework in Fig. 1 consists of the encoder q_φ(z_0|x), the SGM prior p_θ(z_0), and the decoder p_ψ(x|z_0). The SGM prior leverages a diffusion process as defined in Eq. 1 and diffuses z_0 ∼ q_φ(z_0|x) samples in latent space to the standard Normal distribution p(z_1) = N(z_1; 0, I). Generation uses the reverse SDE to sample from p_θ(z_0) with the time-dependent score function ∇_{z_t} log p_θ(z_t), and the decoder p_ψ(x|z_0) to map the synthesized encodings z_0 to data space. Formally, the generative process is written as p(z_0, x) = p_θ(z_0) p_ψ(x|z_0). The goal of training is to learn {φ, θ, ψ}, the parameters of the encoder q_φ(z_0|x), the score function ∇_{z_t} log p_θ(z_t), and the decoder p_ψ(x|z_0), respectively. We train LSGM by minimizing the variational upper bound on the negative data log-likelihood −log p(x):

L(x, φ, θ, ψ) = E_{q_φ(z_0|x)}[ −log p_ψ(x|z_0) ] + KL(q_φ(z_0|x) || p_θ(z_0))     (5)
             = E_{q_φ(z_0|x)}[ −log p_ψ(x|z_0) ]  (reconstruction term)
               + E_{q_φ(z_0|x)}[ log q_φ(z_0|x) ]  (negative encoder entropy)
               + E_{q_φ(z_0|x)}[ −log p_θ(z_0) ]  (cross entropy)     (6)

following a VAE approach [14, 15], where q_φ(z_0|x) approximates the true posterior p(z_0|x). In this paper, we use Eq. 6 with the KL divergence decomposed into its entropy and cross-entropy terms, as detailed in Sec. 3.1 above.
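To make the generative direction of Eq. 1 concrete, the sketch below simulates the reverse-time SDE in latent space with a simple Euler–Maruyama discretization, starting from z_1 ∼ N(0, I) and stepping backwards in time; the resulting z_0 would then be passed to the decoder p_ψ(x|z_0). This is a minimal illustration under assumed linear-VPSDE coefficients and a placeholder score function, not the paper's sampler: LSGM itself integrates the closely related probability flow ODE with a black-box adaptive solver (Sec. 5).

```python
import numpy as np

BETA0, BETA1, D = 0.1, 20.0, 16

def beta(t):
    # Linear VPSDE coefficient; f(t) z = -0.5 * beta(t) * z and g(t)^2 = beta(t).
    return BETA0 + (BETA1 - BETA0) * t

def score_standard_normal(z, t):
    # Placeholder for the learnt score grad_z log p_theta(z_t); exact if the diffused
    # prior marginal stays N(0, I), which the mixed score parameterization encourages.
    return -z

def reverse_sde_sample(score_fn, n_steps=1000, rng=None):
    """Euler-Maruyama simulation of dz = [f(t) z - g(t)^2 * score(z, t)] dt + g(t) dw_bar,
    integrated from t = 1 down to t ~ 0 for the VPSDE."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = rng.standard_normal(D)                          # z_1 ~ N(0, I)
    dt = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = i * dt
        g2 = beta(t)
        drift = -0.5 * g2 * z - g2 * score_fn(z, t)      # f(t) z - g(t)^2 * score
        z = z - drift * dt + np.sqrt(g2 * dt) * rng.standard_normal(D)
    return z                                             # approximate sample from p_theta(z_0)

z0 = reverse_sde_sample(score_standard_normal)
print("sampled latent: mean %.3f, std %.3f" % (z0.mean(), z0.std()))
```

With the placeholder standard-Normal score, the simulated reverse process should keep the latent close to N(0, I), which is a useful sanity check before plugging in a learnt score network.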
6 with decomposed KL divergence into its entropy and cross entropy terms. The reconstruction and entropy terms are estimated easily for any explicit encoder as long as the reparameterization trick is available [14]. The challenging part in training LSGM is to train the cross entropy term that involves the SGM prior. We motivate and present our expression for the cross-entropy term in Sec. 3.1, the parameterization of the SGM prior in Sec. 3.2, different weighting mechanisms for the training objective in Sec. 3.3, and variance reduction techniques in Sec. 3.4. 3.1 The Cross Entropy Term One may ask, why not train LSGM with Eq. 5 and rely on the KL in Eq. 4. Directly using the KL expression in Eq. 4 is not possible, as it involves the marginal score ∇zt log q(zt), which is unavailable analytically for common non-Normal distributions q(z0) such as Normalizing flows. 2Other distributions at t = 1 are possible; for instance, see the “variance-exploding” SDE in [2]. In this paper, however, we use only SDEs converging towardsN (z1;0, I) at t = 1. 3We omit the t-subscript of the diffused distributions qt in all score functions of the form∇zt log qt(zt). Transforming into denoising score matching does not help either, since in that case the problematic ∇zt log q(zt) term appears in the C term (see Eq. 3). In contrast to previous works [2, 22], we cannot simply drop C, since it is, in fact, not constant but depends on q(zt), which is trainable in our setup. To circumvent this problem, we instead decompose the KL in Eq. 5 and rather work directly with the cross entropy between the encoder distribution q(z0|x) and the SGM prior p(z0). We show: Theorem 1. Given two distributions q(z0|x) and p(z0), defined in the continuous space RD, denote the marginal distributions of diffused samples under the SDE in Eq. 1 at time t with q(zt|x) and p(zt). Assuming mild smoothness conditions on log q(zt|x) and log p(zt), the cross entropy is: CE(q(z0|x)||p(z0)) = Et∼U[0,1] [ g(t)2 2 Eq(zt,z0|x) [ ||∇zt log q(zt|z0)−∇zt log p(zt)|| 2 2 ]] + D 2 log ( 2πeσ20 ) , with q(zt, z0|x) = q(zt|z0)q(z0|x) and a Normal transition kernel q(zt|z0) = N (zt;µt(z0), σ2t I), where µt and σ 2 t are obtained from f(t) and g(t) for a fixed initial variance σ 2 0 at t = 0. A proof with generic expressions for µt and σ 2 t as well as an intuitive interpretation are in App. A. Importantly, unlike for the KL objective of Eq. 4, no problematic terms depending on the marginal score ∇zt log q(zt|x) arise. This allows us to use this denoising score matching objective for the cross entropy term in Theorem 1 not only for optimizing p(z0) (which is commonly done in the score matching literature), but also for the q(z0|x) encoding distribution. It can be used even with complex q(z0|x) distributions, defined, for example, in a hierarchical fashion [20, 21] or via Normalizing flows [23, 24]. Our novel analysis shows that, for diffusion SDEs following Eq. 1, only the cross entropy can be expressed purely with ∇zt log q(zt|z0). Neither KL nor entropy in [4] can be expressed without the problematic term∇zt log q(zt|x) (details in the Appendix). Note that in Theorem 1, the term∇zt log p(zt) in the score matching expression corresponds to the score that originates from diffusing an initial p(z0) distribution. In practice, we use the expression to learn an SGM prior pθ(z0), which models∇zt log p(zt) by a neural network. 
With the learnt score ∇zt log pθ(zt) (here we explicitly indicate the parameters θ to clarify that this is the learnt model), the actual SGM prior is defined via the generative reverse-time SDE (or, alternatively, a closely-connected ODE, see Sec. 2 and App. D), which generally defines its own, separate marginal distribution pθ(z0) at t = 0. Importantly, the learnt, approximate score∇zt log pθ(zt) is not necessarily the same as one would obtain when diffusing pθ(z0). Hence, when considering the learnt score∇zt log pθ(zt), the score matching expression in our Theorem only corresponds to an upper bound on the cross entropy between q(z0|x) and pθ(z0) defined by the generative reverse-time SDE. This is discussed in detail in concurrent works [4, 25]. Hence, from the perspective of the learnt SGM prior, we are training with an upper bound on the cross entropy (similar to the bound on the KL in Eq. 4), which can also be considered as the continuous version of the discretized variational objective derived by Ho et al. [1]. 3.2 Mixing Normal and Neural Score Functions In VAEs [14], p(z0) is often chosen as a standard Normal N (z0;0, I). For recent hierarchical VAEs [20, 21], using the reparameterization trick, the prior can be converted to N (z0;0, I) (App. E). Considering a single dimensional latent space, we can assume that the prior at time t is in the form of a geometric mixture p(zt) ∝ N (zt; 0, 1)1−αp′θ(zt)α where p′θ(zt) is a trainable SGM prior and α ∈ [0, 1] is a learnable scalar mixing coefficient. Formulating the prior this way has crucial advantages: (i) We can pretrain LSGM’s autoencoder networks assuming α=0, which corresponds to training the VAE with a standard Normal prior. This pretraining step will bring the distribution of latent variable close to N (z0; 0, 1), allowing the SGM prior to learn a much simpler distribution in the following end-to-end training stage. (ii) The score function for this mixture is of the form ∇zt log p(zt) = −(1− α)zt + α∇zt log p′θ(zt). When the score function is dominated by the linear term, we expect that the reverse SDE can be solved faster, as its drift is dominated by this linear term. For our multivariate latent space, we obtain diffused samples at time t by sampling zt ∼ q(zt|z0) with zt = µt(z0) + σt , where ∼ N ( ;0, I). Since we have ∇zt log q(zt|z0) = − /σt, similar to [1], we parameterize the score function by ∇zt log p(zt) := − θ(zt, t)/σt, where θ(zt, t) := σt(1 − α) zt + α ′θ(zt, t) is defined by our mixed score parameterization that is applied elementwise to the components of the score. With this, we simplify the cross entropy expression to: CE(qφ(z0|x)||pθ(z0)) = Et∼U[0,1] [ w(t) 2 Eqφ(zt,z0|x), [ || − θ(zt, t)||22 ]] + D 2 log ( 2πeσ20 ) , (7) where w(t) = g(t)2/σ2t is a time-dependent weighting scalar. 3.3 Training with Different Weighting Mechanisms Table 1: Weighting mechanisms Mechanism Weights Weighted wll(t) = g(t)2/σ2t Unweighted wun(t) = 1 Reweighted wre(t) = g(t)2 The weighting term w(t) in Eq. 7 trains the prior with maximum likelihood. Similar to [1, 2], we observe that when w(t) is dropped while training the SGM prior (i.e., w(t) = 1), LSGM often yields higher quality samples at a small cost in likelihood. However, in our case, we can only drop the weighting when training the prior. When updating the encoder parameters, we still need to use the maximum likelihood weighting to ensure that the encoder q(z0|x) is brought closer to the true posterior p(z0|x)4. Tab. 
1 summarizes three weighting mechanisms we consider in this paper: wll(t) corresponds to maximum likelihood, wun(t) is the unweighted objective used by [1, 2], and wre(t) is a variant obtained by dropping only 1/σ2t . This weighting mechanism has a similar affect on the sample quality as wun(t) = 1; however, in Sec. 3.4, we show that it is easier to define a variance reduction scheme for this weighting mechanism. The following summarizes our training objectives (with t ∼ U [0, 1] and ∼ N ( ;0, I)): min φ,ψ Eqφ(z0|x) [ −log pψ(x|z0) ] +Eqφ(z0|x) [ log qφ(z0|x) ] +Et, ,q(zt|z0),qφ(z0|x) [ wll(t) 2 || − θ(zt, t)||22 ] (8) min θ Et, ,q(zt|z0),qφ(z0|x) [ wll/un/re(t) 2 || − θ(zt, t)||22 ] with q(zt|z0) = N (zt;µt(z0), σ 2 t I), (9) where Eq. 8 trains the VAE encoder and decoder parameters {φ,ψ} using the variational bound L(x,φ,θ,ψ) from Eq. 6. Eq. 9 trains the prior with one of the three weighting mechanisms. Since the SGM prior participates in the objective only in the cross entropy term, we only consider this term when training the prior. Efficient algorithms for training with the objectives are presented in App. G. 3.4 Variance Reduction The objectives in Eqs. 8 and 9 involve sampling of the time variable t, which has high variance [26]. We introduce several techniques for reducing this variance for all three objective weightings. We focus on the “variance preserving” SDEs (VPSDEs) [2, 1, 27], defined by dz = − 12β(t)z dt+ √ β(t) dw where β(t) = β0 + (β1 − β0)t linearly interpolates in [β0, β1] (other SDEs discussed in App. B). We denote the marginal distribution of latent variables by q(z0) := Epdata(x)[q(z0|x)]. Here, we derive variance reduction techniques for CE(q(z0)||p(z0)), assuming that both q(z0) = p(z0) = N (z0;0, I). This is a reasonable simplification for our analysis because pretraining our LSGM model with a N (z0;0, I) prior brings q(z0) close to N (z0;0, I) and our SGM prior is often dominated by the fixed Normal mixture component. We empirically observe that the variance reduction techniques developed with this assumption still work well when q(z0) and p(z0) are not exactly N (z0;0, I). Variance reduction for likelihood weighting: In App. B, for q(z0) = p(z0) = N (z0;0, I), we show CE(q(z0)||p(z0)) is given by D2 Et∼U [0,1][d log σ 2 t /dt] + const. We consider two approaches: (1) Geometric VPSDE: To reduce the variance sampling uniformly from t, we can design the SDE such that d log σ2t /dt is constant for t ∈ [0, 1]. We show in App. B that a β(t) = log(σ2max/σ2min) σ2t (1−σ2t ) with geometric variance σ2t = σ 2 min(σ 2 max/σ 2 min) t satisfies this condition. We call a VPSDE with this β(t) a geometric VPSDE. σ2min and σ 2 max are the hyperparameters of the SDE, with 0<σ 2 min<σ 2 max<1. Although our geometric VPSDE has a geometric variance progression similar to the “variance exploding” SDE (VESDE) [2], it still enjoys the “variance preserving” property of the VPSDE. In App. B, we show that the VESDE does not come with a reduced variance for t-sampling by default. (2) Importance sampling (IS): We can keep β(t) and σ2t unchanged for the original linear VPSDE, and instead use IS to minimize variance. The theory of IS shows that the proposal r(t) ∝ d log σ2t /dt has minimum variance [28]. In App. B, we show that we can sample from r(t) using inverse transform sampling t = var−1((σ21) ρ(σ20) 1−ρ) where var−1 is the inverse of σ2t and ρ ∼ U [0, 1]. This variance reduction technique is available for any VPSDE with arbitrary β(t). In Fig. 
2, we train a small LSGM on CIFAR-10 with wll weighting using (i) the original VPSDE with uniform t sampling, (ii) the same SDE but with our IS from t, and (iii) the proposed geometric 4Minimizing L(x,φ,θ,ψ) w.r.t φ is equivalent to minimizing KL ( q(z0|x)||p(z0|x) ) w.r.t q(z0|x). VPSDE. Note how both (ii) and (iii) significantly reduce the variance and allow us to monitor the progress of the training objective. In this case, (i) has difficulty minimizing the objective due to the high variance. In App. B, we show how IS proposals can be formed for other SDEs, including the VESDE and Sub-VPSDE from [2]. Variance reduction for unweighted and reweighted objectives: When training with wun, analytically deriving IS proposal distributions for arbitrary β(t) is challenging. For linear VPSDEs, we provide a derivation in App. B to obtain the optimal IS distribution. In contrast, defining IS proposal distributions is easier when training with wre. In App. B, we show that the optimal distribution is in the form r(t) ∝ dσ2t /dtwhich is sampled by t=var−1((1−ρ)σ20 +ρσ21) with ρ ∼ U [0, 1]. In Fig. 3, we visualize the IS distributions for the three weighting mechanisms for the linear VPSDE with the original [β0, β1] parameters from [2]. r(t) for the likelihood weighting is more tilted towards t = 0 due to the 1/σ2t term in wll. When using differently weighted objectives for training, we can either sample separate t with different IS distributions for each objective, or use IS for the SGM objective (Eq. 9) and reweight the samples according to the likelihood objective for encoder training (Eq. 8). See App. G for details. 4 Related Work Our work builds on score-matching [29, 30, 31, 32, 33, 34, 35, 36, 37], specifically denoising score matching [22], which makes our work related to recent generative models using denoising score matching- and denoising diffusion-based objectives [3, 38, 1, 2, 6]. Among those, [1, 6] use a discretized diffusion process with many noise scales, building on [27], while Song et al. [2] introduce the continuous time framework using SDEs. Experimentally, these works focus on image modeling and, contrary to us, work directly in pixel space. Various works recently tried to address the slow sampling of these types of models and further improve output quality. [39] add an adversarial objective, [5] introduce non-Markovian diffusion processes that allow to trade off synthesis speed, quality, and sample diversity, [40] learn a sequence of conditional energy-based models for denoising, [41] distill the iterative sampling process into single shot synthesis, and [42] learn an adaptive noise schedule, which is adjusted during synthesis to accelerate sampling. Further, [26] propose empirical variance reduction techniques for discretized diffusions and introduce a new, heuristically motivated, noise schedule. In contrast, our proposed noise schedule and our variance reduction techniques are analytically derived and directly tailored to our learning setting in the continuous time setup. Recently, [11] presented a method to generate graphs using score-based models, relaxing the entries of adjacency matrices to continuous values. LSGM would allow to model graph data more naturally using encoders and decoders tailored to graphs [43, 44, 45, 46]. Since our model can be considered a VAE [14, 15] with score-based prior, it is related to approaches that improve VAE priors. 
For example, Normalizing flows and hierarchical distributions [23, 24, 47, 48, 20, 21], as well as energy-based models [49, 50, 51, 52, 53] have been proposed as VAE priors. Furthermore, classifiers [54, 55, 56], adversarial methods [57], and other techniques [58, 59] have been used to define prior distributions implicitly. In two-stage training, a separate generative model is trained in latent space as a new prior after training the VAE itself [60, 61, 62, 63, 64, 10]. Our work also bears a resemblance to recent methods on improving the sampling quality in generative adversarial networks using gradient flows in the latent space [65, 66, 67, 68], with the main difference that these prior works use a discriminator to update the latent variables, whereas we train an SGM. Concurrent works: [10] proposed to learn a denoising diffusion model in the latent space of a VAE for symbolic music generation. This work does not introduce an end-to-end training framework of the combined VAE and denoising diffusion model and instead trains them in two separate stages. In contrast, concurrently with us [69] proposed an end-to-end training approach, and [70] combines contrastive learning with diffusion models in the latent space of VAEs for controllable generation. However, [10, 69, 70] consider the discretized diffusion objective [1], while we build on the continuous time framework. Also, these models are not equipped with the mixed score parameterization and variance reduction techniques, which we found crucial for the successful training of SGM priors. Additionally, [71, 4, 25] concurrently with us proposed likelihood-based training of SGMs in data space5. [4] developed a bound for the data likelihood in their Theorem 3 of their second version, using a denoising score matching objective, closely related to our cross entropy expression. However, our cross entropy expression is much simpler as we show how several terms can be marginalized out analytically for the diffusion SDEs employed by us (see our proof in App. A). The same marginalization can be applied to Theorem 3 in [4] when the drift coefficient takes a special affine form (i.e., f(z, t) = f(t)z). Moreover, [25] discusses the likelihood-based training of SGMs from a fundamental perspective and shows how several score matching objectives become a variational bound on the data likelihood. [71] introduced a notion of signal-to-noise ratio (SNR) that results in a noise-invariant parameterization of time that depends only on the initial and final noise. Interestingly, our importance sampling distribution in Sec. 3.4 has a similar noise-invariant parameterization of time via t = var−1((σ21) ρ(σ20) 1−ρ), which also depends only on the initial and final diffusion process variances. We additionally show that this time parameterization results in the optimal minimumvariance objective, if the distribution of latent variables follows a standard Normal distribution. Finally, [72] proposed a modified time parameterization that allows modeling unbounded data scores. 5 Experiments Here, we examine the efficacy of LSGM in learning generative models for images. Implementation details: We implement LSGM using the NVAE [20] architecture as VAE backbone and NCSN++ [2] as SGM backbone. NVAE has a hierarchical latent structure. The diffusion process input z0 is constructed by concatenating the latent variables from all groups in the channel dimension. 
For NVAEs with multiple spatial resolutions in latent groups, we only feed the smallest resolution groups to the SGM prior and assume that the remaining groups have a standard Normal distribution. Sampling: To generate samples from LSGM at test time, we use a black-box ODE solver [73] to sample from the prior. Prior samples are then passed to the decoder to generate samples in data space. Evaluation: We measure NELBO, an upper bound on negative log-likelihood (NLL), using Eq. 6. For estimating log p(z0), we rely on the probability flow ODE [2], which provides an unbiased but stochastic estimation of log p(z0). This stochasticity prevents us from performing an importance weighted estimation of NLL [74] (see App. F for details). For measuring sample quality, Fréchet inception distance (FID) [75] is evaluated with 50K samples. Implementation details in App. G. 5.1 Main Results Unconditional color image generation: Here, we present our main results for unconditional image generation on CIFAR-10 [89] (Tab. 2) and CelebA-HQ-256 (5-bit quantized) [88] (Tab. 3). For CIFAR-10, we train 3 different models: LSGM (FID) and LSGM (balanced) both use the VPSDE with linear β(t) and wun-weighting for the SGM prior in Eq. 9, while performing IS as derived in Sec. 3.4. They only differ in how the backbone VAE is trained. LSGM (NLL) is a model that is trained with our novel geometric VPSDE, using wll-weighting in the prior objective (further details in App. G). When set up for high image quality, LSGM achieves a new state-of-the-art FID of 2.10. When tuned towards NLL, we achieve a NELBO of 2.87, which is significantly better than previous score-based models. Only autoregressive models, which come with very slow synthesis, and VDVAE [21] reach similar or higher likelihoods, but they usually have much poorer image quality. For CelebA-HQ-256, we observe that when LSGM is trained with different SDE types and weighting mechanisms, it often obtains similar NELBO potentially due to applying the SGM prior only to small latent variable groups and using Normal priors at the larger groups. With wre-weighting and linear VPSDE, LSGM obtains the state-of-the-art FID score of 7.22 on a par with the original SGM [2]. For both datasets, we also report results for the VAE backbone used in our LSGM. Although this baseline achieves competitive NLL, its sample quality is behind our LSGM and the original SGM. Modeling binarized images: Next, we examine LSGM on dynamically binarized MNIST [93] and OMNIGLOT [74]. We apply LSGM to binary images using a decoder with pixel-wise independent Bernoulli distributions. For these datasets, we report both NELBO and NLL in nats in Tab. 4 and Tab. 5. On OMNIGLOT, LSGM achieves state-of-the-art likelihood of ≤87.79 nat, outperforming previous models including VAEs with autoregressive decoders, and even when comparing its NELBO 5We build on the V1 version of [4], which was substantially updated after the NeurIPS submission deadline. Table 5: Dynamically binarized MNIST results. Method NELBO↓ NLL↓ Ours LSGM 78.47 ≤78.47 VAEs NVAE [20] 79.56 78.01 BIVA [48] 80.06 78.41 IAF-VAE [24] 80.80 79.10 DVAE++ [51] - 78.49 Aut. Reg. PixelVAE++ [91] - 78.00 VampPrior [59] - 78.45 MAE [92] - 77.98 against importance weighted estimation of NLL for other methods. On MNIST, LSGM outperforms previous VAEs in NELBO, reaching a NELBO 1.09 nat lower than the state-of-the-art NVAE. Qualitative results: We visualize qualitative results for all datasets in Fig. 5. 
On the complex multimodal CIFAR-10 dataset, LSGM generates sharp and high-quality images. On CelebA-HQ-256, LSGM generates diverse samples from different ethnicity and age groups with varying head poses and facial expressions. On MNIST and OMNIGLOT, the generated characters are sharp and high-contrast. Sampling time: We compare LSGM against the original SGM [2] trained on the CelebA-HQ-256 dataset in terms of sampling time and number of function evaluations (NFEs) of the ODE solver. Song et al. [2] propose two main sampling techniques including predictor-corrector (PC) and probability flow ODE. PC sampling involves 4000 NFEs and takes 44.6 min. on a Titan V for a batch of 16 images. It yields 7.23 FID score (see Tab. 3). ODE-based sampling from SGM takes 3.91 min. with 335 NFEs, but it obtains a poor FID score of 128.13 with 10−5 as ODE solver error tolerance6. In a stark contrast, ODE-based sampling from our LSGM takes 0.07 min. with average of 23 NFEs, yielding 7.22 FID score. LSGM is 637× and 56× faster than original SGM’s [2] PC and ODE 6We use the VESDE checkpoint at https://github.com/yang-song/score_sde_pytorch. Song et al. [2] report that ODE-based sampling yields worse FID scores for their models (see D.4 in [2]). The problem is more severe for VESDEs. Unfortunately, at submission time only a VESDE model was released. sampling, respectively. In Fig. 4, we visualize FID scores and NFEs for different ODE solver error tolerances. Our LSGM achieves low FID scores for relatively large error tolerances. We identify three main reasons for this significantly faster sampling from LSGM: (i) The SGM prior in our LSGM models latent variables with 32×32 spatial dim., whereas the original SGM [2] directly models 256×256 images. The larger spatial dimensions require a deeper network to achieve a large receptive field. (ii) Inspecting the SGM prior in our model suggests that the score function is heavily dominated by the linear term at the end of training, as the mixing coefficients α are all < 0.02. This makes our SGM prior smooth and numerically faster to solve. (iii) Since SGM is formed in the latent space in our model, errors from solving the ODE can be corrected to some degree using the VAE decoder, while in the original SGM [2] errors directly translate to artifacts in pixel space. 5.2 Ablation Studies SDEs, objective weighting mechanisms and variance reduction. In Tab. 6, we analyze the different weighting mechanisms and variance reduction techniques and compare the geometric VPSDE with the regular VPSDE with linear β(t) [1, 2]. In the table, SGM-obj.-weighting denotes the weighting mechanism used when training the SGM prior (via Eq. 9). t-sampling (SGM-obj.) indicates the sampling approach for t, where rll(t), run(t) and rre(t) denote the IS distributions for the weighted (likelihood), the unweighted, and the reweighted objective, respectively. For training the VAE encoder qφ(z0|x) (last term in Eq. 8), we either sample a separate batch t with importance sampling following rll(t) (only necessary when the SGM prior is not trained with wll itself), or we reweight the samples drawn for training the prior according to the likelihood objective (denoted by rew.). n/a indicates fields that do not apply: The geometric VPSDE has optimal variance for the weighted (likelihood) objective already with uniform sampling; there is no additional IS distribution. Also, we did not derive IS distributions for the geometric VPSDE for wun. NaN indicates experiments that failed due to training instabilities. 
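To make the t-sampling schemes referenced in the table legend above concrete, here is a small sketch of inverse-transform sampling of t for a linear-β(t) VPSDE, following the form of the proposal distributions described in Sec. 3.4. The β values and the closed-form inversion of the marginal variance are standard choices written out by hand; the code is an illustrative approximation, not the authors' implementation.

```python
import numpy as np

beta0, beta1 = 0.1, 20.0   # linear beta(t) = beta0 + (beta1 - beta0) * t

def var(t):
    """Marginal variance sigma_t^2 of the linear VPSDE: 1 - exp(-int_0^t beta(s) ds)."""
    return 1.0 - np.exp(-(beta0 * t + 0.5 * (beta1 - beta0) * t ** 2))

def inv_var(v):
    """Invert sigma_t^2 = v for t by solving the quadratic in the exponent."""
    c = np.log(1.0 - v)
    return (-beta0 + np.sqrt(beta0 ** 2 - 2.0 * (beta1 - beta0) * c)) / (beta1 - beta0)

def sample_t(batch_size, weighting, t_min=1e-5, rng=np.random.default_rng(0)):
    """Importance-sample t for the reweighted or likelihood objectives;
    any other value falls back to plain uniform sampling."""
    rho = rng.uniform(size=batch_size)
    v0, v1 = var(t_min), var(1.0)
    if weighting == "reweighted":      # r_re(t) proportional to d sigma_t^2 / dt
        return inv_var((1.0 - rho) * v0 + rho * v1)
    if weighting == "ll":              # r_ll(t) proportional to d log sigma_t^2 / dt
        return inv_var(v1 ** rho * v0 ** (1.0 - rho))
    return t_min + rho * (1.0 - t_min)  # uniform

t_batch = sample_t(8, "ll")
```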
Previous work [20, 21] have reported instability in training large VAEs. We find that our method inherits similar instabilities from VAEs; however, importance sampling often stabilizes training our LSGM. As expected, we obtain the best NELBOs (red) when training with the weighted, maximum likelihood objective (wll). Importantly, our new geometric VPSDE achieves the best NELBO. Furthermore, the best FIDs (blue) are obtained either by unweighted (wun) or reweighted (wre) SGM prior training, with only slightly worse NELBOs. These experiments were run on the CIFAR10 dataset, using a smaller model than for our main results above (details in App. G). End-to-end training. We proposed to train LSGM end-to-end, in contrast to [10]. Using a similar setup as above we compare end-to-end training of LSGM during the second stage with freezing the VAE encoder and decoder and only training the SGM prior in latent space during the second stage. When training the model end-to-end, we achieve an FID of 5.19 and NELBO of 2.98; when freezing the VAE networks during the second stage, we only get an FID of 9.00 and NELBO of 3.03. These results clearly motivate our end-to-end training strategy. Mixing Normal and neural score functions. We generally found training LSGM without our proposed “mixed score” formulation (Sec. 3.2) to be unstable during end-to-end training, highlighting its importance. To quantify the contribution of the mixed score parametrization for a stable model, we train a small LSGM with only one latent variable group. In this case, without the mixed score, we reached an FID of 34.71 and NELBO of 3.39; with it, we got an FID of 7.60 and NELBO of 3.29. Without the inductive bias provided by the mixed score, learning that the marginal distribution is close to a Normal one for large t purely from samples can be very hard in the high-dimensional latent space, where our diffusion is run. Furthermore, due to our importance sampling schemes, we tend to oversample small, rather than large t. However, synthesizing high-quality images requires an accurate score function estimate for all t. On the other hand, the log-likelihood of samples is highly sensitive to local image statistics and primarily determined at small t. It is plausible that we are still able to learn a reasonable estimate of the score function for these small t even without the mixed score formulation. That may explain why log-likelihood suffers much less than sample quality, as estimated by FID, when we remove the mixed score parameterization. Additional experiments and model samples are presented in App. H. 6 Conclusions We proposed the Latent Score-based Generative Model, a novel framework for end-to-end training of score-based generative models in the latent space of a variational autoencoder. Moving from data to latent space allows us to form more expressive generative models, model non-continuous data, and reduce sampling time using smoother SGMs. To enable training latent SGMs, we made three core contributions: (i) we derived a simple expression for the cross entropy term in the variational objective, (ii) we parameterized the SGM prior by mixing Normal and neural score functions, and (iii) we proposed several techniques for variance reduction in the estimation of the training objective. Experimental results show that latent SGMs outperform recent pixel-space SGMs in terms of both data likelihood and sample quality, and they can also be applied to binary datasets. 
In large image generation, LSGM generates data several orders of magnitude faster than recent SGMs. Nevertheless, LSGM’s synthesis speed does not yet permit sampling at interactive rates, and our implementation of LSGM is currently limited to image generation. Therefore, future work includes further accelerating sampling, applying LSGMs to other data types, and designing efficient networks for LSGMs. 7 Broader Impact Generating high-quality samples while fully covering the data distribution has been a long-standing challenge in generative learning. A solution to this problem will likely help reduce biases in generative models and lead to improving overall representation of minorities in the data distribution. SGMs are perhaps one of the first deep models that excel at both sample quality and distribution coverage. However, the high computational cost of sampling limits their widespread use. Our proposed LSGM reduces the sampling complexity of SGMs by a large margin and improves their expressivity further. Thus, in the long term, it can enable the usage of SGMs in practical applications. Here, LSGM is examined on the image generation task which has potential benefits and risks discussed in [94, 95]. However, LSGM can be considered a generic framework that extends SGMs to non-continuous data types. In principle LSGM could be used to model, for example, language [96, 97], music [98, 10], or molecules [99, 100]. Furthermore, like other deep generative models, it can potentially be used also for non-generative tasks such as semi-supervised and representation learning [101, 102, 103]. This makes the long-term social impacts of LSGM dependent on the downstream applications. Funding Statement All authors were funded by NVIDIA through full-time employment.
1. What is the novelty of the proposed method in learning a flexible VAE prior? 2. How effective has the model been verified, and what are the ablation studies conducted? 3. Are there any concerns regarding the complexity of the VAE and SGM backbone structure? 4. How does the reviewer assess the quality and clarity of the paper's content? 5. Is the idea of using score-based generative models directly on the latent space rather than high-dimensional pixel space significant?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes to learn a flexible VAE prior using a score-based generative model. In contrast to existing score-based models that operate in the high-dimensional pixel space, the paper applies the score-based model directly in the latent space, which is typically low-dimensional. The effectiveness of the model is verified using generative FID scores, negative log-likelihood, and various ablation studies.
Review
Originality: (+/-) The model presented is relatively intuitive and does not seem to have appeared in the recent literature. However, the idea of using a recently well-performing diffusion/score-based model as a flexible VAE prior is straightforward and appears less surprising.
Quality: (+) The authors put effort into designing various ablation studies to explore their model and conduct quite a lot of comparisons with existing baselines. The overall generative performance looks good. I also appreciate that the authors provide comprehensive supplementary materials with additional proofs, details, and results. (-) Though the FID score is good, the VAE and SGM backbone structures can be complicated, and some places need to be better described in the main text. I understand the powerful backbone structures are used to target SOTA performance. But from the model/methodology side, it might be better to also focus on "basic" backbone structures on which quite a few existing generative models build (e.g., a VAE backbone with a few convolution-ReLU layers). Does the paper try such basic backbone models in the experiments? In Tab. 2 & 3, does Ours (VAE backbone) refer to a non-NVAE structure? What structure exactly? Also, are all the results obtained by pre-training the NVAE first, or by jointly training both the VAE and SGM? It would be more interesting if the latter. I would adjust my initial scores based on the authors' feedback on the above issues.
Clarity: (+) Overall, the paper is easy to follow and the presentation is clear, despite a few unclear places regarding training procedures, as listed above.
Significance: (+/-) The idea is intuitive and could be easy to use/adapt in follow-up works, but the idea itself is not so surprising and cannot be considered transformative.
NIPS
Title Score-based Generative Modeling in Latent Space Abstract Score-based generative models (SGMs) have recently demonstrated impressive results in terms of both sample quality and distribution coverage. However, they are usually applied directly in data space and often require thousands of network evaluations for sampling. Here, we propose the Latent Score-based Generative Model (LSGM), a novel approach that trains SGMs in a latent space, relying on the variational autoencoder framework. Moving from data to latent space allows us to train more expressive generative models, apply SGMs to non-continuous data, and learn smoother SGMs in a smaller space, resulting in fewer network evaluations and faster sampling. To enable training LSGMs end-to-end in a scalable and stable manner, we (i) introduce a new score-matching objective suitable to the LSGM setting, (ii) propose a novel parameterization of the score function that allows SGM to focus on the mismatch of the target distribution with respect to a simple Normal one, and (iii) analytically derive multiple techniques for variance reduction of the training objective. LSGM obtains a state-of-the-art FID score of 2.10 on CIFAR-10, outperforming all existing generative results on this dataset. On CelebA-HQ-256, LSGM is on a par with previous SGMs in sample quality while outperforming them in sampling time by two orders of magnitude. In modeling binary images, LSGM achieves state-of-the-art likelihood on the binarized OMNIGLOT dataset. Our implementation is available at https://github.com/NVlabs/LSGM. 1 Introduction The long-standing goal of likelihood-based generative learning is to faithfully learn a data distribution, while also generating high-quality samples. Achieving these two goals simultaneously is a tremendous challenge, which has led to the development of a plethora of different generative models. Recently, score-based generative models (SGMs) demonstrated astonishing results in terms of both high sample quality and likelihood [1, 2]. These models define a forward diffusion process that maps data to noise by gradually perturbing the input data. Generation corresponds to a reverse process that synthesizes novel data via iterative denoising, starting from random noise. The problem then reduces to learning the score function—the gradient of the log-density—of the perturbed data [3]. In a seminal work, Song et al. [2] show how this modeling approach is described with a stochastic differential equation (SDE) framework which can be converted to maximum likelihood training [4]. Variants of SGMs have been applied to images [1, 2, 5, 6], audio [7, 8, 9, 10], graphs [11] and point clouds [12, 13]. Albeit high quality, sampling from SGMs is computationally expensive. This is because generation amounts to solving a complex SDE, or equivalently ordinary differential equation (ODE) (denoted as the probability flow ODE in [2]), that maps a simple base distribution to the complex data distribution. The resulting differential equations are typically complex and solving them accurately requires numerical integration with very small step sizes, which results in thousands of neural network evaluations [1, 2, 6]. Furthermore, generation complexity is uniquely defined by the underlying data distribution and the forward SDE for data perturbation, implying that synthesis speed cannot be ∗Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). increased easily without sacrifices. 
Moreover, SDE-based generative models are currently defined for continuous data and cannot be applied effortlessly to binary, categorical, or graph-structured data. Here, we propose the Latent Score-based Generative Model (LSGM), a new approach for learning SGMs in latent space, leveraging a variational autoencoder (VAE) framework [14, 15]. We map the input data to latent space and apply the score-based generative model there. The score-based model is then tasked with modeling the distribution over the embeddings of the data set. Novel data synthesis is achieved by first generating embeddings via drawing from a simple base distribution followed by iterative denoising, and then transforming this embedding via a decoder to data space (see Fig. 1). We can consider this model a VAE with an SGM prior. Our approach has several key advantages: Synthesis Speed: By pretraining the VAE with a Normal prior first, we can bring the marginal distribution over encodings (the aggregate posterior) close to the Normal prior, which is also the SGM’s base distribution. Consequently, the SGM only needs to model the remaining mismatch, resulting in a less complex model from which sampling becomes easier. Furthermore, we can tailor the latent space according to our needs. For example, we can use hierarchical latent variables and apply the diffusion model only over a subset of them, further improving synthesis speed. Expressivity: Training a regular SGM can be considered as training a neural ODE directly on the data [2]. However, previous works found that augmenting neural ODEs [16, 17] and more generally generative models [18, 19, 20, 21] with latent variables improves their expressivity. Consequently, we expect similar performance gains from combining SGMs with a latent variable framework. Tailored Encoders and Decoders: Since we use the SGM in latent space, we can utilize carefully designed encoders and decoders mapping between latent and data space, further improving expressivity. Additionally, the LSGM method can therefore be naturally applied to non-continuous data. LSGMs can be trained end-to-end by maximizing the variational lower bound on the data likelihood. Compared to regular score matching, our approach comes with additional challenges, since both the score-based denoising model and its target distribution, formed by the latent space encodings, are learnt simultaneously. To this end, we make the following technical contributions: (i) We derive a new denoising score matching objective that allows us to efficiently learn the VAE model and the latent SGM prior at the same time. (ii) We introduce a new parameterization of the latent space score function, which mixes a Normal distribution with a learnable SGM, allowing the SGM to model only the mismatch between the distribution of latent variables and the Normal prior. (iii) We propose techniques for variance reduction of the training objective by designing a new SDE and by analytically deriving importance sampling schemes, allowing us to stably train deep LSGMs. Experimentally, we achieve state-of-the-art 2.10 FID on CIFAR-10 and 7.22 FID on CelebA-HQ-256, and significantly improve upon likelihoods of previous SGMs. On CelebA-HQ-256, we outperform previous SGMs in synthesis speed by two orders of magnitude. We also model binarized images, MNIST and OMNIGLOT, achieving state-of-the-art likelihood on the latter. 2 Background Here, we review continuous-time score-based generative models (see [2] for an in-depth discussion). 
Consider a forward diffusion process $\{z_t\}_{t=0}^{t=1}$ for continuous time variable $t \in [0, 1]$, where $z_0$ is the starting variable and $z_t$ its perturbation at time $t$. The diffusion process is defined by an Itô SDE:
$$dz = f(t)\, z\, dt + g(t)\, dw \quad (1)$$
where $f : \mathbb{R} \to \mathbb{R}$ and $g : \mathbb{R} \to \mathbb{R}$ are scalar drift and diffusion coefficients, respectively, and $w$ is the standard Wiener process. $f(t)$ and $g(t)$ can be designed such that $z_1 \sim \mathcal{N}(z_1; 0, I)$ follows a Normal distribution at the end of the diffusion process.² Song et al. [2] show that the SDE in Eq. 1 can be converted to a generative model by first sampling from $z_1 \sim \mathcal{N}(z_1; 0, I)$ and then running the reverse-time SDE $dz = [f(t)z - g(t)^2 \nabla_z \log q_t(z)]\, dt + g(t)\, d\bar{w}$, where $\bar{w}$ is a reverse-time standard Wiener process and $dt$ is an infinitesimal negative time step. The reverse SDE requires knowledge of $\nabla_{z_t} \log q_t(z_t)$, the score function of the marginal distribution under the forward diffusion at time $t$. One approach for estimating it is via the score matching objective³:
$$\min_\theta \; \mathbb{E}_{t \sim U[0,1]} \Big[ \lambda(t)\, \mathbb{E}_{q(z_0)} \mathbb{E}_{q(z_t|z_0)} \big[ \| \nabla_{z_t} \log q(z_t) - \nabla_{z_t} \log p_\theta(z_t) \|_2^2 \big] \Big] \quad (2)$$
that trains the parametric score function $\nabla_{z_t} \log p_\theta(z_t)$ at time $t \sim U[0, 1]$ for a given weighting coefficient $\lambda(t)$. $q(z_0)$ is the $z_0$-generating distribution and $q(z_t|z_0)$ is the diffusion kernel, which is available in closed form for certain $f(t)$ and $g(t)$. Since $\nabla_{z_t} \log q(z_t)$ is not analytically available, Song et al. [2] rely on denoising score matching [22], which converts the objective in Eq. 2 to:
$$\min_\theta \; \mathbb{E}_{t \sim U[0,1]} \Big[ \lambda(t)\, \mathbb{E}_{q(z_0)} \mathbb{E}_{q(z_t|z_0)} \big[ \| \nabla_{z_t} \log q(z_t|z_0) - \nabla_{z_t} \log p_\theta(z_t) \|_2^2 \big] \Big] + C \quad (3)$$
Vincent [22] shows that $C = \mathbb{E}_{t \sim U[0,1]} \big[ \lambda(t)\, \mathbb{E}_{q(z_0)} \mathbb{E}_{q(z_t|z_0)} [ \| \nabla_{z_t} \log q(z_t) \|_2^2 - \| \nabla_{z_t} \log q(z_t|z_0) \|_2^2 ] \big]$ is independent of $\theta$, making the minimizations in Eq. 3 and Eq. 2 equivalent. Song et al. [4] show that for $\lambda(t) = g(t)^2/2$, the minimizations correspond to approximate maximum likelihood training based on an upper bound on the Kullback-Leibler (KL) divergence between the target distribution and the distribution defined by the reverse-time generative SDE with the learnt score function. In particular, the objective of Eq. 2 can then be written:
$$\mathrm{KL}\big(q(z_0)\,\|\,p_\theta(z_0)\big) \le \mathbb{E}_{t \sim U[0,1]} \Big[ \tfrac{g(t)^2}{2}\, \mathbb{E}_{q(z_0)} \mathbb{E}_{q(z_t|z_0)} \big[ \| \nabla_{z_t} \log q(z_t) - \nabla_{z_t} \log p_\theta(z_t) \|_2^2 \big] \Big] \quad (4)$$
which can again be transformed into denoising score matching (Eq. 3) following Vincent [22].
[Footnote 2] Other distributions at $t = 1$ are possible; for instance, see the "variance-exploding" SDE in [2]. In this paper, however, we use only SDEs converging towards $\mathcal{N}(z_1; 0, I)$ at $t = 1$.
[Footnote 3] We omit the $t$-subscript of the diffused distributions $q_t$ in all score functions of the form $\nabla_{z_t} \log q_t(z_t)$.
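Before moving on, here is a minimal sketch of the closed-form diffusion kernel q(z_t | z_0) mentioned above, for the widely used variance-preserving SDE with linear β(t) (this specific SDE is discussed further in Sec. 3.4). The hyperparameters and tensor shapes are illustrative only, not the paper's settings.

```python
import torch

beta0, beta1 = 0.1, 20.0  # linear beta(t) = beta0 + (beta1 - beta0) * t

def vpsde_kernel_params(t):
    """Mean scaling and std of q(z_t | z_0) = N(mu_t(z0), sigma_t^2 I) for the
    linear VPSDE: mu_t(z0) = z0 * exp(-0.5 * B(t)), sigma_t^2 = 1 - exp(-B(t)),
    with B(t) = int_0^t beta(s) ds."""
    B = beta0 * t + 0.5 * (beta1 - beta0) * t ** 2
    mean_scale = torch.exp(-0.5 * B)
    sigma = torch.sqrt(1.0 - torch.exp(-B))
    return mean_scale, sigma

def diffuse(z0, t):
    """Sample z_t ~ q(z_t | z_0); also return the noise eps, since the score of the
    kernel is grad log q(z_t | z_0) = -eps / sigma_t."""
    mean_scale, sigma = vpsde_kernel_params(t)
    eps = torch.randn_like(z0)
    zt = mean_scale * z0 + sigma * eps
    return zt, eps, sigma

z0 = torch.randn(16, 128)            # a batch of hypothetical latent vectors
t = torch.full((16, 1), 0.3)         # diffusion time, broadcast over dimensions
zt, eps, sigma = diffuse(z0, t)
```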
3 Score-based Generative Modeling in Latent Space
The LSGM framework in Fig. 1 consists of the encoder $q_\phi(z_0|x)$, SGM prior $p_\theta(z_0)$, and decoder $p_\psi(x|z_0)$. The SGM prior leverages a diffusion process as defined in Eq. 1 and diffuses $z_0 \sim q_\phi(z_0|x)$ samples in latent space to the standard Normal distribution $p(z_1) = \mathcal{N}(z_1; 0, I)$. Generation uses the reverse SDE to sample from $p_\theta(z_0)$ with time-dependent score function $\nabla_{z_t} \log p_\theta(z_t)$, and the decoder $p_\psi(x|z_0)$ to map the synthesized encodings $z_0$ to data space. Formally, the generative process is written as $p(z_0, x) = p_\theta(z_0)\, p_\psi(x|z_0)$. The goal of training is to learn $\{\phi, \theta, \psi\}$, the parameters of the encoder $q_\phi(z_0|x)$, score function $\nabla_{z_t} \log p_\theta(z_t)$, and decoder $p_\psi(x|z_0)$, respectively. We train LSGM by minimizing the variational upper bound on the negative data log-likelihood $-\log p(x)$:
$$\mathcal{L}(x, \phi, \theta, \psi) = \mathbb{E}_{q_\phi(z_0|x)}\big[ -\log p_\psi(x|z_0) \big] + \mathrm{KL}\big( q_\phi(z_0|x)\,\|\,p_\theta(z_0) \big) \quad (5)$$
$$= \underbrace{\mathbb{E}_{q_\phi(z_0|x)}\big[ -\log p_\psi(x|z_0) \big]}_{\text{reconstruction term}} + \underbrace{\mathbb{E}_{q_\phi(z_0|x)}\big[ \log q_\phi(z_0|x) \big]}_{\text{negative encoder entropy}} + \underbrace{\mathbb{E}_{q_\phi(z_0|x)}\big[ -\log p_\theta(z_0) \big]}_{\text{cross entropy}} \quad (6)$$
following a VAE approach [14, 15], where $q_\phi(z_0|x)$ approximates the true posterior $p(z_0|x)$. In this paper, we use Eq. 6, with the KL divergence decomposed into its entropy and cross entropy terms. The reconstruction and entropy terms are estimated easily for any explicit encoder as long as the reparameterization trick is available [14]. The challenging part in training LSGM is to train the cross entropy term that involves the SGM prior. We motivate and present our expression for the cross-entropy term in Sec. 3.1, the parameterization of the SGM prior in Sec. 3.2, different weighting mechanisms for the training objective in Sec. 3.3, and variance reduction techniques in Sec. 3.4.
3.1 The Cross Entropy Term
One may ask, why not train LSGM with Eq. 5 and rely on the KL in Eq. 4? Directly using the KL expression in Eq. 4 is not possible, as it involves the marginal score $\nabla_{z_t} \log q(z_t)$, which is unavailable analytically for common non-Normal distributions $q(z_0)$ such as Normalizing flows. Transforming into denoising score matching does not help either, since in that case the problematic $\nabla_{z_t} \log q(z_t)$ term appears in the $C$ term (see Eq. 3). In contrast to previous works [2, 22], we cannot simply drop $C$, since it is, in fact, not constant but depends on $q(z_t)$, which is trainable in our setup. To circumvent this problem, we instead decompose the KL in Eq. 5 and rather work directly with the cross entropy between the encoder distribution $q(z_0|x)$ and the SGM prior $p(z_0)$. We show:
Theorem 1. Given two distributions $q(z_0|x)$ and $p(z_0)$, defined in the continuous space $\mathbb{R}^D$, denote the marginal distributions of diffused samples under the SDE in Eq. 1 at time $t$ with $q(z_t|x)$ and $p(z_t)$. Assuming mild smoothness conditions on $\log q(z_t|x)$ and $\log p(z_t)$, the cross entropy is:
$$\mathrm{CE}\big(q(z_0|x)\,\|\,p(z_0)\big) = \mathbb{E}_{t \sim U[0,1]} \Big[ \tfrac{g(t)^2}{2}\, \mathbb{E}_{q(z_t, z_0|x)} \big[ \| \nabla_{z_t} \log q(z_t|z_0) - \nabla_{z_t} \log p(z_t) \|_2^2 \big] \Big] + \tfrac{D}{2} \log\big( 2\pi e \sigma_0^2 \big),$$
with $q(z_t, z_0|x) = q(z_t|z_0)\, q(z_0|x)$ and a Normal transition kernel $q(z_t|z_0) = \mathcal{N}(z_t; \mu_t(z_0), \sigma_t^2 I)$, where $\mu_t$ and $\sigma_t^2$ are obtained from $f(t)$ and $g(t)$ for a fixed initial variance $\sigma_0^2$ at $t = 0$.
A proof, with generic expressions for $\mu_t$ and $\sigma_t^2$ as well as an intuitive interpretation, is given in App. A. Importantly, unlike for the KL objective of Eq. 4, no problematic terms depending on the marginal score $\nabla_{z_t} \log q(z_t|x)$ arise. This allows us to use this denoising score matching objective for the cross entropy term in Theorem 1 not only for optimizing $p(z_0)$ (which is commonly done in the score matching literature), but also for the $q(z_0|x)$ encoding distribution. It can be used even with complex $q(z_0|x)$ distributions, defined, for example, in a hierarchical fashion [20, 21] or via Normalizing flows [23, 24]. Our novel analysis shows that, for diffusion SDEs following Eq. 1, only the cross entropy can be expressed purely with $\nabla_{z_t} \log q(z_t|z_0)$. Neither the KL nor the entropy in [4] can be expressed without the problematic term $\nabla_{z_t} \log q(z_t|x)$ (details in the Appendix). Note that in Theorem 1, the term $\nabla_{z_t} \log p(z_t)$ in the score matching expression corresponds to the score that originates from diffusing an initial $p(z_0)$ distribution. In practice, we use the expression to learn an SGM prior $p_\theta(z_0)$, which models $\nabla_{z_t} \log p(z_t)$ by a neural network. With the learnt score $\nabla_{z_t} \log p_\theta(z_t)$ (here we explicitly indicate the parameters $\theta$ to clarify that this is the learnt model), the actual SGM prior is defined via the generative reverse-time SDE (or, alternatively, a closely-connected ODE, see Sec. 2 and App. D), which generally defines its own, separate marginal distribution $p_\theta(z_0)$ at $t = 0$. Importantly, the learnt, approximate score $\nabla_{z_t} \log p_\theta(z_t)$ is not necessarily the same as one would obtain when diffusing $p_\theta(z_0)$. Hence, when considering the learnt score $\nabla_{z_t} \log p_\theta(z_t)$, the score matching expression in our Theorem only corresponds to an upper bound on the cross entropy between $q(z_0|x)$ and the $p_\theta(z_0)$ defined by the generative reverse-time SDE. This is discussed in detail in concurrent works [4, 25]. Hence, from the perspective of the learnt SGM prior, we are training with an upper bound on the cross entropy (similar to the bound on the KL in Eq. 4), which can also be considered as the continuous version of the discretized variational objective derived by Ho et al. [1].
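The following sketch shows how the cross-entropy expression in Theorem 1 could be estimated by Monte Carlo with a generic learnable score network, reusing the linear-VPSDE kernel sketched earlier. The network, shapes, and hyperparameters are placeholders, not the authors' architecture or settings.

```python
import math
import torch

def cross_entropy_estimate(z0, score_net, beta0=0.1, beta1=20.0, sigma0_sq=1e-4):
    """Single-sample Monte Carlo estimate of the cross entropy in Theorem 1:
    E_t[ g(t)^2 / 2 * || grad log q(z_t|z_0) - s_theta(z_t, t) ||^2 ] + D/2 * log(2*pi*e*sigma0^2)."""
    batch, dim = z0.shape
    t = 1e-5 + (1.0 - 1e-5) * torch.rand(batch, 1)    # t ~ U[t_min, 1] to avoid sigma = 0
    B = beta0 * t + 0.5 * (beta1 - beta0) * t ** 2    # int_0^t beta(s) ds
    sigma = torch.sqrt(1.0 - torch.exp(-B))
    eps = torch.randn_like(z0)
    zt = torch.exp(-0.5 * B) * z0 + sigma * eps       # z_t ~ q(z_t | z_0)
    target_score = -eps / sigma                       # grad log q(z_t | z_0)
    g_sq = beta0 + (beta1 - beta0) * t                # g(t)^2 = beta(t) for the VPSDE
    sq_err = ((target_score - score_net(zt, t)) ** 2).sum(dim=1, keepdim=True)
    const = 0.5 * dim * math.log(2 * math.pi * math.e * sigma0_sq)
    return (0.5 * g_sq * sq_err).mean() + const

# Hypothetical score network: here simply the score of a standard Normal prior.
score_net = lambda z, t: -z
z0 = torch.randn(32, 128)
loss = cross_entropy_estimate(z0, score_net)
```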
3.2 Mixing Normal and Neural Score Functions
In VAEs [14], $p(z_0)$ is often chosen as a standard Normal $\mathcal{N}(z_0; 0, I)$. For recent hierarchical VAEs [20, 21], using the reparameterization trick, the prior can be converted to $\mathcal{N}(z_0; 0, I)$ (App. E). Considering a one-dimensional latent space, we can assume that the prior at time $t$ is in the form of a geometric mixture $p(z_t) \propto \mathcal{N}(z_t; 0, 1)^{1-\alpha}\, p'_\theta(z_t)^{\alpha}$, where $p'_\theta(z_t)$ is a trainable SGM prior and $\alpha \in [0, 1]$ is a learnable scalar mixing coefficient. Formulating the prior this way has crucial advantages: (i) We can pretrain LSGM's autoencoder networks assuming $\alpha = 0$, which corresponds to training the VAE with a standard Normal prior. This pretraining step will bring the distribution of latent variables close to $\mathcal{N}(z_0; 0, 1)$, allowing the SGM prior to learn a much simpler distribution in the following end-to-end training stage. (ii) The score function for this mixture is of the form $\nabla_{z_t} \log p(z_t) = -(1 - \alpha) z_t + \alpha \nabla_{z_t} \log p'_\theta(z_t)$. When the score function is dominated by the linear term, we expect that the reverse SDE can be solved faster, as its drift is dominated by this linear term.
For our multivariate latent space, we obtain diffused samples at time $t$ by sampling $z_t \sim q(z_t|z_0)$ with $z_t = \mu_t(z_0) + \sigma_t \epsilon$, where $\epsilon \sim \mathcal{N}(\epsilon; 0, I)$. Since we have $\nabla_{z_t} \log q(z_t|z_0) = -\epsilon/\sigma_t$, similar to [1], we parameterize the score function by $\nabla_{z_t} \log p(z_t) := -\epsilon_\theta(z_t, t)/\sigma_t$, where $\epsilon_\theta(z_t, t) := \sigma_t (1 - \alpha) z_t + \alpha\, \epsilon'_\theta(z_t, t)$ is defined by our mixed score parameterization that is applied elementwise to the components of the score. With this, we simplify the cross entropy expression to:
$$\mathrm{CE}\big(q_\phi(z_0|x)\,\|\,p_\theta(z_0)\big) = \mathbb{E}_{t \sim U[0,1]} \Big[ \tfrac{w(t)}{2}\, \mathbb{E}_{q_\phi(z_t, z_0|x),\, \epsilon} \big[ \| \epsilon - \epsilon_\theta(z_t, t) \|_2^2 \big] \Big] + \tfrac{D}{2} \log\big( 2\pi e \sigma_0^2 \big), \quad (7)$$
where $w(t) = g(t)^2/\sigma_t^2$ is a time-dependent weighting scalar.
3.3 Training with Different Weighting Mechanisms
Table 1: Weighting mechanisms. Weighted (likelihood): $w_{\mathrm{ll}}(t) = g(t)^2/\sigma_t^2$; Unweighted: $w_{\mathrm{un}}(t) = 1$; Reweighted: $w_{\mathrm{re}}(t) = g(t)^2$.
The weighting term $w(t)$ in Eq. 7 trains the prior with maximum likelihood. Similar to [1, 2], we observe that when $w(t)$ is dropped while training the SGM prior (i.e., $w(t) = 1$), LSGM often yields higher-quality samples at a small cost in likelihood. However, in our case, we can only drop the weighting when training the prior. When updating the encoder parameters, we still need to use the maximum likelihood weighting to ensure that the encoder $q(z_0|x)$ is brought closer to the true posterior $p(z_0|x)$.⁴ Tab.
1 summarizes three weighting mechanisms we consider in this paper: wll(t) corresponds to maximum likelihood, wun(t) is the unweighted objective used by [1, 2], and wre(t) is a variant obtained by dropping only 1/σ2t . This weighting mechanism has a similar affect on the sample quality as wun(t) = 1; however, in Sec. 3.4, we show that it is easier to define a variance reduction scheme for this weighting mechanism. The following summarizes our training objectives (with t ∼ U [0, 1] and ∼ N ( ;0, I)): min φ,ψ Eqφ(z0|x) [ −log pψ(x|z0) ] +Eqφ(z0|x) [ log qφ(z0|x) ] +Et, ,q(zt|z0),qφ(z0|x) [ wll(t) 2 || − θ(zt, t)||22 ] (8) min θ Et, ,q(zt|z0),qφ(z0|x) [ wll/un/re(t) 2 || − θ(zt, t)||22 ] with q(zt|z0) = N (zt;µt(z0), σ 2 t I), (9) where Eq. 8 trains the VAE encoder and decoder parameters {φ,ψ} using the variational bound L(x,φ,θ,ψ) from Eq. 6. Eq. 9 trains the prior with one of the three weighting mechanisms. Since the SGM prior participates in the objective only in the cross entropy term, we only consider this term when training the prior. Efficient algorithms for training with the objectives are presented in App. G. 3.4 Variance Reduction The objectives in Eqs. 8 and 9 involve sampling of the time variable t, which has high variance [26]. We introduce several techniques for reducing this variance for all three objective weightings. We focus on the “variance preserving” SDEs (VPSDEs) [2, 1, 27], defined by dz = − 12β(t)z dt+ √ β(t) dw where β(t) = β0 + (β1 − β0)t linearly interpolates in [β0, β1] (other SDEs discussed in App. B). We denote the marginal distribution of latent variables by q(z0) := Epdata(x)[q(z0|x)]. Here, we derive variance reduction techniques for CE(q(z0)||p(z0)), assuming that both q(z0) = p(z0) = N (z0;0, I). This is a reasonable simplification for our analysis because pretraining our LSGM model with a N (z0;0, I) prior brings q(z0) close to N (z0;0, I) and our SGM prior is often dominated by the fixed Normal mixture component. We empirically observe that the variance reduction techniques developed with this assumption still work well when q(z0) and p(z0) are not exactly N (z0;0, I). Variance reduction for likelihood weighting: In App. B, for q(z0) = p(z0) = N (z0;0, I), we show CE(q(z0)||p(z0)) is given by D2 Et∼U [0,1][d log σ 2 t /dt] + const. We consider two approaches: (1) Geometric VPSDE: To reduce the variance sampling uniformly from t, we can design the SDE such that d log σ2t /dt is constant for t ∈ [0, 1]. We show in App. B that a β(t) = log(σ2max/σ2min) σ2t (1−σ2t ) with geometric variance σ2t = σ 2 min(σ 2 max/σ 2 min) t satisfies this condition. We call a VPSDE with this β(t) a geometric VPSDE. σ2min and σ 2 max are the hyperparameters of the SDE, with 0<σ 2 min<σ 2 max<1. Although our geometric VPSDE has a geometric variance progression similar to the “variance exploding” SDE (VESDE) [2], it still enjoys the “variance preserving” property of the VPSDE. In App. B, we show that the VESDE does not come with a reduced variance for t-sampling by default. (2) Importance sampling (IS): We can keep β(t) and σ2t unchanged for the original linear VPSDE, and instead use IS to minimize variance. The theory of IS shows that the proposal r(t) ∝ d log σ2t /dt has minimum variance [28]. In App. B, we show that we can sample from r(t) using inverse transform sampling t = var−1((σ21) ρ(σ20) 1−ρ) where var−1 is the inverse of σ2t and ρ ∼ U [0, 1]. This variance reduction technique is available for any VPSDE with arbitrary β(t). In Fig. 
2, we train a small LSGM on CIFAR-10 with wll weighting using (i) the original VPSDE with uniform t sampling, (ii) the same SDE but with our IS from t, and (iii) the proposed geometric 4Minimizing L(x,φ,θ,ψ) w.r.t φ is equivalent to minimizing KL ( q(z0|x)||p(z0|x) ) w.r.t q(z0|x). VPSDE. Note how both (ii) and (iii) significantly reduce the variance and allow us to monitor the progress of the training objective. In this case, (i) has difficulty minimizing the objective due to the high variance. In App. B, we show how IS proposals can be formed for other SDEs, including the VESDE and Sub-VPSDE from [2]. Variance reduction for unweighted and reweighted objectives: When training with wun, analytically deriving IS proposal distributions for arbitrary β(t) is challenging. For linear VPSDEs, we provide a derivation in App. B to obtain the optimal IS distribution. In contrast, defining IS proposal distributions is easier when training with wre. In App. B, we show that the optimal distribution is in the form r(t) ∝ dσ2t /dtwhich is sampled by t=var−1((1−ρ)σ20 +ρσ21) with ρ ∼ U [0, 1]. In Fig. 3, we visualize the IS distributions for the three weighting mechanisms for the linear VPSDE with the original [β0, β1] parameters from [2]. r(t) for the likelihood weighting is more tilted towards t = 0 due to the 1/σ2t term in wll. When using differently weighted objectives for training, we can either sample separate t with different IS distributions for each objective, or use IS for the SGM objective (Eq. 9) and reweight the samples according to the likelihood objective for encoder training (Eq. 8). See App. G for details. 4 Related Work Our work builds on score-matching [29, 30, 31, 32, 33, 34, 35, 36, 37], specifically denoising score matching [22], which makes our work related to recent generative models using denoising score matching- and denoising diffusion-based objectives [3, 38, 1, 2, 6]. Among those, [1, 6] use a discretized diffusion process with many noise scales, building on [27], while Song et al. [2] introduce the continuous time framework using SDEs. Experimentally, these works focus on image modeling and, contrary to us, work directly in pixel space. Various works recently tried to address the slow sampling of these types of models and further improve output quality. [39] add an adversarial objective, [5] introduce non-Markovian diffusion processes that allow to trade off synthesis speed, quality, and sample diversity, [40] learn a sequence of conditional energy-based models for denoising, [41] distill the iterative sampling process into single shot synthesis, and [42] learn an adaptive noise schedule, which is adjusted during synthesis to accelerate sampling. Further, [26] propose empirical variance reduction techniques for discretized diffusions and introduce a new, heuristically motivated, noise schedule. In contrast, our proposed noise schedule and our variance reduction techniques are analytically derived and directly tailored to our learning setting in the continuous time setup. Recently, [11] presented a method to generate graphs using score-based models, relaxing the entries of adjacency matrices to continuous values. LSGM would allow to model graph data more naturally using encoders and decoders tailored to graphs [43, 44, 45, 46]. Since our model can be considered a VAE [14, 15] with score-based prior, it is related to approaches that improve VAE priors. 
1. What is the main contribution of the paper in terms of introducing a new approach to generative modeling? 2. How does the proposed method differ from previous score-based models and VAEs? 3. Can you provide more details about the training stabilization techniques proposed in the paper? 4. How does the paper evaluate the performance of LSGM against other state-of-the-art deep generative models? 5. Can you discuss the relation between LSGM and recent works that propose techniques to improve samples from deep generative models? 6. Did the authors investigate the optimal mixing coefficient α and its impact on the model's performance? 7. Can you provide more insights on why the mixture formulation is particularly important for sample quality?
Summary Of The Paper Review
Summary Of The Paper The paper proposes Latent Score-based Generative Model (LSGM) which introduces a score-based prior in the Variational Autoencoder (VAE) framework. In contrast to previous score-based models that operate in the data space, LSGM uses a score-based model in the latent space. A sample from a base distribution (standard Gaussian) is denoised using the score-based prior in the latent space and is then mapped to the data space using a decoder. The authors further discuss how the various terms in the ELBO can be computed to train the model in an end-to-end fashion without requiring the time-dependent marginal score function. Multiple variance reduction techniques and training tricks have also been proposed for the resulting objective function. Both quantitative and qualitative results demonstrate the ability of the model to generate high fidelity images. Review Significance This paper makes a significant contribution both to the fields of score-based models and VAEs with more expressive priors. Severals well-motivated training stabilization techniques have been proposed which may prove helpful for the community. The paper also improves the state of the art in terms of sample quality (FID) for VAEs. Clarity The paper is well-written and organized. There are a fews places where inclusion of details in the main text would improve the clarity of the paper: i) Line 148: The parameterization of score function using ϵ θ . ii) Line 263: Differences between the three models. Technical quality The paper is technically sound. The empirical results are fairly comprehensive and evaluate LSGM on various datasets against state-of-the-art deep generative models. Ablation studies presented help elucidate the contribution of different components of the proposed model. Originality/Relation to prior work The paper presents a novel combination of score-based modeling with VAEs: a score-based prior distribution is introduced in the VAE framework. The relation to prior-work has been discussed sufficiently. The proposed work is also related to recent works [e.g., 1, 2, 3] that propose techniques to improve samples from deep generative models (particularly GANs). In these works, latent vectors are first diffused to a "better" point and then decoded to generate a sample. This is similar to LSGM where the latent vectors are denoised using a score-based model. A discussion of relation to these works would improve the related works section. [1] Tanaka, Akinori. "Discriminator optimal transport." arXiv preprint arXiv:1910.06832 (2019). [2] Ansari, Abdul Fatir, Ming Liang Ang, and Harold Soh. "Refining deep generative models via discriminator gradient flow." arXiv preprint arXiv:2012.00780 (2020). [3] Che, Tong, et al. "Your GAN is secretly an energy-based model and you should use discriminator driven latent sampling." arXiv preprint arXiv:2003.06060 (2020). Additional comments/Questions Did the authors investigate what final value of α is learned by the model? Specifically, is there an "optimal" mixing coefficient? In the ablation study the authors mention that the mixture formulation is particularly important for the sample quality. Do the authors have any insights on why this is the case? Post Rebuttal: Thank you for the clarification on α and the mixture formulation. I believe that a detailed discussion of the mixture formulation will improve the manuscript.
NIPS
Title Score-based Generative Modeling in Latent Space Abstract Score-based generative models (SGMs) have recently demonstrated impressive results in terms of both sample quality and distribution coverage. However, they are usually applied directly in data space and often require thousands of network evaluations for sampling. Here, we propose the Latent Score-based Generative Model (LSGM), a novel approach that trains SGMs in a latent space, relying on the variational autoencoder framework. Moving from data to latent space allows us to train more expressive generative models, apply SGMs to non-continuous data, and learn smoother SGMs in a smaller space, resulting in fewer network evaluations and faster sampling. To enable training LSGMs end-to-end in a scalable and stable manner, we (i) introduce a new score-matching objective suitable to the LSGM setting, (ii) propose a novel parameterization of the score function that allows SGM to focus on the mismatch of the target distribution with respect to a simple Normal one, and (iii) analytically derive multiple techniques for variance reduction of the training objective. LSGM obtains a state-of-the-art FID score of 2.10 on CIFAR-10, outperforming all existing generative results on this dataset. On CelebA-HQ-256, LSGM is on a par with previous SGMs in sample quality while outperforming them in sampling time by two orders of magnitude. In modeling binary images, LSGM achieves state-of-the-art likelihood on the binarized OMNIGLOT dataset. Our implementation is available at https://github.com/NVlabs/LSGM. 1 Introduction The long-standing goal of likelihood-based generative learning is to faithfully learn a data distribution, while also generating high-quality samples. Achieving these two goals simultaneously is a tremendous challenge, which has led to the development of a plethora of different generative models. Recently, score-based generative models (SGMs) demonstrated astonishing results in terms of both high sample quality and likelihood [1, 2]. These models define a forward diffusion process that maps data to noise by gradually perturbing the input data. Generation corresponds to a reverse process that synthesizes novel data via iterative denoising, starting from random noise. The problem then reduces to learning the score function—the gradient of the log-density—of the perturbed data [3]. In a seminal work, Song et al. [2] show how this modeling approach is described with a stochastic differential equation (SDE) framework which can be converted to maximum likelihood training [4]. Variants of SGMs have been applied to images [1, 2, 5, 6], audio [7, 8, 9, 10], graphs [11] and point clouds [12, 13]. Albeit high quality, sampling from SGMs is computationally expensive. This is because generation amounts to solving a complex SDE, or equivalently ordinary differential equation (ODE) (denoted as the probability flow ODE in [2]), that maps a simple base distribution to the complex data distribution. The resulting differential equations are typically complex and solving them accurately requires numerical integration with very small step sizes, which results in thousands of neural network evaluations [1, 2, 6]. Furthermore, generation complexity is uniquely defined by the underlying data distribution and the forward SDE for data perturbation, implying that synthesis speed cannot be ∗Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). increased easily without sacrifices. 
Moreover, SDE-based generative models are currently defined for continuous data and cannot be applied effortlessly to binary, categorical, or graph-structured data. Here, we propose the Latent Score-based Generative Model (LSGM), a new approach for learning SGMs in latent space, leveraging a variational autoencoder (VAE) framework [14, 15]. We map the input data to latent space and apply the score-based generative model there. The score-based model is then tasked with modeling the distribution over the embeddings of the data set. Novel data synthesis is achieved by first generating embeddings via drawing from a simple base distribution followed by iterative denoising, and then transforming this embedding via a decoder to data space (see Fig. 1). We can consider this model a VAE with an SGM prior. Our approach has several key advantages: Synthesis Speed: By pretraining the VAE with a Normal prior first, we can bring the marginal distribution over encodings (the aggregate posterior) close to the Normal prior, which is also the SGM’s base distribution. Consequently, the SGM only needs to model the remaining mismatch, resulting in a less complex model from which sampling becomes easier. Furthermore, we can tailor the latent space according to our needs. For example, we can use hierarchical latent variables and apply the diffusion model only over a subset of them, further improving synthesis speed. Expressivity: Training a regular SGM can be considered as training a neural ODE directly on the data [2]. However, previous works found that augmenting neural ODEs [16, 17] and more generally generative models [18, 19, 20, 21] with latent variables improves their expressivity. Consequently, we expect similar performance gains from combining SGMs with a latent variable framework. Tailored Encoders and Decoders: Since we use the SGM in latent space, we can utilize carefully designed encoders and decoders mapping between latent and data space, further improving expressivity. Additionally, the LSGM method can therefore be naturally applied to non-continuous data. LSGMs can be trained end-to-end by maximizing the variational lower bound on the data likelihood. Compared to regular score matching, our approach comes with additional challenges, since both the score-based denoising model and its target distribution, formed by the latent space encodings, are learnt simultaneously. To this end, we make the following technical contributions: (i) We derive a new denoising score matching objective that allows us to efficiently learn the VAE model and the latent SGM prior at the same time. (ii) We introduce a new parameterization of the latent space score function, which mixes a Normal distribution with a learnable SGM, allowing the SGM to model only the mismatch between the distribution of latent variables and the Normal prior. (iii) We propose techniques for variance reduction of the training objective by designing a new SDE and by analytically deriving importance sampling schemes, allowing us to stably train deep LSGMs. Experimentally, we achieve state-of-the-art 2.10 FID on CIFAR-10 and 7.22 FID on CelebA-HQ-256, and significantly improve upon likelihoods of previous SGMs. On CelebA-HQ-256, we outperform previous SGMs in synthesis speed by two orders of magnitude. We also model binarized images, MNIST and OMNIGLOT, achieving state-of-the-art likelihood on the latter. 2 Background Here, we review continuous-time score-based generative models (see [2] for an in-depth discussion). 
Consider a forward diffusion process $\{\mathbf{z}_t\}_{t=0}^{t=1}$ for continuous time variable $t \in [0, 1]$, where $\mathbf{z}_0$ is the starting variable and $\mathbf{z}_t$ its perturbation at time $t$. The diffusion process is defined by an Itô SDE:
$$ d\mathbf{z} = f(t)\,\mathbf{z}\, dt + g(t)\, d\mathbf{w} \tag{1} $$
where $f : \mathbb{R}\to\mathbb{R}$ and $g : \mathbb{R}\to\mathbb{R}$ are scalar drift and diffusion coefficients, respectively, and $\mathbf{w}$ is the standard Wiener process. $f(t)$ and $g(t)$ can be designed such that $\mathbf{z}_1 \sim \mathcal{N}(\mathbf{z}_1; \mathbf{0}, \mathbf{I})$ follows a Normal distribution at the end of the diffusion process.² Song et al. [2] show that the SDE in Eq. 1 can be converted to a generative model by first sampling from $\mathbf{z}_1 \sim \mathcal{N}(\mathbf{z}_1; \mathbf{0}, \mathbf{I})$ and then running the reverse-time SDE $d\mathbf{z} = [f(t)\,\mathbf{z} - g(t)^2 \nabla_{\mathbf{z}} \log q_t(\mathbf{z})]\, dt + g(t)\, d\bar{\mathbf{w}}$, where $\bar{\mathbf{w}}$ is a reverse-time standard Wiener process and $dt$ is an infinitesimal negative time step. The reverse SDE requires knowledge of $\nabla_{\mathbf{z}_t} \log q_t(\mathbf{z}_t)$, the score function of the marginal distribution under the forward diffusion at time $t$. One approach for estimating it is via the score matching objective³:
$$ \min_{\theta}\; \mathbb{E}_{t\sim\mathcal{U}[0,1]}\Big[ \lambda(t)\, \mathbb{E}_{q(\mathbf{z}_0)} \mathbb{E}_{q(\mathbf{z}_t|\mathbf{z}_0)} \big[ \|\nabla_{\mathbf{z}_t} \log q(\mathbf{z}_t) - \nabla_{\mathbf{z}_t} \log p_{\theta}(\mathbf{z}_t)\|_2^2 \big] \Big] \tag{2} $$
that trains the parametric score function $\nabla_{\mathbf{z}_t} \log p_{\theta}(\mathbf{z}_t)$ at time $t \sim \mathcal{U}[0,1]$ for a given weighting coefficient $\lambda(t)$. $q(\mathbf{z}_0)$ is the $\mathbf{z}_0$-generating distribution and $q(\mathbf{z}_t|\mathbf{z}_0)$ is the diffusion kernel, which is available in closed form for certain $f(t)$ and $g(t)$. Since $\nabla_{\mathbf{z}_t} \log q(\mathbf{z}_t)$ is not analytically available, Song et al. [2] rely on denoising score matching [22] that converts the objective in Eq. 2 to:
$$ \min_{\theta}\; \mathbb{E}_{t\sim\mathcal{U}[0,1]}\Big[ \lambda(t)\, \mathbb{E}_{q(\mathbf{z}_0)} \mathbb{E}_{q(\mathbf{z}_t|\mathbf{z}_0)} \big[ \|\nabla_{\mathbf{z}_t} \log q(\mathbf{z}_t|\mathbf{z}_0) - \nabla_{\mathbf{z}_t} \log p_{\theta}(\mathbf{z}_t)\|_2^2 \big] \Big] + C \tag{3} $$
Vincent [22] shows $C = \mathbb{E}_{t\sim\mathcal{U}[0,1]}\big[\lambda(t)\, \mathbb{E}_{q(\mathbf{z}_0)} \mathbb{E}_{q(\mathbf{z}_t|\mathbf{z}_0)}\big[\|\nabla_{\mathbf{z}_t} \log q(\mathbf{z}_t)\|_2^2 - \|\nabla_{\mathbf{z}_t} \log q(\mathbf{z}_t|\mathbf{z}_0)\|_2^2\big]\big]$ is independent of $\theta$, making the minimizations in Eq. 3 and Eq. 2 equivalent. Song et al. [4] show that for $\lambda(t) = g(t)^2/2$, the minimizations correspond to approximate maximum likelihood training based on an upper bound on the Kullback-Leibler (KL) divergence between the target distribution and the distribution defined by the reverse-time generative SDE with the learnt score function. In particular, the objective of Eq. 2 can then be written:
$$ \mathrm{KL}\big( q(\mathbf{z}_0)\,\|\, p_{\theta}(\mathbf{z}_0) \big) \leq \mathbb{E}_{t\sim\mathcal{U}[0,1]}\Big[ \tfrac{g(t)^2}{2}\, \mathbb{E}_{q(\mathbf{z}_0)} \mathbb{E}_{q(\mathbf{z}_t|\mathbf{z}_0)} \big[ \|\nabla_{\mathbf{z}_t} \log q(\mathbf{z}_t) - \nabla_{\mathbf{z}_t} \log p_{\theta}(\mathbf{z}_t)\|_2^2 \big] \Big] \tag{4} $$
which can again be transformed into denoising score matching (Eq. 3) following Vincent [22]. 3 Score-based Generative Modeling in Latent Space The LSGM framework in Fig. 1 consists of the encoder $q_{\phi}(\mathbf{z}_0|\mathbf{x})$, SGM prior $p_{\theta}(\mathbf{z}_0)$, and decoder $p_{\psi}(\mathbf{x}|\mathbf{z}_0)$. The SGM prior leverages a diffusion process as defined in Eq. 1 and diffuses $\mathbf{z}_0 \sim q_{\phi}(\mathbf{z}_0|\mathbf{x})$ samples in latent space to the standard Normal distribution $p(\mathbf{z}_1) = \mathcal{N}(\mathbf{z}_1;\mathbf{0},\mathbf{I})$. Generation uses the reverse SDE to sample from $p_{\theta}(\mathbf{z}_0)$ with time-dependent score function $\nabla_{\mathbf{z}_t} \log p_{\theta}(\mathbf{z}_t)$, and the decoder $p_{\psi}(\mathbf{x}|\mathbf{z}_0)$ to map the synthesized encodings $\mathbf{z}_0$ to data space. Formally, the generative process is written as $p(\mathbf{z}_0,\mathbf{x}) = p_{\theta}(\mathbf{z}_0)\, p_{\psi}(\mathbf{x}|\mathbf{z}_0)$. The goal of training is to learn $\{\phi,\theta,\psi\}$, the parameters of the encoder $q_{\phi}(\mathbf{z}_0|\mathbf{x})$, score function $\nabla_{\mathbf{z}_t} \log p_{\theta}(\mathbf{z}_t)$, and decoder $p_{\psi}(\mathbf{x}|\mathbf{z}_0)$, respectively. We train LSGM by minimizing the variational upper bound on the negative data log-likelihood $-\log p(\mathbf{x})$:
$$ \mathcal{L}(\mathbf{x},\phi,\theta,\psi) = \mathbb{E}_{q_{\phi}(\mathbf{z}_0|\mathbf{x})}\big[ -\log p_{\psi}(\mathbf{x}|\mathbf{z}_0) \big] + \mathrm{KL}\big( q_{\phi}(\mathbf{z}_0|\mathbf{x})\,\|\, p_{\theta}(\mathbf{z}_0) \big) \tag{5} $$
$$ = \underbrace{\mathbb{E}_{q_{\phi}(\mathbf{z}_0|\mathbf{x})}\big[ -\log p_{\psi}(\mathbf{x}|\mathbf{z}_0) \big]}_{\text{reconstruction term}} + \underbrace{\mathbb{E}_{q_{\phi}(\mathbf{z}_0|\mathbf{x})}\big[ \log q_{\phi}(\mathbf{z}_0|\mathbf{x}) \big]}_{\text{negative encoder entropy}} + \underbrace{\mathbb{E}_{q_{\phi}(\mathbf{z}_0|\mathbf{x})}\big[ -\log p_{\theta}(\mathbf{z}_0) \big]}_{\text{cross entropy}} \tag{6} $$
following a VAE approach [14, 15], where $q_{\phi}(\mathbf{z}_0|\mathbf{x})$ approximates the true posterior $p(\mathbf{z}_0|\mathbf{x})$. In this paper, we use Eq.
6 with decomposed KL divergence into its entropy and cross entropy terms. The reconstruction and entropy terms are estimated easily for any explicit encoder as long as the reparameterization trick is available [14]. The challenging part in training LSGM is to train the cross entropy term that involves the SGM prior. We motivate and present our expression for the cross-entropy term in Sec. 3.1, the parameterization of the SGM prior in Sec. 3.2, different weighting mechanisms for the training objective in Sec. 3.3, and variance reduction techniques in Sec. 3.4. 3.1 The Cross Entropy Term One may ask, why not train LSGM with Eq. 5 and rely on the KL in Eq. 4. Directly using the KL expression in Eq. 4 is not possible, as it involves the marginal score ∇zt log q(zt), which is unavailable analytically for common non-Normal distributions q(z0) such as Normalizing flows. 2Other distributions at t = 1 are possible; for instance, see the “variance-exploding” SDE in [2]. In this paper, however, we use only SDEs converging towardsN (z1;0, I) at t = 1. 3We omit the t-subscript of the diffused distributions qt in all score functions of the form∇zt log qt(zt). Transforming into denoising score matching does not help either, since in that case the problematic ∇zt log q(zt) term appears in the C term (see Eq. 3). In contrast to previous works [2, 22], we cannot simply drop C, since it is, in fact, not constant but depends on q(zt), which is trainable in our setup. To circumvent this problem, we instead decompose the KL in Eq. 5 and rather work directly with the cross entropy between the encoder distribution q(z0|x) and the SGM prior p(z0). We show: Theorem 1. Given two distributions q(z0|x) and p(z0), defined in the continuous space RD, denote the marginal distributions of diffused samples under the SDE in Eq. 1 at time t with q(zt|x) and p(zt). Assuming mild smoothness conditions on log q(zt|x) and log p(zt), the cross entropy is: CE(q(z0|x)||p(z0)) = Et∼U[0,1] [ g(t)2 2 Eq(zt,z0|x) [ ||∇zt log q(zt|z0)−∇zt log p(zt)|| 2 2 ]] + D 2 log ( 2πeσ20 ) , with q(zt, z0|x) = q(zt|z0)q(z0|x) and a Normal transition kernel q(zt|z0) = N (zt;µt(z0), σ2t I), where µt and σ 2 t are obtained from f(t) and g(t) for a fixed initial variance σ 2 0 at t = 0. A proof with generic expressions for µt and σ 2 t as well as an intuitive interpretation are in App. A. Importantly, unlike for the KL objective of Eq. 4, no problematic terms depending on the marginal score ∇zt log q(zt|x) arise. This allows us to use this denoising score matching objective for the cross entropy term in Theorem 1 not only for optimizing p(z0) (which is commonly done in the score matching literature), but also for the q(z0|x) encoding distribution. It can be used even with complex q(z0|x) distributions, defined, for example, in a hierarchical fashion [20, 21] or via Normalizing flows [23, 24]. Our novel analysis shows that, for diffusion SDEs following Eq. 1, only the cross entropy can be expressed purely with ∇zt log q(zt|z0). Neither KL nor entropy in [4] can be expressed without the problematic term∇zt log q(zt|x) (details in the Appendix). Note that in Theorem 1, the term∇zt log p(zt) in the score matching expression corresponds to the score that originates from diffusing an initial p(z0) distribution. In practice, we use the expression to learn an SGM prior pθ(z0), which models∇zt log p(zt) by a neural network. 
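Theorem 1 reduces the otherwise intractable cross-entropy term of Eq. 6 to a denoising-style expectation plus an analytic constant, which can be estimated by simple Monte Carlo. The sketch below is an illustrative estimator of that expression, assuming the linear "variance preserving" SDE used later in the paper, so that the transition kernel q(z_t|z_0) has a closed-form mean scaling and variance. The β0, β1 schedule, σ0, and the standard-Normal stand-in for the prior score in the usage line are placeholder assumptions rather than the authors' settings, and a real training loop would write this with an autodiff framework.

```python
import numpy as np

def vpsde_kernel(t, b0=0.1, b1=20.0):
    """Closed-form q(z_t | z_0) for the linear VPSDE: mean mu_t(z0) = m_t * z0, std sigma_t."""
    log_m = -0.5 * (b0 * t + 0.5 * (b1 - b0) * t ** 2)       # -0.5 * int_0^t beta(s) ds
    m_t = np.exp(log_m)
    sigma_t = np.sqrt(1.0 - np.exp(2.0 * log_m))
    g2 = b0 + (b1 - b0) * t                                   # g(t)^2 = beta(t)
    return m_t, sigma_t, g2

def cross_entropy_mc(prior_score, z0, sigma0=0.01, t_eps=1e-5):
    """Monte Carlo estimate of CE(q(z0|x) || p(z0)) following Theorem 1 (linear VPSDE assumed)."""
    n, D = z0.shape
    t = t_eps + (1.0 - t_eps) * np.random.rand(n, 1)          # t ~ U[0, 1], bounded away from 0
    m_t, sigma_t, g2 = vpsde_kernel(t)
    noise = np.random.randn(*z0.shape)
    zt = m_t * z0 + sigma_t * noise                           # z_t ~ q(z_t | z_0)
    kernel_score = -noise / sigma_t                           # grad_{z_t} log q(z_t | z_0)
    diff = kernel_score - prior_score(zt, t)
    sm_term = (0.5 * g2[:, 0] * (diff ** 2).sum(axis=1)).mean()   # E_t[ g(t)^2/2 * ||.||_2^2 ]
    return sm_term + 0.5 * D * np.log(2.0 * np.pi * np.e * sigma0 ** 2)

# Usage with stand-in encoder samples z0 and a standard-Normal prior, whose score is simply -z.
z0 = np.random.randn(256, 16)
print(cross_entropy_mc(lambda z, t: -z, z0))
```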
With the learnt score ∇zt log pθ(zt) (here we explicitly indicate the parameters θ to clarify that this is the learnt model), the actual SGM prior is defined via the generative reverse-time SDE (or, alternatively, a closely-connected ODE, see Sec. 2 and App. D), which generally defines its own, separate marginal distribution pθ(z0) at t = 0. Importantly, the learnt, approximate score∇zt log pθ(zt) is not necessarily the same as one would obtain when diffusing pθ(z0). Hence, when considering the learnt score∇zt log pθ(zt), the score matching expression in our Theorem only corresponds to an upper bound on the cross entropy between q(z0|x) and pθ(z0) defined by the generative reverse-time SDE. This is discussed in detail in concurrent works [4, 25]. Hence, from the perspective of the learnt SGM prior, we are training with an upper bound on the cross entropy (similar to the bound on the KL in Eq. 4), which can also be considered as the continuous version of the discretized variational objective derived by Ho et al. [1]. 3.2 Mixing Normal and Neural Score Functions In VAEs [14], p(z0) is often chosen as a standard Normal N (z0;0, I). For recent hierarchical VAEs [20, 21], using the reparameterization trick, the prior can be converted to N (z0;0, I) (App. E). Considering a single dimensional latent space, we can assume that the prior at time t is in the form of a geometric mixture p(zt) ∝ N (zt; 0, 1)1−αp′θ(zt)α where p′θ(zt) is a trainable SGM prior and α ∈ [0, 1] is a learnable scalar mixing coefficient. Formulating the prior this way has crucial advantages: (i) We can pretrain LSGM’s autoencoder networks assuming α=0, which corresponds to training the VAE with a standard Normal prior. This pretraining step will bring the distribution of latent variable close to N (z0; 0, 1), allowing the SGM prior to learn a much simpler distribution in the following end-to-end training stage. (ii) The score function for this mixture is of the form ∇zt log p(zt) = −(1− α)zt + α∇zt log p′θ(zt). When the score function is dominated by the linear term, we expect that the reverse SDE can be solved faster, as its drift is dominated by this linear term. For our multivariate latent space, we obtain diffused samples at time t by sampling zt ∼ q(zt|z0) with zt = µt(z0) + σt , where ∼ N ( ;0, I). Since we have ∇zt log q(zt|z0) = − /σt, similar to [1], we parameterize the score function by ∇zt log p(zt) := − θ(zt, t)/σt, where θ(zt, t) := σt(1 − α) zt + α ′θ(zt, t) is defined by our mixed score parameterization that is applied elementwise to the components of the score. With this, we simplify the cross entropy expression to: CE(qφ(z0|x)||pθ(z0)) = Et∼U[0,1] [ w(t) 2 Eqφ(zt,z0|x), [ || − θ(zt, t)||22 ]] + D 2 log ( 2πeσ20 ) , (7) where w(t) = g(t)2/σ2t is a time-dependent weighting scalar. 3.3 Training with Different Weighting Mechanisms Table 1: Weighting mechanisms Mechanism Weights Weighted wll(t) = g(t)2/σ2t Unweighted wun(t) = 1 Reweighted wre(t) = g(t)2 The weighting term w(t) in Eq. 7 trains the prior with maximum likelihood. Similar to [1, 2], we observe that when w(t) is dropped while training the SGM prior (i.e., w(t) = 1), LSGM often yields higher quality samples at a small cost in likelihood. However, in our case, we can only drop the weighting when training the prior. When updating the encoder parameters, we still need to use the maximum likelihood weighting to ensure that the encoder q(z0|x) is brought closer to the true posterior p(z0|x)4. Tab. 
1 summarizes three weighting mechanisms we consider in this paper: wll(t) corresponds to maximum likelihood, wun(t) is the unweighted objective used by [1, 2], and wre(t) is a variant obtained by dropping only 1/σ2t . This weighting mechanism has a similar affect on the sample quality as wun(t) = 1; however, in Sec. 3.4, we show that it is easier to define a variance reduction scheme for this weighting mechanism. The following summarizes our training objectives (with t ∼ U [0, 1] and ∼ N ( ;0, I)): min φ,ψ Eqφ(z0|x) [ −log pψ(x|z0) ] +Eqφ(z0|x) [ log qφ(z0|x) ] +Et, ,q(zt|z0),qφ(z0|x) [ wll(t) 2 || − θ(zt, t)||22 ] (8) min θ Et, ,q(zt|z0),qφ(z0|x) [ wll/un/re(t) 2 || − θ(zt, t)||22 ] with q(zt|z0) = N (zt;µt(z0), σ 2 t I), (9) where Eq. 8 trains the VAE encoder and decoder parameters {φ,ψ} using the variational bound L(x,φ,θ,ψ) from Eq. 6. Eq. 9 trains the prior with one of the three weighting mechanisms. Since the SGM prior participates in the objective only in the cross entropy term, we only consider this term when training the prior. Efficient algorithms for training with the objectives are presented in App. G. 3.4 Variance Reduction The objectives in Eqs. 8 and 9 involve sampling of the time variable t, which has high variance [26]. We introduce several techniques for reducing this variance for all three objective weightings. We focus on the “variance preserving” SDEs (VPSDEs) [2, 1, 27], defined by dz = − 12β(t)z dt+ √ β(t) dw where β(t) = β0 + (β1 − β0)t linearly interpolates in [β0, β1] (other SDEs discussed in App. B). We denote the marginal distribution of latent variables by q(z0) := Epdata(x)[q(z0|x)]. Here, we derive variance reduction techniques for CE(q(z0)||p(z0)), assuming that both q(z0) = p(z0) = N (z0;0, I). This is a reasonable simplification for our analysis because pretraining our LSGM model with a N (z0;0, I) prior brings q(z0) close to N (z0;0, I) and our SGM prior is often dominated by the fixed Normal mixture component. We empirically observe that the variance reduction techniques developed with this assumption still work well when q(z0) and p(z0) are not exactly N (z0;0, I). Variance reduction for likelihood weighting: In App. B, for q(z0) = p(z0) = N (z0;0, I), we show CE(q(z0)||p(z0)) is given by D2 Et∼U [0,1][d log σ 2 t /dt] + const. We consider two approaches: (1) Geometric VPSDE: To reduce the variance sampling uniformly from t, we can design the SDE such that d log σ2t /dt is constant for t ∈ [0, 1]. We show in App. B that a β(t) = log(σ2max/σ2min) σ2t (1−σ2t ) with geometric variance σ2t = σ 2 min(σ 2 max/σ 2 min) t satisfies this condition. We call a VPSDE with this β(t) a geometric VPSDE. σ2min and σ 2 max are the hyperparameters of the SDE, with 0<σ 2 min<σ 2 max<1. Although our geometric VPSDE has a geometric variance progression similar to the “variance exploding” SDE (VESDE) [2], it still enjoys the “variance preserving” property of the VPSDE. In App. B, we show that the VESDE does not come with a reduced variance for t-sampling by default. (2) Importance sampling (IS): We can keep β(t) and σ2t unchanged for the original linear VPSDE, and instead use IS to minimize variance. The theory of IS shows that the proposal r(t) ∝ d log σ2t /dt has minimum variance [28]. In App. B, we show that we can sample from r(t) using inverse transform sampling t = var−1((σ21) ρ(σ20) 1−ρ) where var−1 is the inverse of σ2t and ρ ∼ U [0, 1]. This variance reduction technique is available for any VPSDE with arbitrary β(t). In Fig. 
2, we train a small LSGM on CIFAR-10 with wll weighting using (i) the original VPSDE with uniform t sampling, (ii) the same SDE but with our IS from t, and (iii) the proposed geometric 4Minimizing L(x,φ,θ,ψ) w.r.t φ is equivalent to minimizing KL ( q(z0|x)||p(z0|x) ) w.r.t q(z0|x). VPSDE. Note how both (ii) and (iii) significantly reduce the variance and allow us to monitor the progress of the training objective. In this case, (i) has difficulty minimizing the objective due to the high variance. In App. B, we show how IS proposals can be formed for other SDEs, including the VESDE and Sub-VPSDE from [2]. Variance reduction for unweighted and reweighted objectives: When training with wun, analytically deriving IS proposal distributions for arbitrary β(t) is challenging. For linear VPSDEs, we provide a derivation in App. B to obtain the optimal IS distribution. In contrast, defining IS proposal distributions is easier when training with wre. In App. B, we show that the optimal distribution is in the form r(t) ∝ dσ2t /dtwhich is sampled by t=var−1((1−ρ)σ20 +ρσ21) with ρ ∼ U [0, 1]. In Fig. 3, we visualize the IS distributions for the three weighting mechanisms for the linear VPSDE with the original [β0, β1] parameters from [2]. r(t) for the likelihood weighting is more tilted towards t = 0 due to the 1/σ2t term in wll. When using differently weighted objectives for training, we can either sample separate t with different IS distributions for each objective, or use IS for the SGM objective (Eq. 9) and reweight the samples according to the likelihood objective for encoder training (Eq. 8). See App. G for details. 4 Related Work Our work builds on score-matching [29, 30, 31, 32, 33, 34, 35, 36, 37], specifically denoising score matching [22], which makes our work related to recent generative models using denoising score matching- and denoising diffusion-based objectives [3, 38, 1, 2, 6]. Among those, [1, 6] use a discretized diffusion process with many noise scales, building on [27], while Song et al. [2] introduce the continuous time framework using SDEs. Experimentally, these works focus on image modeling and, contrary to us, work directly in pixel space. Various works recently tried to address the slow sampling of these types of models and further improve output quality. [39] add an adversarial objective, [5] introduce non-Markovian diffusion processes that allow to trade off synthesis speed, quality, and sample diversity, [40] learn a sequence of conditional energy-based models for denoising, [41] distill the iterative sampling process into single shot synthesis, and [42] learn an adaptive noise schedule, which is adjusted during synthesis to accelerate sampling. Further, [26] propose empirical variance reduction techniques for discretized diffusions and introduce a new, heuristically motivated, noise schedule. In contrast, our proposed noise schedule and our variance reduction techniques are analytically derived and directly tailored to our learning setting in the continuous time setup. Recently, [11] presented a method to generate graphs using score-based models, relaxing the entries of adjacency matrices to continuous values. LSGM would allow to model graph data more naturally using encoders and decoders tailored to graphs [43, 44, 45, 46]. Since our model can be considered a VAE [14, 15] with score-based prior, it is related to approaches that improve VAE priors. 
For example, Normalizing flows and hierarchical distributions [23, 24, 47, 48, 20, 21], as well as energy-based models [49, 50, 51, 52, 53] have been proposed as VAE priors. Furthermore, classifiers [54, 55, 56], adversarial methods [57], and other techniques [58, 59] have been used to define prior distributions implicitly. In two-stage training, a separate generative model is trained in latent space as a new prior after training the VAE itself [60, 61, 62, 63, 64, 10]. Our work also bears a resemblance to recent methods on improving the sampling quality in generative adversarial networks using gradient flows in the latent space [65, 66, 67, 68], with the main difference that these prior works use a discriminator to update the latent variables, whereas we train an SGM. Concurrent works: [10] proposed to learn a denoising diffusion model in the latent space of a VAE for symbolic music generation. This work does not introduce an end-to-end training framework of the combined VAE and denoising diffusion model and instead trains them in two separate stages. In contrast, concurrently with us [69] proposed an end-to-end training approach, and [70] combines contrastive learning with diffusion models in the latent space of VAEs for controllable generation. However, [10, 69, 70] consider the discretized diffusion objective [1], while we build on the continuous time framework. Also, these models are not equipped with the mixed score parameterization and variance reduction techniques, which we found crucial for the successful training of SGM priors. Additionally, [71, 4, 25] concurrently with us proposed likelihood-based training of SGMs in data space5. [4] developed a bound for the data likelihood in their Theorem 3 of their second version, using a denoising score matching objective, closely related to our cross entropy expression. However, our cross entropy expression is much simpler as we show how several terms can be marginalized out analytically for the diffusion SDEs employed by us (see our proof in App. A). The same marginalization can be applied to Theorem 3 in [4] when the drift coefficient takes a special affine form (i.e., f(z, t) = f(t)z). Moreover, [25] discusses the likelihood-based training of SGMs from a fundamental perspective and shows how several score matching objectives become a variational bound on the data likelihood. [71] introduced a notion of signal-to-noise ratio (SNR) that results in a noise-invariant parameterization of time that depends only on the initial and final noise. Interestingly, our importance sampling distribution in Sec. 3.4 has a similar noise-invariant parameterization of time via t = var−1((σ21) ρ(σ20) 1−ρ), which also depends only on the initial and final diffusion process variances. We additionally show that this time parameterization results in the optimal minimumvariance objective, if the distribution of latent variables follows a standard Normal distribution. Finally, [72] proposed a modified time parameterization that allows modeling unbounded data scores. 5 Experiments Here, we examine the efficacy of LSGM in learning generative models for images. Implementation details: We implement LSGM using the NVAE [20] architecture as VAE backbone and NCSN++ [2] as SGM backbone. NVAE has a hierarchical latent structure. The diffusion process input z0 is constructed by concatenating the latent variables from all groups in the channel dimension. 
For NVAEs with multiple spatial resolutions in latent groups, we only feed the smallest resolution groups to the SGM prior and assume that the remaining groups have a standard Normal distribution. Sampling: To generate samples from LSGM at test time, we use a black-box ODE solver [73] to sample from the prior. Prior samples are then passed to the decoder to generate samples in data space. Evaluation: We measure NELBO, an upper bound on negative log-likelihood (NLL), using Eq. 6. For estimating log p(z0), we rely on the probability flow ODE [2], which provides an unbiased but stochastic estimation of log p(z0). This stochasticity prevents us from performing an importance weighted estimation of NLL [74] (see App. F for details). For measuring sample quality, Fréchet inception distance (FID) [75] is evaluated with 50K samples. Implementation details in App. G. 5.1 Main Results Unconditional color image generation: Here, we present our main results for unconditional image generation on CIFAR-10 [89] (Tab. 2) and CelebA-HQ-256 (5-bit quantized) [88] (Tab. 3). For CIFAR-10, we train 3 different models: LSGM (FID) and LSGM (balanced) both use the VPSDE with linear β(t) and wun-weighting for the SGM prior in Eq. 9, while performing IS as derived in Sec. 3.4. They only differ in how the backbone VAE is trained. LSGM (NLL) is a model that is trained with our novel geometric VPSDE, using wll-weighting in the prior objective (further details in App. G). When set up for high image quality, LSGM achieves a new state-of-the-art FID of 2.10. When tuned towards NLL, we achieve a NELBO of 2.87, which is significantly better than previous score-based models. Only autoregressive models, which come with very slow synthesis, and VDVAE [21] reach similar or higher likelihoods, but they usually have much poorer image quality. For CelebA-HQ-256, we observe that when LSGM is trained with different SDE types and weighting mechanisms, it often obtains similar NELBO potentially due to applying the SGM prior only to small latent variable groups and using Normal priors at the larger groups. With wre-weighting and linear VPSDE, LSGM obtains the state-of-the-art FID score of 7.22 on a par with the original SGM [2]. For both datasets, we also report results for the VAE backbone used in our LSGM. Although this baseline achieves competitive NLL, its sample quality is behind our LSGM and the original SGM. Modeling binarized images: Next, we examine LSGM on dynamically binarized MNIST [93] and OMNIGLOT [74]. We apply LSGM to binary images using a decoder with pixel-wise independent Bernoulli distributions. For these datasets, we report both NELBO and NLL in nats in Tab. 4 and Tab. 5. On OMNIGLOT, LSGM achieves state-of-the-art likelihood of ≤87.79 nat, outperforming previous models including VAEs with autoregressive decoders, and even when comparing its NELBO 5We build on the V1 version of [4], which was substantially updated after the NeurIPS submission deadline. Table 5: Dynamically binarized MNIST results. Method NELBO↓ NLL↓ Ours LSGM 78.47 ≤78.47 VAEs NVAE [20] 79.56 78.01 BIVA [48] 80.06 78.41 IAF-VAE [24] 80.80 79.10 DVAE++ [51] - 78.49 Aut. Reg. PixelVAE++ [91] - 78.00 VampPrior [59] - 78.45 MAE [92] - 77.98 against importance weighted estimation of NLL for other methods. On MNIST, LSGM outperforms previous VAEs in NELBO, reaching a NELBO 1.09 nat lower than the state-of-the-art NVAE. Qualitative results: We visualize qualitative results for all datasets in Fig. 5. 
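Before turning to the qualitative results in detail, the sampling procedure just described can be sketched concretely: black-box ODE sampling integrates the probability flow ODE, dz/dt = f(t)z − ½g(t)²∇_z log p_t(z), from t = 1 down to t ≈ 0 and feeds the result to the decoder. The snippet below uses scipy's adaptive Runge–Kutta solver as the black-box integrator; the linear β(t) schedule, the tolerances, and the standard-Normal stand-in for the learned prior score are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

B0, B1 = 0.1, 20.0                                      # assumed linear VPSDE schedule
latent_dim, batch = 16, 8

def prior_score(z_flat, t):
    """Placeholder for the learned latent score; a standard Normal prior has score -z."""
    return -z_flat

def prob_flow_ode(t, z_flat):
    """dz/dt = f(t) z - 0.5 * g(t)^2 * score(z, t), with f(t)z = -0.5 beta(t) z and g(t)^2 = beta(t)."""
    beta_t = B0 + (B1 - B0) * t
    return -0.5 * beta_t * z_flat - 0.5 * beta_t * prior_score(z_flat, t)

z1 = np.random.randn(batch * latent_dim)                # z_1 ~ N(0, I), flattened for the solver
sol = solve_ivp(prob_flow_ode, t_span=(1.0, 1e-5), y0=z1,
                method='RK45', rtol=1e-5, atol=1e-5)    # black-box adaptive ODE solver
z0 = sol.y[:, -1].reshape(batch, latent_dim)            # synthesized latent codes at t ~ 0
print('number of score-function evaluations (NFE):', sol.nfev)
# z0 would then be passed through the decoder p(x | z0) to produce images.
```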
On the complex multimodal CIFAR-10 dataset, LSGM generates sharp and high-quality images. On CelebA-HQ-256, LSGM generates diverse samples from different ethnicity and age groups with varying head poses and facial expressions. On MNIST and OMNIGLOT, the generated characters are sharp and high-contrast. Sampling time: We compare LSGM against the original SGM [2] trained on the CelebA-HQ-256 dataset in terms of sampling time and number of function evaluations (NFEs) of the ODE solver. Song et al. [2] propose two main sampling techniques including predictor-corrector (PC) and probability flow ODE. PC sampling involves 4000 NFEs and takes 44.6 min. on a Titan V for a batch of 16 images. It yields 7.23 FID score (see Tab. 3). ODE-based sampling from SGM takes 3.91 min. with 335 NFEs, but it obtains a poor FID score of 128.13 with 10−5 as ODE solver error tolerance6. In a stark contrast, ODE-based sampling from our LSGM takes 0.07 min. with average of 23 NFEs, yielding 7.22 FID score. LSGM is 637× and 56× faster than original SGM’s [2] PC and ODE 6We use the VESDE checkpoint at https://github.com/yang-song/score_sde_pytorch. Song et al. [2] report that ODE-based sampling yields worse FID scores for their models (see D.4 in [2]). The problem is more severe for VESDEs. Unfortunately, at submission time only a VESDE model was released. sampling, respectively. In Fig. 4, we visualize FID scores and NFEs for different ODE solver error tolerances. Our LSGM achieves low FID scores for relatively large error tolerances. We identify three main reasons for this significantly faster sampling from LSGM: (i) The SGM prior in our LSGM models latent variables with 32×32 spatial dim., whereas the original SGM [2] directly models 256×256 images. The larger spatial dimensions require a deeper network to achieve a large receptive field. (ii) Inspecting the SGM prior in our model suggests that the score function is heavily dominated by the linear term at the end of training, as the mixing coefficients α are all < 0.02. This makes our SGM prior smooth and numerically faster to solve. (iii) Since SGM is formed in the latent space in our model, errors from solving the ODE can be corrected to some degree using the VAE decoder, while in the original SGM [2] errors directly translate to artifacts in pixel space. 5.2 Ablation Studies SDEs, objective weighting mechanisms and variance reduction. In Tab. 6, we analyze the different weighting mechanisms and variance reduction techniques and compare the geometric VPSDE with the regular VPSDE with linear β(t) [1, 2]. In the table, SGM-obj.-weighting denotes the weighting mechanism used when training the SGM prior (via Eq. 9). t-sampling (SGM-obj.) indicates the sampling approach for t, where rll(t), run(t) and rre(t) denote the IS distributions for the weighted (likelihood), the unweighted, and the reweighted objective, respectively. For training the VAE encoder qφ(z0|x) (last term in Eq. 8), we either sample a separate batch t with importance sampling following rll(t) (only necessary when the SGM prior is not trained with wll itself), or we reweight the samples drawn for training the prior according to the likelihood objective (denoted by rew.). n/a indicates fields that do not apply: The geometric VPSDE has optimal variance for the weighted (likelihood) objective already with uniform sampling; there is no additional IS distribution. Also, we did not derive IS distributions for the geometric VPSDE for wun. NaN indicates experiments that failed due to training instabilities. 
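For reference, the inverse-CDF sampler behind the likelihood-weighting proposal r_ll(t) ∝ d log σ_t²/dt used in the ablation above (and in Figs. 2–3) takes only a few lines for the linear VPSDE: draw ρ uniformly, interpolate the variance geometrically between σ²_{t_eps} and σ²_1, and invert σ_t², which is a quadratic in t. The sketch below is our reading of that recipe; β0, β1 and the cut-off t_eps are assumed values, and the returned importance weights are unnormalized.

```python
import numpy as np

B0, B1 = 0.1, 20.0                                     # assumed linear VPSDE schedule

def var_t(t):
    """sigma_t^2 = 1 - exp(-int_0^t beta(s) ds) for beta(t) = B0 + (B1 - B0) t."""
    return 1.0 - np.exp(-(B0 * t + 0.5 * (B1 - B0) * t ** 2))

def var_inv(v):
    """Invert sigma_t^2 = v by solving 0.5*(B1-B0)*t^2 + B0*t + log(1-v) = 0 for t in [0, 1]."""
    a, b, c = 0.5 * (B1 - B0), B0, np.log(1.0 - v)
    return (-b + np.sqrt(b ** 2 - 4.0 * a * c)) / (2.0 * a)

def sample_t_likelihood_is(n, t_eps=1e-5):
    """t = var^{-1}((sigma_1^2)^rho * (sigma_eps^2)^(1-rho)), rho ~ U[0,1], i.e. r(t) ∝ d log sigma_t^2/dt."""
    v0, v1 = var_t(t_eps), var_t(1.0)
    rho = np.random.rand(n)
    v = v1 ** rho * v0 ** (1.0 - rho)                  # geometric interpolation of the variance
    t = var_inv(v)
    beta_t = B0 + (B1 - B0) * t
    r_unnorm = beta_t * (1.0 - v) / v                  # d log sigma_t^2/dt, the (unnormalized) proposal
    w_ll = beta_t / v                                  # likelihood weighting w_ll(t) = g(t)^2 / sigma_t^2
    return t, w_ll / r_unnorm                          # importance weights, up to a normalizing constant

t, w = sample_t_likelihood_is(100_000)
print('t range:', t.min(), t.max())
print('fraction of samples with t < 0.1:', (t < 0.1).mean())
```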
Previous work [20, 21] have reported instability in training large VAEs. We find that our method inherits similar instabilities from VAEs; however, importance sampling often stabilizes training our LSGM. As expected, we obtain the best NELBOs (red) when training with the weighted, maximum likelihood objective (wll). Importantly, our new geometric VPSDE achieves the best NELBO. Furthermore, the best FIDs (blue) are obtained either by unweighted (wun) or reweighted (wre) SGM prior training, with only slightly worse NELBOs. These experiments were run on the CIFAR10 dataset, using a smaller model than for our main results above (details in App. G). End-to-end training. We proposed to train LSGM end-to-end, in contrast to [10]. Using a similar setup as above we compare end-to-end training of LSGM during the second stage with freezing the VAE encoder and decoder and only training the SGM prior in latent space during the second stage. When training the model end-to-end, we achieve an FID of 5.19 and NELBO of 2.98; when freezing the VAE networks during the second stage, we only get an FID of 9.00 and NELBO of 3.03. These results clearly motivate our end-to-end training strategy. Mixing Normal and neural score functions. We generally found training LSGM without our proposed “mixed score” formulation (Sec. 3.2) to be unstable during end-to-end training, highlighting its importance. To quantify the contribution of the mixed score parametrization for a stable model, we train a small LSGM with only one latent variable group. In this case, without the mixed score, we reached an FID of 34.71 and NELBO of 3.39; with it, we got an FID of 7.60 and NELBO of 3.29. Without the inductive bias provided by the mixed score, learning that the marginal distribution is close to a Normal one for large t purely from samples can be very hard in the high-dimensional latent space, where our diffusion is run. Furthermore, due to our importance sampling schemes, we tend to oversample small, rather than large t. However, synthesizing high-quality images requires an accurate score function estimate for all t. On the other hand, the log-likelihood of samples is highly sensitive to local image statistics and primarily determined at small t. It is plausible that we are still able to learn a reasonable estimate of the score function for these small t even without the mixed score formulation. That may explain why log-likelihood suffers much less than sample quality, as estimated by FID, when we remove the mixed score parameterization. Additional experiments and model samples are presented in App. H. 6 Conclusions We proposed the Latent Score-based Generative Model, a novel framework for end-to-end training of score-based generative models in the latent space of a variational autoencoder. Moving from data to latent space allows us to form more expressive generative models, model non-continuous data, and reduce sampling time using smoother SGMs. To enable training latent SGMs, we made three core contributions: (i) we derived a simple expression for the cross entropy term in the variational objective, (ii) we parameterized the SGM prior by mixing Normal and neural score functions, and (iii) we proposed several techniques for variance reduction in the estimation of the training objective. Experimental results show that latent SGMs outperform recent pixel-space SGMs in terms of both data likelihood and sample quality, and they can also be applied to binary datasets. 
In large image generation, LSGM generates data several orders of magnitude faster than recent SGMs. Nevertheless, LSGM’s synthesis speed does not yet permit sampling at interactive rates, and our implementation of LSGM is currently limited to image generation. Therefore, future work includes further accelerating sampling, applying LSGMs to other data types, and designing efficient networks for LSGMs. 7 Broader Impact Generating high-quality samples while fully covering the data distribution has been a long-standing challenge in generative learning. A solution to this problem will likely help reduce biases in generative models and lead to improving overall representation of minorities in the data distribution. SGMs are perhaps one of the first deep models that excel at both sample quality and distribution coverage. However, the high computational cost of sampling limits their widespread use. Our proposed LSGM reduces the sampling complexity of SGMs by a large margin and improves their expressivity further. Thus, in the long term, it can enable the usage of SGMs in practical applications. Here, LSGM is examined on the image generation task which has potential benefits and risks discussed in [94, 95]. However, LSGM can be considered a generic framework that extends SGMs to non-continuous data types. In principle LSGM could be used to model, for example, language [96, 97], music [98, 10], or molecules [99, 100]. Furthermore, like other deep generative models, it can potentially be used also for non-generative tasks such as semi-supervised and representation learning [101, 102, 103]. This makes the long-term social impacts of LSGM dependent on the downstream applications. Funding Statement All authors were funded by NVIDIA through full-time employment.
1. What is the main contribution of the paper regarding score-based generative models? 2. What are the strengths of the proposed approach, particularly in terms of efficiency and performance? 3. Do you have any concerns or criticisms regarding the experimental evaluation and comparisons with other works? 4. How does the reviewer assess the clarity, organization, and technical aspects of the paper's content? 5. Are there any minor remarks or suggestions for improvement regarding the paper's presentation or references?
Summary Of The Paper Review
Summary Of The Paper The paper at hand proposes end-to-end training of score-based generative models in the latent space of a variational autoencoder. The VAE is pre-trained using a normal prior, which is then replaced by a score-based generative model that is jointly trained with the VAE. The paper also introduces a novel training objective based on the cross entropy between the encoder distribution and the SGM prior. Two techniques for variance reduction of the loss function are discussed. Review REASONS FOR SCORE: The idea of applying the score-based model to the latent space of a VAE is novel, as far as I can see. The experimental evaluation yields promising results on 5-bit CIFAR10, CelebA and binarized MNIST and OMNIGLOT: the proposed LSGM outperforms a variety of generative models in terms of test likelihood and FID score (although only by a very small margin). The value of the proposed model lies within the sampling time, which is decreased from 44.6 min to 3.91 minutes for a batch of 16 images, which is remarkable. The choice of models that are compared is, in my opinion, complete and the experimental evaluation is overall very convincing. Only [1] achieves slightly better results on CelebA; however, that paper appeared on arXiv after the NeurIPS deadline. In Table 3 the authors report an FID score of 10.70 on CelebA for [2]; however, in Table 3 of [2] an FID of 10.2 is reported (again, this last version was uploaded after the deadline). The structure of the paper is reasonable. Although the paper is very technical, the authors succeeded at clearly explaining their approach. A short background section introduces the relevant concepts for readers that are not familiar with score-based generative models. While reading the paper, I spotted no typos. The authors justify their choices by an ablation study where they analyze the effect of SDEs, training objectives, weighting mechanisms, and variance reduction techniques. As a small point of criticism: I had a hard time understanding Table 6. The authors should spend some time refactoring the table and its very long caption. CONCLUSION: Overall, I would recommend accepting this submission into NeurIPS 2021. The paper is well-written and addresses the efficiency issues of score-based generative models. The experimental results on the benchmark datasets are promising. MINOR REMARKS: Paragraph captions are not always in title case (for example: "Implementation details") REFERENCES: [1] Kim, Dongjun, et al. "Score Matching Model for Unbounded Data Score." arXiv preprint arXiv:2106.05527 (2021). [2] Esser, Patrick, Robin Rombach, and Bjorn Ommer. "Taming transformers for high-resolution image synthesis." arXiv preprint arXiv:2012.09841 (2021)
NIPS
Title Robust Persistence Diagrams using Reproducing Kernels Abstract Persistent homology has become an important tool for extracting geometric and topological features from data, whose multi-scale features are summarized in a persistence diagram. From a statistical perspective, however, persistence diagrams are very sensitive to perturbations in the input space. In this work, we develop a framework for constructing robust persistence diagrams from superlevel filtrations of robust density estimators constructed using reproducing kernels. Using an analogue of the influence function on the space of persistence diagrams, we establish the proposed framework to be less sensitive to outliers. The robust persistence diagrams are shown to be consistent estimators in bottleneck distance, with the convergence rate controlled by the smoothness of the kernel—this in turn allows us to construct uniform confidence bands in the space of persistence diagrams. Finally, we demonstrate the superiority of the proposed approach on benchmark datasets. 1 Introduction Given a set of points Xn = {X1,X2, . . . ,Xn} observed from a probability distribution P on an input space X ⊆ Rd, understanding the shape of Xn sheds important insights on low-dimensional geometric and topological features which underlie P, and this question has received increasing attention in the past few decades. To this end, Topological Data Analysis (TDA), with a special emphasis on persistent homology [20, 44], has become a mainstay for extracting the shape information from data. In statistics and machine-learning, persistent homology has facilitated the development of novel methodology (e.g., [8, 11, 14]), which has been widely used in a variety of applications dealing with massive, unconventional forms of data (e.g., [5, 22, 43]). Informally speaking, persistent homology detects the presence of topological features across a range of resolutions by examining a nested sequence of spaces, typically referred to as a filtration. The filtration encodes the birth and death of topological features as the resolution varies, and is presented in the form of a concise representation—a persistence diagram or barcode. In the context of dataanalysis, there are two different methods for obtaining filtrations. The first is computed from the pairwise Euclidean distances of Xn, such as the Vietoris-Rips, Čech, and Alpha filtrations [20]. The second approach is based on choosing a function on X that reflects the density of P (or its approximation based on Xn), and, then, constructing a filtration. While the two approaches explore the topological features governing P in different ways, in essence, they generate similar insights. ∗Authors arranged alphabetically 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Despite obvious advantages, the adoption of persistent homology in mainstream statistical methodology is still limited. An important limitation among others, in the statistical context, is that the resulting persistent homology is highly sensitive to outliers. While the stability results of [12, 16] guarantee that small perturbations on all of Xn induce only small changes in the resulting persistence diagrams, a more pathological issue arises when a small fraction of Xn is subject to very large perturbations. Figure 1 illustrates how inference from persistence diagrams can change dramatically when Xn is contaminated with only a few outliers. 
Another challenge is the mathematical difficulty in performing sensitivity analysis in a formal statistical context. Since the space of persistence diagrams has an unusual mathematical structure, it falls victim to issues such as non-uniqueness of Fréchet means and unbounded curvature of geodesics [18, 29, 36]. With this background, the central objective of this paper is to develop outlier robust persistence diagrams, develop a framework for examining the sensitivity of the resulting persistence diagrams to noise, and establish statistical convergence guarantees. To the best of our knowledge, not much work has been carried out in this direction. Bendich et al. [4] construct persistence diagrams from Rips filtrations on Xn by replacing the Euclidean distance with diffusion distance, Brécheteau and Levrard [7] use a coreset of Xn for computing persistence diagrams from the distance-to-measure, and Anai et al. [2] use weighted-Rips filtrations on Xn to construct more stable persistent diagrams. However, no sensitivity analysis of the resultant diagrams are carried out in [2, 4, 7] to demonstrate their robustness. Contributions. The main contributions of this work are threefold. 1) We propose robust persistence diagrams constructed from filtrations induced by an RKHS-based robust KDE (kernel density estimator) [27] of the underlying density function of P (Section 3). While this idea of inducing filtrations by an appropriate function—[13, 21, 32] use KDE, distance-to-measure (DTM) and kernel distance (KDist), respectively—has already been explored, we show the corresponding persistence diagrams to be less robust compared to our proposal. 2) In Section 4.1, we generalize the notions of influence function and gross error sensitivity—which are usually defined for normed spaces—to the space of persistence diagrams, which lack the vector space structure. Using these generalized notions, we investigate the sensitivity of persistence diagrams constructed from filtrations induced by different functions (e.g., KDE, robust KDE, DTM) and demonstrate the robustness of the proposed method, both mathematically (Remark 4.3) and numerically (Section 5). 3) We establish the statistical consistency of the proposed robust persistence diagrams and provide uniform confidence bands by deriving exponential concentration bounds for the uniform deviation of the robust KDE (Section 4.2). Definitions and Notations. For a metric space X, the ball of radius r centered at x ∈ X is denoted by BX(x, r). P(Rd) is the set of all Borel probability measures on Rd, andM(Rd) denotes the set of probability measures on Rd with compact support and tame density function (See Section 2). δx denotes a Dirac measure at x. For bandwidth σ > 0, Hσ denotes a reproducing kernel Hilbert space (RKHS) withKσ : Rd × Rd → R as its reproducing kernel. We denote by Φσ(x) = Kσ(·,x) ∈ Hσ , the feature map associated withKσ , which embeds x ∈ Rd into Φσ(x) ∈ Hσ . Throughout this paper, we assume that Kσ is radial, i.e., Kσ(x,y) = σ−dψ(‖x− y‖2/σ) with ψ(‖ · ‖2) being a pdf on Rd, where ‖x‖22 = ∑d i=1 x 2 i for x = (x1, . . . , xd) ∈ Rd. Some common examples include the Gaussian, Matérn and inverse multiquadric kernels. We denote ‖Kσ‖∞ = · supx,y∈Rd Kσ(x,y) = σ−dψ(0). Without loss of generality, we assume ψ(0) = 1. For P ∈ P(Rd), µP =· ∫ Kσ(·,y)dP(y) ∈ Hσ is called the mean embedding of P, and Dσ =· { µP : P∈P(Rd) } is the space of mean embeddings [30]. 
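As a small concrete link between the RKHS notation above and the density estimators used as filter functions below: for the empirical measure P_n, the mean embedding µ_{P_n} = (1/n) Σ_i K_σ(·, X_i) evaluated pointwise is exactly a kernel density estimate. The short sketch below uses the Gaussian kernel; the bandwidth and the toy data are illustrative choices and are not taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """K_sigma(x, y) = sigma^{-d} * psi(||x - y||_2 / sigma) with psi a Gaussian pdf on R^d."""
    d = x.shape[-1]
    sq = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** (d / 2) * sigma ** d)

def mean_embedding(query, X, sigma=0.4):
    """Evaluate the empirical mean embedding mu_{P_n}(x) = (1/n) sum_i K_sigma(x, X_i)."""
    return gaussian_kernel(query, X, sigma).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))                  # X_n drawn iid from P = N(0, I)
query = np.array([[0.0, 0.0], [3.0, 3.0]])
print(mean_embedding(query, X))                           # pointwise values of the kernel density estimate
```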
2 Persistent Homology: Preliminaries We present the necessary background on persistent homology for completeness. See [9, 42] for a comprehensive introduction. Persistent Homology. Let φ : X → R≥0 be a function on the metric space (X, d). At level r > 0, the sublevel set Xr = φ−1 ([0, r]) = {x ∈ X : φ(x) ≤ r} encodes the topological information in X. For r < s, the sublevel sets are nested, i.e., Xr ⊆ Xs. Thus {Xr}0≤r<∞ is a nested sequence of topological spaces, called a filtration, denoted by Sub(φ), and φ is called the filter function. As the level r varies, the evolution of the topology is captured in the filtration. Roughly speaking, new cycles (i.e., connected components, loops, voids and higher order analogues) can appear or existing cycles can merge. A new k-dimensional feature is said to be born at b ∈ R when a nontrivial k-cycle appears in Xb. The same k-cycle dies at level d > b when it disappears in all Xd+ for > 0. Persistent homology is an algebraic module which tracks the persistence pairs (b, d) of births b and deaths d with multiplicity µ across the entire filtration Sub(φ). Mutatis mutandis, a similar notion holds for superlevel sets Xr = φ−1 ([r,∞)), inducing the filtration Sup(φ). For r < s, the inclusion Xr ⊇ Xs is reversed and a cycle born at b dies at a level d < b, resulting in the persistence pair (d, b) instead. Figure 2 shows 3 connected components in the superlevel set for r = 8. The components were born as r swept through the blue points, and die when r approaches the red points. In practice, the filtrations are computed on a grid representation -4 -2 0 2 4 0 5 10 15 0 5 8 10 15 0 5 8 10 15 Superlevel Set for r=8 Filter Function φ(x) 0 5 10 15 0 5 10 15 0th-Persistence Diagram ← Death → ← B irt h → Figure 2: Dgm (Sup(φ)) for φ : R→ R. of the underlying space using cubical homology. We refer the reader to Appendix E for more details. Persistence Diagrams. By collecting all persistence pairs, the persistent homology features are concisely represented as a persistence diagram Dgm (Sub(φ)) =· { (b, d) ∈ R2 : 0 ≤ b < d ≤ ∞ } . A similar definition carries over to Dgm (Sup(φ)), using (d, b) instead. See Figure 2 for an illustration. When the context is clear, we drop the reference to the filtration and simply write Dgm(φ). The kth persistence diagram is the subset of Dgm(φ) corresponding to the k-dimensional features. The space of persistence diagrams is the locally-finite multiset of points on Ω = {(x, y) : 0 ≤ x < y ≤ ∞}, endowed with the family of p-Wasserstein metrics Wp, for 1 ≤ p ≤ ∞. We refer the reader to [18, 19] for a thorough introduction. W∞ is commonly referred to as the bottleneck distance. Definition 2.1. Given two persistence diagrams D1 and D2, the bottleneck distance is given by W∞ (D1, D2) = inf γ∈Γ sup p∈D1∪∆ ‖p− γ(p)‖∞ , where Γ = {γ : D1 ∪∆→ D2 ∪∆} is the set of all bijections from D1 to D2, including the diagonal ∆ = { (x, y) ∈ R2 : 0 ≤ x = y ≤ ∞ } with infinite multiplicity. An assumption we make at the outset is that the filter function f is tame. Tameness is a metric regularity condition which ensures that the number of points on the persistence diagrams are finite, and, in addition, the number of nontrivial cycles which share identical persistence pairings are also finite. Tame functions satisfy the celebrated stability property w.r.t. the bottleneck distance. Proposition 2.2 (Stability of Persistence Diagrams [12, 16]). Given two tame functions f, g : X→ R, W∞ (Dgm(f),Dgm(g)) ≤ ‖f − g‖∞ . 
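In practice the diagrams above are computed by evaluating the filter function on a grid and running cubical persistent homology (App. E). Assuming the gudhi library is available, the sketch below evaluates a Gaussian KDE filter on a 2-D grid, computes the 0-dimensional superlevel-set diagram (as sublevel persistence of −φ), and compares the diagrams of a clean and an outlier-contaminated sample with the bottleneck distance of Definition 2.1. Grid resolution, bandwidth, and the toy data are illustrative choices, and essential classes with infinite persistence are truncated before the distance computation.

```python
import numpy as np
import gudhi

def kde_on_grid(X, sigma=0.3, res=64, lim=3.0):
    """Evaluate a Gaussian KDE (the filter function phi) on a regular 2-D grid."""
    g = np.linspace(-lim, lim, res)
    xx, yy = np.meshgrid(g, g)
    grid = np.stack([xx.ravel(), yy.ravel()], axis=1)
    sq = ((grid[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    f = np.exp(-sq / (2.0 * sigma ** 2)).mean(axis=1) / (2.0 * np.pi * sigma ** 2)
    return f.reshape(res, res)

def superlevel_diagram(f_grid, dim=0):
    """Dgm(Sup(phi)) via cubical sublevel persistence of -phi; infinite deaths are truncated."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=-f_grid)
    cc.compute_persistence()
    dgm = cc.persistence_intervals_in_dimension(dim)
    return np.where(np.isinf(dgm), (-f_grid).max(), dgm)

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
Xn = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0.0, 0.05, (200, 2))   # points around a circle
Xout = np.concatenate([Xn, rng.uniform(-3.0, 3.0, (10, 2))], axis=0)         # a few uniform outliers

d_clean = superlevel_diagram(kde_on_grid(Xn))
d_noisy = superlevel_diagram(kde_on_grid(Xout))
print('W_inf between clean and contaminated diagrams:', gudhi.bottleneck_distance(d_clean, d_noisy))
```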
The space of persistence diagrams is, in general, challenging to work with. However, the stability property provides a handle on the persistence space through the function space of filter functions. 3 Robust Persistence Diagrams Given Xn = {X1,X2, . . . ,Xn} ⊆ Rd drawn iid from a probability distribution P ∈ M(Rd) with density f , the corresponding persistence diagram can be obtained by considering a filter function φn : Rd → R, constructed from Xn as an approximation to its population analogue, φP : Rd → R, that carries the topological information of P. Commonly used φP include the (i) kernelized density, fσ , (ii) Kernel Distance (KDist), dKσP , and (iii) distance-to-measure (DTM), dP,m, which are defined as: fσ(x) = · ∫ X Kσ(x,y)dP(y) ; dKσP = · ‖µδx − µP‖Hσ ; dP,m(x) = · √ 1 m m ∫ 0 F−1x (u)du, where Fx(t) = P (‖X− x‖2 ≤ t) and σ,m > 0. For these φP, the corresponding empirical analogues, φn, are constructed by replacing P with the empirical measure, Pn =· 1n ∑n i=1 δXi . For example, the empirical analogue of fσ is the familiar kernel density estimator (KDE), fnσ = 1 n ∑n i=1Kσ(·,Xi). While KDE and KDist encode the shape and distribution of mass for supp(P) by approximating the density f (sublevel sets of KDist are rescaled versions of superlevel sets of KDE [13, 32]), DTM, on the other hand, approximates the distance function to supp(P). Since φn is based on Pn, it is sensitive to outliers in Xn, which, in turn affect the persistence diagrams (as illustrated in Figure 1). To this end, in this paper, we propose robust persistence diagrams constructed using superlevel filtrations of a robust density estimator of f , i.e., the filter function, φn is chosen to be a robust density estimator of f . Specifically, we use the robust KDE, fnρ,σ , introduced by [27] as the filter function, which is defined as a solution to the following M-estimation problem: fnρ,σ = · arg inf g∈G ∫ X ρ ( ‖Φσ(y)− g‖Hσ ) dPn(y), (1) where ρ : R≥0 → R≥0 is a robust loss function, and G = Hσ ∩ Dσ = Dσ is the hypothesis class. Observe that when ρ(z) = 12z 2, the unique solution to Eq. (1) is given by the KDE, fnσ . Therefore, a robust KDE is obtained by replacing the square loss with a robust loss, which satisfies the following assumptions. These assumptions, which are similar to those of [27, 39] guarantee the existence and uniqueness (if ρ is convex) of fnρ,σ [27], and are satisfied by most robust loss functions, including the Huber loss, ρ(z) = 12z 2 1 {z ≤ 1} + ( z − 12 ) 1 {z > 1} and the Charbonnier loss, ρ(z) = √ 1 + z2 − 1. (A1) ρ is strictly-increasing and M -Lipschitz, with ρ(0) = 0. (A2) ρ′(x) is continuous and bounded with ρ′(0) = 0 . (A3) ϕ(x) = ρ′(x)/x is bounded, L-Lipschitz and continuous, with ϕ(0) <∞. (A4) ρ′′ exists, with ρ′′ and ϕ nonincreasing. Unlike for squared loss, the solution fnρ,σ cannot be obtained in a closed form. However, it can be shown to be the fixed point of an iterative procedure, referred to as KIRWLS algorithm [27]. The KIRWLS algorithm starts with initial weights {w(0)i }ni=1 such that ∑n i=1 w (0) i = 1, and generates the iterative sequence of estimators {f (k)ρ,σ}k∈N as f (k)ρ,σ = n∑ i=1 w (k−1) i Kσ(·,Xi) ; w (k) i = ϕ(‖Φσ(Xi)− f (k)ρ,σ‖Hσ )∑n j=1 ϕ(‖Φσ(Xj)− f (k) ρ,σ‖Hσ ) . Intuitively, note that if Xi is an outlier, then the corresponding weight wi is small (since ϕ is nonincreasing) and therefore less weight is given to the contribution of Xi in the density estimator. Hence, the weights serve as a measure of inlyingness—smaller (resp. larger) the weights, lesser (resp. 
more) inlying are the points. When Pn is replaced by P, the solution of Eq. (1) is its population analogue, fρ,σ . Although fρ,σ does not admit a closed form solution, it can be shown [27] that there exists a non-negative real-valued function wσ satisfying ∫ Rd wσ(x) dP(x) = 1 such that fρ,σ = ∫ Rd Kσ(·,x)wσ(x)dP(x) = ∫ Rd ϕ(‖Φσ(x)− fρ,σ‖Hσ )∫ Rd ϕ(‖Φσ(y)− fρ,σ‖Hσ )dP(y) Kσ(·,x) dP(x), (2) where wσ acts as a population analogue of the weights in KIRWLS algorithm. To summarize our proposal, the fixed point of the KIRWLS algorithm, which yields the robust density estimator fnρ,σ, is used as the filter function to obtain a robust persistence diagram of Xn. On the computational front, note that fnρ,σ is computationally more complex than the KDE, f n σ , requiring O(n`) computations compared to O(n) of the latter, with ` being the number of iterations required to reach the fixed point of KIRWLS. However, once these filter functions are computed, the corresponding persistence diagrams have similar computational complexity as both require computing superlevel sets, which, in turn, require function evaluations that scale as O(n) for both fnρ,σ and f n σ . 4 Theoretical Analysis of Robust Persistence Diagrams In this section, we investigate the theoretical properties of the proposed robust persistence diagrams. First, in Section 4.1, we examine the sensitivity of persistence diagrams to outlying perturbations through the notion of metric derivative and compare the effect of different filter functions. Next, in Section 4.2, we establish consistency and convergence rates for the robust persistence diagram to its population analogue. These results allow to construct uniform confidence bands for the robust persistence diagram. The proofs of the results are provided in Appendix A. 4.1 A measure of sensitivity of persistence diagrams to outliers The influence function and gross error sensitivity are arguably the most popular tools in robust statistics for diagnosing the sensitivity of an estimator to a single adversarial contamination [23, 26]. Given a statistical functional T : P(X) → (V, ‖·‖V ), which takes an input probability measure P ∈ P(X) on the input space X and produces a statistic P 7→ T (P) in some normed space (V, ‖·‖V ), the influence function of x ∈ X at P is given by the Gâteaux derivative of T at P restricted to the space of signed Borel measures with zero expectation: IF(T ;P,x) =· ∂ ∂ T ( (1− )P + δx )∣∣∣ =0 = lim →0 T ((1− )P + δx)− T (P) , and the gross error sensitivity at P is given by Γ(T ;P) =· supx∈X ‖IF(T ;P,x)‖V . However, a persistence diagram (which is a statistical functional) does not take values in a normed space and therefore the notion of influence functions has to be generalized to metric spaces through the concept of a metric derivative: Given a complete metric space (X, dX) and a curve s : [0, 1]→ X , the metric derivative at = 0 is given by |s′| (0) =· lim →0 1 dX(s(0), s( )). Using this generalization, we have the following definition, which allows to examine the influence an outlier has on the persistence diagram obtained from a filtration. Definition 4.1. Given a probability measure P ∈ P(Rd) and a filter function φP depending on P, the persistence influence of a perturbation x ∈ Rd on Dgm (φP) is defined as Ψ (φP;x) = lim →0 1 W∞ ( Dgm ( φP x ) ,Dgm (φP) ) , where P x = · (1− )P + δx, and the gross-influence is defined as Γ(φP) = supx∈Rd Ψ (φP;x). For > 0, let f ,xρ,σ be the robust KDE associated with the probability measure P x. 
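The KIRWLS fixed-point iteration that produces the filter function f^n_{ρ,σ} (and the inlyingness weights referred to in the influence analysis below) can be implemented directly from a kernel Gram matrix, since ‖Φ_σ(X_i) − f‖²_{H_σ} = K_σ(X_i,X_i) − 2f(X_i) + ‖f‖²_{H_σ}, and f = Σ_j w_j K_σ(·,X_j) gives f(X_i) = (Gw)_i and ‖f‖²_{H_σ} = wᵀGw. The sketch below uses the Gaussian kernel with the Huber or Charbonnier ϕ(z) = ρ′(z)/z; the bandwidth, the loss scale, the toy data, and the simple tolerance-based stopping rule are our assumptions, not a published implementation.

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gram matrix G_ij = K_sigma(X_i, X_j) for the Gaussian kernel on R^d."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d = X.shape[1]
    return np.exp(-sq / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** (d / 2) * sigma ** d)

def kirwls_weights(X, sigma=0.3, loss='huber', n_iter=100, tol=1e-9):
    """Fixed-point (KIRWLS) iteration for the robust KDE f^n_{rho,sigma}; returns the point weights w."""
    n = X.shape[0]
    G = gaussian_gram(X, sigma)
    w = np.full(n, 1.0 / n)                                   # uniform start, i.e. the ordinary KDE
    if loss == 'huber':
        phi = lambda z: 1.0 / np.maximum(z, 1.0)              # rho'(z)/z for the Huber loss (unit scale)
    else:
        phi = lambda z: 1.0 / np.sqrt(1.0 + z ** 2)           # Charbonnier: rho(z) = sqrt(1+z^2) - 1
    for _ in range(n_iter):
        # ||Phi_sigma(X_i) - f||_H^2 = K(X_i, X_i) - 2 f(X_i) + ||f||_H^2
        norms = np.sqrt(np.maximum(np.diag(G) - 2.0 * (G @ w) + w @ G @ w, 0.0))
        w_new = phi(norms)
        w_new /= w_new.sum()
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    return w

def robust_kde(query, X, w, sigma=0.3):
    """Evaluate f^n_{rho,sigma}(x) = sum_i w_i K_sigma(x, X_i) at query points."""
    sq = ((query[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d = X.shape[1]
    K = np.exp(-sq / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** (d / 2) * sigma ** d)
    return K @ w

# Usage: outlying points receive down-weighted contributions relative to inliers.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.0, 0.5, (190, 2)), rng.uniform(4.0, 6.0, (10, 2))], axis=0)
w = kirwls_weights(X, sigma=0.3)
print('mean inlier weight :', w[:190].mean())
print('mean outlier weight:', w[190:].mean())
```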
The following result (proved in Appendix A.1) bounds the persistence influence for the persistence diagram induced by the filter function fρ,σ , which is the population analogue of robust KDE. Theorem 4.2. For a loss ρ satisfying (A1)–(A3), and σ > 0, if lim →0 1 ( f ,xρ,σ − fρ,σ ) exists, then the persistence influence of x ∈ Rd on Dgm (fρ,σ) satisfies Ψ (fρ,σ;x) ≤ ‖Kσ‖ 1 2 ∞ ρ ′ ( ‖Φσ(x)− fρ,σ‖Hσ )(∫ Rd ζ ( ‖Φσ(y)− fρ,σ‖Hσ ) dP(y) )−1 , (3) where ζ(z) = ϕ(z)− zϕ′(z). Remark 4.3. We make the following observations from Theorem 4.2. (i) Choosing ρ(z) = 12z 2 and noting that ϕ(z) = ρ′′(z) = 1, a similar analysis, as in the proof of Theorem 4.2, yields a bound for the persistence influence of the KDE as Ψ (fσ;x) ≤ σ−d/2 ‖Φσ(x)− fσ‖Hσ . On the other hand, for robust loss functions, the term in Eq. (3) involving ρ′ is bounded because of (A2), making them less sensitive to very large perturbations. In fact, for nonincreasing ϕ, it can be shown (see Appendix C) that Ψ (fρ,σ;x) ≤ σ−d/2wσ(x) ‖Φσ(x)− fρ,σ‖Hσ , where, in contrast to KDE, the measure of inlyingness, wσ , weighs down extreme outliers. (ii) For the generalized Charbonnier loss (a robust loss function), given by ρ(z) = ( 1 + z2 )α/2 − 1 for 1 ≤ α < 2, the persistence influence satisfies Ψ (fρ,σ;x) ≤ σ−d/2 ( 1 + ‖Φσ(x)− fρ,σ‖2Hσ )α−1 2 ( 1 + ∫ Rd ‖Φσ(y)− fρ,σ‖2Hσ dP(y) ) 1−α 2 . Note that for α = 1, the bound on the persistence influence Ψ (fρ,σ;x) does not depend on how extreme the outlier x is. Similarly, for the Cauchy loss, given by ρ(z) = log(1 + z2), we have Ψ (fρ,σ;x) ≤ σ−d/2 ( 1 + ∫ Rd ‖Φσ(y)− fρ,σ‖2Hσ dP(y) ) . This shows that for large perturbations, the gross error sensitivity for the Cauchy and Charbonnier losses are far more stable than that of KDE. This behavior is also empirically illustrated in Figure 3. The experiment is detailed in Appendix C. (iii) For the DTM function, it can be shown that Ψ (dP,m;x) ≤ 2√ m sup {∣∣∣f(x)− ∫ Rd f(y)dP(y) ∣∣∣ : ‖∇f‖L2(P) ≤ 1} . (4) While dP,m cannot be compared to both fσ and fρ,σ, as it captures topological information at a different scale, determined by m, we point out that when supp(P) is compact, Ψ (dP,m;x) is not guaranteed to be bounded, unlike in Ψ (fρ,σ;x). We refer the reader to Appendix C for more details. It follows from Remark 4.3 that as σ → 0, the persistence influence of both the KDE and robust KDE behave asO(σ−d), showing that the robustness of robust persistence diagrams manifests only in cases where σ > 0. However, robustness alone has no bearing if the robust persistence diagram and the persistence diagram from the KDE are fundamentally different, i.e., they estimate different quantities as σ → 0. The following result (proved in Appendix A.2) shows that as σ → 0, Dgm (fρ,σ) recovers the same information as that in Dgm (fσ), which is same as Dgm (f), where f is the density of P. Theorem 4.4. For a strictly-convex loss ρ satisfying (A1)–(A4), and σ > 0, suppose P ∈M(Rd) with density f , and fρ,σ is the robust KDE. Then W∞ (Dgm (fρ,σ) ,Dgm (f))→ 0 as σ → 0. Suppose P = (1− π)P0 + πQ, where P0 corresponds to the true signal which we are interested in studying, and Q manifests as some ambient noise with 0 < π < 12 . In light of Theorem 4.4, by letting σ → 0, along with the topological features of P0, we are also capturing the topological features of Q, which may obfuscate any statistical inference made using the persistence diagrams. In a manner, choosing σ > 0 suppresses the noise in the resulting persistence diagrams, thereby making them more stable. 
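Before moving on, the contrast drawn in Remark 4.3 is easy to see numerically. The quantity entering the bound of Eq. (3) is ρ′ evaluated at the RKHS distance between Φσ(x) and the estimate, and the purely illustrative snippet below tabulates it for the square, Charbonnier (α = 1), and Cauchy losses.

```python
import numpy as np

# rho'(z) evaluated at z = ||Phi_sigma(x) - f_{rho,sigma}||_H, the quantity that enters
# the persistence-influence bound in Eq. (3); larger z corresponds to a more extreme outlier x.
z = np.linspace(0.0, 10.0, 6)
rho_prime = {
    "square (KDE)":          z,                        # rho(z) = z^2 / 2       -> unbounded
    "Charbonnier (alpha=1)": z / np.sqrt(1.0 + z**2),  # rho(z) = sqrt(1+z^2)-1  -> bounded by 1
    "Cauchy":                2.0 * z / (1.0 + z**2),   # rho(z) = log(1+z^2)     -> decays to 0
}
for name, vals in rho_prime.items():
    print(f"{name:22s}", np.round(vals, 3))
```

The square-loss column grows without bound, whereas the Charbonnier and Cauchy columns saturate at or below 1, which is precisely why a single far-away outlier cannot inflate the persistence influence of the robust KDE. None of this changes the role of the bandwidth discussed above: the gains only materialise for σ > 0.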
On a similar note, the authors in [21] state that for a suitable bandwidth σ > 0, the level sets of fσ carry the same topological information as supp(P), despite the fact that some subtle details in f may be omitted. In what follows, we consider the setting where robust persistence diagrams are constructed for a fixed σ > 0. 4.2 Statistical properties of robust persistence diagrams from samples Suppose Dgm ( fnρ,σ ) is the robust persistence diagram obtained from the robust KDE on a sample Xn and Dgm (fρ,σ) is its population analogue obtained from fρ,σ. The following result (proved in Appendix A.3) establishes the consistency of Dgm ( fnρ,σ ) in the W∞ metric. Theorem 4.5. For convex loss ρ satisfying (A1)–(A4), and fixed σ > 0, suppose Xn is observed iid from a distribution P∈M(Rd) with density f . Then W∞ ( Dgm ( fnρ,σ ) ,Dgm (fρ,σ) ) p→ 0 as n→∞. We present the convergence rate of the above convergence in Theorem 4.7, which depends on the smoothness of Hσ. In a similar spirit to [21], this result paves the way for constructing uniform confidence bands. Before we present the result, we first introduce the notion of entropy numbers associated with an RKHS. Definition 4.6 (Entropy Number). Given a metric space (T, d) the nth entropy number is defined as en(T, d) = · inf > 0 : ∃ {t1, t2, . . . , t2n−1} ⊂ T such that T ⊂ 2n−1⋃ i=1 Bd(ti, ) . Further, if (V, ‖·‖V ) and (W, ‖·‖W ) are two normed spaces and L : V → W is a bounded, linear operator, then en(L) = en(L : V →W ) =· en (L(BV ), ‖·‖W ), where BV is a unit ball in V . Loosely speaking, entropy numbers are related to the eigenvalues of the integral operator associated with the kernel Kσ , and measure the capacity of the RKHS in approximating functions in L2(Rd). In our context, the entropy numbers will provide useful bounds on the covering numbers of sets in the hypothesis class G. We refer the reader to [35] for more details. With this background, the following theorem (proved in Appendix A.4) provides a method for constructing uniform confidence bands for the persistence diagram constructed using the robust KDE on Xn. Theorem 4.7. For convex loss ρ satisfying (A1)–(A4), and fixed σ > 0, suppose the kernel Kσ satisfies en (id : Hσ → L∞(X)) ≤ aσn− 1 2p , where aσ > 1, 0 < p < 1 and X ⊂ Rd. Then, for a fixed confidence level 0 < α < 1, sup P∈M(X) P⊗n { W∞ ( Dgm ( fnρ,σ ) ,Dgm (fρ,σ) ) > 2M ‖Kσ‖ 1 2 ∞ µ ( ξ(n, p) + δ √ 2 log (1/α) n )} ≤ α, where ξ(n, p) is given by ξ(n, p) = γ apσ (1−2p) · 1√ n if 0 < p < 1/2, γC √ aσ · log(n)√n if p = 1/2, γ p √ aσ 2p−1 · 1 n1/4p if 1/2 < p < 1, for fixed constants γ > 12√ log 2 , C > 3− log(9aσ) and µ = 2 min { ϕ(2 ‖Kσ‖ 1 2 ∞), ρ ′′(2 ‖Kσ‖ 1 2 ∞) } . Remark 4.8. We highlight some salient observations from Theorem 4.7. (i) If diam(X) = r, and the kernel Kσ is m-times differentiable, then from [35, Theorem 6.26], the entropy numbers associated with Kσ satisfy en (id : Hσ → L∞(X)) ≤ crmn− m d . In light of Theorem 4.7, for p = d2m , we can make two important observations. First, as the dimension of the input space X increases, we have that the rate of convergence decreases; which is a direct consequence from the curse of dimensionality. Second, for a fixed dimension of the input space, the parameter p in Theorem 4.7 can be understood to be inversely proportional to the smoothness of the kernel. Specifically, as the smoothness of the kernel increases, the rate of convergence is faster, and we obtain sharper confidence bands. This makes a case for employing smoother kernels. 
(ii) A similar result is obtained in [21, Lemma 8] for persistence diagrams from the KDE, with a convergence rate Op(n−1/2), where the proof relies on a simple application of Hoeffding’s inequality, unlike the sophisticated tools the proof of Theorem 4.7 warrants for the robust KDE. 5 Experiments We illustrate the performance of robust persistence diagrams in machine learning applications through synthetic and real-world experiments.1 In all the experiments, the kernel bandwidth σ is chosen as the median distance of each xi ∈ Xn to its kth–nearest neighbour using the Gaussian kernel with the Hampel loss (similar setting as in [27])—we denote this bandwidth as σ(k). Since DTM is closely related to the k-NN density estimator [6], we choose the DTM smoothing parameter as m(k) = k/n. Additionally, the KIRWLS algorithm is run until the relative change of empirical risk < 10−6. Runtime Analysis. For n = 1000, Xn is sampled from a torus inside [0, 2]3. For each grid resolution α ∈ {0.04, 0.06, 0.08, 0.10}, the robust persistence diagram Dgm ( fnρ,σ ) and the KDE persistence diagram Dgm (fnσ ) are constructed from the superlevel filtration of cubical homology. The total time taken to compute the persistence diagrams is reported in Table 1. The results demonstrate that the computational bottleneck is the persistent homology pipeline, and not the KIRWLS for fnρ,σ . Bottleneck Simulation. The objective of this experiment is to assess how the robust KDE persistence diagram compares to the KDE persistence diagram in recovering the topological features of the underlying signal. Xn is observed uniformly from two circles and Ym is sampled uniformly from the enclosing square such that m = 200 and m/n = π ∈ {20%, 30%, 40%}—shown in Figure 4 (a). For each noise level π, and for each of N = 100 realizations of Xn and Ym, the robust persistence diagram Dρ,σ and the KDE persistence diagram Dσ are constructed from the noisy samples Xn∪Ym. In addition, we compute the KDE persistence diagram D#σ on Xn alone as a proxy for the target persistence diagram one would obtain in the absence of any contamination. The bandwidth σ(k) > 0 is chosen for k = 5. For each realization i, bottleneck distances Ui = W∞ ( Dρ,σ,D#σ ) and Vi = W∞ ( Dσ,D#σ ) are computed for 1st-order homological features. The boxplots and p-values for the one-sided hypothesis testH0 : U−V = 0 vs. H1 : U−V < 0 are reported in Figures 4 (b, c, d). The results demonstrate that the robust persistence diagram is noticeably better in recovering the true homological features, and in fact demonstrates superior performance when the noise levels are higher. Spectral Clustering using Persistent Homology. We perform a variant of the six-class benchmark experiment from [1, Section 6.1]. The data comprises of six different 3D “objects”: cube, circle, sphere, 3clusters, 3clustersIn3clusters, and torus. 25 point clouds are sampled from each object with additive Gaussian noise (SD= 0.1), and ambient Matérn cluster noise. For each point cloud, Xn, the robust persistence diagram Dgm ( fnρ,σ ) and the persistence diagram Dgm (dXn), from the distance function, are constructed. Additionally, Dgm (dXn) is transformed to the persistence image Img (dXn , h) for h = 0.1. Note that Dgm ( fnρ,σ ) is a robust diagram while Img (dXn , h) is a stable vectorization of a non-robust diagram [1]. 
For each homological order {H0, H1, H2}, distance matrices {∆0, ∆1, ∆2} are computed: the Wp metric for Dgm(f_{ρ,σ}), and the Lp metric for Img(d_{Xn}, h), with p ∈ {1, 2, ∞}, and spectral clustering is performed on the resulting distance matrices (footnote 1: https://github.com/sidv23/robust-PDs). The quality of the clustering is assessed using the Rand index. The results, reported in Table 2, evidence the superiority of employing inherently robust persistence diagrams over a robust vectorization of an inherently noisy persistence diagram. MPEG7. In this experiment, we examine the performance of persistence diagrams in a classification task on the MPEG7 dataset [28]. For simplicity, we only consider five classes: beetle, bone, spring, deer and horse. We first extract the boundary of the images using a Laplace convolution, and sample Xn uniformly from the boundary of each image, adding uniform noise (π = 15%) in the enclosing region. Persistence diagrams Dgm(f^n_σ) and Dgm(f^n_{ρ,σ}) from the KDE and robust KDE are constructed. In addition, owing to its ability to capture nuanced multi-scale features, we also construct Dgm(d_{n,m}) from the DTM filtration. The smoothing parameters σ(k) and m(k) are chosen as earlier for k = 5. The persistence diagrams are normalized so that the maximum persistence, max{|d − b| : (b, d) ∈ Dgm(φ)}, equals 1, and then vectorized as persistence images, Img(f^n_σ, h), Img(f^n_{ρ,σ}, h), and Img(d_{n,m}, h), for various bandwidths h. A linear SVM classifier is then trained on the resulting persistence images. In the first experiment we only consider the first three classes, and in the second experiment we consider all five classes. The results for the classification error, shown in Figure 5, demonstrate the superiority of the proposed method. We refer the reader to Appendix D for additional experiments. 6 Conclusion & Discussion In this paper, we proposed a statistically consistent robust persistence diagram using the RKHS-based robust KDE as the filter function. By generalizing the notion of influence function to the space of persistence diagrams, we mathematically and empirically demonstrated that the proposed method is more robust than persistence diagrams induced by other filter functions such as the KDE. Through numerical experiments, we demonstrated the advantage of using robust persistence diagrams in machine learning applications. We would like to highlight that most of the theoretical results of this paper crucially hinge on the loss function being convex. As a future direction, we would like to generalize the current results to non-convex loss functions, and to explore robust persistence diagrams induced by other types of robust density estimators, which could potentially yield even more robust persistence diagrams. Another important direction we intend to explore is to enhance the computational efficiency of the proposed approach using coresets, as in [7], and/or weighted Rips filtrations, as in [2]. We provide a brief discussion in Appendix E. Broader Impact Over the last decade, Topological Data Analysis has become an important tool for extracting geometric and topological information from data, and its applications have been far reaching. For example, it has been used successfully in the study of fragile X syndrome and the discovery of traumatic brain injuries, and has also become an important tool in the study of protein structure. In astrophysics, it has aided the study of the cosmic microwave background and the discovery of cosmic voids and filamental structures in cosmological data.
With a continual increase in its adoption in data analysis, it has become important to understand the limitations of using persistent homology in machine learning applications. Since real-world data are often riddled with measurement errors and other forms of noise, in this work we examine the sensitivity of persistence diagrams to such noise and provide methods to mitigate its effect, so that reliable topological inference can be made.
Acknowledgments and Disclosure of Funding The authors would like to thank the anonymous reviewers for their helpful comments and constructive feedback. Siddharth Vishwanath and Bharath Sriperumbudur are supported in part by NSF DMS CAREER Award 1945396. Kenji Fukumizu is supported in part by JST CREST Grant Number JPMJCR15D3, Japan. Satoshi Kuriki is partially supported by JSPS KAKENHI Grant Number JP16H02792, Japan.
1. What is the focus of the paper regarding topological features?
2. What are the strengths of the paper, especially in terms of its theoretical analysis?
3. What are the weaknesses of the paper, particularly its dense presentation and limited experimental section?
4. How does the reviewer suggest improving the experimental section to highlight the benefits of persistent homology and the new robust kernel?
5. What is the reviewer's opinion on the relevance of the paper's topic in topological data analysis?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper introduces a novel method for obtaining robust topological features. Specifically, it introduces a way to 'robustify' a kernel, such that the resulting topological descriptors are stable with respect to outliers. Next to explaining and illustrating the necessity of such robust kernels, the paper also analyses their theoretical properties, in particular the stability and convergence properties. This theoretical part is, in my opinion, the main contribution of the paper. In addition to a thorough theoretical discussion that even includes bounds, the paper also contains an empirical comparison of different filtrations and their behaviour under noise.

Strengths
The main strength of the paper is the thorough theoretical analysis of the problem at hand. Even though the actual procedure for making the kernel more robust is already known, the analysis of its approximation properties in the context of topological data analysis (TDA) is novel. Moreover, I appreciate the conceptual depth of this paper: stability properties are shown, but also bounds based on entropy. The paper is thus chock-full of interesting and relevant theory. The subject discussed in this paper is highly relevant for TDA, as the existence of outliers indeed 'plagues' the calculations to some extent; shedding some light on this topic is thus highly relevant: often, the celebrated stability theorem is misunderstood in applications and large-scale outliers are not handled correctly. Having a more thorough discussion concerning this issue is very relevant and I am glad that the paper raises this point.

Weaknesses
Though I appreciate the thoroughness of the paper, it is also to some extent its main weakness: the main paper is rather dense and requires a thorough reading to be understandable. Given the length of the proofs, only the main results are stated in the text, but some terminology is missing (see my comments on clarity below) to fully understand all results. This goes slightly at the expense of the experimental section, which provides only a very cursory overview of the advantages of the proposed approach. More precisely, I would suggest expanding the experimental section to include experiments that serve to highlight the benefits of persistent homology *and* the new robust kernel. Currently, I fear that the section might be considered to be slightly underwhelming to non-expert readers, because it is not clear what the benefits of a topological view are. I realise that this might be tough to accomplish, but the paper could use examples from Adams et al. (Persistence Images: A Stable Vector Representation of Persistent Homology, https://arxiv.org/abs/1507.06217) to be better comparable to the existing literature. This could be achieved by moving Section 4.2 to the appendix, for example. While I like the results discussed here, they are not strictly required for the experimental section and serve more to illustrate the benefits of the proposed method in comparison to other filtrations. Moreover, I think the paper should delineate itself from the recent paper 'DTM-based Filtrations' by Anai et al. (arXiv:1811.04757), since the main feature of that filtration is also to be robust to noise and outliers. This is particularly relevant as the paper mentions DTM as one way to generate filtrations (notice that DTM-based filtrations are slightly more complex than calculating just DTM).
NIPS
Title Robust Persistence Diagrams using Reproducing Kernels Abstract Persistent homology has become an important tool for extracting geometric and topological features from data, whose multi-scale features are summarized in a persistence diagram. From a statistical perspective, however, persistence diagrams are very sensitive to perturbations in the input space. In this work, we develop a framework for constructing robust persistence diagrams from superlevel filtrations of robust density estimators constructed using reproducing kernels. Using an analogue of the influence function on the space of persistence diagrams, we establish the proposed framework to be less sensitive to outliers. The robust persistence diagrams are shown to be consistent estimators in bottleneck distance, with the convergence rate controlled by the smoothness of the kernel—this in turn allows us to construct uniform confidence bands in the space of persistence diagrams. Finally, we demonstrate the superiority of the proposed approach on benchmark datasets. 1 Introduction Given a set of points Xn = {X1,X2, . . . ,Xn} observed from a probability distribution P on an input space X ⊆ Rd, understanding the shape of Xn sheds important insights on low-dimensional geometric and topological features which underlie P, and this question has received increasing attention in the past few decades. To this end, Topological Data Analysis (TDA), with a special emphasis on persistent homology [20, 44], has become a mainstay for extracting the shape information from data. In statistics and machine-learning, persistent homology has facilitated the development of novel methodology (e.g., [8, 11, 14]), which has been widely used in a variety of applications dealing with massive, unconventional forms of data (e.g., [5, 22, 43]). Informally speaking, persistent homology detects the presence of topological features across a range of resolutions by examining a nested sequence of spaces, typically referred to as a filtration. The filtration encodes the birth and death of topological features as the resolution varies, and is presented in the form of a concise representation—a persistence diagram or barcode. In the context of dataanalysis, there are two different methods for obtaining filtrations. The first is computed from the pairwise Euclidean distances of Xn, such as the Vietoris-Rips, Čech, and Alpha filtrations [20]. The second approach is based on choosing a function on X that reflects the density of P (or its approximation based on Xn), and, then, constructing a filtration. While the two approaches explore the topological features governing P in different ways, in essence, they generate similar insights. ∗Authors arranged alphabetically 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Despite obvious advantages, the adoption of persistent homology in mainstream statistical methodology is still limited. An important limitation among others, in the statistical context, is that the resulting persistent homology is highly sensitive to outliers. While the stability results of [12, 16] guarantee that small perturbations on all of Xn induce only small changes in the resulting persistence diagrams, a more pathological issue arises when a small fraction of Xn is subject to very large perturbations. Figure 1 illustrates how inference from persistence diagrams can change dramatically when Xn is contaminated with only a few outliers. 
Another challenge is the mathematical difficulty in performing sensitivity analysis in a formal statistical context. Since the space of persistence diagrams has an unusual mathematical structure, it falls victim to issues such as non-uniqueness of Fréchet means and unbounded curvature of geodesics [18, 29, 36]. With this background, the central objective of this paper is to develop outlier robust persistence diagrams, develop a framework for examining the sensitivity of the resulting persistence diagrams to noise, and establish statistical convergence guarantees. To the best of our knowledge, not much work has been carried out in this direction. Bendich et al. [4] construct persistence diagrams from Rips filtrations on Xn by replacing the Euclidean distance with diffusion distance, Brécheteau and Levrard [7] use a coreset of Xn for computing persistence diagrams from the distance-to-measure, and Anai et al. [2] use weighted-Rips filtrations on Xn to construct more stable persistent diagrams. However, no sensitivity analysis of the resultant diagrams are carried out in [2, 4, 7] to demonstrate their robustness. Contributions. The main contributions of this work are threefold. 1) We propose robust persistence diagrams constructed from filtrations induced by an RKHS-based robust KDE (kernel density estimator) [27] of the underlying density function of P (Section 3). While this idea of inducing filtrations by an appropriate function—[13, 21, 32] use KDE, distance-to-measure (DTM) and kernel distance (KDist), respectively—has already been explored, we show the corresponding persistence diagrams to be less robust compared to our proposal. 2) In Section 4.1, we generalize the notions of influence function and gross error sensitivity—which are usually defined for normed spaces—to the space of persistence diagrams, which lack the vector space structure. Using these generalized notions, we investigate the sensitivity of persistence diagrams constructed from filtrations induced by different functions (e.g., KDE, robust KDE, DTM) and demonstrate the robustness of the proposed method, both mathematically (Remark 4.3) and numerically (Section 5). 3) We establish the statistical consistency of the proposed robust persistence diagrams and provide uniform confidence bands by deriving exponential concentration bounds for the uniform deviation of the robust KDE (Section 4.2). Definitions and Notations. For a metric space X, the ball of radius r centered at x ∈ X is denoted by BX(x, r). P(Rd) is the set of all Borel probability measures on Rd, andM(Rd) denotes the set of probability measures on Rd with compact support and tame density function (See Section 2). δx denotes a Dirac measure at x. For bandwidth σ > 0, Hσ denotes a reproducing kernel Hilbert space (RKHS) withKσ : Rd × Rd → R as its reproducing kernel. We denote by Φσ(x) = Kσ(·,x) ∈ Hσ , the feature map associated withKσ , which embeds x ∈ Rd into Φσ(x) ∈ Hσ . Throughout this paper, we assume that Kσ is radial, i.e., Kσ(x,y) = σ−dψ(‖x− y‖2/σ) with ψ(‖ · ‖2) being a pdf on Rd, where ‖x‖22 = ∑d i=1 x 2 i for x = (x1, . . . , xd) ∈ Rd. Some common examples include the Gaussian, Matérn and inverse multiquadric kernels. We denote ‖Kσ‖∞ = · supx,y∈Rd Kσ(x,y) = σ−dψ(0). Without loss of generality, we assume ψ(0) = 1. For P ∈ P(Rd), µP =· ∫ Kσ(·,y)dP(y) ∈ Hσ is called the mean embedding of P, and Dσ =· { µP : P∈P(Rd) } is the space of mean embeddings [30]. 
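One standard identity is worth recording here, since it is used implicitly when comparing filter functions later (sublevel sets of KDist are rescaled superlevel sets of the KDE, see Section 3); it follows directly from the reproducing property and the definition of the mean embedding above:

```latex
\lVert \Phi_\sigma(\mathbf{x}) - \mu_{\mathbb{P}} \rVert_{\mathcal{H}_\sigma}^{2}
  = K_\sigma(\mathbf{x},\mathbf{x})
    - 2\,\big\langle \Phi_\sigma(\mathbf{x}),\, \mu_{\mathbb{P}} \big\rangle_{\mathcal{H}_\sigma}
    + \lVert \mu_{\mathbb{P}} \rVert_{\mathcal{H}_\sigma}^{2}
  = \sigma^{-d}\,\psi(0) \;-\; 2\, f_\sigma(\mathbf{x}) \;+\; \lVert \mu_{\mathbb{P}} \rVert_{\mathcal{H}_\sigma}^{2}.
```

Only the middle term depends on x, so the kernel distance d^{Kσ}_P(x) = ‖Φσ(x) − µP‖_{Hσ} is a monotonically decreasing function of fσ(x): thresholding the former from below selects the same regions as thresholding the latter from above.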
2 Persistent Homology: Preliminaries We present the necessary background on persistent homology for completeness. See [9, 42] for a comprehensive introduction. Persistent Homology. Let φ : X → R≥0 be a function on the metric space (X, d). At level r > 0, the sublevel set X_r = φ^{-1}([0, r]) = {x ∈ X : φ(x) ≤ r} encodes the topological information in X. For r < s, the sublevel sets are nested, i.e., X_r ⊆ X_s. Thus {X_r}_{0 ≤ r < ∞} is a nested sequence of topological spaces, called a filtration, denoted by Sub(φ), and φ is called the filter function. As the level r varies, the evolution of the topology is captured in the filtration. Roughly speaking, new cycles (i.e., connected components, loops, voids and higher-order analogues) can appear or existing cycles can merge. A new k-dimensional feature is said to be born at b ∈ R when a nontrivial k-cycle appears in X_b. The same k-cycle dies at level d > b when it disappears in all X_{d+ε} for ε > 0. Persistent homology is an algebraic module which tracks the persistence pairs (b, d) of births b and deaths d with multiplicity µ across the entire filtration Sub(φ). Mutatis mutandis, a similar notion holds for superlevel sets X_r = φ^{-1}([r, ∞)), inducing the filtration Sup(φ). For r < s, the inclusion X_r ⊇ X_s is reversed and a cycle born at b dies at a level d < b, resulting in the persistence pair (d, b) instead. Figure 2 shows 3 connected components in the superlevel set for r = 8. The components were born as r swept through the blue points, and die when r approaches the red points. In practice, the filtrations are computed on a grid representation of the underlying space using cubical homology. We refer the reader to Appendix E for more details.
[Figure 2: Dgm(Sup(φ)) for φ : R → R; the panels show the filter function φ(x), the superlevel set for r = 8, and the 0th-order persistence diagram with Death on the horizontal axis and Birth on the vertical axis.]
Persistence Diagrams. By collecting all persistence pairs, the persistent homology features are concisely represented as a persistence diagram Dgm(Sub(φ)) := {(b, d) ∈ R² : 0 ≤ b < d ≤ ∞}. A similar definition carries over to Dgm(Sup(φ)), using (d, b) instead. See Figure 2 for an illustration. When the context is clear, we drop the reference to the filtration and simply write Dgm(φ). The kth persistence diagram is the subset of Dgm(φ) corresponding to the k-dimensional features. The space of persistence diagrams is the locally-finite multiset of points on Ω = {(x, y) : 0 ≤ x < y ≤ ∞}, endowed with the family of p-Wasserstein metrics Wp, for 1 ≤ p ≤ ∞. We refer the reader to [18, 19] for a thorough introduction. W∞ is commonly referred to as the bottleneck distance. Definition 2.1. Given two persistence diagrams D1 and D2, the bottleneck distance is given by

$$
W_\infty(D_1, D_2) \;=\; \inf_{\gamma \in \Gamma}\ \sup_{p \in D_1 \cup \Delta} \lVert p - \gamma(p) \rVert_\infty,
$$

where Γ = {γ : D1 ∪ ∆ → D2 ∪ ∆} is the set of all bijections from D1 to D2, including the diagonal ∆ = {(x, y) ∈ R² : 0 ≤ x = y ≤ ∞} with infinite multiplicity. An assumption we make at the outset is that the filter function φ is tame. Tameness is a metric regularity condition which ensures that the number of points on the persistence diagram is finite, and, in addition, that the number of nontrivial cycles which share identical persistence pairings is also finite. Tame functions satisfy the celebrated stability property w.r.t. the bottleneck distance. Proposition 2.2 (Stability of Persistence Diagrams [12, 16]). Given two tame functions f, g : X → R, W∞(Dgm(f), Dgm(g)) ≤ ‖f − g‖∞.
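To make the grid-based computation explicit, here is a short sketch of how a superlevel persistence diagram and a bottleneck distance can be obtained from a filter function evaluated on a regular grid. It assumes the GUDHI Python package (a reasonably recent version whose CubicalComplex constructor accepts a NumPy array). GUDHI filters by sublevel sets, so the values are negated and the signs of the resulting intervals flipped back, and the infinite (essential) bars are dropped to keep the example simple. The two Gaussian-bump functions at the end are stand-ins for any filter function, such as a KDE or the robust KDE of Section 3, evaluated on the grid; none of this is taken from the authors' code.

```python
import numpy as np
import gudhi

def superlevel_diagram(values, homology_dim):
    """Persistence pairs (d, b), with d < b, of the superlevel filtration of `values` on a grid."""
    # GUDHI's cubical complex uses sublevel filtrations, so filter -values instead.
    cc = gudhi.CubicalComplex(top_dimensional_cells=-values)
    cc.persistence()                                    # must be computed before querying intervals
    pairs = cc.persistence_intervals_in_dimension(homology_dim)
    if pairs.size == 0:
        return np.empty((0, 2))
    pairs = pairs[np.isfinite(pairs[:, 1])]             # drop the essential (infinite) bars
    return -pairs[:, ::-1]                              # flip signs back: columns become (d, b), d < b

# Two filter functions on the same grid: two bumps vs. two bumps with one attenuated.
xs = np.linspace(-3.0, 3.0, 128)
xx, yy = np.meshgrid(xs, xs)
bump = lambda cx, a: a * np.exp(-((xx - cx) ** 2 + yy ** 2) / 0.5)
f1 = bump(-1.0, 1.0) + bump(1.0, 1.0)
f2 = bump(-1.0, 0.7) + bump(1.0, 1.0)

d1 = superlevel_diagram(f1, homology_dim=0)
d2 = superlevel_diagram(f2, homology_dim=0)
print(gudhi.bottleneck_distance(d1, d2))                # W_inf between the two 0th-order diagrams
```

The grid resolution plays the role of the parameter α in the runtime analysis of Section 5: finer grids give more faithful diagrams at a cost that grows with the number of grid cells, independently of whether the plain or the robust KDE supplies the values.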
1. What is the focus and contribution of the paper regarding topological data analysis?
2. What are the strengths of the proposed approach, particularly in its theoretical aspects?
3. What are the weaknesses of the paper, especially regarding practical computations and experimental limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
Despite appealing theoretical properties, the use of "vanilla" topological data analysis (TDA) --- e.g. Rips persistence diagrams --- in applications is compromised due to (among other things) its sensitivity to outliers. In this paper, the authors improve on this by proposing to compute the persistence diagrams on top of the superlevel sets of a *robust* kernel density estimator. Their approach is supported by theoretical results (a quantitative evaluation of the sensitivity to outliers, convergence guarantees and convergence rates) and its benefits are illustrated in numerical experiments.

Strengths
- The theoretical content of the paper is well motivated and the results are interesting and informative. They are likely to be very useful when studying statistical properties of topological descriptors.
- The introduction of metric-geometry notions such as the _persistence influence_ (metric derivative of the bottleneck distance) to analyze statistical properties of topological descriptors is very appealing.

Weaknesses
- My major concern is that I could not find details about how the persistence diagrams are computed in practice, once the RKDE is evaluated. As far as I know, the practical computation of persistence diagrams on sub/super-level sets in R^d is essentially possible if the filtration has a distance-like structure (as in [29] or using the DTM) or by discretizing the ground space and using cubical homology (or other variants of simplicial homology). If the former is true, this should be detailed. If the latter, this should be explicitly mentioned (and the impact of the grid size on the quality of the output and on running times should be discussed); in particular I assume it makes the practical computations intractable in large dimension. If you manage to compute the persistent homology of the sub/super-level sets of $f^n_{\rho,\sigma}$ using another technique, please explain it.
- All experiments are done in 2D (perhaps linked with the above remark). The theoretical role played by the ambient dimension $d$ is discussed in Remark 4.2, but experiments in higher dimension (at least 3D) should be discussed/proposed. If this is intractable (e.g. due to requiring too large a grid), this should be mentioned and would constitute a major flaw of the method when targeting numerical applications.
NIPS
Title Robust Persistence Diagrams using Reproducing Kernels Abstract Persistent homology has become an important tool for extracting geometric and topological features from data, whose multi-scale features are summarized in a persistence diagram. From a statistical perspective, however, persistence diagrams are very sensitive to perturbations in the input space. In this work, we develop a framework for constructing robust persistence diagrams from superlevel filtrations of robust density estimators constructed using reproducing kernels. Using an analogue of the influence function on the space of persistence diagrams, we establish the proposed framework to be less sensitive to outliers. The robust persistence diagrams are shown to be consistent estimators in bottleneck distance, with the convergence rate controlled by the smoothness of the kernel—this in turn allows us to construct uniform confidence bands in the space of persistence diagrams. Finally, we demonstrate the superiority of the proposed approach on benchmark datasets. 1 Introduction Given a set of points Xn = {X1,X2, . . . ,Xn} observed from a probability distribution P on an input space X ⊆ Rd, understanding the shape of Xn sheds important insights on low-dimensional geometric and topological features which underlie P, and this question has received increasing attention in the past few decades. To this end, Topological Data Analysis (TDA), with a special emphasis on persistent homology [20, 44], has become a mainstay for extracting the shape information from data. In statistics and machine-learning, persistent homology has facilitated the development of novel methodology (e.g., [8, 11, 14]), which has been widely used in a variety of applications dealing with massive, unconventional forms of data (e.g., [5, 22, 43]). Informally speaking, persistent homology detects the presence of topological features across a range of resolutions by examining a nested sequence of spaces, typically referred to as a filtration. The filtration encodes the birth and death of topological features as the resolution varies, and is presented in the form of a concise representation—a persistence diagram or barcode. In the context of dataanalysis, there are two different methods for obtaining filtrations. The first is computed from the pairwise Euclidean distances of Xn, such as the Vietoris-Rips, Čech, and Alpha filtrations [20]. The second approach is based on choosing a function on X that reflects the density of P (or its approximation based on Xn), and, then, constructing a filtration. While the two approaches explore the topological features governing P in different ways, in essence, they generate similar insights. ∗Authors arranged alphabetically 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Despite obvious advantages, the adoption of persistent homology in mainstream statistical methodology is still limited. An important limitation among others, in the statistical context, is that the resulting persistent homology is highly sensitive to outliers. While the stability results of [12, 16] guarantee that small perturbations on all of Xn induce only small changes in the resulting persistence diagrams, a more pathological issue arises when a small fraction of Xn is subject to very large perturbations. Figure 1 illustrates how inference from persistence diagrams can change dramatically when Xn is contaminated with only a few outliers. 
Another challenge is the mathematical difficulty in performing sensitivity analysis in a formal statistical context. Since the space of persistence diagrams has an unusual mathematical structure, it falls victim to issues such as non-uniqueness of Fréchet means and unbounded curvature of geodesics [18, 29, 36]. With this background, the central objective of this paper is to develop outlier robust persistence diagrams, develop a framework for examining the sensitivity of the resulting persistence diagrams to noise, and establish statistical convergence guarantees. To the best of our knowledge, not much work has been carried out in this direction. Bendich et al. [4] construct persistence diagrams from Rips filtrations on Xn by replacing the Euclidean distance with diffusion distance, Brécheteau and Levrard [7] use a coreset of Xn for computing persistence diagrams from the distance-to-measure, and Anai et al. [2] use weighted-Rips filtrations on Xn to construct more stable persistent diagrams. However, no sensitivity analysis of the resultant diagrams are carried out in [2, 4, 7] to demonstrate their robustness. Contributions. The main contributions of this work are threefold. 1) We propose robust persistence diagrams constructed from filtrations induced by an RKHS-based robust KDE (kernel density estimator) [27] of the underlying density function of P (Section 3). While this idea of inducing filtrations by an appropriate function—[13, 21, 32] use KDE, distance-to-measure (DTM) and kernel distance (KDist), respectively—has already been explored, we show the corresponding persistence diagrams to be less robust compared to our proposal. 2) In Section 4.1, we generalize the notions of influence function and gross error sensitivity—which are usually defined for normed spaces—to the space of persistence diagrams, which lack the vector space structure. Using these generalized notions, we investigate the sensitivity of persistence diagrams constructed from filtrations induced by different functions (e.g., KDE, robust KDE, DTM) and demonstrate the robustness of the proposed method, both mathematically (Remark 4.3) and numerically (Section 5). 3) We establish the statistical consistency of the proposed robust persistence diagrams and provide uniform confidence bands by deriving exponential concentration bounds for the uniform deviation of the robust KDE (Section 4.2). Definitions and Notations. For a metric space X, the ball of radius r centered at x ∈ X is denoted by BX(x, r). P(Rd) is the set of all Borel probability measures on Rd, andM(Rd) denotes the set of probability measures on Rd with compact support and tame density function (See Section 2). δx denotes a Dirac measure at x. For bandwidth σ > 0, Hσ denotes a reproducing kernel Hilbert space (RKHS) withKσ : Rd × Rd → R as its reproducing kernel. We denote by Φσ(x) = Kσ(·,x) ∈ Hσ , the feature map associated withKσ , which embeds x ∈ Rd into Φσ(x) ∈ Hσ . Throughout this paper, we assume that Kσ is radial, i.e., Kσ(x,y) = σ−dψ(‖x− y‖2/σ) with ψ(‖ · ‖2) being a pdf on Rd, where ‖x‖22 = ∑d i=1 x 2 i for x = (x1, . . . , xd) ∈ Rd. Some common examples include the Gaussian, Matérn and inverse multiquadric kernels. We denote ‖Kσ‖∞ = · supx,y∈Rd Kσ(x,y) = σ−dψ(0). Without loss of generality, we assume ψ(0) = 1. For P ∈ P(Rd), µP =· ∫ Kσ(·,y)dP(y) ∈ Hσ is called the mean embedding of P, and Dσ =· { µP : P∈P(Rd) } is the space of mean embeddings [30]. 
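As a small illustration of the notation just introduced (not from the paper's code), the following numpy snippet evaluates the empirical mean embedding µPn at a point and shows that it coincides with the ordinary kernel density estimator fnσ; the Gaussian kernel is used as a concrete instance of a radial kernel, and all names and constants are illustrative.

```python
# Minimal numpy sketch of the notation above: a radial (Gaussian) kernel, the
# empirical mean embedding mu_Pn = (1/n) sum_i K_sigma(., X_i), and the fact
# that evaluating it pointwise gives the ordinary kernel density estimator.
# Names and constants are illustrative, not taken from the paper's code.
import numpy as np

def k_sigma(x, y, sigma):
    """Radial kernel K_sigma(x, y) = sigma^{-d} * psi(||x - y|| / sigma), with psi a Gaussian pdf."""
    d = x.shape[-1]
    r = np.linalg.norm(x - y, axis=-1) / sigma
    return sigma ** (-d) * np.exp(-0.5 * r ** 2) / (2 * np.pi) ** (d / 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))          # sample X_n from P
sigma = 0.3

def mean_embedding(x):
    """mu_Pn evaluated at x; identical to the KDE f^n_sigma(x)."""
    return k_sigma(x[None, :], X, sigma).mean()

print(mean_embedding(np.zeros(2)))     # density estimate at the origin
```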
2 Persistent Homology: Preliminaries We present the necessary background on persistent homology for completeness. See [9, 42] for a comprehensive introduction. Persistent Homology. Let φ : X → R≥0 be a function on the metric space (X, d). At level r > 0, the sublevel set Xr = φ−1([0, r]) = {x ∈ X : φ(x) ≤ r} encodes the topological information in X. For r < s, the sublevel sets are nested, i.e., Xr ⊆ Xs. Thus {Xr}0≤r<∞ is a nested sequence of topological spaces, called a filtration, denoted by Sub(φ), and φ is called the filter function. As the level r varies, the evolution of the topology is captured in the filtration. Roughly speaking, new cycles (i.e., connected components, loops, voids and higher order analogues) can appear or existing cycles can merge. A new k-dimensional feature is said to be born at b ∈ R when a nontrivial k-cycle appears in Xb. The same k-cycle dies at level d > b when it disappears in all Xd+ε for ε > 0. Persistent homology is an algebraic module which tracks the persistence pairs (b, d) of births b and deaths d with multiplicity µ across the entire filtration Sub(φ). Mutatis mutandis, a similar notion holds for superlevel sets Xr = φ−1([r,∞)), inducing the filtration Sup(φ). For r < s, the inclusion Xr ⊇ Xs is reversed and a cycle born at b dies at a level d < b, resulting in the persistence pair (d, b) instead. Figure 2 shows 3 connected components in the superlevel set for r = 8. The components were born as r swept through the blue points, and die when r approaches the red points. In practice, the filtrations are computed on a grid representation of the underlying space using cubical homology. [Figure 2: Dgm (Sup(φ)) for φ : R → R; panels show the filter function φ(x), the superlevel set for r = 8, and the 0th-persistence diagram (death on the horizontal axis, birth on the vertical axis).] We refer the reader to Appendix E for more details. Persistence Diagrams. By collecting all persistence pairs, the persistent homology features are concisely represented as a persistence diagram Dgm (Sub(φ)) := {(b, d) ∈ R2 : 0 ≤ b < d ≤ ∞}. A similar definition carries over to Dgm (Sup(φ)), using (d, b) instead. See Figure 2 for an illustration. When the context is clear, we drop the reference to the filtration and simply write Dgm(φ). The kth persistence diagram is the subset of Dgm(φ) corresponding to the k-dimensional features. The space of persistence diagrams is the set of locally-finite multisets of points on Ω = {(x, y) : 0 ≤ x < y ≤ ∞}, endowed with the family of p-Wasserstein metrics Wp, for 1 ≤ p ≤ ∞. We refer the reader to [18, 19] for a thorough introduction. W∞ is commonly referred to as the bottleneck distance. Definition 2.1. Given two persistence diagrams D1 and D2, the bottleneck distance is given by W∞ (D1, D2) = inf γ∈Γ sup p∈D1∪∆ ‖p − γ(p)‖∞, where Γ = {γ : D1 ∪ ∆ → D2 ∪ ∆} is the set of all bijections from D1 ∪ ∆ to D2 ∪ ∆, with the diagonal ∆ = {(x, y) ∈ R2 : 0 ≤ x = y ≤ ∞} included with infinite multiplicity. An assumption we make at the outset is that the filter function f is tame. Tameness is a metric regularity condition which ensures that the number of points on the persistence diagram is finite, and, in addition, that the number of nontrivial cycles which share identical persistence pairings is also finite. Tame functions satisfy the celebrated stability property w.r.t. the bottleneck distance. Proposition 2.2 (Stability of Persistence Diagrams [12, 16]). Given two tame functions f, g : X → R, W∞ (Dgm(f),Dgm(g)) ≤ ‖f − g‖∞. 
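To make the superlevel-set filtration of Figure 2 concrete, here is a minimal, self-contained sketch (illustrative, not the paper's implementation) that computes the 0th-order superlevel persistence pairs of a function sampled on a 1-D grid, using the standard union-find/elder-rule bookkeeping; for higher dimensions and higher-order features the paper instead uses cubical homology on a grid (Appendix E).

```python
# Minimal sketch (illustrative): 0th-order superlevel-set persistence of a
# function sampled on a 1-D grid, as in Figure 2. Components are born at local
# maxima and die when they merge into an older component (elder rule).
import numpy as np

def superlevel_h0(f):
    f = np.asarray(f, dtype=float)
    n = len(f)
    order = np.argsort(-f)                 # sweep the level r downwards
    parent = np.full(n, -1)                # -1 means "not yet in the superlevel set"
    birth = np.full(n, np.nan)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    pairs = []
    for i in order:
        parent[i] = i
        birth[i] = f[i]                    # a new component is born at level f[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri != rj:
                    if birth[ri] < birth[rj]:
                        ri, rj = rj, ri    # elder rule: the higher birth survives
                    pairs.append((birth[rj], f[i]))   # (birth, death), birth >= death
                    parent[rj] = ri
    pairs.append((f[order[0]], -np.inf))   # the essential component never dies
    return pairs

# Example: two bumps -> two 0-dim features, one of them essential.
x = np.linspace(-4, 4, 401)
print(superlevel_h0(np.exp(-(x - 1.5) ** 2) + 0.6 * np.exp(-(x + 1.5) ** 2)))
```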
The space of persistence diagrams is, in general, challenging to work with. However, the stability property provides a handle on the persistence space through the function space of filter functions. 3 Robust Persistence Diagrams Given Xn = {X1,X2, . . . ,Xn} ⊆ Rd drawn iid from a probability distribution P ∈ M(Rd) with density f , the corresponding persistence diagram can be obtained by considering a filter function φn : Rd → R, constructed from Xn as an approximation to its population analogue, φP : Rd → R, that carries the topological information of P. Commonly used φP include the (i) kernelized density, fσ , (ii) Kernel Distance (KDist), dKσP , and (iii) distance-to-measure (DTM), dP,m, which are defined as: fσ(x) = · ∫ X Kσ(x,y)dP(y) ; dKσP = · ‖µδx − µP‖Hσ ; dP,m(x) = · √ 1 m m ∫ 0 F−1x (u)du, where Fx(t) = P (‖X− x‖2 ≤ t) and σ,m > 0. For these φP, the corresponding empirical analogues, φn, are constructed by replacing P with the empirical measure, Pn =· 1n ∑n i=1 δXi . For example, the empirical analogue of fσ is the familiar kernel density estimator (KDE), fnσ = 1 n ∑n i=1Kσ(·,Xi). While KDE and KDist encode the shape and distribution of mass for supp(P) by approximating the density f (sublevel sets of KDist are rescaled versions of superlevel sets of KDE [13, 32]), DTM, on the other hand, approximates the distance function to supp(P). Since φn is based on Pn, it is sensitive to outliers in Xn, which, in turn affect the persistence diagrams (as illustrated in Figure 1). To this end, in this paper, we propose robust persistence diagrams constructed using superlevel filtrations of a robust density estimator of f , i.e., the filter function, φn is chosen to be a robust density estimator of f . Specifically, we use the robust KDE, fnρ,σ , introduced by [27] as the filter function, which is defined as a solution to the following M-estimation problem: fnρ,σ = · arg inf g∈G ∫ X ρ ( ‖Φσ(y)− g‖Hσ ) dPn(y), (1) where ρ : R≥0 → R≥0 is a robust loss function, and G = Hσ ∩ Dσ = Dσ is the hypothesis class. Observe that when ρ(z) = 12z 2, the unique solution to Eq. (1) is given by the KDE, fnσ . Therefore, a robust KDE is obtained by replacing the square loss with a robust loss, which satisfies the following assumptions. These assumptions, which are similar to those of [27, 39] guarantee the existence and uniqueness (if ρ is convex) of fnρ,σ [27], and are satisfied by most robust loss functions, including the Huber loss, ρ(z) = 12z 2 1 {z ≤ 1} + ( z − 12 ) 1 {z > 1} and the Charbonnier loss, ρ(z) = √ 1 + z2 − 1. (A1) ρ is strictly-increasing and M -Lipschitz, with ρ(0) = 0. (A2) ρ′(x) is continuous and bounded with ρ′(0) = 0 . (A3) ϕ(x) = ρ′(x)/x is bounded, L-Lipschitz and continuous, with ϕ(0) <∞. (A4) ρ′′ exists, with ρ′′ and ϕ nonincreasing. Unlike for squared loss, the solution fnρ,σ cannot be obtained in a closed form. However, it can be shown to be the fixed point of an iterative procedure, referred to as KIRWLS algorithm [27]. The KIRWLS algorithm starts with initial weights {w(0)i }ni=1 such that ∑n i=1 w (0) i = 1, and generates the iterative sequence of estimators {f (k)ρ,σ}k∈N as f (k)ρ,σ = n∑ i=1 w (k−1) i Kσ(·,Xi) ; w (k) i = ϕ(‖Φσ(Xi)− f (k)ρ,σ‖Hσ )∑n j=1 ϕ(‖Φσ(Xj)− f (k) ρ,σ‖Hσ ) . Intuitively, note that if Xi is an outlier, then the corresponding weight wi is small (since ϕ is nonincreasing) and therefore less weight is given to the contribution of Xi in the density estimator. Hence, the weights serve as a measure of inlyingness—smaller (resp. larger) the weights, lesser (resp. 
more) inlying are the points. When Pn is replaced by P, the solution of Eq. (1) is its population analogue, fρ,σ . Although fρ,σ does not admit a closed form solution, it can be shown [27] that there exists a non-negative real-valued function wσ satisfying ∫ Rd wσ(x) dP(x) = 1 such that fρ,σ = ∫ Rd Kσ(·,x)wσ(x)dP(x) = ∫ Rd ϕ(‖Φσ(x)− fρ,σ‖Hσ )∫ Rd ϕ(‖Φσ(y)− fρ,σ‖Hσ )dP(y) Kσ(·,x) dP(x), (2) where wσ acts as a population analogue of the weights in KIRWLS algorithm. To summarize our proposal, the fixed point of the KIRWLS algorithm, which yields the robust density estimator fnρ,σ, is used as the filter function to obtain a robust persistence diagram of Xn. On the computational front, note that fnρ,σ is computationally more complex than the KDE, f n σ , requiring O(n`) computations compared to O(n) of the latter, with ` being the number of iterations required to reach the fixed point of KIRWLS. However, once these filter functions are computed, the corresponding persistence diagrams have similar computational complexity as both require computing superlevel sets, which, in turn, require function evaluations that scale as O(n) for both fnρ,σ and f n σ . 4 Theoretical Analysis of Robust Persistence Diagrams In this section, we investigate the theoretical properties of the proposed robust persistence diagrams. First, in Section 4.1, we examine the sensitivity of persistence diagrams to outlying perturbations through the notion of metric derivative and compare the effect of different filter functions. Next, in Section 4.2, we establish consistency and convergence rates for the robust persistence diagram to its population analogue. These results allow to construct uniform confidence bands for the robust persistence diagram. The proofs of the results are provided in Appendix A. 4.1 A measure of sensitivity of persistence diagrams to outliers The influence function and gross error sensitivity are arguably the most popular tools in robust statistics for diagnosing the sensitivity of an estimator to a single adversarial contamination [23, 26]. Given a statistical functional T : P(X) → (V, ‖·‖V ), which takes an input probability measure P ∈ P(X) on the input space X and produces a statistic P 7→ T (P) in some normed space (V, ‖·‖V ), the influence function of x ∈ X at P is given by the Gâteaux derivative of T at P restricted to the space of signed Borel measures with zero expectation: IF(T ;P,x) =· ∂ ∂ T ( (1− )P + δx )∣∣∣ =0 = lim →0 T ((1− )P + δx)− T (P) , and the gross error sensitivity at P is given by Γ(T ;P) =· supx∈X ‖IF(T ;P,x)‖V . However, a persistence diagram (which is a statistical functional) does not take values in a normed space and therefore the notion of influence functions has to be generalized to metric spaces through the concept of a metric derivative: Given a complete metric space (X, dX) and a curve s : [0, 1]→ X , the metric derivative at = 0 is given by |s′| (0) =· lim →0 1 dX(s(0), s( )). Using this generalization, we have the following definition, which allows to examine the influence an outlier has on the persistence diagram obtained from a filtration. Definition 4.1. Given a probability measure P ∈ P(Rd) and a filter function φP depending on P, the persistence influence of a perturbation x ∈ Rd on Dgm (φP) is defined as Ψ (φP;x) = lim →0 1 W∞ ( Dgm ( φP x ) ,Dgm (φP) ) , where P x = · (1− )P + δx, and the gross-influence is defined as Γ(φP) = supx∈Rd Ψ (φP;x). For > 0, let f ,xρ,σ be the robust KDE associated with the probability measure P x. 
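Before turning to the sensitivity analysis, the following is a minimal numpy sketch of the KIRWLS iteration from Section 3 (under the assumptions stated in the comments, not the authors' code): it computes the robust-KDE weights for the Charbonnier loss, for which ϕ(z) = 1/√(1 + z²), uses the relative change of the empirical risk as the stopping rule, and evaluates the resulting filter function fnρ,σ at query points. The Gaussian kernel, the helper names, and the defaults are illustrative.

```python
# Minimal sketch of the KIRWLS iteration for the robust KDE with the
# Charbonnier loss rho(z) = sqrt(1 + z^2) - 1, so phi(z) = 1 / sqrt(1 + z^2).
# Illustrative only; names and defaults are not taken from the paper's code.
import numpy as np

def gaussian_kernel(X, Y, sigma):
    d = X.shape[1]
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2) ** (d / 2)

def kirwls_weights(X, sigma, tol=1e-6, max_iter=500):
    """Weights w such that f^n_{rho,sigma} = sum_i w_i K_sigma(., X_i)."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    w = np.full(n, 1.0 / n)                          # start from the plain KDE
    risk_old = np.inf
    for _ in range(max_iter):
        # RKHS distances ||Phi_sigma(X_i) - f||_H for the current f
        sq = np.diag(K) - 2.0 * K @ w + w @ K @ w
        dist = np.sqrt(np.maximum(sq, 0.0))
        risk = np.mean(np.sqrt(1.0 + dist ** 2) - 1.0)   # empirical Charbonnier risk
        if abs(risk_old - risk) <= tol * max(abs(risk), 1e-12):
            break
        risk_old = risk
        w = 1.0 / np.sqrt(1.0 + dist ** 2)           # phi(dist), then normalise
        w /= w.sum()
    return w

def robust_kde(query, X, w, sigma):
    """Evaluate the robust KDE (the filter function) at query points."""
    return gaussian_kernel(query, X, sigma) @ w

# Outliers receive small weights, so they barely inflate the filter function.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)), rng.uniform(5, 8, size=(10, 2))])
w = kirwls_weights(X, sigma=0.5)
print(w[:200].mean(), w[200:].mean())   # inliers get (much) larger weights
```

In the experiments of Section 5, such a filter function is evaluated on a grid and passed to a cubical persistence computation.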
The following result (proved in Appendix A.1) bounds the persistence influence for the persistence diagram induced by the filter function fρ,σ , which is the population analogue of robust KDE. Theorem 4.2. For a loss ρ satisfying (A1)–(A3), and σ > 0, if lim →0 1 ( f ,xρ,σ − fρ,σ ) exists, then the persistence influence of x ∈ Rd on Dgm (fρ,σ) satisfies Ψ (fρ,σ;x) ≤ ‖Kσ‖ 1 2 ∞ ρ ′ ( ‖Φσ(x)− fρ,σ‖Hσ )(∫ Rd ζ ( ‖Φσ(y)− fρ,σ‖Hσ ) dP(y) )−1 , (3) where ζ(z) = ϕ(z)− zϕ′(z). Remark 4.3. We make the following observations from Theorem 4.2. (i) Choosing ρ(z) = 12z 2 and noting that ϕ(z) = ρ′′(z) = 1, a similar analysis, as in the proof of Theorem 4.2, yields a bound for the persistence influence of the KDE as Ψ (fσ;x) ≤ σ−d/2 ‖Φσ(x)− fσ‖Hσ . On the other hand, for robust loss functions, the term in Eq. (3) involving ρ′ is bounded because of (A2), making them less sensitive to very large perturbations. In fact, for nonincreasing ϕ, it can be shown (see Appendix C) that Ψ (fρ,σ;x) ≤ σ−d/2wσ(x) ‖Φσ(x)− fρ,σ‖Hσ , where, in contrast to KDE, the measure of inlyingness, wσ , weighs down extreme outliers. (ii) For the generalized Charbonnier loss (a robust loss function), given by ρ(z) = ( 1 + z2 )α/2 − 1 for 1 ≤ α < 2, the persistence influence satisfies Ψ (fρ,σ;x) ≤ σ−d/2 ( 1 + ‖Φσ(x)− fρ,σ‖2Hσ )α−1 2 ( 1 + ∫ Rd ‖Φσ(y)− fρ,σ‖2Hσ dP(y) ) 1−α 2 . Note that for α = 1, the bound on the persistence influence Ψ (fρ,σ;x) does not depend on how extreme the outlier x is. Similarly, for the Cauchy loss, given by ρ(z) = log(1 + z2), we have Ψ (fρ,σ;x) ≤ σ−d/2 ( 1 + ∫ Rd ‖Φσ(y)− fρ,σ‖2Hσ dP(y) ) . This shows that for large perturbations, the gross error sensitivity for the Cauchy and Charbonnier losses are far more stable than that of KDE. This behavior is also empirically illustrated in Figure 3. The experiment is detailed in Appendix C. (iii) For the DTM function, it can be shown that Ψ (dP,m;x) ≤ 2√ m sup {∣∣∣f(x)− ∫ Rd f(y)dP(y) ∣∣∣ : ‖∇f‖L2(P) ≤ 1} . (4) While dP,m cannot be compared to both fσ and fρ,σ, as it captures topological information at a different scale, determined by m, we point out that when supp(P) is compact, Ψ (dP,m;x) is not guaranteed to be bounded, unlike in Ψ (fρ,σ;x). We refer the reader to Appendix C for more details. It follows from Remark 4.3 that as σ → 0, the persistence influence of both the KDE and robust KDE behave asO(σ−d), showing that the robustness of robust persistence diagrams manifests only in cases where σ > 0. However, robustness alone has no bearing if the robust persistence diagram and the persistence diagram from the KDE are fundamentally different, i.e., they estimate different quantities as σ → 0. The following result (proved in Appendix A.2) shows that as σ → 0, Dgm (fρ,σ) recovers the same information as that in Dgm (fσ), which is same as Dgm (f), where f is the density of P. Theorem 4.4. For a strictly-convex loss ρ satisfying (A1)–(A4), and σ > 0, suppose P ∈M(Rd) with density f , and fρ,σ is the robust KDE. Then W∞ (Dgm (fρ,σ) ,Dgm (f))→ 0 as σ → 0. Suppose P = (1− π)P0 + πQ, where P0 corresponds to the true signal which we are interested in studying, and Q manifests as some ambient noise with 0 < π < 12 . In light of Theorem 4.4, by letting σ → 0, along with the topological features of P0, we are also capturing the topological features of Q, which may obfuscate any statistical inference made using the persistence diagrams. In a manner, choosing σ > 0 suppresses the noise in the resulting persistence diagrams, thereby making them more stable. 
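The persistence influence of Definition 4.1 can also be estimated numerically by a finite difference in ε, in the spirit of the empirical comparison referenced above (Figure 3). The sketch below (illustrative, not the authors' implementation) does this for the plain KDE filter on a 2-D grid, assuming a gudhi version whose CubicalComplex accepts a NumPy array of top-dimensional cell values; superlevel diagrams of φ are obtained as sublevel diagrams of −φ, and the helper names, grid, and constants are assumptions. For the robust KDE one would instead recompute fε,xρ,σ from the correspondingly reweighted sample.

```python
# Hedged sketch: finite-difference approximation of the persistence influence
# (Definition 4.1) for the KDE filter function on a 2-D grid, using cubical
# persistence from gudhi. Superlevel diagrams of phi are obtained as sublevel
# diagrams of -phi. Illustrative only; not the authors' implementation.
import numpy as np
import gudhi

def kde_on_grid(grid, X, sigma, weights=None):
    if weights is None:
        weights = np.full(len(X), 1.0 / len(X))
    sq = ((grid[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return (np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)) @ weights

def superlevel_dgm(values_2d, dim=0):
    cc = gudhi.CubicalComplex(top_dimensional_cells=-values_2d)  # sublevel of -phi
    cc.persistence()
    dgm = cc.persistence_intervals_in_dimension(dim)
    return dgm[np.isfinite(dgm).all(axis=1)]         # drop essential (infinite) bars

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
xs = np.linspace(-6, 6, 80)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
sigma, eps, x_out = 0.4, 1e-2, np.array([[5.0, 5.0]])

phi   = kde_on_grid(grid, X, sigma).reshape(80, 80)
# KDE of the contaminated measure (1 - eps) * P_n + eps * delta_x
phi_e = (1 - eps) * phi + eps * kde_on_grid(grid, x_out, sigma).reshape(80, 80)

influence = gudhi.bottleneck_distance(superlevel_dgm(phi), superlevel_dgm(phi_e)) / eps
print("approximate persistence influence of x_out on the KDE diagram:", influence)
```

The resulting estimate depends on the bandwidth σ, consistent with the observation above that the robustness of the robust KDE only manifests for σ > 0.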
On a similar note, the authors in [21] state that for a suitable bandwidth σ > 0, the level sets of fσ carry the same topological information as supp(P), despite the fact that some subtle details in f may be omitted. In what follows, we consider the setting where robust persistence diagrams are constructed for a fixed σ > 0. 4.2 Statistical properties of robust persistence diagrams from samples Suppose Dgm ( fnρ,σ ) is the robust persistence diagram obtained from the robust KDE on a sample Xn and Dgm (fρ,σ) is its population analogue obtained from fρ,σ. The following result (proved in Appendix A.3) establishes the consistency of Dgm ( fnρ,σ ) in the W∞ metric. Theorem 4.5. For convex loss ρ satisfying (A1)–(A4), and fixed σ > 0, suppose Xn is observed iid from a distribution P∈M(Rd) with density f . Then W∞ ( Dgm ( fnρ,σ ) ,Dgm (fρ,σ) ) p→ 0 as n→∞. We present the convergence rate of the above convergence in Theorem 4.7, which depends on the smoothness of Hσ. In a similar spirit to [21], this result paves the way for constructing uniform confidence bands. Before we present the result, we first introduce the notion of entropy numbers associated with an RKHS. Definition 4.6 (Entropy Number). Given a metric space (T, d) the nth entropy number is defined as en(T, d) = · inf > 0 : ∃ {t1, t2, . . . , t2n−1} ⊂ T such that T ⊂ 2n−1⋃ i=1 Bd(ti, ) . Further, if (V, ‖·‖V ) and (W, ‖·‖W ) are two normed spaces and L : V → W is a bounded, linear operator, then en(L) = en(L : V →W ) =· en (L(BV ), ‖·‖W ), where BV is a unit ball in V . Loosely speaking, entropy numbers are related to the eigenvalues of the integral operator associated with the kernel Kσ , and measure the capacity of the RKHS in approximating functions in L2(Rd). In our context, the entropy numbers will provide useful bounds on the covering numbers of sets in the hypothesis class G. We refer the reader to [35] for more details. With this background, the following theorem (proved in Appendix A.4) provides a method for constructing uniform confidence bands for the persistence diagram constructed using the robust KDE on Xn. Theorem 4.7. For convex loss ρ satisfying (A1)–(A4), and fixed σ > 0, suppose the kernel Kσ satisfies en (id : Hσ → L∞(X)) ≤ aσn− 1 2p , where aσ > 1, 0 < p < 1 and X ⊂ Rd. Then, for a fixed confidence level 0 < α < 1, sup P∈M(X) P⊗n { W∞ ( Dgm ( fnρ,σ ) ,Dgm (fρ,σ) ) > 2M ‖Kσ‖ 1 2 ∞ µ ( ξ(n, p) + δ √ 2 log (1/α) n )} ≤ α, where ξ(n, p) is given by ξ(n, p) = γ apσ (1−2p) · 1√ n if 0 < p < 1/2, γC √ aσ · log(n)√n if p = 1/2, γ p √ aσ 2p−1 · 1 n1/4p if 1/2 < p < 1, for fixed constants γ > 12√ log 2 , C > 3− log(9aσ) and µ = 2 min { ϕ(2 ‖Kσ‖ 1 2 ∞), ρ ′′(2 ‖Kσ‖ 1 2 ∞) } . Remark 4.8. We highlight some salient observations from Theorem 4.7. (i) If diam(X) = r, and the kernel Kσ is m-times differentiable, then from [35, Theorem 6.26], the entropy numbers associated with Kσ satisfy en (id : Hσ → L∞(X)) ≤ crmn− m d . In light of Theorem 4.7, for p = d2m , we can make two important observations. First, as the dimension of the input space X increases, we have that the rate of convergence decreases; which is a direct consequence from the curse of dimensionality. Second, for a fixed dimension of the input space, the parameter p in Theorem 4.7 can be understood to be inversely proportional to the smoothness of the kernel. Specifically, as the smoothness of the kernel increases, the rate of convergence is faster, and we obtain sharper confidence bands. This makes a case for employing smoother kernels. 
(ii) A similar result is obtained in [21, Lemma 8] for persistence diagrams from the KDE, with a convergence rate Op(n−1/2), where the proof relies on a simple application of Hoeffding’s inequality, unlike the sophisticated tools the proof of Theorem 4.7 warrants for the robust KDE. 5 Experiments We illustrate the performance of robust persistence diagrams in machine learning applications through synthetic and real-world experiments.1 In all the experiments, the kernel bandwidth σ is chosen as the median distance of each xi ∈ Xn to its kth–nearest neighbour using the Gaussian kernel with the Hampel loss (similar setting as in [27])—we denote this bandwidth as σ(k). Since DTM is closely related to the k-NN density estimator [6], we choose the DTM smoothing parameter as m(k) = k/n. Additionally, the KIRWLS algorithm is run until the relative change of empirical risk < 10−6. Runtime Analysis. For n = 1000, Xn is sampled from a torus inside [0, 2]3. For each grid resolution α ∈ {0.04, 0.06, 0.08, 0.10}, the robust persistence diagram Dgm ( fnρ,σ ) and the KDE persistence diagram Dgm (fnσ ) are constructed from the superlevel filtration of cubical homology. The total time taken to compute the persistence diagrams is reported in Table 1. The results demonstrate that the computational bottleneck is the persistent homology pipeline, and not the KIRWLS for fnρ,σ . Bottleneck Simulation. The objective of this experiment is to assess how the robust KDE persistence diagram compares to the KDE persistence diagram in recovering the topological features of the underlying signal. Xn is observed uniformly from two circles and Ym is sampled uniformly from the enclosing square such that m = 200 and m/n = π ∈ {20%, 30%, 40%}—shown in Figure 4 (a). For each noise level π, and for each of N = 100 realizations of Xn and Ym, the robust persistence diagram Dρ,σ and the KDE persistence diagram Dσ are constructed from the noisy samples Xn∪Ym. In addition, we compute the KDE persistence diagram D#σ on Xn alone as a proxy for the target persistence diagram one would obtain in the absence of any contamination. The bandwidth σ(k) > 0 is chosen for k = 5. For each realization i, bottleneck distances Ui = W∞ ( Dρ,σ,D#σ ) and Vi = W∞ ( Dσ,D#σ ) are computed for 1st-order homological features. The boxplots and p-values for the one-sided hypothesis testH0 : U−V = 0 vs. H1 : U−V < 0 are reported in Figures 4 (b, c, d). The results demonstrate that the robust persistence diagram is noticeably better in recovering the true homological features, and in fact demonstrates superior performance when the noise levels are higher. Spectral Clustering using Persistent Homology. We perform a variant of the six-class benchmark experiment from [1, Section 6.1]. The data comprises of six different 3D “objects”: cube, circle, sphere, 3clusters, 3clustersIn3clusters, and torus. 25 point clouds are sampled from each object with additive Gaussian noise (SD= 0.1), and ambient Matérn cluster noise. For each point cloud, Xn, the robust persistence diagram Dgm ( fnρ,σ ) and the persistence diagram Dgm (dXn), from the distance function, are constructed. Additionally, Dgm (dXn) is transformed to the persistence image Img (dXn , h) for h = 0.1. Note that Dgm ( fnρ,σ ) is a robust diagram while Img (dXn , h) is a stable vectorization of a non-robust diagram [1]. 
For each homological order {H0, H1, H2}, distance 1https://github.com/sidv23/robust-PDs matrices {∆0,∆1,∆2} are computed: Wp metric for Dgm (fρ,σ), and Lp metric for Img (dXn , h) with p ∈ {1, 2,∞}, and spectral clustering is performed on the resulting distance-matrices. The quality of the clustering is assessed using the rand-index. The results, reported in Table 2, evidence the superiority of employing inherently robust persistence diagrams in contrast to a robust vectorization of an inherently noisy persistence diagram. MPEG7. In this experiment, we examine the performance of persistence diagrams in a classification task on [28]. For simplicity, we only consider five classes: beetle, bone, spring, deer and horse. We first extract the boundary of the images using a Laplace convolution, and sample Xn uniformly from the boundary of each image, adding uniform noise (π = 15%) in the enclosing region. Persistence diagrams Dgm (fnσ ) and Dgm ( fnρ,σ ) from the KDE and robust KDE are constructed. In addition, owing to their ability to capture nuanced multi-scale features, we also construct Dgm (dn,m) from the DTM filtration. The smoothing parameters σ(k) and m(k) are chosen as earlier for k = 5. The persistence diagrams are normalized to have a max persistence max{|d− b| = 1 : (b, d) ∈ Dgm(φ)}, and then vectorized as persistence images, Img (fnσ , h), Img ( fnρ,σ, h ) , and Img (dn,m, h) for various bandwidths h. A linear SVM classifier is then trained on the resulting persistence images. In the first experiment we only consider the first three classes, and in the second experiment we consider all five classes. The results for the classification error, shown in Figure 5, demonstrate the superiority of the proposed method. We refer the reader to Appendix D for additional experiments. 6 Conclusion & Discussion In this paper, we proposed a statistically consistent robust persistent diagram using RKHS-based robust KDE as the filter function. By generalizing the notion of influence function to the space of persistence diagrams, we mathematically and empirically demonstrated the robustness of the proposed method to that of persistence diagrams induced by other filter functions such as KDE. Through numerical experiments, we demonstrated the advantage of using robust persistence diagrams in machine learning applications. We would like to highlight that most of the theoretical results of this paper crucially hinge on the loss function being convex. As a future direction, we would like to generalize the current results to non-convex loss functions, and explore robust persistence diagrams induced other types of robust density estimators, which could potentially yield more robust persistence diagrams. Another important direction we intend to explore is to enhance the computational efficiency of the proposed approach using coresets, as in [7], and/or using weighted Rips filtrations, as in [2]. We provide a brief discussion in Appendix E. Broader Impact Over the last decade, Topological Data Analysis has become an important tool for extracting geometric and topological information from data, and its applications have been far reaching. For example, it has been used successfully in the study the fragile X-syndrome, to discover traumatic brain injuries, and has also become an important tool in the study of protein structure. In astrophysics, it has aided the study of cosmic microwave background, and the discovery of cosmic voids and filamental structures in cosmological data. 
With a continual increase in its adoption in data analysis, it has become important to understand the limitations of using persistent homology in machine learning applications. As real-world data is often riddled with measurement errors and other forms of noise, in this work, we examine the sensitivity of persistence diagrams to such noise, and provide methods to mitigate the effect of this noise, so as to enable reliable topological inference. Acknowledgments and Disclosure of Funding The authors would like to thank the anonymous reviewers for their helpful comments and constructive feedback. Siddharth Vishwanath and Bharath Sriperumbudur are supported in part by NSF DMS CAREER Award 1945396. Kenji Fukumizu is supported in part by JST CREST Grant Number JPMJCR15D3, Japan. Satoshi Kuriki is partially supported by JSPS KAKENHI Grant Number JP16H02792, Japan.
1. What is the focus and contribution of the paper regarding persistent homology?
2. What are the strengths of the proposed approach, particularly in terms of its robustness and theoretical guarantees?
3. What are the weaknesses of the paper, especially regarding computational complexity and empirical experiments?
4. Do you have any concerns about the applicability of the proposed method to different data types or domains?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions I thank the authors for the responses (especially the new experimental results). These help address some of my concerns about the empirical contributions of this paper. Nevertheless, my score remains the same (and I remain very positive about the paper). In recent years, persistent homology has been used in many applications to summarize / characterize different types of data. The resulting persistence diagram summary comes with certain stability guarantees w.r.t. certain perturbations, although in practice the noise / perturbation often goes beyond the allowed model. Hence an important question is to develop robust persistence summaries (in terms of robust filtration functions to induce the persistence). This paper proposes such a summary for a specific, but arguably very common, setting where the input data is a set of points sampled from a distribution. (Note that persistent homology can be, and has been, applied to many other domains or data types, where the specific filtration introduced in this paper may not be applicable any more. Nevertheless, the point-set setting is one of the most important settings in data analysis.) In particular, they propose a filtration based on a robust KDE of the underlying density function. While on the surface this may look similar to several prior approaches, such as those based on the standard KDE or on the DTM (distance to measure), the paper shows that using the robust estimator has better theoretical guarantees (in terms of sensitivity to noise) as well as better empirical performance. The theoretical results are established based on a generalization of the so-called influence function. While there are empirical results demonstrating that the new robust persistence diagrams are more noise-resilient, I view the development of this robust-KDE-based filtration, and the many theoretical results / statistical analyses of the resulting robust persistence diagrams, as the key contribution of this paper. Strengths + Having persistence-based summaries that are robust against various forms of statistical noise is very important for the practical usage of persistence summaries. This paper tackles this problem for a specific but important setting. + While the robust-KDE-induced filtration seems similar to previous such filtrations, the paper provides a rather comprehensive range of theoretical / statistical analyses of the resulting robust persistence diagrams, and shows that these robust persistence diagrams have better robustness / convergence properties. + The paper also presents some empirical results with synthetic data to show that the new diagrams are indeed more noise resilient. Weaknesses - The new robust KDE has better properties; however, it is also more expensive to compute (compared to the KDE or DTM). The paper provides a theoretical time-complexity comparison, but it would be good to also report the empirical time needed in the experiments. - In all the empirical experiments, either the data is synthetic or the noise is artificial. Furthermore, it appears that the added noise is usually uniform noise. It would be good to experiment on more realistic datasets and/or other types of noise.
NIPS
Title Robust Persistence Diagrams using Reproducing Kernels Abstract Persistent homology has become an important tool for extracting geometric and topological features from data, whose multi-scale features are summarized in a persistence diagram. From a statistical perspective, however, persistence diagrams are very sensitive to perturbations in the input space. In this work, we develop a framework for constructing robust persistence diagrams from superlevel filtrations of robust density estimators constructed using reproducing kernels. Using an analogue of the influence function on the space of persistence diagrams, we establish the proposed framework to be less sensitive to outliers. The robust persistence diagrams are shown to be consistent estimators in bottleneck distance, with the convergence rate controlled by the smoothness of the kernel—this in turn allows us to construct uniform confidence bands in the space of persistence diagrams. Finally, we demonstrate the superiority of the proposed approach on benchmark datasets. 1 Introduction Given a set of points Xn = {X1,X2, . . . ,Xn} observed from a probability distribution P on an input space X ⊆ Rd, understanding the shape of Xn sheds important insights on low-dimensional geometric and topological features which underlie P, and this question has received increasing attention in the past few decades. To this end, Topological Data Analysis (TDA), with a special emphasis on persistent homology [20, 44], has become a mainstay for extracting the shape information from data. In statistics and machine-learning, persistent homology has facilitated the development of novel methodology (e.g., [8, 11, 14]), which has been widely used in a variety of applications dealing with massive, unconventional forms of data (e.g., [5, 22, 43]). Informally speaking, persistent homology detects the presence of topological features across a range of resolutions by examining a nested sequence of spaces, typically referred to as a filtration. The filtration encodes the birth and death of topological features as the resolution varies, and is presented in the form of a concise representation—a persistence diagram or barcode. In the context of dataanalysis, there are two different methods for obtaining filtrations. The first is computed from the pairwise Euclidean distances of Xn, such as the Vietoris-Rips, Čech, and Alpha filtrations [20]. The second approach is based on choosing a function on X that reflects the density of P (or its approximation based on Xn), and, then, constructing a filtration. While the two approaches explore the topological features governing P in different ways, in essence, they generate similar insights. ∗Authors arranged alphabetically 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Despite obvious advantages, the adoption of persistent homology in mainstream statistical methodology is still limited. An important limitation among others, in the statistical context, is that the resulting persistent homology is highly sensitive to outliers. While the stability results of [12, 16] guarantee that small perturbations on all of Xn induce only small changes in the resulting persistence diagrams, a more pathological issue arises when a small fraction of Xn is subject to very large perturbations. Figure 1 illustrates how inference from persistence diagrams can change dramatically when Xn is contaminated with only a few outliers. 
Another challenge is the mathematical difficulty in performing sensitivity analysis in a formal statistical context. Since the space of persistence diagrams has an unusual mathematical structure, it falls victim to issues such as non-uniqueness of Fréchet means and unbounded curvature of geodesics [18, 29, 36]. With this background, the central objective of this paper is to develop outlier robust persistence diagrams, develop a framework for examining the sensitivity of the resulting persistence diagrams to noise, and establish statistical convergence guarantees. To the best of our knowledge, not much work has been carried out in this direction. Bendich et al. [4] construct persistence diagrams from Rips filtrations on Xn by replacing the Euclidean distance with diffusion distance, Brécheteau and Levrard [7] use a coreset of Xn for computing persistence diagrams from the distance-to-measure, and Anai et al. [2] use weighted-Rips filtrations on Xn to construct more stable persistent diagrams. However, no sensitivity analysis of the resultant diagrams are carried out in [2, 4, 7] to demonstrate their robustness. Contributions. The main contributions of this work are threefold. 1) We propose robust persistence diagrams constructed from filtrations induced by an RKHS-based robust KDE (kernel density estimator) [27] of the underlying density function of P (Section 3). While this idea of inducing filtrations by an appropriate function—[13, 21, 32] use KDE, distance-to-measure (DTM) and kernel distance (KDist), respectively—has already been explored, we show the corresponding persistence diagrams to be less robust compared to our proposal. 2) In Section 4.1, we generalize the notions of influence function and gross error sensitivity—which are usually defined for normed spaces—to the space of persistence diagrams, which lack the vector space structure. Using these generalized notions, we investigate the sensitivity of persistence diagrams constructed from filtrations induced by different functions (e.g., KDE, robust KDE, DTM) and demonstrate the robustness of the proposed method, both mathematically (Remark 4.3) and numerically (Section 5). 3) We establish the statistical consistency of the proposed robust persistence diagrams and provide uniform confidence bands by deriving exponential concentration bounds for the uniform deviation of the robust KDE (Section 4.2). Definitions and Notations. For a metric space X, the ball of radius r centered at x ∈ X is denoted by BX(x, r). P(Rd) is the set of all Borel probability measures on Rd, andM(Rd) denotes the set of probability measures on Rd with compact support and tame density function (See Section 2). δx denotes a Dirac measure at x. For bandwidth σ > 0, Hσ denotes a reproducing kernel Hilbert space (RKHS) withKσ : Rd × Rd → R as its reproducing kernel. We denote by Φσ(x) = Kσ(·,x) ∈ Hσ , the feature map associated withKσ , which embeds x ∈ Rd into Φσ(x) ∈ Hσ . Throughout this paper, we assume that Kσ is radial, i.e., Kσ(x,y) = σ−dψ(‖x− y‖2/σ) with ψ(‖ · ‖2) being a pdf on Rd, where ‖x‖22 = ∑d i=1 x 2 i for x = (x1, . . . , xd) ∈ Rd. Some common examples include the Gaussian, Matérn and inverse multiquadric kernels. We denote ‖Kσ‖∞ = · supx,y∈Rd Kσ(x,y) = σ−dψ(0). Without loss of generality, we assume ψ(0) = 1. For P ∈ P(Rd), µP =· ∫ Kσ(·,y)dP(y) ∈ Hσ is called the mean embedding of P, and Dσ =· { µP : P∈P(Rd) } is the space of mean embeddings [30]. 
2 Persistent Homology: Preliminaries We present the necessary background on persistent homology for completeness. See [9, 42] for a comprehensive introduction. Persistent Homology. Let φ : X → R≥0 be a function on the metric space (X, d). At level r > 0, the sublevel set Xr = φ−1 ([0, r]) = {x ∈ X : φ(x) ≤ r} encodes the topological information in X. For r < s, the sublevel sets are nested, i.e., Xr ⊆ Xs. Thus {Xr}0≤r<∞ is a nested sequence of topological spaces, called a filtration, denoted by Sub(φ), and φ is called the filter function. As the level r varies, the evolution of the topology is captured in the filtration. Roughly speaking, new cycles (i.e., connected components, loops, voids and higher order analogues) can appear or existing cycles can merge. A new k-dimensional feature is said to be born at b ∈ R when a nontrivial k-cycle appears in Xb. The same k-cycle dies at level d > b when it disappears in all Xd+ for > 0. Persistent homology is an algebraic module which tracks the persistence pairs (b, d) of births b and deaths d with multiplicity µ across the entire filtration Sub(φ). Mutatis mutandis, a similar notion holds for superlevel sets Xr = φ−1 ([r,∞)), inducing the filtration Sup(φ). For r < s, the inclusion Xr ⊇ Xs is reversed and a cycle born at b dies at a level d < b, resulting in the persistence pair (d, b) instead. Figure 2 shows 3 connected components in the superlevel set for r = 8. The components were born as r swept through the blue points, and die when r approaches the red points. In practice, the filtrations are computed on a grid representation -4 -2 0 2 4 0 5 10 15 0 5 8 10 15 0 5 8 10 15 Superlevel Set for r=8 Filter Function φ(x) 0 5 10 15 0 5 10 15 0th-Persistence Diagram ← Death → ← B irt h → Figure 2: Dgm (Sup(φ)) for φ : R→ R. of the underlying space using cubical homology. We refer the reader to Appendix E for more details. Persistence Diagrams. By collecting all persistence pairs, the persistent homology features are concisely represented as a persistence diagram Dgm (Sub(φ)) =· { (b, d) ∈ R2 : 0 ≤ b < d ≤ ∞ } . A similar definition carries over to Dgm (Sup(φ)), using (d, b) instead. See Figure 2 for an illustration. When the context is clear, we drop the reference to the filtration and simply write Dgm(φ). The kth persistence diagram is the subset of Dgm(φ) corresponding to the k-dimensional features. The space of persistence diagrams is the locally-finite multiset of points on Ω = {(x, y) : 0 ≤ x < y ≤ ∞}, endowed with the family of p-Wasserstein metrics Wp, for 1 ≤ p ≤ ∞. We refer the reader to [18, 19] for a thorough introduction. W∞ is commonly referred to as the bottleneck distance. Definition 2.1. Given two persistence diagrams D1 and D2, the bottleneck distance is given by W∞ (D1, D2) = inf γ∈Γ sup p∈D1∪∆ ‖p− γ(p)‖∞ , where Γ = {γ : D1 ∪∆→ D2 ∪∆} is the set of all bijections from D1 to D2, including the diagonal ∆ = { (x, y) ∈ R2 : 0 ≤ x = y ≤ ∞ } with infinite multiplicity. An assumption we make at the outset is that the filter function f is tame. Tameness is a metric regularity condition which ensures that the number of points on the persistence diagrams are finite, and, in addition, the number of nontrivial cycles which share identical persistence pairings are also finite. Tame functions satisfy the celebrated stability property w.r.t. the bottleneck distance. Proposition 2.2 (Stability of Persistence Diagrams [12, 16]). Given two tame functions f, g : X→ R, W∞ (Dgm(f),Dgm(g)) ≤ ‖f − g‖∞ . 
The space of persistence diagrams is, in general, challenging to work with. However, the stability property provides a handle on the persistence space through the function space of filter functions. 3 Robust Persistence Diagrams Given Xn = {X1,X2, . . . ,Xn} ⊆ Rd drawn iid from a probability distribution P ∈ M(Rd) with density f , the corresponding persistence diagram can be obtained by considering a filter function φn : Rd → R, constructed from Xn as an approximation to its population analogue, φP : Rd → R, that carries the topological information of P. Commonly used φP include the (i) kernelized density, fσ , (ii) Kernel Distance (KDist), dKσP , and (iii) distance-to-measure (DTM), dP,m, which are defined as: fσ(x) = · ∫ X Kσ(x,y)dP(y) ; dKσP = · ‖µδx − µP‖Hσ ; dP,m(x) = · √ 1 m m ∫ 0 F−1x (u)du, where Fx(t) = P (‖X− x‖2 ≤ t) and σ,m > 0. For these φP, the corresponding empirical analogues, φn, are constructed by replacing P with the empirical measure, Pn =· 1n ∑n i=1 δXi . For example, the empirical analogue of fσ is the familiar kernel density estimator (KDE), fnσ = 1 n ∑n i=1Kσ(·,Xi). While KDE and KDist encode the shape and distribution of mass for supp(P) by approximating the density f (sublevel sets of KDist are rescaled versions of superlevel sets of KDE [13, 32]), DTM, on the other hand, approximates the distance function to supp(P). Since φn is based on Pn, it is sensitive to outliers in Xn, which, in turn affect the persistence diagrams (as illustrated in Figure 1). To this end, in this paper, we propose robust persistence diagrams constructed using superlevel filtrations of a robust density estimator of f , i.e., the filter function, φn is chosen to be a robust density estimator of f . Specifically, we use the robust KDE, fnρ,σ , introduced by [27] as the filter function, which is defined as a solution to the following M-estimation problem: fnρ,σ = · arg inf g∈G ∫ X ρ ( ‖Φσ(y)− g‖Hσ ) dPn(y), (1) where ρ : R≥0 → R≥0 is a robust loss function, and G = Hσ ∩ Dσ = Dσ is the hypothesis class. Observe that when ρ(z) = 12z 2, the unique solution to Eq. (1) is given by the KDE, fnσ . Therefore, a robust KDE is obtained by replacing the square loss with a robust loss, which satisfies the following assumptions. These assumptions, which are similar to those of [27, 39] guarantee the existence and uniqueness (if ρ is convex) of fnρ,σ [27], and are satisfied by most robust loss functions, including the Huber loss, ρ(z) = 12z 2 1 {z ≤ 1} + ( z − 12 ) 1 {z > 1} and the Charbonnier loss, ρ(z) = √ 1 + z2 − 1. (A1) ρ is strictly-increasing and M -Lipschitz, with ρ(0) = 0. (A2) ρ′(x) is continuous and bounded with ρ′(0) = 0 . (A3) ϕ(x) = ρ′(x)/x is bounded, L-Lipschitz and continuous, with ϕ(0) <∞. (A4) ρ′′ exists, with ρ′′ and ϕ nonincreasing. Unlike for squared loss, the solution fnρ,σ cannot be obtained in a closed form. However, it can be shown to be the fixed point of an iterative procedure, referred to as KIRWLS algorithm [27]. The KIRWLS algorithm starts with initial weights {w(0)i }ni=1 such that ∑n i=1 w (0) i = 1, and generates the iterative sequence of estimators {f (k)ρ,σ}k∈N as f (k)ρ,σ = n∑ i=1 w (k−1) i Kσ(·,Xi) ; w (k) i = ϕ(‖Φσ(Xi)− f (k)ρ,σ‖Hσ )∑n j=1 ϕ(‖Φσ(Xj)− f (k) ρ,σ‖Hσ ) . Intuitively, note that if Xi is an outlier, then the corresponding weight wi is small (since ϕ is nonincreasing) and therefore less weight is given to the contribution of Xi in the density estimator. Hence, the weights serve as a measure of inlyingness—smaller (resp. larger) the weights, lesser (resp. 
more) inlying are the points. When Pn is replaced by P, the solution of Eq. (1) is its population analogue, fρ,σ . Although fρ,σ does not admit a closed form solution, it can be shown [27] that there exists a non-negative real-valued function wσ satisfying ∫ Rd wσ(x) dP(x) = 1 such that fρ,σ = ∫ Rd Kσ(·,x)wσ(x)dP(x) = ∫ Rd ϕ(‖Φσ(x)− fρ,σ‖Hσ )∫ Rd ϕ(‖Φσ(y)− fρ,σ‖Hσ )dP(y) Kσ(·,x) dP(x), (2) where wσ acts as a population analogue of the weights in KIRWLS algorithm. To summarize our proposal, the fixed point of the KIRWLS algorithm, which yields the robust density estimator fnρ,σ, is used as the filter function to obtain a robust persistence diagram of Xn. On the computational front, note that fnρ,σ is computationally more complex than the KDE, f n σ , requiring O(n`) computations compared to O(n) of the latter, with ` being the number of iterations required to reach the fixed point of KIRWLS. However, once these filter functions are computed, the corresponding persistence diagrams have similar computational complexity as both require computing superlevel sets, which, in turn, require function evaluations that scale as O(n) for both fnρ,σ and f n σ . 4 Theoretical Analysis of Robust Persistence Diagrams In this section, we investigate the theoretical properties of the proposed robust persistence diagrams. First, in Section 4.1, we examine the sensitivity of persistence diagrams to outlying perturbations through the notion of metric derivative and compare the effect of different filter functions. Next, in Section 4.2, we establish consistency and convergence rates for the robust persistence diagram to its population analogue. These results allow to construct uniform confidence bands for the robust persistence diagram. The proofs of the results are provided in Appendix A. 4.1 A measure of sensitivity of persistence diagrams to outliers The influence function and gross error sensitivity are arguably the most popular tools in robust statistics for diagnosing the sensitivity of an estimator to a single adversarial contamination [23, 26]. Given a statistical functional T : P(X) → (V, ‖·‖V ), which takes an input probability measure P ∈ P(X) on the input space X and produces a statistic P 7→ T (P) in some normed space (V, ‖·‖V ), the influence function of x ∈ X at P is given by the Gâteaux derivative of T at P restricted to the space of signed Borel measures with zero expectation: IF(T ;P,x) =· ∂ ∂ T ( (1− )P + δx )∣∣∣ =0 = lim →0 T ((1− )P + δx)− T (P) , and the gross error sensitivity at P is given by Γ(T ;P) =· supx∈X ‖IF(T ;P,x)‖V . However, a persistence diagram (which is a statistical functional) does not take values in a normed space and therefore the notion of influence functions has to be generalized to metric spaces through the concept of a metric derivative: Given a complete metric space (X, dX) and a curve s : [0, 1]→ X , the metric derivative at = 0 is given by |s′| (0) =· lim →0 1 dX(s(0), s( )). Using this generalization, we have the following definition, which allows to examine the influence an outlier has on the persistence diagram obtained from a filtration. Definition 4.1. Given a probability measure P ∈ P(Rd) and a filter function φP depending on P, the persistence influence of a perturbation x ∈ Rd on Dgm (φP) is defined as Ψ (φP;x) = lim →0 1 W∞ ( Dgm ( φP x ) ,Dgm (φP) ) , where P x = · (1− )P + δx, and the gross-influence is defined as Γ(φP) = supx∈Rd Ψ (φP;x). For > 0, let f ,xρ,σ be the robust KDE associated with the probability measure P x. 
The following result (proved in Appendix A.1) bounds the persistence influence for the persistence diagram induced by the filter function fρ,σ , which is the population analogue of robust KDE. Theorem 4.2. For a loss ρ satisfying (A1)–(A3), and σ > 0, if lim →0 1 ( f ,xρ,σ − fρ,σ ) exists, then the persistence influence of x ∈ Rd on Dgm (fρ,σ) satisfies Ψ (fρ,σ;x) ≤ ‖Kσ‖ 1 2 ∞ ρ ′ ( ‖Φσ(x)− fρ,σ‖Hσ )(∫ Rd ζ ( ‖Φσ(y)− fρ,σ‖Hσ ) dP(y) )−1 , (3) where ζ(z) = ϕ(z)− zϕ′(z). Remark 4.3. We make the following observations from Theorem 4.2. (i) Choosing ρ(z) = 12z 2 and noting that ϕ(z) = ρ′′(z) = 1, a similar analysis, as in the proof of Theorem 4.2, yields a bound for the persistence influence of the KDE as Ψ (fσ;x) ≤ σ−d/2 ‖Φσ(x)− fσ‖Hσ . On the other hand, for robust loss functions, the term in Eq. (3) involving ρ′ is bounded because of (A2), making them less sensitive to very large perturbations. In fact, for nonincreasing ϕ, it can be shown (see Appendix C) that Ψ (fρ,σ;x) ≤ σ−d/2wσ(x) ‖Φσ(x)− fρ,σ‖Hσ , where, in contrast to KDE, the measure of inlyingness, wσ , weighs down extreme outliers. (ii) For the generalized Charbonnier loss (a robust loss function), given by ρ(z) = ( 1 + z2 )α/2 − 1 for 1 ≤ α < 2, the persistence influence satisfies Ψ (fρ,σ;x) ≤ σ−d/2 ( 1 + ‖Φσ(x)− fρ,σ‖2Hσ )α−1 2 ( 1 + ∫ Rd ‖Φσ(y)− fρ,σ‖2Hσ dP(y) ) 1−α 2 . Note that for α = 1, the bound on the persistence influence Ψ (fρ,σ;x) does not depend on how extreme the outlier x is. Similarly, for the Cauchy loss, given by ρ(z) = log(1 + z2), we have Ψ (fρ,σ;x) ≤ σ−d/2 ( 1 + ∫ Rd ‖Φσ(y)− fρ,σ‖2Hσ dP(y) ) . This shows that for large perturbations, the gross error sensitivity for the Cauchy and Charbonnier losses are far more stable than that of KDE. This behavior is also empirically illustrated in Figure 3. The experiment is detailed in Appendix C. (iii) For the DTM function, it can be shown that Ψ (dP,m;x) ≤ 2√ m sup {∣∣∣f(x)− ∫ Rd f(y)dP(y) ∣∣∣ : ‖∇f‖L2(P) ≤ 1} . (4) While dP,m cannot be compared to both fσ and fρ,σ, as it captures topological information at a different scale, determined by m, we point out that when supp(P) is compact, Ψ (dP,m;x) is not guaranteed to be bounded, unlike in Ψ (fρ,σ;x). We refer the reader to Appendix C for more details. It follows from Remark 4.3 that as σ → 0, the persistence influence of both the KDE and robust KDE behave asO(σ−d), showing that the robustness of robust persistence diagrams manifests only in cases where σ > 0. However, robustness alone has no bearing if the robust persistence diagram and the persistence diagram from the KDE are fundamentally different, i.e., they estimate different quantities as σ → 0. The following result (proved in Appendix A.2) shows that as σ → 0, Dgm (fρ,σ) recovers the same information as that in Dgm (fσ), which is same as Dgm (f), where f is the density of P. Theorem 4.4. For a strictly-convex loss ρ satisfying (A1)–(A4), and σ > 0, suppose P ∈M(Rd) with density f , and fρ,σ is the robust KDE. Then W∞ (Dgm (fρ,σ) ,Dgm (f))→ 0 as σ → 0. Suppose P = (1− π)P0 + πQ, where P0 corresponds to the true signal which we are interested in studying, and Q manifests as some ambient noise with 0 < π < 12 . In light of Theorem 4.4, by letting σ → 0, along with the topological features of P0, we are also capturing the topological features of Q, which may obfuscate any statistical inference made using the persistence diagrams. In a manner, choosing σ > 0 suppresses the noise in the resulting persistence diagrams, thereby making them more stable. 
On a similar note, the authors in [21] state that for a suitable bandwidth σ > 0, the level sets of fσ carry the same topological information as supp(P), despite the fact that some subtle details in f may be omitted. In what follows, we consider the setting where robust persistence diagrams are constructed for a fixed σ > 0. 4.2 Statistical properties of robust persistence diagrams from samples Suppose Dgm ( fnρ,σ ) is the robust persistence diagram obtained from the robust KDE on a sample Xn and Dgm (fρ,σ) is its population analogue obtained from fρ,σ. The following result (proved in Appendix A.3) establishes the consistency of Dgm ( fnρ,σ ) in the W∞ metric. Theorem 4.5. For convex loss ρ satisfying (A1)–(A4), and fixed σ > 0, suppose Xn is observed iid from a distribution P∈M(Rd) with density f . Then W∞ ( Dgm ( fnρ,σ ) ,Dgm (fρ,σ) ) p→ 0 as n→∞. We present the convergence rate of the above convergence in Theorem 4.7, which depends on the smoothness of Hσ. In a similar spirit to [21], this result paves the way for constructing uniform confidence bands. Before we present the result, we first introduce the notion of entropy numbers associated with an RKHS. Definition 4.6 (Entropy Number). Given a metric space (T, d) the nth entropy number is defined as en(T, d) = · inf > 0 : ∃ {t1, t2, . . . , t2n−1} ⊂ T such that T ⊂ 2n−1⋃ i=1 Bd(ti, ) . Further, if (V, ‖·‖V ) and (W, ‖·‖W ) are two normed spaces and L : V → W is a bounded, linear operator, then en(L) = en(L : V →W ) =· en (L(BV ), ‖·‖W ), where BV is a unit ball in V . Loosely speaking, entropy numbers are related to the eigenvalues of the integral operator associated with the kernel Kσ , and measure the capacity of the RKHS in approximating functions in L2(Rd). In our context, the entropy numbers will provide useful bounds on the covering numbers of sets in the hypothesis class G. We refer the reader to [35] for more details. With this background, the following theorem (proved in Appendix A.4) provides a method for constructing uniform confidence bands for the persistence diagram constructed using the robust KDE on Xn. Theorem 4.7. For convex loss ρ satisfying (A1)–(A4), and fixed σ > 0, suppose the kernel Kσ satisfies en (id : Hσ → L∞(X)) ≤ aσn− 1 2p , where aσ > 1, 0 < p < 1 and X ⊂ Rd. Then, for a fixed confidence level 0 < α < 1, sup P∈M(X) P⊗n { W∞ ( Dgm ( fnρ,σ ) ,Dgm (fρ,σ) ) > 2M ‖Kσ‖ 1 2 ∞ µ ( ξ(n, p) + δ √ 2 log (1/α) n )} ≤ α, where ξ(n, p) is given by ξ(n, p) = γ apσ (1−2p) · 1√ n if 0 < p < 1/2, γC √ aσ · log(n)√n if p = 1/2, γ p √ aσ 2p−1 · 1 n1/4p if 1/2 < p < 1, for fixed constants γ > 12√ log 2 , C > 3− log(9aσ) and µ = 2 min { ϕ(2 ‖Kσ‖ 1 2 ∞), ρ ′′(2 ‖Kσ‖ 1 2 ∞) } . Remark 4.8. We highlight some salient observations from Theorem 4.7. (i) If diam(X) = r, and the kernel Kσ is m-times differentiable, then from [35, Theorem 6.26], the entropy numbers associated with Kσ satisfy en (id : Hσ → L∞(X)) ≤ crmn− m d . In light of Theorem 4.7, for p = d2m , we can make two important observations. First, as the dimension of the input space X increases, we have that the rate of convergence decreases; which is a direct consequence from the curse of dimensionality. Second, for a fixed dimension of the input space, the parameter p in Theorem 4.7 can be understood to be inversely proportional to the smoothness of the kernel. Specifically, as the smoothness of the kernel increases, the rate of convergence is faster, and we obtain sharper confidence bands. This makes a case for employing smoother kernels. 
(ii) A similar result is obtained in [21, Lemma 8] for persistence diagrams from the KDE, with a convergence rate Op(n−1/2), where the proof relies on a simple application of Hoeffding’s inequality, unlike the sophisticated tools the proof of Theorem 4.7 warrants for the robust KDE. 5 Experiments We illustrate the performance of robust persistence diagrams in machine learning applications through synthetic and real-world experiments.1 In all the experiments, the kernel bandwidth σ is chosen as the median distance of each xi ∈ Xn to its kth–nearest neighbour using the Gaussian kernel with the Hampel loss (similar setting as in [27])—we denote this bandwidth as σ(k). Since DTM is closely related to the k-NN density estimator [6], we choose the DTM smoothing parameter as m(k) = k/n. Additionally, the KIRWLS algorithm is run until the relative change of empirical risk < 10−6. Runtime Analysis. For n = 1000, Xn is sampled from a torus inside [0, 2]3. For each grid resolution α ∈ {0.04, 0.06, 0.08, 0.10}, the robust persistence diagram Dgm ( fnρ,σ ) and the KDE persistence diagram Dgm (fnσ ) are constructed from the superlevel filtration of cubical homology. The total time taken to compute the persistence diagrams is reported in Table 1. The results demonstrate that the computational bottleneck is the persistent homology pipeline, and not the KIRWLS for fnρ,σ . Bottleneck Simulation. The objective of this experiment is to assess how the robust KDE persistence diagram compares to the KDE persistence diagram in recovering the topological features of the underlying signal. Xn is observed uniformly from two circles and Ym is sampled uniformly from the enclosing square such that m = 200 and m/n = π ∈ {20%, 30%, 40%}—shown in Figure 4 (a). For each noise level π, and for each of N = 100 realizations of Xn and Ym, the robust persistence diagram Dρ,σ and the KDE persistence diagram Dσ are constructed from the noisy samples Xn∪Ym. In addition, we compute the KDE persistence diagram D#σ on Xn alone as a proxy for the target persistence diagram one would obtain in the absence of any contamination. The bandwidth σ(k) > 0 is chosen for k = 5. For each realization i, bottleneck distances Ui = W∞ ( Dρ,σ,D#σ ) and Vi = W∞ ( Dσ,D#σ ) are computed for 1st-order homological features. The boxplots and p-values for the one-sided hypothesis testH0 : U−V = 0 vs. H1 : U−V < 0 are reported in Figures 4 (b, c, d). The results demonstrate that the robust persistence diagram is noticeably better in recovering the true homological features, and in fact demonstrates superior performance when the noise levels are higher. Spectral Clustering using Persistent Homology. We perform a variant of the six-class benchmark experiment from [1, Section 6.1]. The data comprises of six different 3D “objects”: cube, circle, sphere, 3clusters, 3clustersIn3clusters, and torus. 25 point clouds are sampled from each object with additive Gaussian noise (SD= 0.1), and ambient Matérn cluster noise. For each point cloud, Xn, the robust persistence diagram Dgm ( fnρ,σ ) and the persistence diagram Dgm (dXn), from the distance function, are constructed. Additionally, Dgm (dXn) is transformed to the persistence image Img (dXn , h) for h = 0.1. Note that Dgm ( fnρ,σ ) is a robust diagram while Img (dXn , h) is a stable vectorization of a non-robust diagram [1]. 
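Throughout these experiments, $f^n_{\rho,\sigma}$ is fitted with the KIRWLS iteration mentioned at the start of this section but not spelled out in this excerpt. The sketch below is a minimal NumPy version of that fit, following the standard kernelized iteratively re-weighted least squares recipe for robust KDE; it substitutes the Cauchy loss for the Hampel loss used in the paper purely to keep the weight function short, and the function names, unnormalised kernel, and toy data are our own assumptions rather than the authors' code.

```python
import numpy as np

def gaussian_gram(X, Y, sigma):
    """Unnormalised Gaussian kernel matrix K[i, j] = exp(-||X_i - Y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kirwls_weights(X, sigma, tol=1e-6, max_iter=200):
    """Mixture weights w with f(.) = sum_i w_i K_sigma(., X_i), fitted by KIRWLS
    for the Cauchy loss rho(z) = log(1 + z^2), i.e. phi(z) = rho'(z)/z = 2/(1 + z^2)."""
    n = len(X)
    K = gaussian_gram(X, X, sigma)
    w = np.full(n, 1.0 / n)                      # start from the ordinary KDE
    prev_risk = np.inf
    for _ in range(max_iter):
        # ||Phi(x_i) - f||_H^2 = K_ii - 2 (K w)_i + w^T K w
        Kw = K @ w
        dist2 = np.clip(np.diag(K) - 2.0 * Kw + w @ Kw, 0.0, None)
        risk = np.mean(np.log1p(dist2))          # empirical risk for the Cauchy loss
        w = 2.0 / (1.0 + dist2)                  # IRWLS weights phi(dist_i) ...
        w = w / w.sum()                          # ... normalised to sum to one
        if abs(prev_risk - risk) <= tol * max(abs(risk), 1.0):
            break                                # stop on small relative change of the risk
        prev_risk = risk
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (180, 2)),     # signal
                   rng.uniform(-3.0, 3.0, (20, 2))])   # ambient outliers
    w = kirwls_weights(X, sigma=0.4)
    f = lambda grid: gaussian_gram(np.atleast_2d(grid), X, sigma=0.4) @ w
    print(f([[0.0, 0.0]]), f([[2.5, 2.5]]))  # large near the signal, small near outliers
```

With $f^n_{\rho,\sigma}$ fitted this way, the spectral-clustering comparison described above proceeds as follows.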
1 https://github.com/sidv23/robust-PDs

For each homological order $\{H_0, H_1, H_2\}$, distance matrices $\{\Delta_0, \Delta_1, \Delta_2\}$ are computed: the $W_p$ metric for $\mathrm{Dgm}(f_{\rho,\sigma})$, and the $L_p$ metric for $\mathrm{Img}(d_{\mathbb{X}_n}, h)$, with $p \in \{1, 2, \infty\}$, and spectral clustering is performed on the resulting distance matrices. The quality of the clustering is assessed using the Rand index. The results, reported in Table 2, evidence the superiority of employing inherently robust persistence diagrams in contrast to a robust vectorization of an inherently noisy persistence diagram.

MPEG7. In this experiment, we examine the performance of persistence diagrams in a classification task on the MPEG7 dataset [28]. For simplicity, we only consider five classes: beetle, bone, spring, deer and horse. We first extract the boundary of the images using a Laplace convolution, and sample $\mathbb{X}_n$ uniformly from the boundary of each image, adding uniform noise ($\pi = 15\%$) in the enclosing region. Persistence diagrams $\mathrm{Dgm}(f^n_\sigma)$ and $\mathrm{Dgm}(f^n_{\rho,\sigma})$ from the KDE and robust KDE are constructed. In addition, owing to their ability to capture nuanced multi-scale features, we also construct $\mathrm{Dgm}(d_{n,m})$ from the DTM filtration. The smoothing parameters $\sigma(k)$ and $m(k)$ are chosen as earlier for $k = 5$. The persistence diagrams are normalized to have maximum persistence $\max\{|d - b| : (b, d) \in \mathrm{Dgm}(\phi)\} = 1$, and then vectorized as persistence images, $\mathrm{Img}(f^n_\sigma, h)$, $\mathrm{Img}(f^n_{\rho,\sigma}, h)$, and $\mathrm{Img}(d_{n,m}, h)$, for various bandwidths $h$. A linear SVM classifier is then trained on the resulting persistence images. In the first experiment we only consider the first three classes, and in the second experiment we consider all five classes. The results for the classification error, shown in Figure 5, demonstrate the superiority of the proposed method. We refer the reader to Appendix D for additional experiments.

6 Conclusion & Discussion

In this paper, we proposed a statistically consistent robust persistence diagram using the RKHS-based robust KDE as the filter function. By generalizing the notion of influence function to the space of persistence diagrams, we mathematically and empirically demonstrated the robustness of the proposed method relative to persistence diagrams induced by other filter functions such as the KDE. Through numerical experiments, we demonstrated the advantage of using robust persistence diagrams in machine learning applications. We would like to highlight that most of the theoretical results of this paper crucially hinge on the loss function being convex. As a future direction, we would like to generalize the current results to non-convex loss functions, and to explore robust persistence diagrams induced by other types of robust density estimators, which could potentially yield more robust persistence diagrams. Another important direction we intend to explore is to enhance the computational efficiency of the proposed approach using coresets, as in [7], and/or using weighted Rips filtrations, as in [2]. We provide a brief discussion in Appendix E.

Broader Impact

Over the last decade, Topological Data Analysis has become an important tool for extracting geometric and topological information from data, and its applications have been far reaching. For example, it has been used successfully in the study of fragile X syndrome, to discover traumatic brain injuries, and has also become an important tool in the study of protein structure. In astrophysics, it has aided the study of the cosmic microwave background, and the discovery of cosmic voids and filamental structures in cosmological data.
With a continual increase in its adoption in data analysis, it has become important to understand the limitations of using persistent homology in machine learning applications. As real-world data is often flustered with measurement errors and other forms of noise, in this work, we examine the sensitivity of persistence diagrams to such noise, and provide methods to mitigate the effect of this noise, so as to make reliable topological inference. Acknowledgments and Disclosure of Funding The authors would like to thank the anonymous reviewers for their helpful comments and constructive feedback. Siddharth Vishwanath and Bharath Sriperumbudur are supported in part by NSF DMS CAREER Award 1945396. Kenji Fukumizu is supported in part by JST CREST Grant Number JPMJCR15D3, Japan. Satoshi Kuriki is partially supported by JSPS KAKENHI Grant Number JP16H02792, Japan.
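Both the spectral-clustering and MPEG7 experiments in Section 5 end with the same step: each diagram is vectorized as a persistence image and handed to a standard classifier. The sketch below is a minimal, self-contained version of that step on synthetic diagrams; the grid size and range, the linear persistence weighting, and the toy two-class data are our own assumptions, and the upstream filtrations (KDE, robust KDE, DTM) are not reproduced here.

```python
import numpy as np
from sklearn.svm import LinearSVC

def persistence_image(diagram, h=0.1, resolution=20, x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
    """Vectorize a diagram {(birth, death)} on a (birth, persistence) grid,
    summing persistence-weighted Gaussians of bandwidth h."""
    diagram = np.asarray(diagram, dtype=float)
    births, pers = diagram[:, 0], diagram[:, 1] - diagram[:, 0]
    xs = np.linspace(*x_range, resolution)
    ys = np.linspace(*y_range, resolution)
    gx, gy = np.meshgrid(xs, ys)
    img = np.zeros_like(gx)
    for b, p in zip(births, pers):
        img += p * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2.0 * h ** 2))
    return img.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy stand-ins for diagrams of two classes: one with long-lived features,
    # one with only short-lived features.
    diagrams, labels = [], []
    for _ in range(30):
        b = rng.uniform(0.0, 0.3, 5)
        diagrams.append(np.column_stack([b, b + rng.uniform(0.5, 0.8, 5)]))
        labels.append(0)
    for _ in range(30):
        b = rng.uniform(0.0, 0.5, 5)
        diagrams.append(np.column_stack([b, b + rng.uniform(0.0, 0.1, 5)]))
        labels.append(1)
    X = np.stack([persistence_image(d, h=0.1) for d in diagrams])
    y = np.array(labels)
    clf = LinearSVC(max_iter=10000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
```

A linear SVM on these vectors, as in the MPEG7 experiment, separates the two toy classes easily, because long-lived features show up as mass high on the persistence axis of the image.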
1. What is the focus and contribution of the paper regarding persistence diagrams? 2. What are the strengths of the proposed approach, particularly in terms of robustness and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding computational time? 4. Do you have any concerns or questions about the application of the proposed method in downstream tasks? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors propose to calculate the persistence diagram of a dataset using a robust KDE approach. A robust KDE [25] is a generalized version of kernel density estimation formulated as an M-estimation problem with a plug-in estimation loss (Eq. (1)). When the plug-in loss to be optimized in Eq. (1) is a robust loss, the solution may not have a closed form but can be calculated using existing algorithms. Theoretically, the paper extends the existing concept of the influence function to the context of persistence diagrams, measuring how perturbations of the underlying probability density at a particular point will affect the persistence diagram through an intermediate estimated density function. It is shown that, using a robust plug-in loss, the influence function (and its supremum norm, called the gross influence) on the persistence diagram can be proven to have a tighter bound than existing approaches such as the KDE as a filter function, the Kernel Distance (KDist), and the Distance to the Measure (DTM). The paper then derives convergence rates and confidence intervals, assuming the plug-in loss is convex. Strengths This theoretical paper tackles a very important question in topological data analysis – how the density estimation affects the persistence diagram that characterizes the topology of the data. It uses a more generalized framework (robust KDE) and shows it achieves better robustness than existing results (KDE-based estimation, KDist and DTM). These results can be useful for statistical analysis of the topology of data in downstream applications. It can also potentially be used in downstream tasks (e.g., [12,40]) relying on persistent homology to characterize the dataset. Weaknesses I am assuming the downside of the method is computational time. It would be useful to report the computational time in the experiments to provide a more comprehensive view of the paper.
NIPS
Title Power and limitations of single-qubit native quantum neural networks Abstract Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization. While the applications of QNN have been widely investigated, its theoretical foundation remains less understood. In this paper, we formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks that consist of interleaved encoding circuit blocks and trainable circuit blocks. First, we prove that single-qubit quantum neural networks can approximate any univariate function by mapping the model to a partial Fourier series. We in particular establish the exact correlations between the parameters of the trainable gates and the Fourier coefficients, resolving an open problem on the universal approximation property of QNN. Second, we discuss the limitations of single-qubit native QNNs on approximating multivariate functions by analyzing the frequency spectrum and the flexibility of Fourier coefficients. We further demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments. We believe these results would improve our understanding of QNNs and provide a helpful guideline for designing powerful QNNs for machine learning tasks. 1 Introduction Quantum computing is a technology that exploits the laws of quantum mechanics to solve complicated problems much faster than classical computers. It has been applied in areas such as breaking cryptographic systems [1], searching databases [2], and quantum simulation [3, 4], in which it gives a quantum speedup over the best known classical algorithms. With the fast development of quantum hardware, recent results [5–7] have shown quantum advantages in specific tasks. An emerging direction is to investigate if quantum computing can offer quantum advantages in artificial intelligence, giving rise to an interdisciplinary area called quantum machine learning [8]. A leading strategy to quantum machine learning uses quantum neural networks (QNNs), which are quantum analogs of artificial neural networks (NNs). Much progress has been made in applications of QNN in various topics [9–11], including quantum autoencoder [12, 13], supervised learning [14–17], dynamic learning [18–20], quantum chemistry [21], and quantum metrology [22–24]. Similar to the field of machine learning, a crucial challenge of quantum machine learning is to design powerful and efficient QNN models for quantum learning tasks, which requires a theoretical understanding of how structural properties of QNN may affect its expressive power. The expressive power of a QNN model can be characterized by the function classes that it can approximate. Recently, the universal approximation property (UAP) of QNN models has been ∗Corresponding author. [email protected] †Z. Y. and H. Y. contributed equally to this work. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). investigated, which is similar to the universal approximation theorem [25, 26] in machine learning theory. The authors of [27] suggested that a QNN model can be written as a partial Fourier series in the data and proved the existence of a multi-qubit QNN model that can realize a universal function approximator. The UAP of single-qubit models remains an open conjecture, due to the difficulties in analyzing the flexibility of Fourier coefficients. 
Another work [28] considered hybrid classicalquantum neural networks and obtained the UAP by using the Stone-Weierstrass theorem. Ref. [29] proved that even a single-qubit hybrid QNN can approximate any bounded function. The above results of UAP show that the expressivity of QNNs is strong, but it does not reveal the relationship between the structural properties of a QNN and its expressive ability. Therefore the UAP may not be a good guide for constructing QNN models with practical interests. In particular, it is worth noting that the existence proof in Ref. [27] is under the assumption of multi-qubit systems, exponential circuit depth, and arbitrary observables, which does not explicitly give the structure of QNNs. Meanwhile, Refs. [28, 29] demonstrated the construction of QNNs in detail, but it is unclear whether the powerful expressivity comes from the classical part or the quantum part of hybrid models. Moreover, a systematic analysis of how parameters in the QNN affect the classes of functions that it can approximate is missing. The absence of these theoretical foundations hinders the understanding on the expressive power and limitation of QNNs, which makes it highly necessary but challenging to design effective and efficient QNNs. To theoretically investigate the expressivity of QNNs, it is important to study the simplest case of single-qubit QNNs, just like the celebrated universal approximation theorem first showing the expressivity of depth-2 NNs [25, 26]. In this paper, we formulate an analytical framework that correlates the structural properties of a single-qubit native QNN and its expressive power. We consider data re-uploading models that consist of interleaved data encoding circuit blocks and trainable circuit blocks [30]. First, we prove that there exists a single-qubit native QNN that can express any Fourier series, which is a universal approximator for any square-integrable univariate function. It solves the open problem on the UAP of single-qubit QNNs in Ref. [27]. Second, we systematically analyze how parameters in trainable circuit blocks affect the Fourier coefficients. The main results on the expressivity of QNNs are summarized as in Fig. 1. Third, we discuss potential difficulties for singlequbit native QNNs to approximate multivariate functions. Additionally, we compare native QNNs with the hybrid version and show the fundamental difference in their expressive power. We also demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments on approximating univariate and multivariate functions. Our analysis, beyond the UAP of QNNs, improves the understanding of the relationship between the expressive power and the structure of QNNs. This fundamental framework provides a theoretical foundation for data re-uploading QNN models, which is helpful to construct effective and efficient QNNs for quantum machine learning tasks. We will start by giving some background and defining the native QNN models in the next section, and then analyze the expressivity of single-qubit native QNNs in Section 3. In Section 4, we discuss the limitation of single-qubit native QNNs and compare native QNNs with hybrid QNNs, which shows the fundamental difference between their expressive power. The numerical experiments on the expressivity and limitations of single-qubit native QNNs are described in Section 5. 
2 Preliminaries 2.1 A primer on quantum computing Quantum state The basic unit of information in quantum computation is one quantum bit, or qubit for short. Just like a classical bit has a state in either 0 or 1, a qubit also has a state. A single-qubit state is a unit vector in a 2-dimensional Hilbert space C2, which is commonly denoted in Dirac notation |ψ⟩ = α |0⟩ + β |1⟩, where |0⟩ = (1, 0)T and |1⟩ = (0, 1)T are known as computational basis states. Here |ψ⟩ denotes a column vector and its conjugate transpose ⟨ψ| := |ψ⟩† is a row vector. Then the inner product ⟨ψ|ψ⟩ = ∥ψ∥2 denotes the square of L2-norm of |ψ⟩. Note that |ψ⟩ is a normalized state so ⟨ψ|ψ⟩ = |α|2 + |β|2 = 1. Having this constraint, a single-qubit state can be represented as a point at surface of a Bloch sphere, written as |ψ⟩ = cos(θ/2) |0⟩+ eiϕ sin(θ/2) |1⟩, where θ and ϕ are re-interpreted as azimuthal angle and polar angle in spherical coordinates. More generally, a quantum state of n qubits can be represented as a normalized vector in the n-fold tensor product Hilbert space C2n . Quantum gate Quantum gates are basic operations used to manipulate qubits. Unlike some classical logical gates, quantum gates are reversible, so they can be represented as unitary transformations in the Hilbert space. A unitary matrix U satisfies U†U = UU† = I . A commonly used group of single-qubit quantum gates is the Pauli gates, which can be written as Pauli matrices: X = [ 0 1 1 0 ] , Y = [ 0 −i i 0 ] , Z = [ 1 0 0 −1 ] . (1) The Pauli X , Y , and Z gates are equivalent to a rotation around the x, y, and z axes of the Bloch sphere by π radians, respectively. A group of more general gates is the rotation operator gates {RP (θ) = e−i θ 2P | P ∈ {X,Y, Z}}, which allows the rotating angle around the x, y and z axes of the Bloch sphere to be customized. They can be written in the matrix form as RX(θ) = [ cos θ2 −i sin θ 2 −i sin θ2 cos θ 2 ] , RY (θ) = [ cos θ2 − sin θ 2 sin θ2 cos θ 2 ] , RZ(θ) = [ e−i θ 2 0 0 ei θ 2 ] . (2) Quantum measurement A measurement is a quantum operation to retrieve classical information from a quantum state. The simplest measurement is the computational basis measurement; for a single-qubit state |ψ⟩ = α |0⟩+β |1⟩, the outcome of such a measurement is either |0⟩ with probability |α|2 or |1⟩ with probability |β|2. Computational basis measurements can be generalized to Pauli measurements, where Pauli matrices are observables that we can measure. For example, measuring Pauli Z is equivalent to the computational basis measurement, since |0⟩ and |1⟩ are eigenvectors of Z with corresponding eigenvalues ±1. Pauli Z measurement returns +1 if the resulting state is |0⟩ and returns −1 if the resulting state is |1⟩. We can calculate the expected value of Pauli Z measurement when the state is |ψ⟩: ⟨ψ|Z |ψ⟩ = (α∗ ⟨0|+ β∗ ⟨1|)Z(α |0⟩+ β |1⟩) = |α|2 − |β|2. (3) Pauli measurements can be extended to the case of multiple qubits by a tensor product of Pauli matrices. 2.2 Data re-uploading quantum neural networks We consider the data re-uploading QNN model [30], which is a generalized framework of quantum machine learning models based on parameterized quantum circuits [31]. A data re-uploading QNN is a quantum circuit that consists of interleaved data encoding circuit blocks S(·) and trainable circuit blocks V (·), Uθ,L(x) = V (θ0) L∏ j=1 S(x)V (θj), (4) where x is the input data, θ = (θ0, . . . ,θL) is a set of trainable parameters, and L denotes the number of layers. 
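Before moving on to the output of the data re-uploading model, the single-qubit objects of Section 2.1 are easy to reproduce numerically. The short NumPy sketch below checks the rotation-gate formula $R_P(\theta) = \cos(\theta/2)\,I - i\sin(\theta/2)\,P$ and the Pauli-$Z$ expectation of Eq. (3); it is a standalone illustration, not the Paddle Quantum code used for the paper's experiments.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(pauli, theta):
    """R_P(theta) = exp(-i * theta/2 * P), valid for P in {X, Y, Z} since P^2 = I."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * pauli

def expect_z(state):
    """<psi|Z|psi> = |alpha|^2 - |beta|^2 for state = (alpha, beta)."""
    return float(np.real(state.conj() @ Z @ state))

if __name__ == "__main__":
    ket0 = np.array([1, 0], dtype=complex)
    # R_Y(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>, so <Z> = cos(theta).
    theta = 0.7
    psi = rot(Y, theta) @ ket0
    print(expect_z(psi), np.cos(theta))        # the two numbers agree
    # Rotation gates are unitary: R R^dagger = I.
    R = rot(X, 1.23)
    print(np.allclose(R @ R.conj().T, I2))     # True
```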
It is common to build the data encoding blocks and trainable blocks using the most prevalent parameterized quantum operators {RX , RY , RZ}. We define the output of this model as the expectation value of measuring some observable M , fθ,L(x) = ⟨0|U†θ,L(x)MUθ,L(x) |0⟩ . (5) Note that some data re-uploading QNNs introduce trainable weights in data pre-processing or postprocessing, which are considered as hybrid QNNs. For example, the data encoding block defined as S(w · x) is essentially equivalent to feeding data x into a neuron with weight w and then uploading the output to an encoding block S(·). Such a mixing structure makes it hard to tell whether the expressive power comes from the classical or quantum part. To solely study the expressive power of QNNs, we define the concept of native QNN, where all trainable weights are introduced by parameters of tunable quantum gates so that they can be distinguished from a hybrid QNN. Throughout this paper, we simply refer to the native QNN as QNN for short unless specified otherwise. 3 Expressivity of single-qubit QNNs To better understand the expressive power of QNNs, we start investigating the simplest case of single-qubit models. Ref. [27] investigated the expressive power of QNNs using the Fourier series formalism. In this section, we establish an exact correlation between the single-qubit QNN and the Fourier series in terms of both the frequency spectrum and Fourier coefficients. Note that we consider one-dimensional input data for now, which corresponds to the class of univariate functions. A Fourier series is an expansion of a periodic function f(x) in infinite terms of a sum of sines and cosines which can be written in the exponential form as f(x) = ∞∑ n=−∞ cne i 2πT nx, (6) where cn = 1 T ∫ T f(x)ei 2π T nxdx (7) are the Fourier coefficients. Here T is the period of function f(x). The quantities n 2πT are called the frequencies, which are multiples of the base frequency 2πT . The set of frequency {n 2π T }n is called the frequency spectrum of Fourier series. In approximation theory, a partial Fourier series (or truncated Fourier series) sN (x) = N∑ n=−N cne i πT nx (8) is commonly used to approximate the function f(x). A partial Fourier series can be transformed to a Laurent polynomial P ∈ C[z, z−1] by the substitution z = ei 2πT x, i.e., P (z) = N∑ n=−N cnz n. (9) A Laurent polynomial P ∈ F[z, z−1] is a linear combination of positive and negative powers of the variable z with coefficients in F. The degree of a Laurent polynomial P is the maximum absolute value of any exponent of z with non-zero coefficients, denoted by deg(P ). We say that a Laurent polynomial P has parity 0 if all coefficients corresponding to odd powers of z are 0, and similarly P has parity 1 if all coefficients corresponding to even powers of z are 0. Following the pattern of Fourier series, we first consider using RZ(x) = e−ixZ/2 to encode the input x and let RY (·) be the trainable gate. We can write the QNN as UYZYθ,L(x) = RY (θ0) L∏ j=1 RZ(x)RY (θj), (10) and the quantum circuit is shown in Fig. 2. To characterize the expressivity of this kind of basic QNN, we first rigorously show that the QNN UYZYθ,L(x) can be represented in the form of a partial Fourier series with real coefficients. Lemma 1 There exist θ = (θ0, θ1, . . . , θL) ∈ RL+1 such that UYZYθ,L(x) = [ P (x) −Q(x) Q∗(x) P ∗(x) ] (11) if and only if real Laurent polynomials P,Q ∈ R[eix/2, e−ix/2] satisfy 1. deg(P ) ≤ L and deg(Q) ≤ L, 2. P and Q have parity L mod 2, 3. ∀x ∈ R, |P (x)|2 + |Q(x)|2 = 1. 
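Lemma 1 (discussed next) can also be probed numerically: for random angles, the output $\langle 0|\,U^{\mathrm{YZY}\dagger}_{\theta,L}(x)\,Z\,U^{\mathrm{YZY}}_{\theta,L}(x)\,|0\rangle$ of the circuit in Eq. (10) should be a trigonometric polynomial with integer frequencies of magnitude at most $L$ and, because the coefficients are real, only cosine terms. The NumPy sketch below is our own sanity check of this behaviour, not code from the paper.

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def f_yzy(x, thetas):
    """f(x) = <0|U^dagger Z U|0> for U = RY(t0) RZ(x) RY(t1) ... RZ(x) RY(tL), Eq. (10)."""
    U = ry(thetas[0])
    for t in thetas[1:]:
        U = U @ rz(x) @ ry(t)
    psi = U[:, 0]                                   # U|0> is the first column of U
    return float(np.real(psi.conj() @ Z @ psi))

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    L, N = 4, 64
    thetas = rng.uniform(0, 2 * np.pi, L + 1)
    xs = 2 * np.pi * np.arange(N) / N               # one period of the model
    c = np.fft.fft([f_yzy(x, thetas) for x in xs]) / N   # c[n] = coefficient of e^{i n x}
    freqs = np.fft.fftfreq(N, d=1.0 / N)            # integer frequency of each FFT bin
    print(np.max(np.abs(c[np.abs(freqs) > L])))     # frequencies above L: numerically zero
    print(np.max(np.abs(c.imag)))                   # coefficients: numerically real (cosine series)
```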
Lemma 1 decomposes the unitary matrix of the QNN UYZYθ,L(x) into Laurent polynomials with real coefficients, which can be used to represent a partial Fourier series with real coefficients. The proof of Lemma 1 uses a method of mathematical induction that is in the similar spirit of the proof of quantum signal processing [32–35], which is a powerful subroutine in Hamiltonian simulation [4] and quantum singular value transformation [35]. The forward direction is straightforward by the definition of UYZYθ,L(x) in Eq. (10). The proof of the backward direction is by induction in L where the base case L = 0 holds trivially. For L > 0, we prove that for any UYZYθ,L(x) where P,Q satisfy the three conditions, there exists a unique block R†Y (θk)R † Z(x) such that polynomials P̂ and Q̂ in UYZYθ,L(x)R † Y (θk)R † Z(x) satisfy the three conditions for L − 1. Lemma 1 explicitly correlates the frequency spectrum of the Fourier series and the number of layers L of the QNN. The proof of Lemma 1 also illustrates the exact correspondence between the Fourier coefficients and parameters of trainable gates. A detailed proof can be found in Appendix A.1. Other than characterizing the QNN with Laurent polynomials, we also need to specify the achievable Laurent polynomials P (x) for which there exists a correspondingQ(x) satisfying the three conditions in Lemma 1. It has been proved in Refs. [32, 34] that the only constraint is |P (x)| ≤ 1 for all x ∈ R. That is, for any P ∈ R[eix/2, e−ix/2] with deg(P ) ≤ L and parity L mod 2, if |P (x)| ≤ 1 for all x ∈ R, there exists a Q ∈ R[eix/2, e−ix/2] with deg(P ) ≤ L and parity L mod 2 such that |P (x)|2 + |Q(x)|2 = 1 for all x ∈ R. By Lemma 1, the partial Fourier series corresponding to the QNN UYZYθ,L(x) only has real coefficients. With the exponential form of Eq. (6), a Fourier series with real coefficients only has cos(nx) terms, which means UYZYθ,L(x) can be used to approximate any even function on the interval [−π, π]. Thus we establish the following proposition, whose proof is deferred to Appendix A.2. Proposition 2 For any even square-integrable function f : [−π, π] → R and for all ϵ > 0, there exists a QNN UYZYθ,L(x) such that |ψ(x)⟩ = UYZYθ,L(x) |0⟩ satisfies ∥ ⟨ψ(x)|Z|ψ(x)⟩ − αf(x)∥ ≤ ϵ (12) for some normalizing constant α. Although the above result states that the QNN UYZYθ,L(x) |0⟩ is able to approximate a class of even functions within arbitrary precision, we can see that the main limitation of the expressive power of QNN UYZYθ,L(x) is the real Fourier coefficients, which may restrict its universal approximation capability. To tackle this issue, our idea is to introduce complex coefficients to the corresponding Laurent polynomials, which can be implemented by adding a trainable Pauli Z rotation operator in each layer. Specifically, we consider the QNN UWZWθ,ϕ,L(x) = RZ(φ)W (θ0, ϕ0) L∏ j=1 RZ(x)W (θj , ϕj), (13) where each trainable block is W (θj , ϕj) := RY (θj)RZ(ϕj). Here we add an extra RZ(φ) gate to adjust the relative phase between P and Q. The quantum circuit of UWZWθ,ϕ,L(x) is illustrated in Fig. 3. To characterize the capability of this QNN, we establish the following Lemma which implies UWZWθ,ϕ,L(x) can express any Fourier partial sum with complex Fourier coefficients. Lemma 3 There exist θ = (θ0, θ1, . . . , θL) ∈ RL+1 and ϕ = (φ, ϕ0, ϕ1, . . . , ϕL) ∈ RL+2 such that UWZWθ,ϕ,L(x) = [ P (x) −Q(x) Q∗(x) P ∗(x) ] (14) if and only if Laurent polynomials P,Q ∈ C[eix/2, e−ix/2] satisfy 1. deg(P ) ≤ L and deg(Q) ≤ L, 2. P and Q have parity L mod 2, 3. 
∀x ∈ R, |P (x)|2 + |Q(x)|2 = 1. Lemma 3 demonstrates a decomposition of the QNN UWZWθ,ϕ,L(x) into Laurent polynomials with complex coefficients, which can be used to represent a partial Fourier series with complex coefficients in form of Eq. (8). The proof of Lemma 3 is similar to the proof of Lemma 1 and its details are provided in Appendix A.3. Again, the proof demonstrates the effect of parameterized gates on the control of Fourier coefficients. Similarly, the constraint for the achievable complex Laurent polynomials P (x) in UWZWθ,ϕ,L(x) is that |P (x)| ≤ 1 for all x ∈ R, as proved in Refs. [36, 37]. We then prove in the following Theorem 4 that UWZWθ,ϕ,L(x) is able to approximate any square-integrable function within arbitrary precision, using the well-established result in Fourier analysis. The detailed proof is deferred to Appendix A.4. Theorem 4 (Univariate approximation properties of single-qubit QNNs.) For any univariate square-integrable function f : [−π, π] → R and for all ϵ > 0, there exists a QNN UWZWθ,ϕ,L(x) such that |ψ(x)⟩ = UWZWθ,ϕ,L(x) |0⟩ satisfies ∥ ⟨ψ(x)|Z|ψ(x)⟩ − αf(x)∥ ≤ ϵ (15) for some normalizing constant α. Up till now we only let the encoding gate be the RZ(x) gate, what if we use other rotation operator gates to encode the data? It actually does not matter which one we choose as the encoding gate if the trainable gates are universal. Note that Pauli rotation operators RX(x), RY (x), RZ(x) have two eigenvalues cos(x/2)± i sin(x/2), and they can be diagonalized as Q†RZ(x)Q. Merging unitaries Q† and Q to universal trainable gates gives the QNN that uses RZ(x) as the encoding gate. We hereby define the generic single-qubit QNNs as UUZUθ,ϕ,λ,L(x) = U3(θ0, ϕ0, λ0) L∏ j=1 RZ(x)U3(θj , ϕj , λj), (16) where each trainable block is the generic rotation gate U3(θ, ϕ, λ) = [ cos θ2 −e iλ sin θ2 eiϕ sin θ2 e i(ϕ+λ) cos θ2 ] . (17) By definition, any L-layer single-qubit QNN, including UWZWθ,ϕ,L, can be expressed as U UZU θ,ϕ,λ,L. Thus UUZUθ,ϕ,λ,L is surely a universal approximator. 4 Limitations of single-qubit QNNs We have proved that a single-qubit QNN is a universal approximator for univariate functions, it is natural to consider its limitations. Is there a single-qubit QNN that can approximate arbitrary multivariate functions? We answer this question from the perspective of multivariate Fourier series. Specifically, we consider the generic form of single-qubit QNNs defined in Eq. (16) and upload the classical data x := (x(1), x(2), · · · , x(d)) ∈ Rd as Uθ,L(x) = U3(θ0, ϕ0, λ0) L∏ j=1 RZ(xj)U3(θj , ϕj , λj), (18) where each xj ∈ x and L ∈ N+. Without loss of generality, assume that each dimension x(i) is uploaded the same number of times, denoted by K. Naturally, we have Kd = L. Further, we rewrite the output of QNNs defined in Eq. (5) as the following form. fθ,L(x) = ∑ ω∈Ω cωe iω·x, (19) where Ω = {−K, · · · , 0, · · · ,K}d, and the cω is determined by parameters θ and the observable M . A detailed analysis can be found in Appendix B. We can see that Eq. (19) cannot be represented as a K-truncated multivariate Fourier series. Specifically, by the curse of dimensionality, it requires exponentially many terms in d to approximate a function in d dimensions. However, for fθ,L(x), the degrees of freedom grow linearly with the number of layers L. It implies that single-qubit native QNNs potentially lack the capability to universally approximate arbitrary multivariate functions from the perspective of the Fourier series. 
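The effect of the extra trainable $R_Z$ rotations in Eq. (13) is equally easy to see numerically: with random angles, the sampled Fourier coefficients of the $WZW$ model's output acquire sizeable imaginary parts (equivalently, sine terms appear), while the frequency spectrum stays inside $\{-L,\dots,L\}$, in line with Lemma 3. The sketch below reuses the conventions of the previous snippet and is again our own illustration rather than the paper's code.

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def f_wzw(x, thetas, phis, varphi):
    """Output of Eq. (13): U = RZ(varphi) W(t0,p0) RZ(x) W(t1,p1) ... with W(t,p) = RY(t) RZ(p)."""
    U = rz(varphi) @ ry(thetas[0]) @ rz(phis[0])
    for t, p in zip(thetas[1:], phis[1:]):
        U = U @ rz(x) @ ry(t) @ rz(p)
    psi = U[:, 0]
    return float(np.real(psi.conj() @ Z @ psi))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    L, N = 4, 64
    thetas = rng.uniform(0, 2 * np.pi, L + 1)
    phis = rng.uniform(0, 2 * np.pi, L + 1)
    xs = 2 * np.pi * np.arange(N) / N
    c = np.fft.fft([f_wzw(x, thetas, phis, 0.3) for x in xs]) / N
    freqs = np.fft.fftfreq(N, d=1.0 / N)
    print(np.max(np.abs(c[np.abs(freqs) > L])))    # still ~ 0: frequencies bounded by L
    print(np.max(np.abs(c.imag)))                  # generically far from 0: complex coefficients
```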
Despite the potential limitation of native QNNs in multivariate approximation, it has been proved that a single-qubit hybrid QNN can approximate arbitrary multivariate functions [28, 29]. However, the UAP of hybrid QNNs is fundamentally different from the native model that we investigated. Those hybrid models involve trainable weights either in data pre-processing or post-processing. Specifically, introducing trainable weights in data pre-processing is equivalent to multiplying each frequency of the Fourier series by an arbitrary real coefficient, i.e. S(wx) = RZ(wx) = e −iw x2Z . (20) Therefore it enriches the frequency spectrum of native QNNs, which only contain integer multiples of the fundamental frequency. It can also be readily extended to the encoding of multi-dimensional data x := (x(1), x(2), · · · , x(d)) as RZ(w1x (1))RZ(w2x (2)) · · ·RZ(wdx(d)) = RZ(w · x) = e− 1 2 iw·xZ , (21) where w = (w1, . . . , wd) is a vector of trainable weights. Using such an encoding method enables a single-qubit QNN to approximate any continuous multivariate function [29]. We notice that, although the trainable weights enrich the frequency spectrum of the Fourier series, the capability of hybrid QNNs to approximate arbitrary multivariate functions is not obtained using the multivariate Fourier series, but the universal approximation theorem [25, 26] in machine learning theory. In another word, the multivariate UAP of a hybrid QNN mostly comes from the classical structure, and the QNN serves as an activation function σ(x) = e−ix in the universal approximation theorem. This fact might be able to shed some light on the reason why a hybrid QNN does not provide quantum advantages over the classical NN. 5 Numerical experiments In order to better illustrate the expressive power of single-qubit native QNNs, we supplement the theoretical results with numerical experiments. Specifically, we demonstrate the flexibility and approximation capability of single-qubit native QNNs in Section 5.1. The limitations of single-qubit QNNs are illustrated in Section 5.2 through the numerical experiments on approximating multivariate functions. All simulations are carried out with the Paddle Quantum toolkit on the PaddlePaddle Deep Learning Platform, using a desktop with an 8-core i7 CPU and 32GB RAM. 5.1 Univariate function approximation A damping function f(x) = sin (5x)/5x is used to demonstrate the approximation performance of single-qubit native QNN models. The dataset consists of 300 data points uniformly sampled from the interval [0, π], from which 200 are selected for the training set and 100 for the test set. Since the function f(x) is an even function, we use the QNN model as defined in Eq. (10). The parameters of trainable gates are initialized from the uniform distribution on [0, 2π]. We adopt a variational quantum algorithm, where a gradient-based optimizer is used to search and update parameters in the QNN. The mean squared error (MSE) serves as the loss function. Here the Adam optimizer is used with a learning rate of 0.1. We set the training iterations to be 100 with a batch size of 20 for all experiments. While approximating a function f(x) by a truncated Fourier series, the approximation error decreases as the number of expansion terms increases. As shown in Lemma 3, the frequency spectrum and Fourier coefficients will be extended by consecutive repetitions of the encoding gate and trainable gate. The numerical results in Fig. 
4 illustrate that the approximation error decreases as the number of layers increases, which are consistent with our theoretical analysis. To further show the flexibility and capability of single-qubit QNNs, we pick a square wave function as the target function. The training set contains 400 data points sampled from the interval [0, 20]. The numerical results are illustrated in Fig. 5. By simply repeating 45 layers, the single-qubit QNN UWZWθ,ϕ,L(x) learns the function hidden beneath the training data. In particular, the approximation works well not only for input variables located between the training data but also outside of the region, because the Fourier series has a natural capability in dealing with periodic functions. 5.2 Multivariate function approximation We numerically demonstrate the limitations of single-qubit native QNNs in approximate multivariate functions. We examine the convergence of the loss as the number of layers of the circuit increases and compare the outcome with the target function. Specifically, we consider a bivariate function f(x, y) = (x2 + y − 1.5π)2 + (x+ y2 − π)2 as the target function. Note that f(x, y) is normalized on the interval [−π, π]2, i.e., −1 ≤ f(x, y) ≤ 1. The training set consists of 400 data points sampled from interval [−π, π]2. We use the singlequbit QNN with various numbers of layers defined as Eq. (18) to learn the target function. The experimental setting is the same as in the univariate function approximation. In order to reduce the effect of randomness, the experimental results are averaged over 5 independent training instances. Fig. 6 shows that the single-qubit native QNN has difficulty in approximating bivariate functions. The approximation result of QNN as shown in Fig. 6b is quite different from the target function, even for a very deep circuit of 40 layers. Also, the training loss in Fig. 6c does not decrease as the number of layers increases. Note that the target function is only bivariate here, the limitations of single-qubit native QNNs will be more obvious in the case of higher dimensions. We further propose a possible strategy that extends single-qubit QNNs to multiple qubits, which could potentially overcome the limitations and handle practical classification tasks, see Appendix C for details. 6 Conclusion and outlook In this work, we presented a systematic investigation of the expressive power of single-qubit native QNNs, which are capable to approximate any square-integrable univariate function with arbitrary precision. We not only give an existence proof but also analytically show an exact mapping between native QNNs and the partial Fourier series from perspectives of both frequency spectrum and Fourier coefficients, which solves an open problem on the UAP of single-qubit QNNs in Ref. [27]. Our proof, inspired by quantum signal processing, explicitly illustrates the correlation between parameters of trainable gates and the Fourier coefficients. Other than the expressivity, we also discuss the limitation of single-qubit QNNs from the perspective of multivariate Fourier series. Both the expressivity and limitation of single-qubit QNNs are validated by numerical simulations. We expect our results provide a fundamental framework to the class of data re-uploading QNNs, which serves as insightful guidance on the design of such QNN models. 
Although the expressive power of a single-qubit QNN have been well investigated, it may not be an ideal model in practice due to the potential limitations on approximating multivariate functions. Moreover, single-qubit models can be efficiently simulated by classical computers and hence cannot bring any quantum advantage. The multi-qubit QNNs as shown in Ref. [27] and in Appendix C might require exponential circuit depth, which is impractical to implement and also does not fit the systematic analysis for the single-qubit case. Therefore one future step is to efficiently generalize the framework of single-qubit QNNs to multi-qubit cases. One promising approach is to encode data into multiqubit unitaries by block encoding and then mapping higher-dimensional operations on multi-qubit systems to single-qubit gates by qubitization [38]. Such techniques are originally used in multi-qubit extensions of quantum signal processing, such as quantum singular value transformation [35] and quantum phase processing [37]. By the connection between single-qubit QNNs and quantum signal processing, block encoding and qubitization may lead to useful QNN models for multi-qubit cases and establish corresponding systematic analyses. A recent paper presents a method that extends quantum signal processing to multivariate [39], which might also be applicable to single-qubit QNNs. We believe our results and their possible extensions would improve our understanding of QNNs and provide a helpful guideline for designing powerful QNNs for machine learning tasks. Acknowledgments and Disclosure of Funding We would like to thank Runyao Duan for helpful suggestions on quantum signal processing. We also thank Guangxi Li, Geng Liu, Youle Wang, Haokai Zhang, Lei Zhang, and Chengkai Zhu for useful comments. Z. Y. and H. Y. contributed equally to this work. Part of this work was done when Z. Y., H. Y., and M. L. were research interns at Baidu Research.
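To make the univariate experiment of Section 5.1 easy to reproduce without a quantum SDK, the sketch below fits the $YZY$ model of Eq. (10) to $f(x)=\sin(5x)/(5x)$ by plain gradient descent on central finite-difference gradients. The optimiser, learning rate, layer count and data sizes are simplifications of the paper's Adam/Paddle Quantum setup, chosen only to keep the example self-contained; it illustrates the training loop, not the reported results.

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def model(x, thetas):
    """<0|U^dagger Z U|0> for the YZY circuit of Eq. (10)."""
    U = ry(thetas[0])
    for t in thetas[1:]:
        U = U @ rz(x) @ ry(t)
    a, b = U[0, 0], U[1, 0]                    # amplitudes of U|0>
    return abs(a) ** 2 - abs(b) ** 2

def mse(thetas, xs, ys):
    preds = np.array([model(x, thetas) for x in xs])
    return np.mean((preds - ys) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = np.linspace(1e-3, np.pi, 60)
    ys = np.sin(5 * xs) / (5 * xs)             # the target from Section 5.1
    L = 6
    thetas = rng.uniform(0, 2 * np.pi, L + 1)
    lr, eps = 0.2, 1e-4
    for step in range(300):
        grad = np.zeros_like(thetas)
        for k in range(len(thetas)):           # central finite differences
            tp, tm = thetas.copy(), thetas.copy()
            tp[k] += eps
            tm[k] -= eps
            grad[k] = (mse(tp, xs, ys) - mse(tm, xs, ys)) / (2 * eps)
        thetas -= lr * grad
        if step % 100 == 0:
            print(step, mse(thetas, xs, ys))
    print("final MSE:", mse(thetas, xs, ys))   # typically much lower than the starting loss
```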
1. What is the focus and contribution of the paper on single qubit neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its analysis and extensions? 3. What are the weaknesses of the paper regarding its claims and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. What is a reasonable approximation of multi-variate functions in terms of dimension d and error e?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In this paper the authors analyze single-qubit neural networks using the re-uploading QNN formalism. The contributions of the paper are the proofs that a single-qubit neural network, simply a single-qubit quantum circuit with half of its gates parameterizable, can approximate any univariate function. In addition, the authors provide an analysis of the re-uploading networks for simulating multivariate functions. The authors show that the re-uploading networks are not universal in approximating multivariate functions. Strengths And Weaknesses The paper's analysis of single-qubit networks in the re-uploading format, limited to Fourier-type encoding blocks, is a nice extension of the original re-uploading NN. The proof of the univariate function approximation is a simpler form of the universal-circuit Solovay-Kitaev approximation theorem. The principle of the proof starting from Eq. (4) can be simplified by assuming that if the target function is expressed simply by V(\theta_0), then the product over S(x)V(\theta_j) equals I. That is, each encoding block can be nulled by learning exactly the inverse transform. For the multivariate approximation, it is true that the studied setting is not universal. While both of the studied problems are correctly analyzed, I do wonder if this is really related to QNNs. In particular, now that most of the algorithms are being built as VQEs, I wonder if this is the correct venue. Questions What is a reasonable approximation of a multivariate function in terms of the dimension d in Omega and the error e? Limitations No ethical concerns have been determined.
NIPS
Title Power and limitations of single-qubit native quantum neural networks Abstract Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization. While the applications of QNN have been widely investigated, its theoretical foundation remains less understood. In this paper, we formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks that consist of interleaved encoding circuit blocks and trainable circuit blocks. First, we prove that single-qubit quantum neural networks can approximate any univariate function by mapping the model to a partial Fourier series. We in particular establish the exact correlations between the parameters of the trainable gates and the Fourier coefficients, resolving an open problem on the universal approximation property of QNN. Second, we discuss the limitations of single-qubit native QNNs on approximating multivariate functions by analyzing the frequency spectrum and the flexibility of Fourier coefficients. We further demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments. We believe these results would improve our understanding of QNNs and provide a helpful guideline for designing powerful QNNs for machine learning tasks. 1 Introduction Quantum computing is a technology that exploits the laws of quantum mechanics to solve complicated problems much faster than classical computers. It has been applied in areas such as breaking cryptographic systems [1], searching databases [2], and quantum simulation [3, 4], in which it gives a quantum speedup over the best known classical algorithms. With the fast development of quantum hardware, recent results [5–7] have shown quantum advantages in specific tasks. An emerging direction is to investigate if quantum computing can offer quantum advantages in artificial intelligence, giving rise to an interdisciplinary area called quantum machine learning [8]. A leading strategy to quantum machine learning uses quantum neural networks (QNNs), which are quantum analogs of artificial neural networks (NNs). Much progress has been made in applications of QNN in various topics [9–11], including quantum autoencoder [12, 13], supervised learning [14–17], dynamic learning [18–20], quantum chemistry [21], and quantum metrology [22–24]. Similar to the field of machine learning, a crucial challenge of quantum machine learning is to design powerful and efficient QNN models for quantum learning tasks, which requires a theoretical understanding of how structural properties of QNN may affect its expressive power. The expressive power of a QNN model can be characterized by the function classes that it can approximate. Recently, the universal approximation property (UAP) of QNN models has been ∗Corresponding author. [email protected] †Z. Y. and H. Y. contributed equally to this work. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). investigated, which is similar to the universal approximation theorem [25, 26] in machine learning theory. The authors of [27] suggested that a QNN model can be written as a partial Fourier series in the data and proved the existence of a multi-qubit QNN model that can realize a universal function approximator. The UAP of single-qubit models remains an open conjecture, due to the difficulties in analyzing the flexibility of Fourier coefficients. 
Another work [28] considered hybrid classicalquantum neural networks and obtained the UAP by using the Stone-Weierstrass theorem. Ref. [29] proved that even a single-qubit hybrid QNN can approximate any bounded function. The above results of UAP show that the expressivity of QNNs is strong, but it does not reveal the relationship between the structural properties of a QNN and its expressive ability. Therefore the UAP may not be a good guide for constructing QNN models with practical interests. In particular, it is worth noting that the existence proof in Ref. [27] is under the assumption of multi-qubit systems, exponential circuit depth, and arbitrary observables, which does not explicitly give the structure of QNNs. Meanwhile, Refs. [28, 29] demonstrated the construction of QNNs in detail, but it is unclear whether the powerful expressivity comes from the classical part or the quantum part of hybrid models. Moreover, a systematic analysis of how parameters in the QNN affect the classes of functions that it can approximate is missing. The absence of these theoretical foundations hinders the understanding on the expressive power and limitation of QNNs, which makes it highly necessary but challenging to design effective and efficient QNNs. To theoretically investigate the expressivity of QNNs, it is important to study the simplest case of single-qubit QNNs, just like the celebrated universal approximation theorem first showing the expressivity of depth-2 NNs [25, 26]. In this paper, we formulate an analytical framework that correlates the structural properties of a single-qubit native QNN and its expressive power. We consider data re-uploading models that consist of interleaved data encoding circuit blocks and trainable circuit blocks [30]. First, we prove that there exists a single-qubit native QNN that can express any Fourier series, which is a universal approximator for any square-integrable univariate function. It solves the open problem on the UAP of single-qubit QNNs in Ref. [27]. Second, we systematically analyze how parameters in trainable circuit blocks affect the Fourier coefficients. The main results on the expressivity of QNNs are summarized as in Fig. 1. Third, we discuss potential difficulties for singlequbit native QNNs to approximate multivariate functions. Additionally, we compare native QNNs with the hybrid version and show the fundamental difference in their expressive power. We also demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments on approximating univariate and multivariate functions. Our analysis, beyond the UAP of QNNs, improves the understanding of the relationship between the expressive power and the structure of QNNs. This fundamental framework provides a theoretical foundation for data re-uploading QNN models, which is helpful to construct effective and efficient QNNs for quantum machine learning tasks. We will start by giving some background and defining the native QNN models in the next section, and then analyze the expressivity of single-qubit native QNNs in Section 3. In Section 4, we discuss the limitation of single-qubit native QNNs and compare native QNNs with hybrid QNNs, which shows the fundamental difference between their expressive power. The numerical experiments on the expressivity and limitations of single-qubit native QNNs are described in Section 5. 
2 Preliminaries 2.1 A primer on quantum computing Quantum state The basic unit of information in quantum computation is one quantum bit, or qubit for short. Just like a classical bit has a state in either 0 or 1, a qubit also has a state. A single-qubit state is a unit vector in a 2-dimensional Hilbert space C2, which is commonly denoted in Dirac notation |ψ⟩ = α |0⟩ + β |1⟩, where |0⟩ = (1, 0)T and |1⟩ = (0, 1)T are known as computational basis states. Here |ψ⟩ denotes a column vector and its conjugate transpose ⟨ψ| := |ψ⟩† is a row vector. Then the inner product ⟨ψ|ψ⟩ = ∥ψ∥2 denotes the square of L2-norm of |ψ⟩. Note that |ψ⟩ is a normalized state so ⟨ψ|ψ⟩ = |α|2 + |β|2 = 1. Having this constraint, a single-qubit state can be represented as a point at surface of a Bloch sphere, written as |ψ⟩ = cos(θ/2) |0⟩+ eiϕ sin(θ/2) |1⟩, where θ and ϕ are re-interpreted as azimuthal angle and polar angle in spherical coordinates. More generally, a quantum state of n qubits can be represented as a normalized vector in the n-fold tensor product Hilbert space C2n . Quantum gate Quantum gates are basic operations used to manipulate qubits. Unlike some classical logical gates, quantum gates are reversible, so they can be represented as unitary transformations in the Hilbert space. A unitary matrix U satisfies U†U = UU† = I . A commonly used group of single-qubit quantum gates is the Pauli gates, which can be written as Pauli matrices: X = [ 0 1 1 0 ] , Y = [ 0 −i i 0 ] , Z = [ 1 0 0 −1 ] . (1) The Pauli X , Y , and Z gates are equivalent to a rotation around the x, y, and z axes of the Bloch sphere by π radians, respectively. A group of more general gates is the rotation operator gates {RP (θ) = e−i θ 2P | P ∈ {X,Y, Z}}, which allows the rotating angle around the x, y and z axes of the Bloch sphere to be customized. They can be written in the matrix form as RX(θ) = [ cos θ2 −i sin θ 2 −i sin θ2 cos θ 2 ] , RY (θ) = [ cos θ2 − sin θ 2 sin θ2 cos θ 2 ] , RZ(θ) = [ e−i θ 2 0 0 ei θ 2 ] . (2) Quantum measurement A measurement is a quantum operation to retrieve classical information from a quantum state. The simplest measurement is the computational basis measurement; for a single-qubit state |ψ⟩ = α |0⟩+β |1⟩, the outcome of such a measurement is either |0⟩ with probability |α|2 or |1⟩ with probability |β|2. Computational basis measurements can be generalized to Pauli measurements, where Pauli matrices are observables that we can measure. For example, measuring Pauli Z is equivalent to the computational basis measurement, since |0⟩ and |1⟩ are eigenvectors of Z with corresponding eigenvalues ±1. Pauli Z measurement returns +1 if the resulting state is |0⟩ and returns −1 if the resulting state is |1⟩. We can calculate the expected value of Pauli Z measurement when the state is |ψ⟩: ⟨ψ|Z |ψ⟩ = (α∗ ⟨0|+ β∗ ⟨1|)Z(α |0⟩+ β |1⟩) = |α|2 − |β|2. (3) Pauli measurements can be extended to the case of multiple qubits by a tensor product of Pauli matrices. 2.2 Data re-uploading quantum neural networks We consider the data re-uploading QNN model [30], which is a generalized framework of quantum machine learning models based on parameterized quantum circuits [31]. A data re-uploading QNN is a quantum circuit that consists of interleaved data encoding circuit blocks S(·) and trainable circuit blocks V (·), Uθ,L(x) = V (θ0) L∏ j=1 S(x)V (θj), (4) where x is the input data, θ = (θ0, . . . ,θL) is a set of trainable parameters, and L denotes the number of layers. 
It is common to build the data encoding blocks and trainable blocks using the most prevalent parameterized quantum operators {RX , RY , RZ}. We define the output of this model as the expectation value of measuring some observable M , fθ,L(x) = ⟨0|U†θ,L(x)MUθ,L(x) |0⟩ . (5) Note that some data re-uploading QNNs introduce trainable weights in data pre-processing or postprocessing, which are considered as hybrid QNNs. For example, the data encoding block defined as S(w · x) is essentially equivalent to feeding data x into a neuron with weight w and then uploading the output to an encoding block S(·). Such a mixing structure makes it hard to tell whether the expressive power comes from the classical or quantum part. To solely study the expressive power of QNNs, we define the concept of native QNN, where all trainable weights are introduced by parameters of tunable quantum gates so that they can be distinguished from a hybrid QNN. Throughout this paper, we simply refer to the native QNN as QNN for short unless specified otherwise. 3 Expressivity of single-qubit QNNs To better understand the expressive power of QNNs, we start investigating the simplest case of single-qubit models. Ref. [27] investigated the expressive power of QNNs using the Fourier series formalism. In this section, we establish an exact correlation between the single-qubit QNN and the Fourier series in terms of both the frequency spectrum and Fourier coefficients. Note that we consider one-dimensional input data for now, which corresponds to the class of univariate functions. A Fourier series is an expansion of a periodic function f(x) in infinite terms of a sum of sines and cosines which can be written in the exponential form as f(x) = ∞∑ n=−∞ cne i 2πT nx, (6) where cn = 1 T ∫ T f(x)ei 2π T nxdx (7) are the Fourier coefficients. Here T is the period of function f(x). The quantities n 2πT are called the frequencies, which are multiples of the base frequency 2πT . The set of frequency {n 2π T }n is called the frequency spectrum of Fourier series. In approximation theory, a partial Fourier series (or truncated Fourier series) sN (x) = N∑ n=−N cne i πT nx (8) is commonly used to approximate the function f(x). A partial Fourier series can be transformed to a Laurent polynomial P ∈ C[z, z−1] by the substitution z = ei 2πT x, i.e., P (z) = N∑ n=−N cnz n. (9) A Laurent polynomial P ∈ F[z, z−1] is a linear combination of positive and negative powers of the variable z with coefficients in F. The degree of a Laurent polynomial P is the maximum absolute value of any exponent of z with non-zero coefficients, denoted by deg(P ). We say that a Laurent polynomial P has parity 0 if all coefficients corresponding to odd powers of z are 0, and similarly P has parity 1 if all coefficients corresponding to even powers of z are 0. Following the pattern of Fourier series, we first consider using RZ(x) = e−ixZ/2 to encode the input x and let RY (·) be the trainable gate. We can write the QNN as UYZYθ,L(x) = RY (θ0) L∏ j=1 RZ(x)RY (θj), (10) and the quantum circuit is shown in Fig. 2. To characterize the expressivity of this kind of basic QNN, we first rigorously show that the QNN UYZYθ,L(x) can be represented in the form of a partial Fourier series with real coefficients. Lemma 1 There exist θ = (θ0, θ1, . . . , θL) ∈ RL+1 such that UYZYθ,L(x) = [ P (x) −Q(x) Q∗(x) P ∗(x) ] (11) if and only if real Laurent polynomials P,Q ∈ R[eix/2, e−ix/2] satisfy 1. deg(P ) ≤ L and deg(Q) ≤ L, 2. P and Q have parity L mod 2, 3. ∀x ∈ R, |P (x)|2 + |Q(x)|2 = 1. 
Lemma 1 decomposes the unitary matrix of the QNN UYZYθ,L(x) into Laurent polynomials with real coefficients, which can be used to represent a partial Fourier series with real coefficients. The proof of Lemma 1 uses a method of mathematical induction that is in the similar spirit of the proof of quantum signal processing [32–35], which is a powerful subroutine in Hamiltonian simulation [4] and quantum singular value transformation [35]. The forward direction is straightforward by the definition of UYZYθ,L(x) in Eq. (10). The proof of the backward direction is by induction in L where the base case L = 0 holds trivially. For L > 0, we prove that for any UYZYθ,L(x) where P,Q satisfy the three conditions, there exists a unique block R†Y (θk)R † Z(x) such that polynomials P̂ and Q̂ in UYZYθ,L(x)R † Y (θk)R † Z(x) satisfy the three conditions for L − 1. Lemma 1 explicitly correlates the frequency spectrum of the Fourier series and the number of layers L of the QNN. The proof of Lemma 1 also illustrates the exact correspondence between the Fourier coefficients and parameters of trainable gates. A detailed proof can be found in Appendix A.1. Other than characterizing the QNN with Laurent polynomials, we also need to specify the achievable Laurent polynomials P (x) for which there exists a correspondingQ(x) satisfying the three conditions in Lemma 1. It has been proved in Refs. [32, 34] that the only constraint is |P (x)| ≤ 1 for all x ∈ R. That is, for any P ∈ R[eix/2, e−ix/2] with deg(P ) ≤ L and parity L mod 2, if |P (x)| ≤ 1 for all x ∈ R, there exists a Q ∈ R[eix/2, e−ix/2] with deg(P ) ≤ L and parity L mod 2 such that |P (x)|2 + |Q(x)|2 = 1 for all x ∈ R. By Lemma 1, the partial Fourier series corresponding to the QNN UYZYθ,L(x) only has real coefficients. With the exponential form of Eq. (6), a Fourier series with real coefficients only has cos(nx) terms, which means UYZYθ,L(x) can be used to approximate any even function on the interval [−π, π]. Thus we establish the following proposition, whose proof is deferred to Appendix A.2. Proposition 2 For any even square-integrable function f : [−π, π] → R and for all ϵ > 0, there exists a QNN UYZYθ,L(x) such that |ψ(x)⟩ = UYZYθ,L(x) |0⟩ satisfies ∥ ⟨ψ(x)|Z|ψ(x)⟩ − αf(x)∥ ≤ ϵ (12) for some normalizing constant α. Although the above result states that the QNN UYZYθ,L(x) |0⟩ is able to approximate a class of even functions within arbitrary precision, we can see that the main limitation of the expressive power of QNN UYZYθ,L(x) is the real Fourier coefficients, which may restrict its universal approximation capability. To tackle this issue, our idea is to introduce complex coefficients to the corresponding Laurent polynomials, which can be implemented by adding a trainable Pauli Z rotation operator in each layer. Specifically, we consider the QNN UWZWθ,ϕ,L(x) = RZ(φ)W (θ0, ϕ0) L∏ j=1 RZ(x)W (θj , ϕj), (13) where each trainable block is W (θj , ϕj) := RY (θj)RZ(ϕj). Here we add an extra RZ(φ) gate to adjust the relative phase between P and Q. The quantum circuit of UWZWθ,ϕ,L(x) is illustrated in Fig. 3. To characterize the capability of this QNN, we establish the following Lemma which implies UWZWθ,ϕ,L(x) can express any Fourier partial sum with complex Fourier coefficients. Lemma 3 There exist θ = (θ0, θ1, . . . , θL) ∈ RL+1 and ϕ = (φ, ϕ0, ϕ1, . . . , ϕL) ∈ RL+2 such that UWZWθ,ϕ,L(x) = [ P (x) −Q(x) Q∗(x) P ∗(x) ] (14) if and only if Laurent polynomials P,Q ∈ C[eix/2, e−ix/2] satisfy 1. deg(P ) ≤ L and deg(Q) ≤ L, 2. P and Q have parity L mod 2, 3. 
3. ∀x ∈ ℝ, |P(x)|² + |Q(x)|² = 1.

Lemma 3 demonstrates a decomposition of the QNN U^{WZW}_{θ,ϕ,L}(x) into Laurent polynomials with complex coefficients, which can be used to represent a partial Fourier series with complex coefficients in the form of Eq. (8). The proof of Lemma 3 is similar to the proof of Lemma 1 and its details are provided in Appendix A.3. Again, the proof demonstrates the effect of the parameterized gates on the control of the Fourier coefficients. Similarly, the constraint on the achievable complex Laurent polynomials P(x) in U^{WZW}_{θ,ϕ,L}(x) is that |P(x)| ≤ 1 for all x ∈ ℝ, as proved in Refs. [36, 37]. We then prove in the following Theorem 4 that U^{WZW}_{θ,ϕ,L}(x) is able to approximate any square-integrable function within arbitrary precision, using a well-established result in Fourier analysis. The detailed proof is deferred to Appendix A.4.

Theorem 4 (Univariate approximation properties of single-qubit QNNs.) For any univariate square-integrable function f : [−π, π] → ℝ and for all ϵ > 0, there exists a QNN U^{WZW}_{θ,ϕ,L}(x) such that |ψ(x)⟩ = U^{WZW}_{θ,ϕ,L}(x)|0⟩ satisfies

∥⟨ψ(x)|Z|ψ(x)⟩ − αf(x)∥ ≤ ϵ   (15)

for some normalizing constant α.

Up to this point we have only used R_Z(x) as the encoding gate; what if we use other rotation operator gates to encode the data? It actually does not matter which one we choose as the encoding gate, provided the trainable gates are universal. Note that the Pauli rotation operators R_X(x), R_Y(x), R_Z(x) have the two eigenvalues cos(x/2) ± i sin(x/2), and they can be diagonalized as Q† R_Z(x) Q. Merging the unitaries Q† and Q into universal trainable gates gives a QNN that uses R_Z(x) as the encoding gate. We hereby define the generic single-qubit QNN as

U^{UZU}_{θ,ϕ,λ,L}(x) = U3(θ_0, ϕ_0, λ_0) \prod_{j=1}^{L} R_Z(x) U3(θ_j, ϕ_j, λ_j),   (16)

where each trainable block is the generic rotation gate

U3(θ, ϕ, λ) = \begin{bmatrix} \cos\frac{θ}{2} & -e^{iλ}\sin\frac{θ}{2} \\ e^{iϕ}\sin\frac{θ}{2} & e^{i(ϕ+λ)}\cos\frac{θ}{2} \end{bmatrix}.   (17)

By definition, any L-layer single-qubit QNN, including U^{WZW}_{θ,ϕ,L}, can be expressed as U^{UZU}_{θ,ϕ,λ,L}. Thus U^{UZU}_{θ,ϕ,λ,L} is also a universal approximator.

4 Limitations of single-qubit QNNs

Having proved that a single-qubit QNN is a universal approximator for univariate functions, it is natural to consider its limitations. Is there a single-qubit QNN that can approximate arbitrary multivariate functions? We answer this question from the perspective of the multivariate Fourier series. Specifically, we consider the generic form of single-qubit QNNs defined in Eq. (16) and upload the classical data x := (x^{(1)}, x^{(2)}, …, x^{(d)}) ∈ ℝ^d as

U_{θ,L}(x) = U3(θ_0, ϕ_0, λ_0) \prod_{j=1}^{L} R_Z(x_j) U3(θ_j, ϕ_j, λ_j),   (18)

where each x_j ∈ x and L ∈ ℕ_+. Without loss of generality, assume that each dimension x^{(i)} is uploaded the same number of times, denoted by K; naturally, we have Kd = L. Further, we rewrite the output of the QNN defined in Eq. (5) in the following form:

f_{θ,L}(x) = \sum_{ω ∈ Ω} c_ω e^{i ω · x},   (19)

where Ω = {−K, …, 0, …, K}^d and c_ω is determined by the parameters θ and the observable M. A detailed analysis can be found in Appendix B. We can see that Eq. (19) cannot realize an arbitrary K-truncated multivariate Fourier series. Specifically, by the curse of dimensionality, approximating a function in d dimensions requires a number of Fourier terms that grows exponentially in d, whereas for f_{θ,L}(x) the degrees of freedom grow only linearly with the number of layers L. This implies that single-qubit native QNNs potentially lack the capability to universally approximate arbitrary multivariate functions from the perspective of the Fourier series.
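To make this counting argument concrete, the short sketch below (a back-of-the-envelope illustration added here, not an analysis from the paper) compares the (2K+1)^d complex coefficients of a general K-truncated d-variate Fourier series with the number of real circuit parameters of the model in Eq. (18), namely the 3(L+1) = 3(Kd+1) angles of its U3 gates. The exponential-versus-linear gap is the source of the limitation discussed above.

```python
# Parameter counting for Eq. (18) versus a K-truncated d-variate Fourier series.
# "Fourier terms" counts the complex coefficients c_omega with omega in
# {-K, ..., K}^d; "circuit params" counts the 3 angles of each U3 gate in an
# L = K*d layer circuit (plus the initial block).
def fourier_terms(K, d):
    return (2 * K + 1) ** d

def circuit_params(K, d):
    L = K * d
    return 3 * (L + 1)

for d in (1, 2, 3, 5):
    for K in (1, 3, 5):
        print(f"d={d}, K={K}: "
              f"{fourier_terms(K, d):>10} Fourier terms vs "
              f"{circuit_params(K, d):>4} circuit parameters")
```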
Despite the potential limitation of native QNNs in multivariate approximation, it has been proved that a single-qubit hybrid QNN can approximate arbitrary multivariate functions [28, 29]. However, the UAP of hybrid QNNs is fundamentally different from that of the native model we investigate. Those hybrid models involve trainable weights either in data pre-processing or post-processing. Specifically, introducing trainable weights in data pre-processing is equivalent to multiplying each frequency of the Fourier series by an arbitrary real coefficient, i.e.,

S(wx) = R_Z(wx) = e^{-i w x Z / 2}.   (20)

This enriches the frequency spectrum of native QNNs, which otherwise contains only integer multiples of the fundamental frequency. It can also be readily extended to the encoding of multi-dimensional data x := (x^{(1)}, x^{(2)}, …, x^{(d)}) as

R_Z(w_1 x^{(1)}) R_Z(w_2 x^{(2)}) ⋯ R_Z(w_d x^{(d)}) = R_Z(w · x) = e^{-\frac{i}{2} (w · x) Z},   (21)

where w = (w_1, …, w_d) is a vector of trainable weights. Using such an encoding method enables a single-qubit QNN to approximate any continuous multivariate function [29]. We note that, although the trainable weights enrich the frequency spectrum of the Fourier series, the capability of hybrid QNNs to approximate arbitrary multivariate functions is not obtained via the multivariate Fourier series, but via the universal approximation theorem [25, 26] of machine learning theory. In other words, the multivariate UAP of a hybrid QNN comes mostly from the classical structure, and the QNN serves as an activation function σ(x) = e^{-ix} in the universal approximation theorem. This fact might shed some light on why a hybrid QNN does not provide quantum advantages over a classical NN.

5 Numerical experiments

In order to better illustrate the expressive power of single-qubit native QNNs, we supplement the theoretical results with numerical experiments. Specifically, we demonstrate the flexibility and approximation capability of single-qubit native QNNs in Section 5.1. The limitations of single-qubit QNNs are illustrated in Section 5.2 through numerical experiments on approximating multivariate functions. All simulations are carried out with the Paddle Quantum toolkit on the PaddlePaddle Deep Learning Platform, using a desktop with an 8-core i7 CPU and 32GB RAM.

5.1 Univariate function approximation

A damping function f(x) = sin(5x)/(5x) is used to demonstrate the approximation performance of single-qubit native QNN models. The dataset consists of 300 data points uniformly sampled from the interval [0, π], from which 200 are selected for the training set and 100 for the test set. Since f(x) is an even function, we use the QNN model defined in Eq. (10). The parameters of the trainable gates are initialized from the uniform distribution on [0, 2π]. We adopt a variational quantum algorithm, in which a gradient-based optimizer is used to search for and update the parameters of the QNN. The mean squared error (MSE) serves as the loss function. Here the Adam optimizer is used with a learning rate of 0.1. We set the number of training iterations to 100 with a batch size of 20 for all experiments. When approximating a function f(x) by a truncated Fourier series, the approximation error decreases as the number of expansion terms increases. As shown in Lemma 3, the frequency spectrum and Fourier coefficients are extended by consecutive repetitions of the encoding gate and trainable gate.
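For readers who want to reproduce the flavour of this experiment without the Paddle Quantum stack, the following is a minimal NumPy sketch of the setup just described, under simplifying assumptions: it fits the damping function with the U^{YZY} circuit of Eq. (10), replaces Adam with mini-batches by full-batch gradient descent using finite-difference gradients, and fits f(x) directly rather than a rescaled αf(x). The layer count, learning rate, iteration count, and number of samples are illustrative choices, not the paper's settings.

```python
import numpy as np

def RY(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def RZ(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def qnn_output(x, thetas):
    """<0| U^dagger Z U |0> for the YZY circuit of Eq. (10)."""
    U = RY(thetas[0])
    for t in thetas[1:]:
        U = U @ RZ(x) @ RY(t)
    psi = U[:, 0]
    return float(np.abs(psi[0]) ** 2 - np.abs(psi[1]) ** 2)

def mse_loss(thetas, xs, ys):
    preds = np.array([qnn_output(x, thetas) for x in xs])
    return float(np.mean((preds - ys) ** 2))

def numerical_grad(thetas, xs, ys, eps=1e-5):
    """Finite-difference gradient of the MSE loss (simple but slow)."""
    grad = np.zeros_like(thetas)
    for k in range(len(thetas)):
        shift = np.zeros_like(thetas)
        shift[k] = eps
        grad[k] = (mse_loss(thetas + shift, xs, ys)
                   - mse_loss(thetas - shift, xs, ys)) / (2 * eps)
    return grad

# Target: the damping function sin(5x) / (5x), sampled on (0, pi].
rng = np.random.default_rng(42)
xs = np.linspace(1e-3, np.pi, 60)
ys = np.sin(5 * xs) / (5 * xs)

L = 6                                          # layers (illustrative choice)
thetas = rng.uniform(0, 2 * np.pi, L + 1)      # initialization as in the paper
lr = 0.2                                       # step size (illustrative)

for step in range(200):
    thetas -= lr * numerical_grad(thetas, xs, ys)
    if step % 50 == 0:
        print(f"step {step:3d}: MSE = {mse_loss(thetas, xs, ys):.5f}")
```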
The numerical results in Fig. 4 illustrate that the approximation error decreases as the number of layers increases, which is consistent with our theoretical analysis. To further show the flexibility and capability of single-qubit QNNs, we pick a square wave function as the target. The training set contains 400 data points sampled from the interval [0, 20]. The numerical results are illustrated in Fig. 5. By simply repeating 45 layers, the single-qubit QNN U^{WZW}_{θ,ϕ,L}(x) learns the function hidden beneath the training data. In particular, the approximation works well not only for input variables located between the training data but also outside of that region, because the Fourier series has a natural capability for dealing with periodic functions.

5.2 Multivariate function approximation

We numerically demonstrate the limitations of single-qubit native QNNs in approximating multivariate functions. We examine the convergence of the loss as the number of layers of the circuit increases and compare the outcome with the target function. Specifically, we consider the bivariate function f(x, y) = (x² + y − 1.5π)² + (x + y² − π)² as the target. Note that f(x, y) is normalized on the interval [−π, π]², i.e., −1 ≤ f(x, y) ≤ 1. The training set consists of 400 data points sampled from the interval [−π, π]². We use the single-qubit QNN with various numbers of layers, defined as in Eq. (18), to learn the target function. The experimental setting is the same as in the univariate function approximation. In order to reduce the effect of randomness, the experimental results are averaged over 5 independent training instances. Fig. 6 shows that the single-qubit native QNN has difficulty in approximating bivariate functions. The approximation result of the QNN shown in Fig. 6b is quite different from the target function, even for a very deep circuit of 40 layers. Moreover, the training loss in Fig. 6c does not decrease as the number of layers increases. Note that the target function here is only bivariate; the limitations of single-qubit native QNNs will be more pronounced in higher dimensions. We further propose a possible strategy that extends single-qubit QNNs to multiple qubits, which could potentially overcome these limitations and handle practical classification tasks; see Appendix C for details.

6 Conclusion and outlook

In this work, we presented a systematic investigation of the expressive power of single-qubit native QNNs, which are capable of approximating any square-integrable univariate function with arbitrary precision. We not only give an existence proof but also analytically show an exact mapping between native QNNs and the partial Fourier series, from the perspective of both the frequency spectrum and the Fourier coefficients, which solves an open problem on the UAP of single-qubit QNNs in Ref. [27]. Our proof, inspired by quantum signal processing, explicitly illustrates the correlation between the parameters of the trainable gates and the Fourier coefficients. Besides the expressivity, we also discuss the limitations of single-qubit QNNs from the perspective of the multivariate Fourier series. Both the expressivity and the limitations of single-qubit QNNs are validated by numerical simulations. We expect our results to provide a fundamental framework for the class of data re-uploading QNNs, serving as insightful guidance for the design of such QNN models.
Although the expressive power of single-qubit QNNs has now been investigated in detail, they may not be ideal models in practice due to their potential limitations in approximating multivariate functions. Moreover, single-qubit models can be efficiently simulated by classical computers and hence cannot bring any quantum advantage. The multi-qubit QNNs shown in Ref. [27] and in Appendix C might require exponential circuit depth, which is impractical to implement and also does not fit the systematic analysis developed for the single-qubit case. Therefore one future step is to efficiently generalize the framework of single-qubit QNNs to multi-qubit cases. One promising approach is to encode data into multi-qubit unitaries by block encoding and then map higher-dimensional operations on multi-qubit systems to single-qubit gates by qubitization [38]. Such techniques were originally used in multi-qubit extensions of quantum signal processing, such as quantum singular value transformation [35] and quantum phase processing [37]. Given the connection between single-qubit QNNs and quantum signal processing, block encoding and qubitization may lead to useful QNN models for multi-qubit cases and to corresponding systematic analyses. A recent paper presents a method that extends quantum signal processing to the multivariate case [39], which might also be applicable to single-qubit QNNs. We believe our results and their possible extensions will improve our understanding of QNNs and provide a helpful guideline for designing powerful QNNs for machine learning tasks.

Acknowledgments and Disclosure of Funding

We would like to thank Runyao Duan for helpful suggestions on quantum signal processing. We also thank Guangxi Li, Geng Liu, Youle Wang, Haokai Zhang, Lei Zhang, and Chengkai Zhu for useful comments. Z. Y. and H. Y. contributed equally to this work. Part of this work was done when Z. Y., H. Y., and M. L. were research interns at Baidu Research.
1. What is the focus and contribution of the paper regarding quantum neural networks? 2. What are the strengths of the proposed approach, particularly in terms of expressive ability and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its limitations and potential applications? 4. Do you have any concerns regarding the generalization of the results to future studies on multivariate function approximation in QNNs? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper aims to explore the data re-uploading expressive ability of quantum neural networks (QNNs) by providing rigorous theoretical proofs for native single-qubit QNNs. They point out that native QNNs demonstrate the expressivity of the quantum part of hybrid QNNs, decoupled from the classical part of hybrid QNNs. In a series of proofs, they show that single-qubit native QNNs can approximate any univariate square-integrable function arbitrarily closely via an exact mapping to the partial Fourier series. They then discuss how native QNNs lack the expressivity to approximate multivariate functions, and how hybrid single-qubit QNNs can approximate arbitrary multivariate functions with the help of classical structures. They further show that increasing the number of QNN layers decreases the mean squared error for approximating a function.

Strengths And Weaknesses The paper provides rigorous proofs that build up the foundational work for native single-qubit QNNs in a purely quantum context. It presents, in a logical and consistent manner, how univariate square-integrable functions can be loaded into single-qubit QNNs up to arbitrary precision using the partial Fourier series. The numerical experiments show clear evidence for the correlation between layer number and loss convergence, and the inclusion of the square wave is excellent as it requires a large number of Fourier terms to approximate. It also does a good job explaining with theory, and demonstrating with an experiment, how multivariate functions cannot be approximated well with single-qubit QNNs. The paper discusses the limitations of single-qubit QNNs and the importance of "investigating QNNs with universal approximation properties for multivariate functions". It also provides a potential design for multi-qubit QNNs in Appendix C. However, it is not immediately clear how the results of this paper, especially the theoretical proofs, which are exclusively about single-qubit QNNs, can be generalized for future studies of multivariate function approximation in QNNs.

Questions It is mentioned that hybrid QNNs provide no quantum advantage over classical neural networks. So do the results of this paper prove, or show a potential for, quantum advantage for native QNNs? If not, will a multivariate-function-approximating native QNN provide quantum advantage?

Limitations The authors have adequately addressed the limitations of native single-qubit QNNs as a trivial case for data loading into QNNs using a partial Fourier series.
NIPS
Title Power and limitations of single-qubit native quantum neural networks

Abstract Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization. While the applications of QNNs have been widely investigated, their theoretical foundation remains less understood. In this paper, we formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks, which consist of interleaved encoding circuit blocks and trainable circuit blocks. First, we prove that single-qubit quantum neural networks can approximate any univariate function by mapping the model to a partial Fourier series. We in particular establish the exact correlations between the parameters of the trainable gates and the Fourier coefficients, resolving an open problem on the universal approximation property of QNNs. Second, we discuss the limitations of single-qubit native QNNs in approximating multivariate functions by analyzing the frequency spectrum and the flexibility of the Fourier coefficients. We further demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments. We believe these results improve our understanding of QNNs and provide a helpful guideline for designing powerful QNNs for machine learning tasks.

1 Introduction

Quantum computing is a technology that exploits the laws of quantum mechanics to solve complicated problems much faster than classical computers. It has been applied in areas such as breaking cryptographic systems [1], searching databases [2], and quantum simulation [3, 4], in which it gives a quantum speedup over the best known classical algorithms. With the fast development of quantum hardware, recent results [5–7] have shown quantum advantages in specific tasks. An emerging direction is to investigate whether quantum computing can offer quantum advantages in artificial intelligence, giving rise to an interdisciplinary area called quantum machine learning [8]. A leading strategy for quantum machine learning uses quantum neural networks (QNNs), which are quantum analogs of artificial neural networks (NNs). Much progress has been made in applications of QNNs to various topics [9–11], including quantum autoencoders [12, 13], supervised learning [14–17], dynamic learning [18–20], quantum chemistry [21], and quantum metrology [22–24].

Similar to the field of machine learning, a crucial challenge of quantum machine learning is to design powerful and efficient QNN models for quantum learning tasks, which requires a theoretical understanding of how the structural properties of a QNN may affect its expressive power. The expressive power of a QNN model can be characterized by the function classes that it can approximate. Recently, the universal approximation property (UAP) of QNN models has been investigated, which is similar to the universal approximation theorem [25, 26] in machine learning theory. The authors of [27] suggested that a QNN model can be written as a partial Fourier series in the data and proved the existence of a multi-qubit QNN model that can realize a universal function approximator. The UAP of single-qubit models remains an open conjecture, due to the difficulties in analyzing the flexibility of the Fourier coefficients.
Another work [28] considered hybrid classical-quantum neural networks and obtained the UAP by using the Stone-Weierstrass theorem. Ref. [29] proved that even a single-qubit hybrid QNN can approximate any bounded function. The above results on UAP show that the expressivity of QNNs is strong, but they do not reveal the relationship between the structural properties of a QNN and its expressive ability. Therefore the UAP alone may not be a good guide for constructing QNN models of practical interest. In particular, it is worth noting that the existence proof in Ref. [27] assumes multi-qubit systems, exponential circuit depth, and arbitrary observables, and does not explicitly give the structure of the QNNs. Meanwhile, Refs. [28, 29] demonstrated the construction of QNNs in detail, but it is unclear whether the powerful expressivity comes from the classical part or the quantum part of the hybrid models. Moreover, a systematic analysis of how the parameters in a QNN affect the classes of functions that it can approximate is missing. The absence of these theoretical foundations hinders the understanding of the expressive power and limitations of QNNs, which makes it highly necessary but challenging to design effective and efficient QNNs.

To theoretically investigate the expressivity of QNNs, it is important to study the simplest case of single-qubit QNNs, just as the celebrated universal approximation theorem first showed the expressivity of depth-2 NNs [25, 26]. In this paper, we formulate an analytical framework that correlates the structural properties of a single-qubit native QNN with its expressive power. We consider data re-uploading models that consist of interleaved data encoding circuit blocks and trainable circuit blocks [30]. First, we prove that there exists a single-qubit native QNN that can express any Fourier series, which is a universal approximator for any square-integrable univariate function. This solves the open problem on the UAP of single-qubit QNNs in Ref. [27]. Second, we systematically analyze how the parameters in the trainable circuit blocks affect the Fourier coefficients. The main results on the expressivity of QNNs are summarized in Fig. 1. Third, we discuss potential difficulties for single-qubit native QNNs in approximating multivariate functions. Additionally, we compare native QNNs with the hybrid version and show the fundamental difference in their expressive power. We also demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments on approximating univariate and multivariate functions. Our analysis, going beyond the UAP of QNNs, improves the understanding of the relationship between the expressive power and the structure of QNNs. This fundamental framework provides a theoretical foundation for data re-uploading QNN models, which is helpful for constructing effective and efficient QNNs for quantum machine learning tasks.

We will start by giving some background and defining the native QNN models in the next section, and then analyze the expressivity of single-qubit native QNNs in Section 3. In Section 4, we discuss the limitations of single-qubit native QNNs and compare native QNNs with hybrid QNNs, which shows the fundamental difference between their expressive power. The numerical experiments on the expressivity and limitations of single-qubit native QNNs are described in Section 5.
2 Preliminaries

2.1 A primer on quantum computing

Quantum state. The basic unit of information in quantum computation is the quantum bit, or qubit for short. Just as a classical bit has a state that is either 0 or 1, a qubit also has a state. A single-qubit state is a unit vector in a 2-dimensional Hilbert space ℂ², which is commonly denoted in Dirac notation |ψ⟩ = α|0⟩ + β|1⟩, where |0⟩ = (1, 0)^T and |1⟩ = (0, 1)^T are known as the computational basis states. Here |ψ⟩ denotes a column vector and its conjugate transpose ⟨ψ| := |ψ⟩† is a row vector. The inner product ⟨ψ|ψ⟩ = ∥ψ∥² is the square of the L2-norm of |ψ⟩. Note that |ψ⟩ is a normalized state, so ⟨ψ|ψ⟩ = |α|² + |β|² = 1. Under this constraint, a single-qubit state can be represented as a point on the surface of the Bloch sphere, written as |ψ⟩ = cos(θ/2)|0⟩ + e^{iϕ} sin(θ/2)|1⟩, where θ and ϕ are interpreted as the polar and azimuthal angles in spherical coordinates. More generally, a quantum state of n qubits can be represented as a normalized vector in the n-fold tensor product Hilbert space ℂ^{2^n}.

Quantum gate. Quantum gates are the basic operations used to manipulate qubits. Unlike some classical logic gates, quantum gates are reversible, so they can be represented as unitary transformations on the Hilbert space. A unitary matrix U satisfies U†U = UU† = I. A commonly used group of single-qubit quantum gates is the Pauli gates, which can be written as the Pauli matrices:

X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad Y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \quad Z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.   (1)

The Pauli X, Y, and Z gates are equivalent to rotations around the x, y, and z axes of the Bloch sphere by π radians, respectively. A more general family of gates is the rotation operator gates {R_P(θ) = e^{-i\frac{θ}{2}P} | P ∈ {X, Y, Z}}, which allow the rotation angle around the x, y, and z axes of the Bloch sphere to be customized. They can be written in matrix form as

R_X(θ) = \begin{bmatrix} \cos\frac{θ}{2} & -i\sin\frac{θ}{2} \\ -i\sin\frac{θ}{2} & \cos\frac{θ}{2} \end{bmatrix}, \quad R_Y(θ) = \begin{bmatrix} \cos\frac{θ}{2} & -\sin\frac{θ}{2} \\ \sin\frac{θ}{2} & \cos\frac{θ}{2} \end{bmatrix}, \quad R_Z(θ) = \begin{bmatrix} e^{-iθ/2} & 0 \\ 0 & e^{iθ/2} \end{bmatrix}.   (2)

Quantum measurement. A measurement is a quantum operation that retrieves classical information from a quantum state. The simplest measurement is the computational basis measurement; for a single-qubit state |ψ⟩ = α|0⟩ + β|1⟩, the outcome of such a measurement is either |0⟩ with probability |α|² or |1⟩ with probability |β|². Computational basis measurements can be generalized to Pauli measurements, where the Pauli matrices are the observables that we measure. For example, measuring Pauli Z is equivalent to the computational basis measurement, since |0⟩ and |1⟩ are eigenvectors of Z with corresponding eigenvalues ±1. A Pauli Z measurement returns +1 if the resulting state is |0⟩ and −1 if the resulting state is |1⟩. We can calculate the expectation value of a Pauli Z measurement on the state |ψ⟩:

⟨ψ|Z|ψ⟩ = (α^*⟨0| + β^*⟨1|) Z (α|0⟩ + β|1⟩) = |α|² − |β|².   (3)

Pauli measurements can be extended to the case of multiple qubits by taking tensor products of Pauli matrices.

2.2 Data re-uploading quantum neural networks

We consider the data re-uploading QNN model [30], which is a generalized framework for quantum machine learning models based on parameterized quantum circuits [31]. A data re-uploading QNN is a quantum circuit that consists of interleaved data encoding circuit blocks S(·) and trainable circuit blocks V(·),

U_{θ,L}(x) = V(θ_0) \prod_{j=1}^{L} S(x) V(θ_j),   (4)

where x is the input data, θ = (θ_0, …, θ_L) is a set of trainable parameters, and L denotes the number of layers.
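As a quick, self-contained sanity check on Eqs. (1)–(3), the snippet below (an illustration added for this text, not code from the paper) verifies numerically that the closed-form rotation matrices in Eq. (2) coincide with the matrix exponentials R_P(θ) = e^{-iθP/2}, and evaluates the Pauli Z expectation value of Eq. (3) for a Bloch-sphere state.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices, Eq. (1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(P, theta):
    """Rotation gate R_P(theta) = exp(-i * theta / 2 * P)."""
    return expm(-1j * theta / 2 * P)

# The closed forms of Eq. (2) agree with the matrix-exponential definition.
theta = 0.7
RY_closed = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                      [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
RZ_closed = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
assert np.allclose(rot(Y, theta), RY_closed)
assert np.allclose(rot(Z, theta), RZ_closed)

# Eq. (3): <psi|Z|psi> = |alpha|^2 - |beta|^2 for a Bloch-sphere state.
t, phi = 1.2, 0.4                      # polar and azimuthal angles (arbitrary)
psi = np.array([np.cos(t / 2), np.exp(1j * phi) * np.sin(t / 2)])
expval = np.real(psi.conj() @ Z @ psi)
assert np.isclose(expval, np.cos(t))   # for this state, <Z> = cos(theta)
print("<psi|Z|psi> =", expval)
```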
1. What is the focus and contribution of the paper on single-qubit quantum neural networks? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its simplicity and potential limitations in capturing complex functions? 3. Do you have any concerns or questions regarding the extension of the results to the multi-qubit case and the possibility of establishing a representation theorem? 4. Are there any limitations to the approach that should be considered when interpreting the results?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proves that single-qubit quantum neural networks can approximate any univariate function. Both the expressivity and the limitations of single-qubit QNNs are validated by numerical simulations, which also involve multivariate functions.

Strengths And Weaknesses Strength: a controlled setup to study the approximation capabilities of a single-qubit QNN. Weakness: the setup seems really too simple to capture any of the complexity of representing multivariate functions and, especially, the benefits of using entangling gates. This single-qubit analysis is not really illuminating in this sense. The power of the universal approximation theorem in the classical case is that it can be extended to prove an analogous result for multivariate functions; here, in contrast, it is not clear how and whether the proof carried out in this work can be used in the more general (and more interesting) case.

Questions What would happen in the multi-qubit case? Can you establish a representation theorem there?

Limitations yes
NIPS
Title Power and limitations of single-qubit native quantum neural networks Abstract Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization. While the applications of QNN have been widely investigated, its theoretical foundation remains less understood. In this paper, we formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks that consist of interleaved encoding circuit blocks and trainable circuit blocks. First, we prove that single-qubit quantum neural networks can approximate any univariate function by mapping the model to a partial Fourier series. We in particular establish the exact correlations between the parameters of the trainable gates and the Fourier coefficients, resolving an open problem on the universal approximation property of QNN. Second, we discuss the limitations of single-qubit native QNNs on approximating multivariate functions by analyzing the frequency spectrum and the flexibility of Fourier coefficients. We further demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments. We believe these results would improve our understanding of QNNs and provide a helpful guideline for designing powerful QNNs for machine learning tasks. 1 Introduction Quantum computing is a technology that exploits the laws of quantum mechanics to solve complicated problems much faster than classical computers. It has been applied in areas such as breaking cryptographic systems [1], searching databases [2], and quantum simulation [3, 4], in which it gives a quantum speedup over the best known classical algorithms. With the fast development of quantum hardware, recent results [5–7] have shown quantum advantages in specific tasks. An emerging direction is to investigate if quantum computing can offer quantum advantages in artificial intelligence, giving rise to an interdisciplinary area called quantum machine learning [8]. A leading strategy to quantum machine learning uses quantum neural networks (QNNs), which are quantum analogs of artificial neural networks (NNs). Much progress has been made in applications of QNN in various topics [9–11], including quantum autoencoder [12, 13], supervised learning [14–17], dynamic learning [18–20], quantum chemistry [21], and quantum metrology [22–24]. Similar to the field of machine learning, a crucial challenge of quantum machine learning is to design powerful and efficient QNN models for quantum learning tasks, which requires a theoretical understanding of how structural properties of QNN may affect its expressive power. The expressive power of a QNN model can be characterized by the function classes that it can approximate. Recently, the universal approximation property (UAP) of QNN models has been ∗Corresponding author. [email protected] †Z. Y. and H. Y. contributed equally to this work. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). investigated, which is similar to the universal approximation theorem [25, 26] in machine learning theory. The authors of [27] suggested that a QNN model can be written as a partial Fourier series in the data and proved the existence of a multi-qubit QNN model that can realize a universal function approximator. The UAP of single-qubit models remains an open conjecture, due to the difficulties in analyzing the flexibility of Fourier coefficients. 
Another work [28] considered hybrid classicalquantum neural networks and obtained the UAP by using the Stone-Weierstrass theorem. Ref. [29] proved that even a single-qubit hybrid QNN can approximate any bounded function. The above results of UAP show that the expressivity of QNNs is strong, but it does not reveal the relationship between the structural properties of a QNN and its expressive ability. Therefore the UAP may not be a good guide for constructing QNN models with practical interests. In particular, it is worth noting that the existence proof in Ref. [27] is under the assumption of multi-qubit systems, exponential circuit depth, and arbitrary observables, which does not explicitly give the structure of QNNs. Meanwhile, Refs. [28, 29] demonstrated the construction of QNNs in detail, but it is unclear whether the powerful expressivity comes from the classical part or the quantum part of hybrid models. Moreover, a systematic analysis of how parameters in the QNN affect the classes of functions that it can approximate is missing. The absence of these theoretical foundations hinders the understanding on the expressive power and limitation of QNNs, which makes it highly necessary but challenging to design effective and efficient QNNs. To theoretically investigate the expressivity of QNNs, it is important to study the simplest case of single-qubit QNNs, just like the celebrated universal approximation theorem first showing the expressivity of depth-2 NNs [25, 26]. In this paper, we formulate an analytical framework that correlates the structural properties of a single-qubit native QNN and its expressive power. We consider data re-uploading models that consist of interleaved data encoding circuit blocks and trainable circuit blocks [30]. First, we prove that there exists a single-qubit native QNN that can express any Fourier series, which is a universal approximator for any square-integrable univariate function. It solves the open problem on the UAP of single-qubit QNNs in Ref. [27]. Second, we systematically analyze how parameters in trainable circuit blocks affect the Fourier coefficients. The main results on the expressivity of QNNs are summarized as in Fig. 1. Third, we discuss potential difficulties for singlequbit native QNNs to approximate multivariate functions. Additionally, we compare native QNNs with the hybrid version and show the fundamental difference in their expressive power. We also demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments on approximating univariate and multivariate functions. Our analysis, beyond the UAP of QNNs, improves the understanding of the relationship between the expressive power and the structure of QNNs. This fundamental framework provides a theoretical foundation for data re-uploading QNN models, which is helpful to construct effective and efficient QNNs for quantum machine learning tasks. We will start by giving some background and defining the native QNN models in the next section, and then analyze the expressivity of single-qubit native QNNs in Section 3. In Section 4, we discuss the limitation of single-qubit native QNNs and compare native QNNs with hybrid QNNs, which shows the fundamental difference between their expressive power. The numerical experiments on the expressivity and limitations of single-qubit native QNNs are described in Section 5. 
2 Preliminaries 2.1 A primer on quantum computing Quantum state The basic unit of information in quantum computation is one quantum bit, or qubit for short. Just like a classical bit has a state in either 0 or 1, a qubit also has a state. A single-qubit state is a unit vector in a 2-dimensional Hilbert space C2, which is commonly denoted in Dirac notation |ψ⟩ = α |0⟩ + β |1⟩, where |0⟩ = (1, 0)T and |1⟩ = (0, 1)T are known as computational basis states. Here |ψ⟩ denotes a column vector and its conjugate transpose ⟨ψ| := |ψ⟩† is a row vector. Then the inner product ⟨ψ|ψ⟩ = ∥ψ∥2 denotes the square of L2-norm of |ψ⟩. Note that |ψ⟩ is a normalized state so ⟨ψ|ψ⟩ = |α|2 + |β|2 = 1. Having this constraint, a single-qubit state can be represented as a point at surface of a Bloch sphere, written as |ψ⟩ = cos(θ/2) |0⟩+ eiϕ sin(θ/2) |1⟩, where θ and ϕ are re-interpreted as azimuthal angle and polar angle in spherical coordinates. More generally, a quantum state of n qubits can be represented as a normalized vector in the n-fold tensor product Hilbert space C2n . Quantum gate Quantum gates are basic operations used to manipulate qubits. Unlike some classical logical gates, quantum gates are reversible, so they can be represented as unitary transformations in the Hilbert space. A unitary matrix U satisfies U†U = UU† = I . A commonly used group of single-qubit quantum gates is the Pauli gates, which can be written as Pauli matrices: X = [ 0 1 1 0 ] , Y = [ 0 −i i 0 ] , Z = [ 1 0 0 −1 ] . (1) The Pauli X , Y , and Z gates are equivalent to a rotation around the x, y, and z axes of the Bloch sphere by π radians, respectively. A group of more general gates is the rotation operator gates {RP (θ) = e−i θ 2P | P ∈ {X,Y, Z}}, which allows the rotating angle around the x, y and z axes of the Bloch sphere to be customized. They can be written in the matrix form as RX(θ) = [ cos θ2 −i sin θ 2 −i sin θ2 cos θ 2 ] , RY (θ) = [ cos θ2 − sin θ 2 sin θ2 cos θ 2 ] , RZ(θ) = [ e−i θ 2 0 0 ei θ 2 ] . (2) Quantum measurement A measurement is a quantum operation to retrieve classical information from a quantum state. The simplest measurement is the computational basis measurement; for a single-qubit state |ψ⟩ = α |0⟩+β |1⟩, the outcome of such a measurement is either |0⟩ with probability |α|2 or |1⟩ with probability |β|2. Computational basis measurements can be generalized to Pauli measurements, where Pauli matrices are observables that we can measure. For example, measuring Pauli Z is equivalent to the computational basis measurement, since |0⟩ and |1⟩ are eigenvectors of Z with corresponding eigenvalues ±1. Pauli Z measurement returns +1 if the resulting state is |0⟩ and returns −1 if the resulting state is |1⟩. We can calculate the expected value of Pauli Z measurement when the state is |ψ⟩: ⟨ψ|Z |ψ⟩ = (α∗ ⟨0|+ β∗ ⟨1|)Z(α |0⟩+ β |1⟩) = |α|2 − |β|2. (3) Pauli measurements can be extended to the case of multiple qubits by a tensor product of Pauli matrices. 2.2 Data re-uploading quantum neural networks We consider the data re-uploading QNN model [30], which is a generalized framework of quantum machine learning models based on parameterized quantum circuits [31]. A data re-uploading QNN is a quantum circuit that consists of interleaved data encoding circuit blocks S(·) and trainable circuit blocks V (·), Uθ,L(x) = V (θ0) L∏ j=1 S(x)V (θj), (4) where x is the input data, θ = (θ0, . . . ,θL) is a set of trainable parameters, and L denotes the number of layers. 
It is common to build the data encoding blocks and trainable blocks using the most prevalent parameterized quantum operators {RX , RY , RZ}. We define the output of this model as the expectation value of measuring some observable M , fθ,L(x) = ⟨0|U†θ,L(x)MUθ,L(x) |0⟩ . (5) Note that some data re-uploading QNNs introduce trainable weights in data pre-processing or postprocessing, which are considered as hybrid QNNs. For example, the data encoding block defined as S(w · x) is essentially equivalent to feeding data x into a neuron with weight w and then uploading the output to an encoding block S(·). Such a mixing structure makes it hard to tell whether the expressive power comes from the classical or quantum part. To solely study the expressive power of QNNs, we define the concept of native QNN, where all trainable weights are introduced by parameters of tunable quantum gates so that they can be distinguished from a hybrid QNN. Throughout this paper, we simply refer to the native QNN as QNN for short unless specified otherwise. 3 Expressivity of single-qubit QNNs To better understand the expressive power of QNNs, we start investigating the simplest case of single-qubit models. Ref. [27] investigated the expressive power of QNNs using the Fourier series formalism. In this section, we establish an exact correlation between the single-qubit QNN and the Fourier series in terms of both the frequency spectrum and Fourier coefficients. Note that we consider one-dimensional input data for now, which corresponds to the class of univariate functions. A Fourier series is an expansion of a periodic function f(x) in infinite terms of a sum of sines and cosines which can be written in the exponential form as f(x) = ∞∑ n=−∞ cne i 2πT nx, (6) where cn = 1 T ∫ T f(x)ei 2π T nxdx (7) are the Fourier coefficients. Here T is the period of function f(x). The quantities n 2πT are called the frequencies, which are multiples of the base frequency 2πT . The set of frequency {n 2π T }n is called the frequency spectrum of Fourier series. In approximation theory, a partial Fourier series (or truncated Fourier series) sN (x) = N∑ n=−N cne i πT nx (8) is commonly used to approximate the function f(x). A partial Fourier series can be transformed to a Laurent polynomial P ∈ C[z, z−1] by the substitution z = ei 2πT x, i.e., P (z) = N∑ n=−N cnz n. (9) A Laurent polynomial P ∈ F[z, z−1] is a linear combination of positive and negative powers of the variable z with coefficients in F. The degree of a Laurent polynomial P is the maximum absolute value of any exponent of z with non-zero coefficients, denoted by deg(P ). We say that a Laurent polynomial P has parity 0 if all coefficients corresponding to odd powers of z are 0, and similarly P has parity 1 if all coefficients corresponding to even powers of z are 0. Following the pattern of Fourier series, we first consider using RZ(x) = e−ixZ/2 to encode the input x and let RY (·) be the trainable gate. We can write the QNN as UYZYθ,L(x) = RY (θ0) L∏ j=1 RZ(x)RY (θj), (10) and the quantum circuit is shown in Fig. 2. To characterize the expressivity of this kind of basic QNN, we first rigorously show that the QNN UYZYθ,L(x) can be represented in the form of a partial Fourier series with real coefficients. Lemma 1 There exist θ = (θ0, θ1, . . . , θL) ∈ RL+1 such that UYZYθ,L(x) = [ P (x) −Q(x) Q∗(x) P ∗(x) ] (11) if and only if real Laurent polynomials P,Q ∈ R[eix/2, e−ix/2] satisfy 1. deg(P ) ≤ L and deg(Q) ≤ L, 2. P and Q have parity L mod 2, 3. ∀x ∈ R, |P (x)|2 + |Q(x)|2 = 1. 
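As a quick numerical sanity check of Lemma 1 (a sketch with our own helper names, not code from the paper), one can evaluate the YZY model of Eq. (10) on a uniform grid over one period and read off its Fourier coefficients with an FFT: the spectrum is supported on integer frequencies of magnitude at most L, and the coefficients are real up to floating-point error, consistent with the lemma.

```python
import numpy as np

def RY(t): return np.array([[np.cos(t/2), -np.sin(t/2)],
                            [np.sin(t/2),  np.cos(t/2)]])
def RZ(t): return np.array([[np.exp(-1j*t/2), 0],
                            [0, np.exp(1j*t/2)]])
Z, ket0 = np.diag([1.0, -1.0]), np.array([1.0, 0.0])

def f_yzy(x, thetas):
    """Pauli-Z output of the YZY model of Eq. (10)."""
    U = RY(thetas[0])
    for t in thetas[1:]:
        U = U @ RZ(x) @ RY(t)
    psi = U @ ket0
    return np.real(np.conj(psi) @ Z @ psi)

L = 4
rng = np.random.default_rng(1)
thetas = rng.uniform(0, 2*np.pi, size=L + 1)

# Sample one 2*pi period and recover the Fourier coefficients with an FFT.
N = 64
xs = 2*np.pi*np.arange(N)/N
c = np.fft.fft([f_yzy(x, thetas) for x in xs]) / N
freqs = [k if k <= N//2 else k - N for k in range(N)]

# (i) no frequency beyond |n| = L, (ii) coefficients are numerically real,
# so the output is a degree-L Fourier series with real coefficients (an even function).
assert max(abs(c[k]) for k in range(N) if abs(freqs[k]) > L) < 1e-8
assert max(abs(ck.imag) for ck in c) < 1e-8
print("nonzero frequencies:", sorted(k for k, ck in zip(freqs, c) if abs(ck) > 1e-8))
```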
Lemma 1 decomposes the unitary matrix of the QNN UYZYθ,L(x) into Laurent polynomials with real coefficients, which can be used to represent a partial Fourier series with real coefficients. The proof of Lemma 1 uses a method of mathematical induction that is in the similar spirit of the proof of quantum signal processing [32–35], which is a powerful subroutine in Hamiltonian simulation [4] and quantum singular value transformation [35]. The forward direction is straightforward by the definition of UYZYθ,L(x) in Eq. (10). The proof of the backward direction is by induction in L where the base case L = 0 holds trivially. For L > 0, we prove that for any UYZYθ,L(x) where P,Q satisfy the three conditions, there exists a unique block R†Y (θk)R † Z(x) such that polynomials P̂ and Q̂ in UYZYθ,L(x)R † Y (θk)R † Z(x) satisfy the three conditions for L − 1. Lemma 1 explicitly correlates the frequency spectrum of the Fourier series and the number of layers L of the QNN. The proof of Lemma 1 also illustrates the exact correspondence between the Fourier coefficients and parameters of trainable gates. A detailed proof can be found in Appendix A.1. Other than characterizing the QNN with Laurent polynomials, we also need to specify the achievable Laurent polynomials P (x) for which there exists a correspondingQ(x) satisfying the three conditions in Lemma 1. It has been proved in Refs. [32, 34] that the only constraint is |P (x)| ≤ 1 for all x ∈ R. That is, for any P ∈ R[eix/2, e−ix/2] with deg(P ) ≤ L and parity L mod 2, if |P (x)| ≤ 1 for all x ∈ R, there exists a Q ∈ R[eix/2, e−ix/2] with deg(P ) ≤ L and parity L mod 2 such that |P (x)|2 + |Q(x)|2 = 1 for all x ∈ R. By Lemma 1, the partial Fourier series corresponding to the QNN UYZYθ,L(x) only has real coefficients. With the exponential form of Eq. (6), a Fourier series with real coefficients only has cos(nx) terms, which means UYZYθ,L(x) can be used to approximate any even function on the interval [−π, π]. Thus we establish the following proposition, whose proof is deferred to Appendix A.2. Proposition 2 For any even square-integrable function f : [−π, π] → R and for all ϵ > 0, there exists a QNN UYZYθ,L(x) such that |ψ(x)⟩ = UYZYθ,L(x) |0⟩ satisfies ∥ ⟨ψ(x)|Z|ψ(x)⟩ − αf(x)∥ ≤ ϵ (12) for some normalizing constant α. Although the above result states that the QNN UYZYθ,L(x) |0⟩ is able to approximate a class of even functions within arbitrary precision, we can see that the main limitation of the expressive power of QNN UYZYθ,L(x) is the real Fourier coefficients, which may restrict its universal approximation capability. To tackle this issue, our idea is to introduce complex coefficients to the corresponding Laurent polynomials, which can be implemented by adding a trainable Pauli Z rotation operator in each layer. Specifically, we consider the QNN UWZWθ,ϕ,L(x) = RZ(φ)W (θ0, ϕ0) L∏ j=1 RZ(x)W (θj , ϕj), (13) where each trainable block is W (θj , ϕj) := RY (θj)RZ(ϕj). Here we add an extra RZ(φ) gate to adjust the relative phase between P and Q. The quantum circuit of UWZWθ,ϕ,L(x) is illustrated in Fig. 3. To characterize the capability of this QNN, we establish the following Lemma which implies UWZWθ,ϕ,L(x) can express any Fourier partial sum with complex Fourier coefficients. Lemma 3 There exist θ = (θ0, θ1, . . . , θL) ∈ RL+1 and ϕ = (φ, ϕ0, ϕ1, . . . , ϕL) ∈ RL+2 such that UWZWθ,ϕ,L(x) = [ P (x) −Q(x) Q∗(x) P ∗(x) ] (14) if and only if Laurent polynomials P,Q ∈ C[eix/2, e−ix/2] satisfy 1. deg(P ) ≤ L and deg(Q) ≤ L, 2. P and Q have parity L mod 2, 3. 
∀x ∈ R, |P (x)|2 + |Q(x)|2 = 1. Lemma 3 demonstrates a decomposition of the QNN UWZWθ,ϕ,L(x) into Laurent polynomials with complex coefficients, which can be used to represent a partial Fourier series with complex coefficients in form of Eq. (8). The proof of Lemma 3 is similar to the proof of Lemma 1 and its details are provided in Appendix A.3. Again, the proof demonstrates the effect of parameterized gates on the control of Fourier coefficients. Similarly, the constraint for the achievable complex Laurent polynomials P (x) in UWZWθ,ϕ,L(x) is that |P (x)| ≤ 1 for all x ∈ R, as proved in Refs. [36, 37]. We then prove in the following Theorem 4 that UWZWθ,ϕ,L(x) is able to approximate any square-integrable function within arbitrary precision, using the well-established result in Fourier analysis. The detailed proof is deferred to Appendix A.4. Theorem 4 (Univariate approximation properties of single-qubit QNNs.) For any univariate square-integrable function f : [−π, π] → R and for all ϵ > 0, there exists a QNN UWZWθ,ϕ,L(x) such that |ψ(x)⟩ = UWZWθ,ϕ,L(x) |0⟩ satisfies ∥ ⟨ψ(x)|Z|ψ(x)⟩ − αf(x)∥ ≤ ϵ (15) for some normalizing constant α. Up till now we only let the encoding gate be the RZ(x) gate, what if we use other rotation operator gates to encode the data? It actually does not matter which one we choose as the encoding gate if the trainable gates are universal. Note that Pauli rotation operators RX(x), RY (x), RZ(x) have two eigenvalues cos(x/2)± i sin(x/2), and they can be diagonalized as Q†RZ(x)Q. Merging unitaries Q† and Q to universal trainable gates gives the QNN that uses RZ(x) as the encoding gate. We hereby define the generic single-qubit QNNs as UUZUθ,ϕ,λ,L(x) = U3(θ0, ϕ0, λ0) L∏ j=1 RZ(x)U3(θj , ϕj , λj), (16) where each trainable block is the generic rotation gate U3(θ, ϕ, λ) = [ cos θ2 −e iλ sin θ2 eiϕ sin θ2 e i(ϕ+λ) cos θ2 ] . (17) By definition, any L-layer single-qubit QNN, including UWZWθ,ϕ,L, can be expressed as U UZU θ,ϕ,λ,L. Thus UUZUθ,ϕ,λ,L is surely a universal approximator. 4 Limitations of single-qubit QNNs We have proved that a single-qubit QNN is a universal approximator for univariate functions, it is natural to consider its limitations. Is there a single-qubit QNN that can approximate arbitrary multivariate functions? We answer this question from the perspective of multivariate Fourier series. Specifically, we consider the generic form of single-qubit QNNs defined in Eq. (16) and upload the classical data x := (x(1), x(2), · · · , x(d)) ∈ Rd as Uθ,L(x) = U3(θ0, ϕ0, λ0) L∏ j=1 RZ(xj)U3(θj , ϕj , λj), (18) where each xj ∈ x and L ∈ N+. Without loss of generality, assume that each dimension x(i) is uploaded the same number of times, denoted by K. Naturally, we have Kd = L. Further, we rewrite the output of QNNs defined in Eq. (5) as the following form. fθ,L(x) = ∑ ω∈Ω cωe iω·x, (19) where Ω = {−K, · · · , 0, · · · ,K}d, and the cω is determined by parameters θ and the observable M . A detailed analysis can be found in Appendix B. We can see that Eq. (19) cannot be represented as a K-truncated multivariate Fourier series. Specifically, by the curse of dimensionality, it requires exponentially many terms in d to approximate a function in d dimensions. However, for fθ,L(x), the degrees of freedom grow linearly with the number of layers L. It implies that single-qubit native QNNs potentially lack the capability to universally approximate arbitrary multivariate functions from the perspective of the Fourier series. 
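A back-of-the-envelope count makes this dimensionality argument concrete; the numbers below are our own illustration of the scaling discussed above, not results from the paper.

```python
# A K-truncated multivariate Fourier series over d variables has (2K + 1)^d
# complex coefficients, while an L-layer single-qubit circuit with U3 trainable
# blocks (L = K * d encoding gates) carries only about 3 * (L + 1) real parameters.
K = 4                              # times each coordinate is uploaded
for d in (1, 2, 4, 8):
    L = K * d                      # total number of encoding layers
    n_coeffs = (2 * K + 1) ** d    # grows exponentially in d
    n_params = 3 * (L + 1)         # grows linearly in L
    print(f"d={d}: |Omega| = {n_coeffs:>10}, trainable parameters ~ {n_params}")
```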
Despite the potential limitation of native QNNs in multivariate approximation, it has been proved that a single-qubit hybrid QNN can approximate arbitrary multivariate functions [28, 29]. However, the UAP of hybrid QNNs is fundamentally different from the native model that we investigated. Those hybrid models involve trainable weights either in data pre-processing or post-processing. Specifically, introducing trainable weights in data pre-processing is equivalent to multiplying each frequency of the Fourier series by an arbitrary real coefficient, i.e. S(wx) = RZ(wx) = e −iw x2Z . (20) Therefore it enriches the frequency spectrum of native QNNs, which only contain integer multiples of the fundamental frequency. It can also be readily extended to the encoding of multi-dimensional data x := (x(1), x(2), · · · , x(d)) as RZ(w1x (1))RZ(w2x (2)) · · ·RZ(wdx(d)) = RZ(w · x) = e− 1 2 iw·xZ , (21) where w = (w1, . . . , wd) is a vector of trainable weights. Using such an encoding method enables a single-qubit QNN to approximate any continuous multivariate function [29]. We notice that, although the trainable weights enrich the frequency spectrum of the Fourier series, the capability of hybrid QNNs to approximate arbitrary multivariate functions is not obtained using the multivariate Fourier series, but the universal approximation theorem [25, 26] in machine learning theory. In another word, the multivariate UAP of a hybrid QNN mostly comes from the classical structure, and the QNN serves as an activation function σ(x) = e−ix in the universal approximation theorem. This fact might be able to shed some light on the reason why a hybrid QNN does not provide quantum advantages over the classical NN. 5 Numerical experiments In order to better illustrate the expressive power of single-qubit native QNNs, we supplement the theoretical results with numerical experiments. Specifically, we demonstrate the flexibility and approximation capability of single-qubit native QNNs in Section 5.1. The limitations of single-qubit QNNs are illustrated in Section 5.2 through the numerical experiments on approximating multivariate functions. All simulations are carried out with the Paddle Quantum toolkit on the PaddlePaddle Deep Learning Platform, using a desktop with an 8-core i7 CPU and 32GB RAM. 5.1 Univariate function approximation A damping function f(x) = sin (5x)/5x is used to demonstrate the approximation performance of single-qubit native QNN models. The dataset consists of 300 data points uniformly sampled from the interval [0, π], from which 200 are selected for the training set and 100 for the test set. Since the function f(x) is an even function, we use the QNN model as defined in Eq. (10). The parameters of trainable gates are initialized from the uniform distribution on [0, 2π]. We adopt a variational quantum algorithm, where a gradient-based optimizer is used to search and update parameters in the QNN. The mean squared error (MSE) serves as the loss function. Here the Adam optimizer is used with a learning rate of 0.1. We set the training iterations to be 100 with a batch size of 20 for all experiments. While approximating a function f(x) by a truncated Fourier series, the approximation error decreases as the number of expansion terms increases. As shown in Lemma 3, the frequency spectrum and Fourier coefficients will be extended by consecutive repetitions of the encoding gate and trainable gate. The numerical results in Fig. 
4 illustrate that the approximation error decreases as the number of layers increases, which are consistent with our theoretical analysis. To further show the flexibility and capability of single-qubit QNNs, we pick a square wave function as the target function. The training set contains 400 data points sampled from the interval [0, 20]. The numerical results are illustrated in Fig. 5. By simply repeating 45 layers, the single-qubit QNN UWZWθ,ϕ,L(x) learns the function hidden beneath the training data. In particular, the approximation works well not only for input variables located between the training data but also outside of the region, because the Fourier series has a natural capability in dealing with periodic functions. 5.2 Multivariate function approximation We numerically demonstrate the limitations of single-qubit native QNNs in approximate multivariate functions. We examine the convergence of the loss as the number of layers of the circuit increases and compare the outcome with the target function. Specifically, we consider a bivariate function f(x, y) = (x2 + y − 1.5π)2 + (x+ y2 − π)2 as the target function. Note that f(x, y) is normalized on the interval [−π, π]2, i.e., −1 ≤ f(x, y) ≤ 1. The training set consists of 400 data points sampled from interval [−π, π]2. We use the singlequbit QNN with various numbers of layers defined as Eq. (18) to learn the target function. The experimental setting is the same as in the univariate function approximation. In order to reduce the effect of randomness, the experimental results are averaged over 5 independent training instances. Fig. 6 shows that the single-qubit native QNN has difficulty in approximating bivariate functions. The approximation result of QNN as shown in Fig. 6b is quite different from the target function, even for a very deep circuit of 40 layers. Also, the training loss in Fig. 6c does not decrease as the number of layers increases. Note that the target function is only bivariate here, the limitations of single-qubit native QNNs will be more obvious in the case of higher dimensions. We further propose a possible strategy that extends single-qubit QNNs to multiple qubits, which could potentially overcome the limitations and handle practical classification tasks, see Appendix C for details. 6 Conclusion and outlook In this work, we presented a systematic investigation of the expressive power of single-qubit native QNNs, which are capable to approximate any square-integrable univariate function with arbitrary precision. We not only give an existence proof but also analytically show an exact mapping between native QNNs and the partial Fourier series from perspectives of both frequency spectrum and Fourier coefficients, which solves an open problem on the UAP of single-qubit QNNs in Ref. [27]. Our proof, inspired by quantum signal processing, explicitly illustrates the correlation between parameters of trainable gates and the Fourier coefficients. Other than the expressivity, we also discuss the limitation of single-qubit QNNs from the perspective of multivariate Fourier series. Both the expressivity and limitation of single-qubit QNNs are validated by numerical simulations. We expect our results provide a fundamental framework to the class of data re-uploading QNNs, which serves as insightful guidance on the design of such QNN models. 
Although the expressive power of a single-qubit QNN have been well investigated, it may not be an ideal model in practice due to the potential limitations on approximating multivariate functions. Moreover, single-qubit models can be efficiently simulated by classical computers and hence cannot bring any quantum advantage. The multi-qubit QNNs as shown in Ref. [27] and in Appendix C might require exponential circuit depth, which is impractical to implement and also does not fit the systematic analysis for the single-qubit case. Therefore one future step is to efficiently generalize the framework of single-qubit QNNs to multi-qubit cases. One promising approach is to encode data into multiqubit unitaries by block encoding and then mapping higher-dimensional operations on multi-qubit systems to single-qubit gates by qubitization [38]. Such techniques are originally used in multi-qubit extensions of quantum signal processing, such as quantum singular value transformation [35] and quantum phase processing [37]. By the connection between single-qubit QNNs and quantum signal processing, block encoding and qubitization may lead to useful QNN models for multi-qubit cases and establish corresponding systematic analyses. A recent paper presents a method that extends quantum signal processing to multivariate [39], which might also be applicable to single-qubit QNNs. We believe our results and their possible extensions would improve our understanding of QNNs and provide a helpful guideline for designing powerful QNNs for machine learning tasks. Acknowledgments and Disclosure of Funding We would like to thank Runyao Duan for helpful suggestions on quantum signal processing. We also thank Guangxi Li, Geng Liu, Youle Wang, Haokai Zhang, Lei Zhang, and Chengkai Zhu for useful comments. Z. Y. and H. Y. contributed equally to this work. Part of this work was done when Z. Y., H. Y., and M. L. were research interns at Baidu Research.
1. What is the focus and contribution of the paper on quantum neural networks (QNNs)?
2. What are the strengths of the proposed approach, particularly in terms of its foundation and structure?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. Do you have any concerns or questions about the methodology, such as the choice of dataset size or the use of a damping function?
5. Are there any limitations to the QNN model that the authors have proposed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors show that a data re-uploading QNN with only $R_Z(x)$ encoding gates and $R_Y(\cdot)$ as trainable gates can be represented as a truncated Fourier series with real coefficients. Thus QNNs of this form can approximate even functions arbitrarily well. The authors extend the QNN model with trainable gates of the form $W(\theta_j, \phi_j) := R_Y(\theta_j) R_Z(\phi_j)$ to represent complex Fourier coefficients. Using established results from Fourier analysis, the authors show that the extended QNN model is a universal function approximator in the class of square-integrable functions. Within the Fourier series framework, the authors provide arguments for the inadequacy of their QNN ansatz to universally approximate multivariate functions. Numerical experiments were performed to show the increasing approximation capability with an increasing number of layers of the QNN model, and the extrapolation ability of the QNN was demonstrated for a simple periodic function. In the end, the single-qubit QNN is tested on a bivariate function, where the performance is, as expected, not optimal.
Strengths And Weaknesses
Pros:
- the foundational presentation of the topic was clearly structured
Cons:
- the universal approximation theorem for QNNs was already proven in M. Schuld et al. 2021
- the concept of data re-uploading was also already established in A. Perez-Salinas et al. 2020, thus the originality of the paper is not clear
- though the prior work was cited, the results were not discussed and were not set into perspective with this work
Questions
- What is the difference between Lemma 1 and 3 except for the additional parameters?
- Why do you use only 200+100 data points for training/testing of your 1000 point dataset?
- Why do you use the damping function? Have you considered a more diverse set of functions?
- Will the code be available to reproduce the findings?
Limitations
The authors adequately show the limitation of their model in the bivariate case. No societal impact was discussed.
NIPS
Title LIIR: Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning Abstract A great challenge in cooperative decentralized multi-agent reinforcement learning (MARL) is generating diversified behaviors for each individual agent when receiving only a team reward. Prior studies have paid many efforts on reward shaping or designing a centralized critic that can discriminatively credit the agents. In this paper, we propose to merge the two directions and learn each agent an intrinsic reward function which diversely stimulates the agents at each time step. Specifically, the intrinsic reward for a specific agent will be involved in computing a distinct proxy critic for the agent to direct the updating of its individual policy. Meanwhile, the parameterized intrinsic reward function will be updated towards maximizing the expected accumulated team reward from the environment so that the objective is consistent with the original MARL problem. The proposed method is referred to as learning individual intrinsic reward (LIIR) in MARL. We compare LIIR with a number of state-of-the-art MARL methods on battle games in StarCraft II. The results demonstrate the effectiveness of LIIR, and we show LIIR can assign each individual agent an insightful intrinsic reward per time step. 1 Introduction Many real-world problems, such as traffic light control [1], coordination of autonomous vehicles [2], resources management [3] and multi-player video games [4, 5], can be naturally formulated into cooperative multi-agent systems, where the objective is to maximize the return in the perspective of a team of agents. When the agents are manipulated with a centralized controller which could access the joint or global state of all the agents, coordination among the agents is easier and the main effort of the controller is usually paid on finding an effective communication scheme among the agents. Examples include a wide range of approaches on designing effective centralized MARL architectures [5, 6, 7, 8]. ∗Equal contribution. Correspondence to the first two authors. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. Unfortunately, when the agents are independently deployed and communications are disabled or prohibitive, each agent has to predict its own action conditioning on its partial observation trajectory. Without a centralized controller, each agent is responsible to collaborate with others on its own decision. This pushes much burden on the capability of each agent’s policy. Worse still, in most of the real-world MARL applications, the agents only receive a team reward, from which it is difficult to deduce each agent’s contribution to the team’s success, making the learning and collaboration among agents nontrivial. There have been many efforts paid on distinguishing the agents’ credit, known as the credit assignment problem in MARL [9, 10]. A general way is reward shaping [5, 11, 12], which, however, requires abundant human labor to assign precise rewards to each individual agent. Under some real-world tasks, such as reducing the latency in a traffic network, there might even not exist any clear choice of the reward functions for an individual agent (vehicle in the example). Another branch of commonly adopted methods try to design a centralized critic that is capable to distinguish the state-action values of the agents during training [9, 10], and then perform decentralized executions during testing. 
Our approach builds a connection between reward shaping and critic learning. That is, we propose to learn each agent a parameterized individual intrinsic reward function by maximizing a centralized critic. The optimal intrinsic reward problem has been introduced in [13] for single agent learning scenarios and studied in some recent RL approaches [14, 15, 16]. Inspired by the concept, we propose to introduce the intrinsic reward design into multi-agent systems to distinguish the contributions of the agents when the environment only returns a team reward. Specifically, we learn each agent a parameterized intrinsic reward function that outputs an intrinsic reward for that agent at each time step to induce diversified behaviors. With these intrinsic rewards, we define each agent a distinct proxy expected discounted return that is a combination of the real team reward from the environment and the learned intrinsic reward. Using the actor-critic method, the individual policy of each agent is updated under the direction of the corresponding proxy critic. The parameters of the intrinsic reward functions are updated to maximize the standard accumulated discounted team return from the environment. Therefore, the objective of the entire procedure is consistent with that of the original MARL problem. Insightfully, from an optimization perspective, the proposed method can be categorized to the bilevel optimization, where the problem of solving individual proxy objectives is nested within the outer optimization task which maximizes the standard multi-agent return. The parameters of the policy and the intrinsic reward function are treated as the parameters of the inner and outer optimization problems, respectively. We refer the proposed method to as learning individual intrinsic reward (LIIR) in MARL. Empirically, we show that LIIR outperforms a number of state-of-the-art MARL approaches on extensive settings in the battle game of StarCraft II. We also conduct insightful case studies to visualize the learned intrinsic reward, and the results demonstrate that the learned intrinsic reward function can generate diverse reward signals for the agents and the agents can also act diversely in a collaborative way. 2 Related Work When considering a centralized controller in MARL, the controller takes the joint or global observation of the agents as input and outputs multiple actions for the agents in one step. Many studies have been proposed on pursuing effective communication architecture among the agents within a centralized controller. For example, densely connected communication layers or modules have been embedded in a centralized controller that directly outputs multi-head predictions for the agents [6, 5]. Recurrent neural networks (RNN) have also been introduced to enable a sequence of agents to communicate through the recurrent module [7]. However, in many MARL applications, the agents have to be separately deployed that each agent has to make its own decision conditioning on its partial observation. Decentralized methods naturally deal with the above situation. The simplest approach is learning an individual policy or Q-function for each agent. This was first attempted with Q-learning [17], which was then extended with deep neural networks applied [18, 19]. Fully decentralized methods are limited under the case where only a team reward is given, since distinguishing the agents’ contributions is difficult. 
To address the credit assignment problem in decentralized MARL, many existing methods utilize the framework with a centralized critic and decentralized policy. That is, the policies are deployed independently by taking individual observation as input, while the centralized critic focuses on quantifying the differences among the agents. For example, the counterfactual multi-agent policy gradient [9] uses a counterfactual baseline to assign credits for the agents; the value decomposition network [20] decomposes the centralized value into a sum of individual agent values to discriminate their contributions; the QMIX [10] method adopts a similar idea that assumes the centralized Q-value function is monotonically increasing with the individual Q-values. Most of the existing methods focus on the architecture design of the critic, even strong assumptions on the value functions are unavoidable. Our method differs from these approaches that rather than working on the value functions, we propose to learn each agent an intrinsic reward at each time step. The benefits are that no assumptions are attached on the value functions and the agents are allocated an explicit immediate intrinsic reward at each time step to assign their credits. Our work is also related to the optimal intrinsic reward design problem in single agent setting [21, 22, 23, 16, 24]. Some prior works have used heuristic metrics to define the intrinsic reward. For example, in [22] the intrinsic reward is defined as the squared difference between two consecutive states, and in [23] a metric named curiosity is used as the intrinsic reward. In [24] the learning of intrinsic reward is integrated with the update of the policy. A recent approach [16] proposes to parameterize the intrinsic reward function and alternatively updates the policy parameters and the intrinsic reward parameters. In this paper, we extend the setting to multi-agent system and use individual intrinsic reward function to distinguish the credits of the agents. 3 Background 3.1 Cooperative Multi-Agent Reinforcement Learning We consider a fully cooperative multi-agent system, in which the agents need to be independently deployed without a central controller. The system can be described as a tuple as 〈A, S, U, P, r, γ, ρ0〉. Let A = {1, 2, · · · , n} denote the set of n agents. Denote observation space of the agents as S = {S1, S2, · · · , Sn} and the action space of the agents as U = {U1, U2, · · · , Un} respectively. At time step t, let st = {sit}ni=1 with each sit ∈ Si being the partial observation from agent i. Accordingly, let ut = {uit}ni=1 with each uit ∈ Ui indicating the action taken by the agent i. We overload notations and use st ∈ S to refer to the true state of the environment. P (st+1|st,ut) : S × U × S → [0, 1] is the state transition function. r(st,ut) : S × U → R indicates the team reward function from the environment. In order to differentiate the team reward from the environment and the intrinsic reward that will be learned, we refer the team reward to as the extrinsic team reward rex(st,ut), following the usage in [16]. γ ∈ [0, 1) is a discount factor and ρ0 : S → R is the distribution of the initial state s0. Let πi(uit|sit) : Si × Ui → [0, 1] be a stochastic policy for agent i and denote π = {πi}ni=1. Let J ex(π) = Es0,u0,··· [Rex0 ] with Rext = ∑∞ l=0 γ lrext+l denoting the expected discounted extrinsic reward, where s0 ∼ ρ0(s0), uit ∼ πi(uit|sit) for i ∈ A, and st+1 ∼ P (st+1|st,ut). 
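For concreteness, the discounted return $R^{\text{ex}}_t$ defined above can be estimated from a sampled finite-horizon episode with a simple backward recursion; the sketch below uses our own helper name and illustrative numbers, not values from the paper.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{l>=0} gamma^l * r_{t+l}, computed backwards over a finite episode."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example with a short episode of team rewards.
print(discounted_returns([0.0, 0.0, 1.0], gamma=0.9))   # -> [0.81, 0.9, 1.0]
```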
Define the extrinsic value function as V exπ (st) = Eut,st+1,··· [Rext ]. We aim to find optimal policies π∗ = {π∗i }ni=1 that achieve the maximum expected extrinsic team reward J ex(π∗). 3.2 Centralized Learning with Decentralized Execution Centralized learning with decentralized execution (CLDE) is a commonly used architecture to learn a centralized critic to update the decentralized policies during training. In CLDE, actor-critic (AC) style methods [25, 26, 27, 28, 29] are often selected. In our case, AC algorithms use n independent parameterized policies πθi for i ∈ A and update θi by maximizing the expected extrinsic reward J ex(θ1, θ2, · · · , θn) = Es,u [Rex] using the policy gradient ∇θiJ ex(θ1, θ2, · · · , θn) = Es,u [∇θi log πθi(ui|si)Aπ(s,u)] , (1) where Aπ(s,u) is the centralized critic. There are several ways to estimate Aπ(s,u). For example, Aπ(s,u) = r ex(s,u) + V ex(s′)− V ex(s) is the standard advantage function [27, 28], where s′ is the successive state of the agents. In [9], Aπ(s,u) is defined as an estimated state-action value function minus a counterfactual baseline. 3.3 Parameterized Intrinsic Reward A recent study [16] has investigated learning a parameterized intrinsic reward function in single agent setting. The idea is to explicitly define the intrinsic reward function as rinη (s, a) for a state-action pair (s, a) of the agent, and it is summed up with the extrinsic reward rex(s, a) from the environment to serve as the return signal for updating the policy. The intrinsic reward parameter η is updated towards maximizing the expected extrinsic reward J ex. The intuition for updating η is to find the effect that the change on η would influence the extrinsic value through the change in the policy parameters. This technique can be viewed as an instance of meta learning [30, 31, 32]; the intrinsic reward function serves as a meta-learner that learns to improve the agents objective. In our case, we extend the intrinsic reward learning method to deal with decentralized MARL problem and we use the intrinsic rewards to diversely stimulate the agents to learn from the environment. 4 Method In this section, we formally propose the LIIR method. We first provide a formal definition of the considered problem based on what have been introduced in Section 3, then we introduce a bilevel optimization algorithm for solving the proposed objective. 4.1 The Objective By defining an intrinsic reward function rinηi(si, ui) which is parameterized by ηi and takes a stateaction pair (si, ui) of an individual agent i as input, we propose to assign agent i a distinct proxy reward rproxyi,t = r ex t + λr in i,t, (2) at time step t. In (2), we have omitted the arguments of the reward functions for simplicity, and λ is a hyper-parameter that balances the extrinsic team reward and the distinct intrinsic reward. Note that in the standard MARL problem with a team reward, there does not exist any distinct reward for each agent. Now, after creating each agent a proxy reward rproxyi,t at time step t, we accordingly define a discounted proxy reward for each agent i as Rproxyi,t = ∞∑ l=0 γl(rext+l + λr in i,t+l), (3) and the proxy value function for agent i as V proxyi (si,t) = Eui,t,si,t+1,···[R proxy i,t ]. (4) Different from the extrinsic (standard) value V ex, these proxy value functions V proxyi ’s do not have any physical meanings and they will be only used for updating the individual policy parameters θi’s. Now, the considered overall objective is defined as max η,θ J ex(η), (5) s.t. 
$\theta_i = \arg\max_{\theta} J^{\text{proxy}}_i(\theta, \boldsymbol{\eta}), \quad \forall i \in \{1, 2, \cdots, n\}$,

where $J^{\text{proxy}}_i := \mathbb{E}_{s_{i,0}, u_{i,0}, \cdots}[R^{\text{proxy}}_{i,0}]$ depends on $\theta_i$ and $\boldsymbol{\eta}$, $\boldsymbol{\eta}$ indicates the intrinsic reward parameter set $\{\eta_1, \eta_2, \cdots, \eta_n\}$, and $\boldsymbol{\theta}$ indicates the policy parameter set $\{\theta_1, \theta_2, \cdots, \theta_n\}$. In problem (5), the goal is to maximize $J^{\text{ex}}$ through optimizing $\boldsymbol{\eta}$, while the policy parameter $\theta_i$ is optimized by maximizing the proxy expected discounted return $J^{\text{proxy}}_i$ for agent $i$. The advantage is that by learning a distinct intrinsic reward for each agent per time step, the agents will be diversely stimulated, and this will cumulatively influence the policy learning via the policy gradient. Moreover, from an optimization perspective, problem (5) can be viewed as a bilevel optimization problem, since the problem of maximizing the individual proxy expected returns is nested within the outer optimization task, which is maximizing the extrinsic expected return. In the next subsection, we will discuss how $J^{\text{ex}}$ is connected with the intrinsic reward parameter $\boldsymbol{\eta}$.
4.2 Algorithm
As a bilevel optimization problem, at each iteration the policy parameters are updated with respect to the inner proxy tasks, while the intrinsic reward parameters are updated to maximize the extrinsic expected return. Specifically, the policy parameter of each agent is updated by the policy gradient with its proxy critic. Given a trajectory generated by the policy $\pi_{\theta_i}$, $\theta_i$ can be updated by applying the policy gradient defined in (1):
$\nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) A^{\text{proxy}}_i(s_i, u_i)$, (6)
where $A^{\text{proxy}}_i(s_i, u_i)$ is the proxy critic that can be chosen in a variety of ways [25, 26, 27, 28]. For example, $A^{\text{proxy}}_i(s_i, u_i) = R^{\text{proxy}}_i$ leads to the REINFORCE algorithm [26]. In this paper, we choose $A^{\text{proxy}}_i(s_i, u_i) = r^{\text{proxy}}_i(s_i, u_i) + V^{\text{proxy}}_{\phi_i}(s'_i) - V^{\text{proxy}}_{\phi_i}(s_i)$ as the advantage function [27, 28], where $V^{\text{proxy}}_{\phi_i}$ is the proxy value parameterized by $\phi_i$ and $s'_i$ is the next state of agent $i$ in the trajectory. Given (6) and a policy learning rate $\alpha$, the updated policy parameter $\theta'_i$ can be represented as $\theta'_i = \theta_i + \alpha \nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) A^{\text{proxy}}_i(s_i, u_i)$.
Then, we build the connection between $\boldsymbol{\eta}$ and $J^{\text{ex}}$ and specify the updating procedure for $\boldsymbol{\eta}$. Given the updated policy parameters $\theta'_i$'s, using the chain rule, we have
$\nabla_{\eta_i} J^{\text{ex}} = \nabla_{\theta'_i} J^{\text{ex}} \, \nabla_{\eta_i} \theta'_i$. (7)
The spirit of (7) is to capture the effect of a change in $\eta_i$ on $J^{\text{ex}}$ through its influence on the updated policy parameter $\theta'_i$. This is a commonly adopted technique in meta-gradient learning [30, 31, 32, 33]. Computing the meta-gradient $\nabla_{\eta_i} J^{\text{ex}}$ requires new samples generated by the updated policy parameter $\theta'_i$, while this can be avoided by reusing the samples generated by $\theta_i$ with importance sampling [16]. In (7), $\nabla_{\theta'_i} J^{\text{ex}}$ can be estimated by the stochastic gradient
$\nabla_{\theta'_i} \log \pi_{\theta'_i}(u_i|s_i) A^{\text{ex}}(s, \mathbf{u})$, (8)
where $A^{\text{ex}}(s, \mathbf{u})$ is the centralized extrinsic critic. Similar to the proxy critics, we choose $A^{\text{ex}}(s, \mathbf{u}) = r^{\text{ex}}(s, \mathbf{u}) + V^{\text{ex}}_{\varphi}(s') - V^{\text{ex}}_{\varphi}(s)$, where $V^{\text{ex}}_{\varphi}(s)$ is the extrinsic value parameterized by $\varphi$. The second term in (7) can be derived as
$\nabla_{\eta_i} \theta'_i = \nabla_{\eta_i} [\theta_i + \alpha \nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) A^{\text{proxy}}_i(s_i, u_i)] = \alpha \lambda \nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) \nabla_{\eta_i} r^{\text{proxy}}_i(s_i, u_i)$. (9)
Fig. 1 gives an illustration of the entire architecture of the LIIR method. A sketch of the optimization algorithm is presented in Algorithm 1.
Algorithm 1 The optimization algorithm for LIIR.
Input: policy learning rate $\alpha$ and intrinsic reward learning rate $\beta$.
Output: policy parameters $\boldsymbol{\theta}$ and intrinsic reward parameters $\boldsymbol{\eta}$.
1: Init: initialize $\boldsymbol{\theta}$ and $\boldsymbol{\eta}$;
2: while termination is not reached do
3: Sample a trajectory $D = \{s_0, \mathbf{u}_0, s_1, \mathbf{u}_1, \cdots\}$ by executing actions with the decentralized policies $\{\pi_{\theta_1}, \cdots, \pi_{\theta_n}\}$;
4: Update $\boldsymbol{\theta}$ according to (6) with learning rate $\alpha$;
5: Compute (8) using new samples from $\{\pi_{\theta'_1}, \pi_{\theta'_2}, \cdots, \pi_{\theta'_n}\}$, or reuse $D$ and replace (8) with $\nabla_{\theta'_i} \frac{\pi_{\theta'_i}(u_i|s_i)}{\pi_{\theta_i}(u_i|s_i)} A^{\text{ex}}(s, \mathbf{u})$;
6: Update $\boldsymbol{\eta}$ according to (7), step 5 and (9) with learning rate $\beta$;
7: end while
5 Experiments
In this section, we first evaluate LIIR on a simple 1D pursuit game specifically designed for the considered settings to see whether LIIR can learn reasonable distinct intrinsic rewards. Then, we comprehensively study LIIR in several challenging micromanagement games in the game of StarCraft II, and compare LIIR with a number of state-of-the-art MARL methods (the source codes of LIIR are available through https://github.com/yalidu/liir).
5.1 A Simple 1D Pursuit Study
We design a simple game named 1D Pursuit to provide a fast verification of the quality of the intrinsic reward learned by LIIR. In 1D Pursuit, a team of two agents are initially assigned random integers denoted by x and y respectively, and each agent can take actions from {+1, −1, 0} to either increase, decrease or keep its value, in order to approach a target value z that is unknown to the agents. For a collaborative setting, the team reward for the two agents is set to be inversely proportional to the sum of the absolute differences between their values and the target value. That is, both agents should adjust their values towards the target value. The observation of each agent is a two-dimensional vector containing its current integer value and the other agent's integer value. The team reward is set to be +0.01 if both agents take actions that approach the target value, −0.01 if both agents take actions that move away from the target value, and 0 otherwise. The target value is set to be 0. The initial integers for the two agents are randomly generated from {−10, ..., 10}. We implement LIIR based on the architecture depicted in Fig. 1. The detailed network structure is provided in the supplementary material. In Fig. 2, we plot the histogram of the distributions of the intrinsic reward averaged over 1000 episodes. We denote actions approaching the target as "Good" actions and actions moving away from the target as "Bad" actions. The result shows that LIIR can assign reasonable intrinsic rewards to the agents.
5.2 StarCraft II Micromanagement
In this subsection, we comprehensively evaluate the proposed LIIR method in the game of StarCraft II based on the learning environment SC2LE [34] and mini-game settings in SMAC [35]. We compare the LIIR method with a number of state-of-the-art MARL methods that use the CLDE architecture. We also provide some insightful case studies to visualize the learned intrinsic rewards. StarCraft II is a popular real-time strategy game and it has been studied under MARL settings [9, 10, 7, 36, 37]. In the experiments, we consider symmetric battle games in StarCraft II, where both single-type agents and mixed-type agents are considered. Specifically, the considered scenarios contain 3 Marines vs. 3 Marines (3M), 8 Marines vs. 8 Marines (8M), 2 Stalkers & 3 Zealots vs. 2 Stalkers & 3 Zealots (2S3Z), and 3 Stalkers & 5 Zealots vs. 3 Stalkers & 5 Zealots (3S5Z). In these settings, Marine and Stalker are units of Terran and Protoss, respectively, and both of them can attack enemies at a distance, while Zealot is a melee unit of Protoss and it can only attack enemies who stand close to it.
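Returning to the 1D Pursuit game of Section 5.1, the environment is simple enough to sketch in a few lines; the class below is our reconstruction from the description above (names and minor details such as tie-breaking are ours), not the authors' code.

```python
import numpy as np

class OneDPursuit:
    """Two agents each hold an integer and share a team reward:
    +0.01 if both move toward the target, -0.01 if both move away, 0 otherwise."""

    def __init__(self, target=0, low=-10, high=10, seed=0):
        self.target, self.low, self.high = target, low, high
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.values = self.rng.integers(self.low, self.high + 1, size=2)
        return self._obs()

    def _obs(self):
        # Each agent observes its own value and the other agent's value.
        v0, v1 = self.values
        return np.array([[v0, v1], [v1, v0]], dtype=float)

    def step(self, actions):               # each action is in {+1, -1, 0}
        old_dist = np.abs(self.values - self.target)
        self.values = self.values + np.asarray(actions)
        new_dist = np.abs(self.values - self.target)
        if np.all(new_dist < old_dist):
            team_reward = 0.01
        elif np.all(new_dist > old_dist):
            team_reward = -0.01
        else:
            team_reward = 0.0
        return self._obs(), team_reward

env = OneDPursuit()
obs = env.reset()
obs, r = env.step([-1, +1])   # both agents adjust their integers by one step
```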
In all these games, only the units from self side are treated as agents. Each agent is described by several attributes including the health point (HP), weapon cooling down (CD), shield (for 2S3Z and 3S5Z), unit type, last action and the relative distance of the observed units. The enemy unit is described in the same way except that CD is excluded. The partial observation of an agent is composed by the attributes of the units, including both the agents and the enemy units, shown up within its view range that is a circle with a certain radius. The action space contains 4 move directions, k attack actions where k is the fixed maximum number of the enemy units in a map, stop and none-operation. The input dimension and the output action dimension are fixed with a certain ordering over the agents and enemy units. Dead enemy units will be masked out from the action space to ensure the executed action is valid. At each time step, the agents receive a joint team reward which is defined by the total damage of the agents and the total damage from the enemy side. In all the scenarios, following the configurations in [9, 10], we train the agents against the build-in AI opponent. More detailed settings can be acquired from the SMAC environment [35]. 5.2.1 Compared Methods and Training Details The considered methods for evaluation include • independent Q-learning (IQL) [17]: IQL trains decentralized Q-functions for each agent. Since the observation and action spaces of the agents are the same within a specific environmental setting, a policy will be shared across all the agents; • independent actor-critic (IAC) [9]: IAC is similar to IQL except that it adopts the actor-critic method; • Central-V [9]: the method learns a centralized critic with decentralized policies. Similarly, all agents share the same policy network; • COMA [9]: the method learns a centralized critic that is the state-action value minus a counterfactual baseline; • QMIX [10]: the method learns decentralized Q-function for each agent with the assumption that the centralized Q-value is monotonically increasing with the individual Q-values. In the implementations, the agents share the same Q-function; • LIIR: the proposed method. In the experiments, the agents share the same policy, intrinsic reward function and proxy critic. Since each agent has its own partial observation, sharing policy parameters does not imply that they act the same. For COMA and QMIX, we use their original implementations, in which the main policy network orQnetwork consist of some fully connected (FC) layers and a GRU module.3 All the other methods adopt similar network structures compared to COMA and QMIX. As depicted in Fig. 1, the parameters of LIIR contain 4 components corresponding to the shared policy parameter θ, intrinsic reward parameter η, proxy value parameter ϕ and extrinsic value parameter φ. To achieve fair comparison, we set the policy network structure, i.e., θ, as what is exactly used for COMA’s policy network. Then, we compress the other parameters η, ϕ and φ to let their total size equal to the parameter size of the remaining part in COMA. More details can be found in the supplementary material. All the methods are trained with 3 millions of steps in 3M and 8M, and with 10 millions of steps for 2S3Z and 3S5Z. The hyper-parameter λ in (2) is set to 0.01 throughout the experiments (we tried different choices of λ while we found that the results did not differ much). 
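To see how the λ = 0.01 setting enters the learning signal, recall that the per-agent advantage of Section 4.2 combines the shared extrinsic reward with each agent's learned intrinsic reward; the snippet below uses illustrative scalar values, not numbers from the paper.

```python
lam = 0.01   # the lambda of Eq. (2) used throughout the StarCraft II experiments

def proxy_advantage(r_ex, r_in_i, v_i, v_next_i, lam=lam):
    """A^proxy_i = r^proxy_i + V^proxy(s'_i) - V^proxy(s_i),
    with r^proxy_i = r^ex + lam * r^in_i as in Eq. (2)."""
    r_proxy = r_ex + lam * r_in_i
    return r_proxy + v_next_i - v_i

# Two agents sharing the same team reward but receiving different intrinsic rewards
# end up with different advantages, and hence different policy-gradient signals.
team_r = 1.0
print(proxy_advantage(team_r, r_in_i=+0.8, v_i=0.4, v_next_i=0.5))
print(proxy_advantage(team_r, r_in_i=-0.6, v_i=0.4, v_next_i=0.5))
```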
We use a fixed learning rate of 5e-4 and use batches of 32 episodes for all the methods. We use 32 actors to generate the trajectories in parallel, and use one NVIDIA Tesla M40 GPU for training. 5.2.2 Results To evaluate the performance of each method, we freeze the training every 100 episodes and test the model over 20 episodes to compute an average test winning rate. The entire training procedure is 3https://github.com/oxwhirl/pymarl repeated for 5 times to plot the winning rate curve with standard deviation. The results are reported in Fig. 3, where the averaged winning rates vs. the training steps on all the battle scenarios are given. In 3M which is the simplest game, all the test winning rates keep increasing as the training steps increase. In 8M, 2S3Z and 3S5Z, the independent learning methods, i.e., IQL and IAC, fail to learn a good policy for the agents and the methods using a CLDE architecture always outperform the independent learning methods. In 3M and 8M, COMA and Central-V show comparable performance, while in 2S3Z and 3S5Z, Central-V outperforms QMIX and COMA. For all these scenarios, the LIIR method consistently shows the best performance, and it achieves around 90% winning rate in all the scenarios. This demonstrates that learning the intrinsic reward function can ultimately induce better trained policies. 5.2.3 Visualizing the Learned Intrinsic Reward In addition to evaluate the performance of the trained policy in Section 5.2.2, we are more curious about how much effect the learned intrinsic reward function actually contributes to the policy learning. In order to figure out what has been learned in the intrinsic reward function, we propose to explicitly visualize these rewards. That is, we plot the learned intrinsic reward of each agent at each time step in a complete trajectory during testing. It is worth mentioning that during testing the intrinsic rewards are independent with the learned policy, and these rewards will not be used at all when generating the trajectory. For clarity, we randomly choose two test replays in 3M and 2S3Z which contain fewer agents to plot all the agents’ intrinsic rewards. Figs. 4 and 5 show the intrinsic rewards in 3M and 2S3Z, respectively. We also attach some auxiliary snapshots to explain some interesting segments in the curves. In all the snapshots, the red colored units indicate the agents controlled by LIIR. In Fig. 4(a), agent 1 is dead at time step 9, and we can observe that its intrinsic reward turns to be very low after time step 6 compared to the other two agents. As revealed by Figs. 4(b) and (c), at time step 6, all the three agents focus fire on one of the enemy Marine, while agent 1 has the lowest HP; after that, agent 1 still keeps firing instead of running away from the enemies and the intrinsic reward function predicts a low rin1 , indicating that u1 = attack is not a good action at that time; finally, agent 1 dies at time step 9 and the corresponding intrinsic reward is very low. In Fig. 5(a), after time step 27, we see that agent 2’s intrinsic reward increases a lot compared to the other agents. Figs. 5(b) and (c) provides a clear explanation that at time step 27, agent 2 (with low HP) stops firing and runs along the red arrows (the move actions only take 4 directions here) to avoid the attack from the enemy Zealot; until reaching an enemy Stalker at time step 32, agent 2 starts attacking the Stalker which is finally killed. Moreover, the overall trend of both the curves in Figs. 
4(a) and 5(a) keeps increasing, indicating that the controlled team finally wins the game. Besides visualizing the two episodes illustrated above, we also provide overall statistics of the learned intrinsic reward. We collect the intrinsic reward for the action “attack” when the corresponding health points are lower than 50% from 100 test episodes. We then compute the cosine similarity (a value in [-1, 1]) between the health point and the intrinsic reward. The averaged cosine similarity is 0.55 for 2S3Z and 0.67 for 3M. The results show that the health point and intrinsic reward are positively correlated. That is, when the health point is low, the intrinsic reward is generally low for taking the “attack” action as well, which is reasonable in this scenario. The above case studies demonstrate that the learned intrinsic reward can indeed provide diverse feedback signals for the agents and these signals are very informative in evaluating the agents’ immediate behaviors. 6 Conclusion We have proposed a novel multi-agent reinforcement learning algorithm, which learns an individual intrinsic reward for each agent. The method can assign each agent a distinct intrinsic reward so that the agents are stimulated differently, even when the environment only feedbacks a team reward. Given the intrinsic reward for each agent, we define each of them a proxy critic to direct their policy learning via actor-critic algorithms. We show that the formulated multi-agent learning problem can be viewed as a bilevel optimization problem. Our empirical results carried on the battle games in StarCraft II demonstrate that learning the intrinsic reward function could eventually induce better trained policy compared with a number of state-of-the-art competitors. We further perform two case studies to visualize the learned intrinsic reward values, and the results provide clear explanations on the effects of the learned intrinsic rewards. For future work, we are interested in applying the LIIR method to more challenging scenarios, such as real-world traffic control with many agents and competitive multi-agent systems. Moreover, in addition to the simple summation form in (2), it is also interesting to investigate the optimal form of the proxy reward function. Acknowledgments The authors would like to thank anonymous reviewers for their constructive comments. Yali Du is during an internship at Tencent AI Lab when working on this project.
1. What is the focus of the paper regarding cooperative multi-agent settings?
2. What are the strengths of the proposed approach, particularly in terms of its connection to prior works?
3. What are the weaknesses of the paper, specifically regarding its technical contributions?
4. Do you have any concerns or questions about the choice of tasks used in the study?
Review
Review
The paper is well written. I do not have any clarity issues. To a large extent, the paper is a successor of the work by Zeyu et al. [16]: it is a straightforward extension of learning intrinsic reward to the cooperative multi-agent setting. Therefore, the technical contributions are somewhat limited.
PS: Another paper on multi-agent intrinsic reward: Liu, Bingyao, Satinder Singh, Richard L. Lewis, and Shiyin Qin. "Optimal rewards for cooperative agents."
I have given my list of three most significant contributions and suggestions for improvement in other sections of the review. Here I have some minor questions:
- Any particular reason why the authors did not choose all the tasks used in the COMA paper, for the purpose of comparison? In the COMA paper [9], the tasks are 3M, 5M, 5W, and 2D3Z. In this paper, we have 3M, 8M, 2S3Z, 3S5Z.
NIPS
Title LIIR: Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning Abstract A great challenge in cooperative decentralized multi-agent reinforcement learning (MARL) is generating diversified behaviors for each individual agent when receiving only a team reward. Prior studies have paid many efforts on reward shaping or designing a centralized critic that can discriminatively credit the agents. In this paper, we propose to merge the two directions and learn each agent an intrinsic reward function which diversely stimulates the agents at each time step. Specifically, the intrinsic reward for a specific agent will be involved in computing a distinct proxy critic for the agent to direct the updating of its individual policy. Meanwhile, the parameterized intrinsic reward function will be updated towards maximizing the expected accumulated team reward from the environment so that the objective is consistent with the original MARL problem. The proposed method is referred to as learning individual intrinsic reward (LIIR) in MARL. We compare LIIR with a number of state-of-the-art MARL methods on battle games in StarCraft II. The results demonstrate the effectiveness of LIIR, and we show LIIR can assign each individual agent an insightful intrinsic reward per time step. 1 Introduction Many real-world problems, such as traffic light control [1], coordination of autonomous vehicles [2], resources management [3] and multi-player video games [4, 5], can be naturally formulated into cooperative multi-agent systems, where the objective is to maximize the return in the perspective of a team of agents. When the agents are manipulated with a centralized controller which could access the joint or global state of all the agents, coordination among the agents is easier and the main effort of the controller is usually paid on finding an effective communication scheme among the agents. Examples include a wide range of approaches on designing effective centralized MARL architectures [5, 6, 7, 8]. ∗Equal contribution. Correspondence to the first two authors. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. Unfortunately, when the agents are independently deployed and communications are disabled or prohibitive, each agent has to predict its own action conditioning on its partial observation trajectory. Without a centralized controller, each agent is responsible to collaborate with others on its own decision. This pushes much burden on the capability of each agent’s policy. Worse still, in most of the real-world MARL applications, the agents only receive a team reward, from which it is difficult to deduce each agent’s contribution to the team’s success, making the learning and collaboration among agents nontrivial. There have been many efforts paid on distinguishing the agents’ credit, known as the credit assignment problem in MARL [9, 10]. A general way is reward shaping [5, 11, 12], which, however, requires abundant human labor to assign precise rewards to each individual agent. Under some real-world tasks, such as reducing the latency in a traffic network, there might even not exist any clear choice of the reward functions for an individual agent (vehicle in the example). Another branch of commonly adopted methods try to design a centralized critic that is capable to distinguish the state-action values of the agents during training [9, 10], and then perform decentralized executions during testing. 
Our approach builds a connection between reward shaping and critic learning. That is, we propose to learn for each agent a parameterized individual intrinsic reward function by maximizing a centralized critic. The optimal intrinsic reward problem has been introduced in [13] for single-agent learning scenarios and studied in some recent RL approaches [14, 15, 16]. Inspired by this concept, we introduce intrinsic reward design into multi-agent systems to distinguish the contributions of the agents when the environment only returns a team reward. Specifically, we learn for each agent a parameterized intrinsic reward function that outputs an intrinsic reward for that agent at each time step to induce diversified behaviors. With these intrinsic rewards, we define for each agent a distinct proxy expected discounted return that combines the real team reward from the environment and the learned intrinsic reward. Using the actor-critic method, the individual policy of each agent is updated under the direction of the corresponding proxy critic. The parameters of the intrinsic reward functions are updated to maximize the standard accumulated discounted team return from the environment. Therefore, the objective of the entire procedure is consistent with that of the original MARL problem. From an optimization perspective, the proposed method can be categorized as bilevel optimization, where the problem of solving the individual proxy objectives is nested within the outer optimization task, which maximizes the standard multi-agent return. The parameters of the policy and of the intrinsic reward function are treated as the parameters of the inner and outer optimization problems, respectively. We refer to the proposed method as learning individual intrinsic rewards (LIIR) in MARL. Empirically, we show that LIIR outperforms a number of state-of-the-art MARL approaches on extensive settings in the battle game of StarCraft II. We also conduct case studies to visualize the learned intrinsic reward, and the results demonstrate that the learned intrinsic reward function can generate diverse reward signals for the agents and that the agents act diversely in a collaborative way. 2 Related Work When considering a centralized controller in MARL, the controller takes the joint or global observation of the agents as input and outputs multiple actions for the agents in one step. Many studies pursue an effective communication architecture among the agents within a centralized controller. For example, densely connected communication layers or modules have been embedded in a centralized controller that directly outputs multi-head predictions for the agents [6, 5]. Recurrent neural networks (RNNs) have also been introduced to enable a sequence of agents to communicate through the recurrent module [7]. However, in many MARL applications, the agents have to be deployed separately, so that each agent has to make its own decision conditioned on its partial observation. Decentralized methods naturally deal with this situation. The simplest approach is to learn an individual policy or Q-function for each agent. This was first attempted with Q-learning [17] and later extended with deep neural networks [18, 19]. Fully decentralized methods are limited in the case where only a team reward is given, since distinguishing the agents' contributions is difficult.
To address the credit assignment problem in decentralized MARL, many existing methods adopt the framework of a centralized critic with decentralized policies. That is, the policies are deployed independently by taking individual observations as input, while the centralized critic focuses on quantifying the differences among the agents. For example, the counterfactual multi-agent policy gradient [9] uses a counterfactual baseline to assign credits to the agents; the value decomposition network [20] decomposes the centralized value into a sum of individual agent values to discriminate their contributions; the QMIX [10] method adopts a similar idea by assuming that the centralized Q-value function is monotonically increasing with the individual Q-values. Most of the existing methods focus on the architectural design of the critic, and even then strong assumptions on the value functions are unavoidable. Our method differs from these approaches in that, rather than working on the value functions, we propose to learn an intrinsic reward for each agent at each time step. The benefits are that no assumptions are placed on the value functions and that the agents are allocated an explicit, immediate intrinsic reward at each time step to assign their credit. Our work is also related to the optimal intrinsic reward design problem in the single-agent setting [21, 22, 23, 16, 24]. Some prior works have used heuristic metrics to define the intrinsic reward. For example, in [22] the intrinsic reward is defined as the squared difference between two consecutive states, and in [23] a metric named curiosity is used as the intrinsic reward. In [24] the learning of the intrinsic reward is integrated with the update of the policy. A recent approach [16] proposes to parameterize the intrinsic reward function and alternately update the policy parameters and the intrinsic reward parameters. In this paper, we extend the setting to multi-agent systems and use individual intrinsic reward functions to distinguish the credit of the agents. 3 Background 3.1 Cooperative Multi-Agent Reinforcement Learning We consider a fully cooperative multi-agent system, in which the agents need to be independently deployed without a central controller. The system can be described as a tuple $\langle A, S, U, P, r, \gamma, \rho_0 \rangle$. Let $A = \{1, 2, \cdots, n\}$ denote the set of $n$ agents. Denote the observation spaces of the agents as $S = \{S_1, S_2, \cdots, S_n\}$ and the action spaces of the agents as $U = \{U_1, U_2, \cdots, U_n\}$, respectively. At time step $t$, let $s_t = \{s_t^i\}_{i=1}^{n}$ with each $s_t^i \in S_i$ being the partial observation of agent $i$. Accordingly, let $u_t = \{u_t^i\}_{i=1}^{n}$ with each $u_t^i \in U_i$ indicating the action taken by agent $i$. We overload notation and use $s_t \in S$ to also refer to the true state of the environment. $P(s_{t+1}|s_t, u_t): S \times U \times S \to [0, 1]$ is the state transition function. $r(s_t, u_t): S \times U \to \mathbb{R}$ is the team reward function from the environment. In order to differentiate the team reward from the environment and the intrinsic reward that will be learned, we refer to the team reward as the extrinsic team reward $r^{ex}(s_t, u_t)$, following the usage in [16]. $\gamma \in [0, 1)$ is a discount factor and $\rho_0: S \to \mathbb{R}$ is the distribution of the initial state $s_0$. Let $\pi_i(u_t^i|s_t^i): S_i \times U_i \to [0, 1]$ be a stochastic policy for agent $i$ and denote $\pi = \{\pi_i\}_{i=1}^{n}$. Let $J^{ex}(\pi) = \mathbb{E}_{s_0, u_0, \cdots}[R_0^{ex}]$, with $R_t^{ex} = \sum_{l=0}^{\infty} \gamma^l r_{t+l}^{ex}$, denote the expected discounted extrinsic reward, where $s_0 \sim \rho_0(s_0)$, $u_t^i \sim \pi_i(u_t^i|s_t^i)$ for $i \in A$, and $s_{t+1} \sim P(s_{t+1}|s_t, u_t)$.
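To make the interaction protocol above concrete, the following is a minimal Python sketch of one episode under the tuple $\langle A, S, U, P, r, \gamma, \rho_0 \rangle$: each agent acts only on its own partial observation, the environment returns a single shared team reward $r^{ex}$, and the discounted extrinsic return $R_0^{ex}$ is accumulated. The dummy dynamics, dimensions, and random policies here are illustrative placeholders, not anything specified by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, obs_dim, n_actions, gamma = 3, 4, 5, 0.99

def reset():
    # s_0 ~ rho_0: one partial observation per agent (dummy values)
    return [rng.normal(size=obs_dim) for _ in range(n_agents)]

def step(observations, actions):
    # Stand-in for P(s_{t+1} | s_t, u_t) and r^ex(s_t, u_t): note there is
    # only ONE team reward shared by all agents, not per-agent rewards.
    next_obs = [rng.normal(size=obs_dim) for _ in range(n_agents)]
    team_reward = float(rng.normal(scale=0.1))
    done = bool(rng.random() < 0.05)
    return next_obs, team_reward, done

def decentralized_policy(agent_id, obs_i):
    # pi_i(u^i_t | s^i_t): acts on the agent's own partial observation only
    return int(rng.integers(n_actions))

obs, ret, t, done = reset(), 0.0, 0, False
while not done and t < 200:
    joint_action = [decentralized_policy(i, obs[i]) for i in range(n_agents)]  # u_t
    obs, r_ex, done = step(obs, joint_action)
    ret += (gamma ** t) * r_ex  # accumulates R^ex_0 = sum_l gamma^l r^ex_l
    t += 1

print(f"one-episode estimate of the extrinsic return R^ex_0: {ret:.3f}")
```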
Define the extrinsic value function as $V_\pi^{ex}(s_t) = \mathbb{E}_{u_t, s_{t+1}, \cdots}[R_t^{ex}]$. We aim to find optimal policies $\pi^* = \{\pi_i^*\}_{i=1}^{n}$ that achieve the maximum expected extrinsic team reward $J^{ex}(\pi^*)$. 3.2 Centralized Learning with Decentralized Execution Centralized learning with decentralized execution (CLDE) is a commonly used architecture in which a centralized critic is learned to update the decentralized policies during training. In CLDE, actor-critic (AC) style methods [25, 26, 27, 28, 29] are often selected. In our case, AC algorithms use $n$ independent parameterized policies $\pi_{\theta_i}$ for $i \in A$ and update $\theta_i$ by maximizing the expected extrinsic reward $J^{ex}(\theta_1, \theta_2, \cdots, \theta_n) = \mathbb{E}_{s,u}[R^{ex}]$ using the policy gradient $\nabla_{\theta_i} J^{ex}(\theta_1, \theta_2, \cdots, \theta_n) = \mathbb{E}_{s,u}[\nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) A_\pi(s, u)]$, (1) where $A_\pi(s, u)$ is the centralized critic. There are several ways to estimate $A_\pi(s, u)$. For example, $A_\pi(s, u) = r^{ex}(s, u) + V^{ex}(s') - V^{ex}(s)$ is the standard advantage function [27, 28], where $s'$ is the successor state of the agents. In [9], $A_\pi(s, u)$ is defined as an estimated state-action value function minus a counterfactual baseline. 3.3 Parameterized Intrinsic Reward A recent study [16] has investigated learning a parameterized intrinsic reward function in the single-agent setting. The idea is to explicitly define an intrinsic reward function $r_\eta^{in}(s, a)$ for a state-action pair $(s, a)$ of the agent, which is summed with the extrinsic reward $r^{ex}(s, a)$ from the environment to serve as the return signal for updating the policy. The intrinsic reward parameter $\eta$ is updated towards maximizing the expected extrinsic reward $J^{ex}$. The intuition for updating $\eta$ is to capture how a change in $\eta$ influences the extrinsic value through the change in the policy parameters. This technique can be viewed as an instance of meta learning [30, 31, 32]; the intrinsic reward function serves as a meta-learner that learns to improve the agent's objective. In our case, we extend the intrinsic reward learning method to the decentralized MARL problem, and we use the intrinsic rewards to stimulate the agents differently as they learn from the environment. 4 Method In this section, we formally propose the LIIR method. We first provide a formal definition of the considered problem based on what has been introduced in Section 3, and then we introduce a bilevel optimization algorithm for solving the proposed objective. 4.1 The Objective By defining an intrinsic reward function $r_{\eta_i}^{in}(s_i, u_i)$, which is parameterized by $\eta_i$ and takes a state-action pair $(s_i, u_i)$ of an individual agent $i$ as input, we propose to assign agent $i$ a distinct proxy reward $r_{i,t}^{proxy} = r_t^{ex} + \lambda r_{i,t}^{in}$, (2) at time step $t$. In (2), we have omitted the arguments of the reward functions for simplicity, and $\lambda$ is a hyper-parameter that balances the extrinsic team reward and the distinct intrinsic reward. Note that in the standard MARL problem with a team reward, there does not exist any distinct reward for each agent. Now, after assigning each agent a proxy reward $r_{i,t}^{proxy}$ at time step $t$, we accordingly define a discounted proxy return for each agent $i$ as $R_{i,t}^{proxy} = \sum_{l=0}^{\infty} \gamma^l (r_{t+l}^{ex} + \lambda r_{i,t+l}^{in})$, (3) and the proxy value function for agent $i$ as $V_i^{proxy}(s_{i,t}) = \mathbb{E}_{u_{i,t}, s_{i,t+1}, \cdots}[R_{i,t}^{proxy}]$. (4) Unlike the extrinsic (standard) value $V^{ex}$, these proxy value functions $V_i^{proxy}$ do not have any physical meaning and are only used for updating the individual policy parameters $\theta_i$. Now, the considered overall objective is defined as $\max_{\eta, \theta} J^{ex}(\eta)$, (5) s.t.
$\theta_i = \arg\max_{\theta} J_i^{proxy}(\theta, \eta), \ \forall i \in \{1, 2, \cdots, n\}$, where $J_i^{proxy} := \mathbb{E}_{s_{i,0}, u_{i,0}, \cdots}[R_{i,0}^{proxy}]$ depends on $\theta_i$ and $\eta$, $\eta$ denotes the intrinsic reward parameter set $\{\eta_1, \eta_2, \cdots, \eta_n\}$, and $\theta$ denotes the policy parameter set $\{\theta_1, \theta_2, \cdots, \theta_n\}$. In problem (5), the goal is to maximize $J^{ex}$ by optimizing $\eta$, while the policy parameter $\theta_i$ is optimized by maximizing the proxy expected discounted return $J_i^{proxy}$ for agent $i$. The advantage is that by learning a distinct intrinsic reward for each agent at each time step, the agents will be stimulated differently, and this will cumulatively influence the policy learning via the policy gradient. Moreover, from an optimization perspective, problem (5) can be viewed as a bilevel optimization problem, since the problem of maximizing the individual proxy expected returns is nested within the outer optimization task, which is maximizing the extrinsic expected return. In the next subsection, we discuss how $J^{ex}$ is connected with the intrinsic reward parameters $\eta$. 4.2 Algorithm As a bilevel optimization problem, at each iteration the policy parameters are updated with respect to the inner proxy tasks, while the intrinsic reward parameters are updated to maximize the extrinsic expected return. Specifically, the policy parameter of each agent is updated by the policy gradient with its proxy critic. Given a trajectory generated by the policy $\pi_{\theta_i}$, $\theta_i$ can be updated by applying the policy gradient defined in (1): $\nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) A_i^{proxy}(s_i, u_i)$, (6) where $A_i^{proxy}(s_i, u_i)$ is the proxy critic, which can be chosen in a variety of ways [25, 26, 27, 28]. For example, $A_i^{proxy}(s_i, u_i) = R_i^{proxy}$ leads to the REINFORCE algorithm [26]. In this paper, we choose $A_i^{proxy}(s_i, u_i) = r_i^{proxy}(s_i, u_i) + V_{\varphi_i}^{proxy}(s_i') - V_{\varphi_i}^{proxy}(s_i)$ as the advantage function [27, 28], where $V_{\varphi_i}^{proxy}$ is the proxy value parameterized by $\varphi_i$ and $s_i'$ is the next state of agent $i$ in the trajectory. Given (6) and a policy learning rate $\alpha$, the updated policy parameter $\theta_i'$ can be represented as $\theta_i' = \theta_i + \alpha \nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) A_i^{proxy}(s_i, u_i)$. Then, we build the connection between $\eta$ and $J^{ex}$ and specify the updating procedure for $\eta$. Given the updated policy parameters $\theta_i'$, using the chain rule, we have $\nabla_{\eta_i} J^{ex} = \nabla_{\theta_i'} J^{ex} \nabla_{\eta_i} \theta_i'$. (7) The spirit of (7) is to capture the effect of a change in $\eta_i$ on $J^{ex}$ through its influence on the updated policy parameter $\theta_i'$. This is a commonly adopted technique in meta-gradient learning [30, 31, 32, 33]. Computing the meta-gradient $\nabla_{\eta_i} J^{ex}$ requires new samples generated by the updated policy parameters $\theta_i'$, but this can be avoided by reusing the samples generated by $\theta_i$ with importance sampling [16]. In (7), $\nabla_{\theta_i'} J^{ex}$ can be estimated by the stochastic gradient $\nabla_{\theta_i'} \log \pi_{\theta_i'}(u_i|s_i) A^{ex}(s, u)$, (8) where $A^{ex}(s, u)$ is the centralized extrinsic critic. Similar to the proxy critics, we choose $A^{ex}(s, u) = r^{ex}(s, u) + V_\phi^{ex}(s') - V_\phi^{ex}(s)$, where $V_\phi^{ex}(s)$ is the extrinsic value parameterized by $\phi$. The second term in (7) can be derived as $\nabla_{\eta_i} \theta_i' = \nabla_{\eta_i}[\theta_i + \alpha \nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) A_i^{proxy}(s_i, u_i)] = \alpha \nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) \nabla_{\eta_i} r_i^{proxy}(s_i, u_i) = \alpha \lambda \nabla_{\theta_i} \log \pi_{\theta_i}(u_i|s_i) \nabla_{\eta_i} r_i^{in}(s_i, u_i)$. (9) Fig. 1 gives an illustration of the entire architecture of the LIIR method. A sketch of the optimization algorithm is presented in Algorithm 1. 5 Experiments In this section, we first evaluate LIIR on a simple 1D Pursuit game specifically designed for the considered settings, to see whether LIIR can learn reasonable, distinct intrinsic rewards. Then, we comprehensively study LIIR in several challenging micromanagement games in StarCraft II and compare LIIR with a number of state-of-the-art MARL methods (the source code of LIIR is available at https://github.com/yalidu/liir). Algorithm 1 The optimization algorithm for LIIR.
Input: policy learning rate α and intrinsic reward learning rate β.
Output: policy parameters θ and intrinsic reward parameters η.
1: Init: initialize θ and η;
2: while termination is not reached do
3: Sample a trajectory D = {s_0, u_0, s_1, u_1, ...} by executing actions with the decentralized policies {π_{θ_1}, ..., π_{θ_n}};
4: Update θ according to (6) with learning rate α;
5: Compute (8) using new samples from {π_{θ'_1}, π_{θ'_2}, ..., π_{θ'_n}}, or reuse D and replace (8) with $\nabla_{\theta_i'} \frac{\pi_{\theta_i'}(u_i|s_i)}{\pi_{\theta_i}(u_i|s_i)} A^{ex}(s, u)$;
6: Update η according to (7), step 5 and (9) with learning rate β;
7: end while
5.1 A Simple 1D Pursuit Study We design a simple game named 1D Pursuit to provide a fast verification of the quality of the intrinsic rewards learned by LIIR. In 1D Pursuit, a team of two agents is initially assigned random integers, denoted by x and y respectively, and each agent can take actions from {+1, −1, 0} to increase, decrease, or keep its value in order to approach a target value z that is unknown to the agents. To make the setting collaborative, the team reward for the two agents is set to be inversely proportional to the sum of the absolute differences between their values and the target value. That is, both agents should adjust their values towards the target value. The observation of each agent is a two-dimensional vector containing its own current integer value and the other agent's integer value. The team reward is set to +0.01 if both agents take actions approaching the target value, −0.01 if both agents take actions moving away from the target value, and 0 otherwise. The target value is set to 0. The initial integers for the two agents are randomly generated from {−10, ..., 10}. We implement LIIR based on the architecture depicted in Fig. 1. The detailed network structure is provided in the supplementary material. In Fig. 2, we plot the histogram of the distributions of the intrinsic reward averaged over 1000 episodes. We denote actions approaching the target as "Good" actions and actions moving away from the target as "Bad" actions. The result shows that LIIR can assign reasonable intrinsic rewards to the agents. 5.2 StarCraft II Micromanagement In this subsection, we comprehensively evaluate the proposed LIIR method in the game of StarCraft II based on the learning environment SC2LE [34] and the mini-game settings in SMAC [35]. We compare the LIIR method with a number of state-of-the-art MARL methods that use the CLDE architecture. We also provide some insightful case studies to visualize the learned intrinsic rewards. StarCraft II is a popular real-time strategy game and has been studied under MARL settings [9, 10, 7, 36, 37]. In the experiments, we consider symmetric battle games in StarCraft II, where both single-type and mixed-type agent teams are considered. Specifically, the considered scenarios are 3 Marines vs. 3 Marines (3M), 8 Marines vs. 8 Marines (8M), 2 Stalkers & 3 Zealots vs. 2 Stalkers & 3 Zealots (2S3Z), and 3 Stalkers & 5 Zealots vs. 3 Stalkers & 5 Zealots (3S5Z). In these settings, Marines and Stalkers are units of Terran and Protoss, respectively, and both can attack enemies at a distance, while the Zealot is a melee unit of Protoss that can only attack enemies standing close to it.
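As a worked illustration of the alternating update in Algorithm 1 (equations (2)-(9) above), here is a minimal, single-agent-slice sketch written with PyTorch; the framework choice, the linear policy and intrinsic-reward forms, the dimensions, and the randomly generated trajectory are all assumptions made for illustration rather than the authors' implementation. It uses the REINFORCE-style proxy advantage $A_i^{proxy} = R_i^{proxy}$ mentioned in Section 4.2 instead of the learned value baselines, and it reuses the sampled trajectory with the importance-sampling ratio from step 5 of Algorithm 1.

```python
import torch

torch.manual_seed(0)
obs_dim, n_actions, T = 4, 3, 8
alpha, beta, lam, gamma = 0.1, 0.01, 0.01, 0.99

theta = torch.zeros(obs_dim, n_actions, requires_grad=True)    # policy parameters theta_i
eta = torch.zeros(obs_dim + n_actions, 1, requires_grad=True)  # intrinsic-reward parameters eta_i

def log_prob(params, obs, act):
    # linear-softmax stand-in for pi_theta(u_i | s_i)
    logp = torch.log_softmax(obs @ params, dim=-1)
    return logp.gather(1, act.unsqueeze(1)).squeeze(1)

def intrinsic_reward(obs, act):
    # linear stand-in for r^in_eta(s_i, u_i)
    onehot = torch.nn.functional.one_hot(act, n_actions).float()
    return (torch.cat([obs, onehot], dim=-1) @ eta).squeeze(1)

def discounted(r):
    # differentiable discounted returns: R_t = sum_l gamma^l r_{t+l}
    out, R = [], torch.zeros(())
    for t in reversed(range(r.shape[0])):
        R = r[t] + gamma * R
        out.append(R)
    return torch.stack(out[::-1])

# placeholder trajectory for one agent i, sampled with the current policy
obs = torch.randn(T, obs_dim)
act = torch.randint(0, n_actions, (T,))
r_ex = torch.randn(T) * 0.1        # team reward (dummy values)

# ---- inner step, eq. (2)-(6): ascend the proxy objective ------------------
R_proxy = discounted(r_ex + lam * intrinsic_reward(obs, act))   # keeps the graph to eta
inner_obj = (log_prob(theta, obs, act) * R_proxy).sum()
g_theta = torch.autograd.grad(inner_obj, theta, create_graph=True)[0]
theta_prime = theta + alpha * g_theta                            # theta'_i, differentiable in eta

# ---- outer step, eq. (7)-(9): meta-gradient of J^ex w.r.t. eta ------------
R_ex = discounted(r_ex)
ratio = torch.exp(log_prob(theta_prime, obs, act) - log_prob(theta, obs, act).detach())
outer_obj = (ratio * R_ex).sum()                                 # surrogate for J^ex under theta'
g_eta = torch.autograd.grad(outer_obj, eta)[0]

with torch.no_grad():            # apply both ascent updates
    theta += alpha * g_theta
    eta += beta * g_eta

print("||grad_eta J^ex|| =", g_eta.norm().item())
```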
In all these games, only the units on our own side are treated as agents. Each agent is described by several attributes, including health points (HP), weapon cooldown (CD), shield (for 2S3Z and 3S5Z), unit type, last action, and the relative distances of the observed units. Enemy units are described in the same way, except that CD is excluded. The partial observation of an agent is composed of the attributes of the units (both agents and enemy units) that appear within its view range, which is a circle with a certain radius. The action space contains 4 move directions, k attack actions where k is the fixed maximum number of enemy units in a map, stop, and no-operation. The input dimension and the output action dimension are fixed with a certain ordering over the agents and enemy units. Dead enemy units are masked out from the action space to ensure the executed action is valid. At each time step, the agents receive a joint team reward which is defined by the total damage dealt by the agents and the total damage received from the enemy side. In all the scenarios, following the configurations in [9, 10], we train the agents against the built-in AI opponent. More detailed settings can be found in the SMAC environment [35]. 5.2.1 Compared Methods and Training Details The methods considered for evaluation include:
• independent Q-learning (IQL) [17]: IQL trains decentralized Q-functions for each agent. Since the observation and action spaces of the agents are the same within a specific environmental setting, a policy is shared across all the agents;
• independent actor-critic (IAC) [9]: IAC is similar to IQL except that it adopts the actor-critic method;
• Central-V [9]: the method learns a centralized critic with decentralized policies. Similarly, all agents share the same policy network;
• COMA [9]: the method learns a centralized critic that is the state-action value minus a counterfactual baseline;
• QMIX [10]: the method learns a decentralized Q-function for each agent with the assumption that the centralized Q-value is monotonically increasing in the individual Q-values. In the implementation, the agents share the same Q-function;
• LIIR: the proposed method. In the experiments, the agents share the same policy, intrinsic reward function, and proxy critic. Since each agent has its own partial observation, sharing policy parameters does not imply that they act the same.
For COMA and QMIX, we use their original implementations (https://github.com/oxwhirl/pymarl), in which the main policy network or Q-network consists of several fully connected (FC) layers and a GRU module. All the other methods adopt similar network structures to COMA and QMIX. As depicted in Fig. 1, the parameters of LIIR contain 4 components corresponding to the shared policy parameters $\theta$, intrinsic reward parameters $\eta$, proxy value parameters $\varphi$, and extrinsic value parameters $\phi$. To achieve a fair comparison, we set the policy network structure, i.e., the network parameterized by $\theta$, to be exactly the one used for COMA's policy network. We then compress the other parameters $\eta$, $\varphi$, and $\phi$ so that their total size equals the parameter size of the remaining part of COMA. More details can be found in the supplementary material. All the methods are trained for 3 million steps on 3M and 8M, and for 10 million steps on 2S3Z and 3S5Z. The hyper-parameter λ in (2) is set to 0.01 throughout the experiments (we tried different choices of λ and found that the results did not differ much).
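To illustrate the parameter sharing mentioned above, the snippet below sketches a single policy network whose weights are shared by all agents, so that behavioral differences come only from each agent's own partial observation; the layer sizes are invented for the example, and the GRU used in the original networks is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_agents, obs_dim, n_actions = 8, 80, 14      # placeholder sizes

# One set of weights shared by every agent (no per-agent parameters).
shared_policy = nn.Sequential(
    nn.Linear(obs_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)

obs_batch = torch.randn(n_agents, obs_dim)    # one partial observation per agent
logits = shared_policy(obs_batch)             # same parameters, different inputs
actions = torch.distributions.Categorical(logits=logits).sample()
print("joint action:", actions.tolist())
```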
We use a fixed learning rate of 5e-4 and batches of 32 episodes for all the methods. We use 32 actors to generate the trajectories in parallel, and one NVIDIA Tesla M40 GPU for training. 5.2.2 Results To evaluate the performance of each method, we freeze the training every 100 episodes and test the model over 20 episodes to compute an average test winning rate. The entire training procedure is repeated 5 times to plot the winning rate curve with its standard deviation. The results are reported in Fig. 3, where the average winning rates vs. training steps for all the battle scenarios are given. In 3M, which is the simplest game, all the test winning rates keep increasing as the training steps increase. In 8M, 2S3Z and 3S5Z, the independent learning methods, i.e., IQL and IAC, fail to learn a good policy for the agents, and the methods using a CLDE architecture always outperform the independent learning methods. In 3M and 8M, COMA and Central-V show comparable performance, while in 2S3Z and 3S5Z, Central-V outperforms QMIX and COMA. Across all these scenarios, the LIIR method consistently shows the best performance, achieving around a 90% winning rate in every scenario. This demonstrates that learning the intrinsic reward function can ultimately induce better trained policies. 5.2.3 Visualizing the Learned Intrinsic Reward In addition to evaluating the performance of the trained policy in Section 5.2.2, we are curious about how much the learned intrinsic reward function actually contributes to the policy learning. In order to figure out what has been learned by the intrinsic reward function, we explicitly visualize these rewards. That is, we plot the learned intrinsic reward of each agent at each time step in a complete trajectory during testing. It is worth mentioning that during testing the intrinsic rewards are independent of the learned policy, and these rewards are not used at all when generating the trajectory. For clarity, we randomly choose two test replays in 3M and 2S3Z, which contain fewer agents, and plot all the agents' intrinsic rewards. Figs. 4 and 5 show the intrinsic rewards in 3M and 2S3Z, respectively. We also attach some auxiliary snapshots to explain interesting segments of the curves. In all the snapshots, the red colored units indicate the agents controlled by LIIR. In Fig. 4(a), agent 1 dies at time step 9, and we can observe that its intrinsic reward becomes very low after time step 6 compared to the other two agents. As revealed by Figs. 4(b) and (c), at time step 6, all three agents focus fire on one of the enemy Marines, while agent 1 has the lowest HP; after that, agent 1 still keeps firing instead of running away from the enemies, and the intrinsic reward function predicts a low $r_1^{in}$, indicating that $u_1 = \text{attack}$ is not a good action at that time; finally, agent 1 dies at time step 9 and the corresponding intrinsic reward is very low. In Fig. 5(a), after time step 27, we see that agent 2's intrinsic reward increases considerably compared to the other agents. Figs. 5(b) and (c) provide a clear explanation: at time step 27, agent 2 (with low HP) stops firing and runs along the red arrows (the move actions take only 4 directions here) to avoid the attack from the enemy Zealot; upon reaching an enemy Stalker at time step 32, agent 2 starts attacking the Stalker, which is finally killed. Moreover, the overall trends of the curves in Figs. 4(a) and 5(a) keep increasing, indicating that the controlled team finally wins the game.
Besides visualizing the two episodes illustrated above, we also provide overall statistics of the learned intrinsic reward. From 100 test episodes, we collect the intrinsic reward for the action "attack" when the corresponding health points are lower than 50%. We then compute the cosine similarity (a value in [-1, 1]) between the health points and the intrinsic rewards. The averaged cosine similarity is 0.55 for 2S3Z and 0.67 for 3M. The results show that the health point and the intrinsic reward are positively correlated. That is, when the health point is low, the intrinsic reward for taking the "attack" action is generally low as well, which is reasonable in this scenario. The above case studies demonstrate that the learned intrinsic reward can indeed provide diverse feedback signals for the agents, and that these signals are informative for evaluating the agents' immediate behaviors. 6 Conclusion We have proposed a novel multi-agent reinforcement learning algorithm, which learns an individual intrinsic reward for each agent. The method can assign each agent a distinct intrinsic reward so that the agents are stimulated differently, even when the environment only provides a team reward. Given the intrinsic reward for each agent, we define for each agent a proxy critic to direct its policy learning via actor-critic algorithms. We show that the formulated multi-agent learning problem can be viewed as a bilevel optimization problem. Our empirical results on the battle games in StarCraft II demonstrate that learning the intrinsic reward function can eventually induce better trained policies compared with a number of state-of-the-art competitors. We further perform two case studies to visualize the learned intrinsic reward values, and the results provide clear explanations of the effects of the learned intrinsic rewards. For future work, we are interested in applying the LIIR method to more challenging scenarios, such as real-world traffic control with many agents and competitive multi-agent systems. Moreover, in addition to the simple summation form in (2), it is also interesting to investigate the optimal form of the proxy reward function. Acknowledgments The authors would like to thank the anonymous reviewers for their constructive comments. Yali Du was an intern at Tencent AI Lab while working on this project.
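For reference, the cosine-similarity statistic reported in Section 5.2.3 can be computed with a few lines of Python; the health-point and intrinsic-reward values below are made-up placeholders rather than numbers from the paper.

```python
import numpy as np

# hypothetical per-step records for the "attack" action at HP < 50%
hp = np.array([0.45, 0.30, 0.22, 0.48, 0.10, 0.35])           # health points (fraction)
r_in = np.array([0.020, 0.011, 0.006, 0.024, -0.004, 0.015])  # learned intrinsic rewards

cos_sim = float(np.dot(hp, r_in) / (np.linalg.norm(hp) * np.linalg.norm(r_in)))
print(f"cosine similarity between HP and intrinsic reward: {cos_sim:.2f}")
```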
1. What is the focus of the paper in terms of research avenues?
2. How does the proposed approach extend previous ideas in single-agent RL to multi-agent RL settings?
3. What are the strengths of the paper regarding its quality and significance?
4. What are the weaknesses of the paper regarding its clarity, experimental setup, and reproducibility?
5. Are there any questions or concerns that the reviewer has after reading the paper?
Review
Review Originality: The ideas introduced here are certainly not new, but extending intrinsic rewards to multi-agent RL settings is for sure an interesting research avenue, especially when one is interested in decentralized multi-agent settings where no communication is possible between agents. Quality: The paper is well written. Clarity: It is a fairly convoluted method, with many components. A better overview of the algorithm could be useful (perhaps in supplementary materials). Furthermore, not all the details regarding the experimental setup and parameter choices are specified. This information is important for reproducibility reasons, and could also be included in supplementary materials. A few examples: the beta learning rate, whether any parameter search was performed for lambda, and a more in-depth view of the networks' architectures. Significance: The method is compared against state-of-the-art methods and does show improvements in the selected scenarios. The authors also perform a short analysis of what intrinsic reward the agents learn and how it affects their behaviour. ------------------------------------ Post-rebuttal: I appreciate the authors' efforts to answer the raised concerns and I think the additional experiments, analysis and explanations will improve the work. I will maintain my score, given the novelty level of the work.
NIPS
Title LIIR: Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning Abstract A great challenge in cooperative decentralized multi-agent reinforcement learning (MARL) is generating diversified behaviors for each individual agent when receiving only a team reward. Prior studies have paid many efforts on reward shaping or designing a centralized critic that can discriminatively credit the agents. In this paper, we propose to merge the two directions and learn each agent an intrinsic reward function which diversely stimulates the agents at each time step. Specifically, the intrinsic reward for a specific agent will be involved in computing a distinct proxy critic for the agent to direct the updating of its individual policy. Meanwhile, the parameterized intrinsic reward function will be updated towards maximizing the expected accumulated team reward from the environment so that the objective is consistent with the original MARL problem. The proposed method is referred to as learning individual intrinsic reward (LIIR) in MARL. We compare LIIR with a number of state-of-the-art MARL methods on battle games in StarCraft II. The results demonstrate the effectiveness of LIIR, and we show LIIR can assign each individual agent an insightful intrinsic reward per time step. 1 Introduction Many real-world problems, such as traffic light control [1], coordination of autonomous vehicles [2], resources management [3] and multi-player video games [4, 5], can be naturally formulated into cooperative multi-agent systems, where the objective is to maximize the return in the perspective of a team of agents. When the agents are manipulated with a centralized controller which could access the joint or global state of all the agents, coordination among the agents is easier and the main effort of the controller is usually paid on finding an effective communication scheme among the agents. Examples include a wide range of approaches on designing effective centralized MARL architectures [5, 6, 7, 8]. ∗Equal contribution. Correspondence to the first two authors. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. Unfortunately, when the agents are independently deployed and communications are disabled or prohibitive, each agent has to predict its own action conditioning on its partial observation trajectory. Without a centralized controller, each agent is responsible to collaborate with others on its own decision. This pushes much burden on the capability of each agent’s policy. Worse still, in most of the real-world MARL applications, the agents only receive a team reward, from which it is difficult to deduce each agent’s contribution to the team’s success, making the learning and collaboration among agents nontrivial. There have been many efforts paid on distinguishing the agents’ credit, known as the credit assignment problem in MARL [9, 10]. A general way is reward shaping [5, 11, 12], which, however, requires abundant human labor to assign precise rewards to each individual agent. Under some real-world tasks, such as reducing the latency in a traffic network, there might even not exist any clear choice of the reward functions for an individual agent (vehicle in the example). Another branch of commonly adopted methods try to design a centralized critic that is capable to distinguish the state-action values of the agents during training [9, 10], and then perform decentralized executions during testing. 
Our approach builds a connection between reward shaping and critic learning. That is, we propose to learn each agent a parameterized individual intrinsic reward function by maximizing a centralized critic. The optimal intrinsic reward problem has been introduced in [13] for single agent learning scenarios and studied in some recent RL approaches [14, 15, 16]. Inspired by the concept, we propose to introduce the intrinsic reward design into multi-agent systems to distinguish the contributions of the agents when the environment only returns a team reward. Specifically, we learn each agent a parameterized intrinsic reward function that outputs an intrinsic reward for that agent at each time step to induce diversified behaviors. With these intrinsic rewards, we define each agent a distinct proxy expected discounted return that is a combination of the real team reward from the environment and the learned intrinsic reward. Using the actor-critic method, the individual policy of each agent is updated under the direction of the corresponding proxy critic. The parameters of the intrinsic reward functions are updated to maximize the standard accumulated discounted team return from the environment. Therefore, the objective of the entire procedure is consistent with that of the original MARL problem. Insightfully, from an optimization perspective, the proposed method can be categorized to the bilevel optimization, where the problem of solving individual proxy objectives is nested within the outer optimization task which maximizes the standard multi-agent return. The parameters of the policy and the intrinsic reward function are treated as the parameters of the inner and outer optimization problems, respectively. We refer the proposed method to as learning individual intrinsic reward (LIIR) in MARL. Empirically, we show that LIIR outperforms a number of state-of-the-art MARL approaches on extensive settings in the battle game of StarCraft II. We also conduct insightful case studies to visualize the learned intrinsic reward, and the results demonstrate that the learned intrinsic reward function can generate diverse reward signals for the agents and the agents can also act diversely in a collaborative way. 2 Related Work When considering a centralized controller in MARL, the controller takes the joint or global observation of the agents as input and outputs multiple actions for the agents in one step. Many studies have been proposed on pursuing effective communication architecture among the agents within a centralized controller. For example, densely connected communication layers or modules have been embedded in a centralized controller that directly outputs multi-head predictions for the agents [6, 5]. Recurrent neural networks (RNN) have also been introduced to enable a sequence of agents to communicate through the recurrent module [7]. However, in many MARL applications, the agents have to be separately deployed that each agent has to make its own decision conditioning on its partial observation. Decentralized methods naturally deal with the above situation. The simplest approach is learning an individual policy or Q-function for each agent. This was first attempted with Q-learning [17], which was then extended with deep neural networks applied [18, 19]. Fully decentralized methods are limited under the case where only a team reward is given, since distinguishing the agents’ contributions is difficult. 
To address the credit assignment problem in decentralized MARL, many existing methods utilize the framework with a centralized critic and decentralized policy. That is, the policies are deployed independently by taking individual observation as input, while the centralized critic focuses on quantifying the differences among the agents. For example, the counterfactual multi-agent policy gradient [9] uses a counterfactual baseline to assign credits for the agents; the value decomposition network [20] decomposes the centralized value into a sum of individual agent values to discriminate their contributions; the QMIX [10] method adopts a similar idea that assumes the centralized Q-value function is monotonically increasing with the individual Q-values. Most of the existing methods focus on the architecture design of the critic, even strong assumptions on the value functions are unavoidable. Our method differs from these approaches that rather than working on the value functions, we propose to learn each agent an intrinsic reward at each time step. The benefits are that no assumptions are attached on the value functions and the agents are allocated an explicit immediate intrinsic reward at each time step to assign their credits. Our work is also related to the optimal intrinsic reward design problem in single agent setting [21, 22, 23, 16, 24]. Some prior works have used heuristic metrics to define the intrinsic reward. For example, in [22] the intrinsic reward is defined as the squared difference between two consecutive states, and in [23] a metric named curiosity is used as the intrinsic reward. In [24] the learning of intrinsic reward is integrated with the update of the policy. A recent approach [16] proposes to parameterize the intrinsic reward function and alternatively updates the policy parameters and the intrinsic reward parameters. In this paper, we extend the setting to multi-agent system and use individual intrinsic reward function to distinguish the credits of the agents. 3 Background 3.1 Cooperative Multi-Agent Reinforcement Learning We consider a fully cooperative multi-agent system, in which the agents need to be independently deployed without a central controller. The system can be described as a tuple as 〈A, S, U, P, r, γ, ρ0〉. Let A = {1, 2, · · · , n} denote the set of n agents. Denote observation space of the agents as S = {S1, S2, · · · , Sn} and the action space of the agents as U = {U1, U2, · · · , Un} respectively. At time step t, let st = {sit}ni=1 with each sit ∈ Si being the partial observation from agent i. Accordingly, let ut = {uit}ni=1 with each uit ∈ Ui indicating the action taken by the agent i. We overload notations and use st ∈ S to refer to the true state of the environment. P (st+1|st,ut) : S × U × S → [0, 1] is the state transition function. r(st,ut) : S × U → R indicates the team reward function from the environment. In order to differentiate the team reward from the environment and the intrinsic reward that will be learned, we refer the team reward to as the extrinsic team reward rex(st,ut), following the usage in [16]. γ ∈ [0, 1) is a discount factor and ρ0 : S → R is the distribution of the initial state s0. Let πi(uit|sit) : Si × Ui → [0, 1] be a stochastic policy for agent i and denote π = {πi}ni=1. Let J ex(π) = Es0,u0,··· [Rex0 ] with Rext = ∑∞ l=0 γ lrext+l denoting the expected discounted extrinsic reward, where s0 ∼ ρ0(s0), uit ∼ πi(uit|sit) for i ∈ A, and st+1 ∼ P (st+1|st,ut). 
Define the extrinsic value function as V exπ (st) = Eut,st+1,··· [Rext ]. We aim to find optimal policies π∗ = {π∗i }ni=1 that achieve the maximum expected extrinsic team reward J ex(π∗). 3.2 Centralized Learning with Decentralized Execution Centralized learning with decentralized execution (CLDE) is a commonly used architecture to learn a centralized critic to update the decentralized policies during training. In CLDE, actor-critic (AC) style methods [25, 26, 27, 28, 29] are often selected. In our case, AC algorithms use n independent parameterized policies πθi for i ∈ A and update θi by maximizing the expected extrinsic reward J ex(θ1, θ2, · · · , θn) = Es,u [Rex] using the policy gradient ∇θiJ ex(θ1, θ2, · · · , θn) = Es,u [∇θi log πθi(ui|si)Aπ(s,u)] , (1) where Aπ(s,u) is the centralized critic. There are several ways to estimate Aπ(s,u). For example, Aπ(s,u) = r ex(s,u) + V ex(s′)− V ex(s) is the standard advantage function [27, 28], where s′ is the successive state of the agents. In [9], Aπ(s,u) is defined as an estimated state-action value function minus a counterfactual baseline. 3.3 Parameterized Intrinsic Reward A recent study [16] has investigated learning a parameterized intrinsic reward function in single agent setting. The idea is to explicitly define the intrinsic reward function as rinη (s, a) for a state-action pair (s, a) of the agent, and it is summed up with the extrinsic reward rex(s, a) from the environment to serve as the return signal for updating the policy. The intrinsic reward parameter η is updated towards maximizing the expected extrinsic reward J ex. The intuition for updating η is to find the effect that the change on η would influence the extrinsic value through the change in the policy parameters. This technique can be viewed as an instance of meta learning [30, 31, 32]; the intrinsic reward function serves as a meta-learner that learns to improve the agents objective. In our case, we extend the intrinsic reward learning method to deal with decentralized MARL problem and we use the intrinsic rewards to diversely stimulate the agents to learn from the environment. 4 Method In this section, we formally propose the LIIR method. We first provide a formal definition of the considered problem based on what have been introduced in Section 3, then we introduce a bilevel optimization algorithm for solving the proposed objective. 4.1 The Objective By defining an intrinsic reward function rinηi(si, ui) which is parameterized by ηi and takes a stateaction pair (si, ui) of an individual agent i as input, we propose to assign agent i a distinct proxy reward rproxyi,t = r ex t + λr in i,t, (2) at time step t. In (2), we have omitted the arguments of the reward functions for simplicity, and λ is a hyper-parameter that balances the extrinsic team reward and the distinct intrinsic reward. Note that in the standard MARL problem with a team reward, there does not exist any distinct reward for each agent. Now, after creating each agent a proxy reward rproxyi,t at time step t, we accordingly define a discounted proxy reward for each agent i as Rproxyi,t = ∞∑ l=0 γl(rext+l + λr in i,t+l), (3) and the proxy value function for agent i as V proxyi (si,t) = Eui,t,si,t+1,···[R proxy i,t ]. (4) Different from the extrinsic (standard) value V ex, these proxy value functions V proxyi ’s do not have any physical meanings and they will be only used for updating the individual policy parameters θi’s. Now, the considered overall objective is defined as max η,θ J ex(η), (5) s.t. 
θi = argmax θ Jproxyi (θ,η), ∀i ∈ [1, 2, · · · , n] where Jproxyi := Esi,0,ui,0,··· [ Rproxyi,0 ] depending on θi and η, η indicates the intrinsic reward parameter set {η1, η2, · · · , ηn} and θ indicates the policy parameter set {θ1, θ2, · · · , θn}. In problem (5), the goal is to maximize J ex through optimizing η, while the policy parameter θi is optimized by maximizing the proxy expected discounted return Jproxyi for agent i. The advantage is that by learning a distinct intrinsic reward for each agent per time step, the agents will be diversely stimulated and this will accumulatively influence the policy learning via the policy gradient. Moreover, from an optimization perspective, problem (5) can be viewed as a bilevel optimization problem, since the problem of maximizing the individual proxy expected returns is nested within the outer optimization task, which is maximizing the extrinsic expected return. In the next subsection, we will discuss how J ex is connected with the intrinsic reward parameter η. 4.2 Algorithm As a bilevel optimization problem, at each iteration, the policy parameters are updated with respect to the inner proxy tasks, while the intrinsic reward parameters are updated to maximize the extrinsic expected return. Specifically, the policy parameter of each agent is updated by the policy gradient with its proxy critic. Given a trajectory generated by the policy πθi , θi can be updated by applying the policy gradient defined in (1): ∇θi log πθi(ui|si)A proxy i (si, ui), (6) where Aproxyi (si, ui) is the proxy critic that can be chosen in a variety of ways [25, 26, 27, 28]. For example, Aproxyi (si, ui) = R proxy i leads to the REINFORCE algorithm [26]. In this paper, we choose Aproxyi (si, ui) = r proxy i (si, ui) + V proxy ϕi (s ′ i) − V proxy ϕi (si) as the advantage function [27, 28], where V proxyϕi is the proxy value parameterized by ϕi and s′i is the next state of agent i in the trajectory. Given (6) and a policy learning rate α, the updated policy parameter θ′i can be represented as θ′i = θi + α∇θi log πθi(ui|si)A proxy i (si, ui). Then, we build the connection between η and J ex and specify the updating procedure for η. Given the updated policy parameters θ′i’s, using the chain rule, we have ∇ηiJ ex = ∇θ′iJ ex∇ηiθ′i. (7) The spirit of (7) is to formulate the effect of the change of ηi on influencing J ex through its influence in the updated policy parameter θ′i. This is a commonly adopted technique in meta-gradient learning [30, 31, 32, 33]. Computing the meta-gradient∇ηiJ ex requires new samples generated by the updated policy parameter θ′i, while this can be avoid by reusing the samples generated by θi with importance sampling [16]. In (7),∇θ′iJ ex can be estimated by stochastic gradient as ∇θ′i log πθ′i(ui|si)A ex(s,u), (8) where Aex(s,u) is the centralized extrinsic critic. Similar to proxy critics, we choose Aex(s,u) = rex(s,u) + V exφ (s ′)− V exφ (s), where V exφ (s) is the extrinsic value parameterized by φ. The second term in (7) can be derived as ∇ηiθ′i = ∇ηi [θi + α∇θi log πθi(ui|si)A proxy i (si, ui)] = αλ∇θi log πθi(ai|si)∇ηir proxy i (si, ui). (9) Fig. 1 gives an illustration of the entire architecture of the LIIR method. A sketch of the optimization algorithm is presented in Algorithm 1. 5 Experiments In this section, we first evaluate LIIR on a simple 1D pursuit game specifically designed for the considered settings to see whether LIIR can learn reasonable distinct intrinsic rewards. Then, we Algorithm 1 The optimization algorithm for LIIR. 
Input: policy learning rate α and intrinsic reward learning rate β. Output: policy parameters θ and intrinsic reward parameters η. 1: Init: initialize θ and η; 2: while termination is not reached do 3: Sample a trajectory D = {s0,u0, s1,u1, · · · } by executing actions with the decentralized policies {πθ1 , · · · , πθn}; 4: Update θ according to (6) with learning rate α; 5: Compute (8) using new samples from {πθ′1 , πθ′2 , · · · , πθ′n} or reuse D to replace (8) with ∇θ′ i πθ′ i (ui|si) πθi (ui|si) Aex(s,u); 6: Update η according to (7), step 5 and (9) with learning rate β; 7: end while comprehensively study LIIR in several challenging micromanagement games in the game of StarCraft II, and compare LIIR with a number of state-of-the-art MARL methods.2 5.1 A Simple 1D Pursuit Study We design a simple game named 1D Pursuit to provide a fast verification for the quality of the intrinsic reward learned by LIIR. In 1D pursuit, a team of two agents are initially assigned with some random integers denoted by x and y respectively, and each agent could take actions from {+1,−1, 0} to either increase, decrease or keep its value to approach a target value z that is unknown to the agents. For a collaborative setting, the team reward for the two agents is set to be inversely proportional to the summation of their absolute differences between their values and the target value. That is, both the two agents should adjust their values towards the target value. The observation of each agent is a two-dimension vector containing its current integer value and another agent’s integer value. The team reward is set to be +0.01 if both agents take actions that approaching the target value, −0.01 if both agents take actions that moving away from the target value, and 0 otherwise. The target value is set to be 0. The initial integers for the two agents are randomly generated from {−10, ..., 10}. We implement LIIR based on the architecture depicted in Fig. 1. The detailed network structure is provided in the supplementary material. In Fig. 2, we plot the histogram of the distributions of the intrinsic reward averaged from 1000 episodes. We denote actions approaching the target as “Good” actions and actions moving away from the target as “Bad” actions. The result shows that LIIR can assign reasonable intrinsic reward to the agents. 5.2 StarCraft II Micromanagement In this subsection, we comprehensively evaluate the proposed LIIR method in the game of StarCraft II based on the learning environment SC2LE [34] and mini-game settings in SMAC [35]. We compare the LIIR method with a number of state-of-the-art MARL methods that use the CLDE architecture. We also provide some insightful case studies to visualize the learned intrinsic rewards. StarCraft II is a popular real-time strategy game and it has been studied under MARL settings [9, 10, 7, 36, 37]. In the experiments, we consider symmetric battle games in StarCraft II , where both single type agents and mixed type agents are considered. Specifically, the considered scenarios contain 3 Marines vs. 3 Marines (3M), 8 Marines vs. 8 Marines (8M), 2 Stalkers & 3 Zealots vs. 2 Stalkers & 3 Zealots (2S3Z), and 3 Stalkers & 5 Zealots vs. 3 2The source codes of LIIR are available through https://github.com/yalidu/liir. Stalkers & 5 Zealots (3S5Z). In these settings, Marine and Stalker are units of Terran and Protoss, respectively, and both of them can attack enemies at a distance, while Zealot is a melee unit of Protoss and it can only attack enemies who stand close to it. 
In all these games, only the units from self side are treated as agents. Each agent is described by several attributes including the health point (HP), weapon cooling down (CD), shield (for 2S3Z and 3S5Z), unit type, last action and the relative distance of the observed units. The enemy unit is described in the same way except that CD is excluded. The partial observation of an agent is composed by the attributes of the units, including both the agents and the enemy units, shown up within its view range that is a circle with a certain radius. The action space contains 4 move directions, k attack actions where k is the fixed maximum number of the enemy units in a map, stop and none-operation. The input dimension and the output action dimension are fixed with a certain ordering over the agents and enemy units. Dead enemy units will be masked out from the action space to ensure the executed action is valid. At each time step, the agents receive a joint team reward which is defined by the total damage of the agents and the total damage from the enemy side. In all the scenarios, following the configurations in [9, 10], we train the agents against the build-in AI opponent. More detailed settings can be acquired from the SMAC environment [35]. 5.2.1 Compared Methods and Training Details The considered methods for evaluation include • independent Q-learning (IQL) [17]: IQL trains decentralized Q-functions for each agent. Since the observation and action spaces of the agents are the same within a specific environmental setting, a policy will be shared across all the agents; • independent actor-critic (IAC) [9]: IAC is similar to IQL except that it adopts the actor-critic method; • Central-V [9]: the method learns a centralized critic with decentralized policies. Similarly, all agents share the same policy network; • COMA [9]: the method learns a centralized critic that is the state-action value minus a counterfactual baseline; • QMIX [10]: the method learns decentralized Q-function for each agent with the assumption that the centralized Q-value is monotonically increasing with the individual Q-values. In the implementations, the agents share the same Q-function; • LIIR: the proposed method. In the experiments, the agents share the same policy, intrinsic reward function and proxy critic. Since each agent has its own partial observation, sharing policy parameters does not imply that they act the same. For COMA and QMIX, we use their original implementations, in which the main policy network orQnetwork consist of some fully connected (FC) layers and a GRU module.3 All the other methods adopt similar network structures compared to COMA and QMIX. As depicted in Fig. 1, the parameters of LIIR contain 4 components corresponding to the shared policy parameter θ, intrinsic reward parameter η, proxy value parameter ϕ and extrinsic value parameter φ. To achieve fair comparison, we set the policy network structure, i.e., θ, as what is exactly used for COMA’s policy network. Then, we compress the other parameters η, ϕ and φ to let their total size equal to the parameter size of the remaining part in COMA. More details can be found in the supplementary material. All the methods are trained with 3 millions of steps in 3M and 8M, and with 10 millions of steps for 2S3Z and 3S5Z. The hyper-parameter λ in (2) is set to 0.01 throughout the experiments (we tried different choices of λ while we found that the results did not differ much). 
We use a fixed learning rate of 5e-4 and use batches of 32 episodes for all the methods. We use 32 actors to generate the trajectories in parallel, and use one NVIDIA Tesla M40 GPU for training. 5.2.2 Results To evaluate the performance of each method, we freeze the training every 100 episodes and test the model over 20 episodes to compute an average test winning rate. The entire training procedure is 3https://github.com/oxwhirl/pymarl repeated for 5 times to plot the winning rate curve with standard deviation. The results are reported in Fig. 3, where the averaged winning rates vs. the training steps on all the battle scenarios are given. In 3M which is the simplest game, all the test winning rates keep increasing as the training steps increase. In 8M, 2S3Z and 3S5Z, the independent learning methods, i.e., IQL and IAC, fail to learn a good policy for the agents and the methods using a CLDE architecture always outperform the independent learning methods. In 3M and 8M, COMA and Central-V show comparable performance, while in 2S3Z and 3S5Z, Central-V outperforms QMIX and COMA. For all these scenarios, the LIIR method consistently shows the best performance, and it achieves around 90% winning rate in all the scenarios. This demonstrates that learning the intrinsic reward function can ultimately induce better trained policies. 5.2.3 Visualizing the Learned Intrinsic Reward In addition to evaluate the performance of the trained policy in Section 5.2.2, we are more curious about how much effect the learned intrinsic reward function actually contributes to the policy learning. In order to figure out what has been learned in the intrinsic reward function, we propose to explicitly visualize these rewards. That is, we plot the learned intrinsic reward of each agent at each time step in a complete trajectory during testing. It is worth mentioning that during testing the intrinsic rewards are independent with the learned policy, and these rewards will not be used at all when generating the trajectory. For clarity, we randomly choose two test replays in 3M and 2S3Z which contain fewer agents to plot all the agents’ intrinsic rewards. Figs. 4 and 5 show the intrinsic rewards in 3M and 2S3Z, respectively. We also attach some auxiliary snapshots to explain some interesting segments in the curves. In all the snapshots, the red colored units indicate the agents controlled by LIIR. In Fig. 4(a), agent 1 is dead at time step 9, and we can observe that its intrinsic reward turns to be very low after time step 6 compared to the other two agents. As revealed by Figs. 4(b) and (c), at time step 6, all the three agents focus fire on one of the enemy Marine, while agent 1 has the lowest HP; after that, agent 1 still keeps firing instead of running away from the enemies and the intrinsic reward function predicts a low rin1 , indicating that u1 = attack is not a good action at that time; finally, agent 1 dies at time step 9 and the corresponding intrinsic reward is very low. In Fig. 5(a), after time step 27, we see that agent 2’s intrinsic reward increases a lot compared to the other agents. Figs. 5(b) and (c) provides a clear explanation that at time step 27, agent 2 (with low HP) stops firing and runs along the red arrows (the move actions only take 4 directions here) to avoid the attack from the enemy Zealot; until reaching an enemy Stalker at time step 32, agent 2 starts attacking the Stalker which is finally killed. Moreover, the overall trend of both the curves in Figs. 
4(a) and 5(a) keeps increasing, indicating that the controlled team finally wins the game. Besides visualizing the two episodes illustrated above, we also provide overall statistics of the learned intrinsic reward. From 100 test episodes, we collect the intrinsic reward for the action “attack” whenever the corresponding health points are lower than 50%. We then compute the cosine similarity (a value in [-1, 1]) between the health points and the intrinsic rewards. The averaged cosine similarity is 0.55 for 2S3Z and 0.67 for 3M. The results show that the health points and intrinsic rewards are positively correlated. That is, when the health points are low, the intrinsic reward for taking the “attack” action is generally low as well, which is reasonable in this scenario. The above case studies demonstrate that the learned intrinsic reward can indeed provide diverse feedback signals for the agents, and these signals are very informative in evaluating the agents' immediate behaviors.
6 Conclusion
We have proposed a novel multi-agent reinforcement learning algorithm that learns an individual intrinsic reward for each agent. The method can assign each agent a distinct intrinsic reward so that the agents are stimulated differently, even when the environment only feeds back a team reward. Given the intrinsic reward for each agent, we define a proxy critic for each of them to direct their policy learning via actor-critic algorithms. We show that the formulated multi-agent learning problem can be viewed as a bilevel optimization problem. Our empirical results on the battle games in StarCraft II demonstrate that learning the intrinsic reward function can eventually induce better trained policies compared with a number of state-of-the-art competitors. We further perform two case studies to visualize the learned intrinsic reward values, and the results provide clear explanations of the effects of the learned intrinsic rewards. For future work, we are interested in applying the LIIR method to more challenging scenarios, such as real-world traffic control with many agents and competitive multi-agent systems. Moreover, in addition to the simple summation form in (2), it is also interesting to investigate the optimal form of the proxy reward function.
Acknowledgments
The authors would like to thank the anonymous reviewers for their constructive comments. Yali Du was an intern at Tencent AI Lab while working on this project.
1. What is the focus and contribution of the paper on multi-agent reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its application to MARL settings?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the motivation behind the work, specifically concerning individual intrinsic rewards?
6. Are there any concerns about the straightforward application of a known IR method to MARL?
7. Can you provide more details about the modifications required to apply LIRPG to the multi-agent setting?
8. Can you clarify the meaning of "share the same policy" in the experimental section?
9. How is the parameter lambda tuned for each agent?
10. Can you improve the result section by considering more domains or tasks and demonstrating the method's versatility?
Review
Review
This work deals with learning individual intrinsic rewards (IR) for multi-agent RL (MARL). Overall, the method provided is a straightforward application of a known IR method to MARL, the results are promising, and the writing is clear. As such, this work has limited novelty but provides good empirical contributions, though these too could be improved by considering more domains. A more detailed review of the paper, along with feedback and clarifications required, is provided below.
The work is motivated by the claim that providing individual IRs to different agents in a population (in a MARL setting) will allow diverse behaviours.
* Is it not possible that the IR policies learnt all look similar and thus the behaviour that emerges is similar? The analysis at the end of the paper shows that a lot of the learned IR curves do overlap. Please provide more justification for this motivation.
The work clearly describes related work and how the approach here differs. The main contribution is to apply the meta-gradient based approach in “On learning intrinsic rewards for policy gradient methods” ([16] as per the paper) to the multi-agent setting.
* This looks to be a straightforward application where each agent has the LIRPG approach applied. Please provide succinct details of any modifications that are required to apply this and any differences in implementation. The method section can be shortened, as most of the algorithm and objective are the same as the original LIRPG algorithm uses.
A range of methods are compared in the experimental section: independent q-learning/actor-critic, central critics, counterfactual critics, QMIX and the proposed approach (LIIR).
* Please clarify what is meant by “share the same policy”: do they share the same policy network weights or also the exact same policy output? Do all agents get the same observation? If so, what is the difference between IAC and central-V? Is the only change how the V is updated, whereas the policy is the same?
* How is the parameter \lambda tuned for the agent?
Lastly, the results section shows clear benefits of this approach. This method, along with several baselines, is applied to a set of mini games for StarCraft. The analysis is promising and shows that the method learns an interesting policy that captures the dynamics of the game. Overall this is a good contribution, but for an empirical paper this could be strengthened by considering more domains or tasks and demonstrating the ability of this method to work across the board.
NIPS
Title
Adaptable Agent Populations via a Generative Model of Policies
Abstract
In the natural world, life has found innumerable ways to survive and often thrive. Between and even within species, each individual is in some manner unique, and this diversity lends adaptability and robustness to life. In this work, we aim to learn a space of diverse and high-reward policies in a given environment. To this end, we introduce a generative model of policies for reinforcement learning, which maps a low-dimensional latent space to an agent policy space. Our method enables learning an entire population of agent policies, without requiring the use of separate policy parameters. Just as real world populations can adapt and evolve via natural selection, our method is able to adapt to changes in our environment solely by selecting for policies in latent space. We test our generative model's capabilities in a variety of environments, including an open-ended grid-world and a two-player soccer environment. Code, visualizations, and additional experiments can be found at https://kennyderek.github.io/adap/.
1 Introduction
Quick thought experiment: imagine our world was such that all people acted, thought, and looked exactly the same in every situation. Would we ever have found the influential dissenters that sparked scientific, political, and cultural revolutions? In reinforcement learning (RL), it is common to learn a single policy that fits an environment. However, it is often desirable to instead find an entire array of high performing policies. To this end, we propose learning a generative model of policies. At a high level, we aim to show that purposefully learning a diverse policy space for a given environment can be competitive with learning a single policy, while better encompassing a range of skillful behaviors that are adaptable and robust to changes in the task and environment. We name our method of learning a space of adaptable agent policies: ADAP. Why should we bother with finding more than one policy per environment? We propose two primary reasons. First, RL environments are continually approaching greater levels of open-endedness and complexity. For a given environment, there might be an entire manifold of valid and near-equally high performing strategies. By finding points across this manifold, we avoid ‘having all our eggs in one basket,’ granting robustness and adaptability to environmental changes. In the event of a change, we are able to adapt our generated population to select individuals that can still survive given the ablation, much like natural selection drives evolution in the real world. Secondly, using a generative model of policies as a population of agents makes intuitive sense in multi-agent environments, in which different agents should have the capacity to act like they are unique individuals. However, it is common in many multi-agent reinforcement learning settings to deploy the same policy across all agents, such that they are essentially distributed clones. Doing so may reduce the multi-modality of the agent population, resulting in a single ‘average’ agent. Previous work has touched on ideas akin to a generative model of policies. In hierarchical RL, the high-level policy controller can be considered a generator of sub-policies that are ‘options’ [1, 2, 3]. But these methods are designed to find decomposable skills that aid in the construction of just one downstream controller policy.
A core idea of our work is that of quality diversity [4], which aims to optimize a population of agents along the axes of both reward and diversity. Traditional methods often use evolutionary search over a discrete-sized population of separate agents, each with their own policy parameters. This consumes more time and training resources, and limits the number of potential behaviors. Our work integrates the goals of quality diversity into time and memory efficient deep RL by simulating an entire population of agents via a generative model of policies, with diversity bounded only by the capacity of the generator. The rest of the paper is organized as follows. First we introduce our generative model of policies and the diversity objective that guides its learning. Next, we explore the potential of learning a population of agents by ablating environments and then searching for suitable policies, directly in latent space. We primarily study two environments: Markov Soccer [5] and Farmworld. Farmworld is a new environment we have developed for testing diversity in a multi-agent, open-ended gridworld. At the website linked in the abstract, one can find qualitative results of experiments presented in this paper, as well as additional results on the toy environments of CartPole [6] and a standard multi-goal environment.
2 Method
Let Z be a sample space of n-dimensional vectors, and let Z be a random variable defined uniformly over Z. Then, we learn a mapping Gϕ : Z → Π from the latent sample space to a space of policies Π, parameterized by generator weights ϕ. The generator Gϕ itself is not a policy. It must be conditioned on a draw z ∼ Z in order to define a learned set of behaviors. In this sense, z is a stochastic parameter of Gϕ, and is sampled once at the beginning of each agent episode. In our experiments, Z is the sample space of all three-dimensional vectors with magnitude one (i.e. the surface of the unit sphere). Practically, we use the low dimension of three so that we can perform a key subject of this paper: rapid optimization, or adaptation, of G by changing Z rather than ϕ (fine-tuning ϕ would be more typical in the literature). We require magnitude one so that there is at least one non-zero element for any z ∼ Z, which we found important for providing signal and stability in the training of G. It is possible that with higher dimensions, this stipulation could be relaxed.
Diversity Regularization
In order to learn a diverse space of unique policies, we introduce a diversity regularization objective. Since policies define a space of actions taken over different states, we propose that in order for two policies to be distinct, they must have different action distributions given the same state. To this end, we define the objective L_div:
L_div(ϕ) = E_{s∈S} [ E_{z_i,z_j∼Z} exp( −D_KL( π_{ϕ,z_i;b}(s) ∥ π_{ϕ,z_j;b}(s) ) ) ]   (1)
in which D_KL is the KL-divergence between the two policy action distributions π_{ϕ,z_i} and π_{ϕ,z_j}, and b is a smoothing constant over the action distributions.
Optimization of G
In our experiments, we optimize the diversity objective in an online fashion using gradient descent, in conjunction with a PPO [7] clipped-surrogate objective and an entropy regularization objective. Our full optimization problem is
max_ϕ L_PPO(ϕ) − α L_div(ϕ)
where L_PPO is Equation 9 in [7] and α is a coefficient that scales the diversity regularization objective. See Algorithm 1 in the supplement for additional details.
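To make the regularizer concrete, below is a minimal PyTorch-style sketch of Equation (1) for a categorical action space. It is an illustration under assumptions, not the authors' implementation: `policy_logits` is a hypothetical stand-in for the generator Gϕ, the additive smoothing loosely plays the role of the constant b, and the pairwise latent sampling is simplified.

```python
import torch
import torch.nn.functional as F

def sample_latents(n, dim=3):
    # z ~ Z: uniform on the surface of the unit sphere (magnitude one)
    z = torch.randn(n, dim)
    return z / z.norm(dim=-1, keepdim=True)

def diversity_loss(policy_logits, states, n_pairs=32, smoothing=1e-2):
    """Sketch of L_div: E_s E_{z_i, z_j} exp(-KL(pi_{z_i}(.|s) || pi_{z_j}(.|s))).

    policy_logits(states, z) -> action logits of shape (batch, n_actions);
    it is assumed to broadcast a single latent z over the batch of states.
    """
    total = 0.0
    for _ in range(n_pairs):
        z_i, z_j = sample_latents(2)
        p_i = F.softmax(policy_logits(states, z_i), dim=-1) + smoothing
        p_j = F.softmax(policy_logits(states, z_j), dim=-1) + smoothing
        p_i = p_i / p_i.sum(dim=-1, keepdim=True)  # renormalize after smoothing
        p_j = p_j / p_j.sum(dim=-1, keepdim=True)
        kl = (p_i * (p_i.log() - p_j.log())).sum(dim=-1)  # KL per state
        total = total + torch.exp(-kl).mean()  # average over the state batch
    return total / n_pairs
```

In training, this term is scaled by α and subtracted from the PPO objective, so minimizing it drives the KL between policies conditioned on different latents up while reward is still being maximized.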
Adaptation via Optimization in the Latent Space of G
By learning an entire space of policies Π, we are able to search our policy space for the highest performing policy, whether dealing with the training environment or an ablated future environment. In contrast to searching over policy parameters through transfer learning or fine-tuning, we are able to quickly search over the low-dimensional latent space (dimensionality 3 in our experiments). In fact, we can quickly adapt back and forth to various situations: the search procedure often takes less than 30 seconds, or 100 episode rollouts, to find any high quality solutions that exist. Over the course of a small number of generations, we evaluate randomly sampled latents, and keep higher performing ones with greater probability. In the event that episodes have a high degree of variability per run – such as in the Markov Soccer environment – it may be necessary to run several episodes per latent vector and average the returns. Details can be found in Algorithm 2 of the supplement.
Model Architecture
Similarly to prior work [3], we have found that richer integrations between the latent vector and the observation can yield a more multi-modal policy space. To induce this richer integration, we introduce a multiplicative model denoted "(x)" for latent integration, and compare the results to a baseline of concatenating "(+)" the latent sample to the observation. We describe this architecture in the supplement.
3 Related Work
Quality Diversity
The evolutionary computing community has developed various quality diversity (QD) algorithms that aim to find a balance of novel and high-performing individuals within a population. Some methods can even be considered policy generators: NEAT and HyperNEAT [8, 9] use an indirect encoding to construct a network architecture. To encourage diversity, these methods use an idea known as fitness sharing: if genotypes are too similar, then they will split reward. While NEAT and HyperNEAT encourage diversity of parameters, other methods encourage diversity of behavior. Novelty Search (NS) [10] learns individuals that have high novelty along some user-defined behavioral distance metric. For example, in a maze navigation task, the behavioral characteristic could be the final resting location of the individual, and agents are selected based on how far away they end up from an archive of past individuals. Unfortunately, as shown in [11], the choice of this characteristic can be critical, and domain-dependent. Additionally, NS focuses mainly on finding novel solutions, and ignores fitness, or reward. NS with Local Competition [12] and MapElites [13] aim to solve this problem by selecting for individuals with high fitness, but only against individuals in the same phenotypic or genotypic region, respectively. There are several prior and concurrent works that aim to connect ideas of quality diversity with deep reinforcement learning. Like quality diversity algorithms, these methods optimize a fixed-size population or archive of policies to be distinct from each other. [14, 15] aim to find a set of policies that yield diverse trajectories. [15] in particular focuses on the application to multi-agent environments and zero-shot coordination. [16] uses a KL-divergence over policies, but a policy's diversity is optimized over previous SGD updates of itself, thus limiting the potential multi-modality of solutions.
[17] optimizes for diversity of the total population via maximizing the determinant of a population distance matrix, but works best only with small populations of size three or five. [18] uses a method reminiscent of DIAYN, but introduces ideas to balance quality with diversity. It is especially similar to ADAP in optimizing the latent space to achieve robustness, but only searches over a fixed-size set of latent vectors and focuses on single-agent environments. Other methods have explored indirectly influencing diversity via differing training hyperparameters as in Population-Based Training [19], or using reward randomization as in [20]. Importantly, both classical QD algorithms [10, 12, 13] and most deep RL methods [14, 15, 16, 17, 19, 20] use sets of distinct agent parameters to learn a diverse population. ADAP makes the connection that we can encode unique policies into a latent space (an idea that also appears in a few recent works [2, 3, 21, 18]), and frames learning a diverse population as a generative modelling problem. Additionally, in distinction from classical QD methods that use a non-differentiable genetic algorithm or evolutionary search for optimization, ADAP is able to directly optimize for diversity and policy credit assignment via gradient descent.
Option Discovery for Hierarchical RL
The option framework introduced by [1] could be thought of as learning a generator of skills, which are temporal abstractions over actions that can be used by a downstream, higher-level controller. Recent works like DIAYN [2] and others [3, 21] in option discovery learn a fixed set of diverse skills that are discriminable by observed state or trajectory, such as learning to move left, or move right. These skills are generally not meant to be the final agent policy; DIAYN even learns skills without any extrinsic environmental reward. However, these methods are most similar to ADAP in terms of mapping a latent sample to final agent policies.
Goal-Conditioned Reinforcement Learning
Yet another way to induce diverse policy behaviors is through goal-conditioned policies [22, 23, 24] that use a family of task-defined value or Q functions, or expert trajectories [25], to incentivize diversity. These methods require structure in how to define diversity, such as defining a value function family over states [24].
Multi-Agent Roles
Recent works generate specialized agent policies in a multi-agent setting, building on QMIX [26]. ROMA [27] learns agent roles that are not static throughout agent trajectories, requires optimizing several additional objectives, and learns the roles jointly with other roles via a joint action-value function. Similarly, MAVEN [28] optimizes the mutual information between joint agent actions and a latent variable. While a single latent sample in ADAP encodes a single agent ‘species’, a latent sample in these works encodes how a group of agents should behave together: thus we cannot employ adaptation based on individual selection.
4 Introduction to Farmworld
We test our learned generator G in a new open-ended grid-world environment called Farmworld, which supports multi-agent interaction and partial observability. The idea behind Farmworld is simple: agents move about the map to gather food from various resources, such as chickens and towers that spawn in random locations. In our experiments, agents only optimize their own reward: a single agent gets exactly 0.1 reward for each timestep it is alive. Thus, lifetime is directly proportional to reward.
Agents can live longer by attacking other agents, chickens, and towers: for example, a chicken might take two timesteps of sword hits to yield five timesteps worth of health. To avoid cannibalism in our experiments, we set agents to gain zero health from other agents. Of course, these numbers are configurable to achieve different environment dynamics. Furthermore, Farmworld is a partially-observable environment: agents see only what is within a certain tile radius from their location. In our experiments, the observation is a vector representation of the units and tiles. Additional details of Farmworld are provided in the supplement.
5 Baselines
We compare the ADAP algorithm to two algorithmic baselines. For each of the baselines, as well as ADAP, we experiment with both concatenation (+) and multiplicative model (x) types, and use consistent observation spaces, action spaces, and latent distributions - so the only difference is the diversity algorithm itself. The first baseline is Vanilla PPO, which we call the "Vanilla" baseline. The only difference between Vanilla and ADAP is that the former does not use the diversity regularization loss in Equation 1. Vanilla policies still receive samples from the latent distribution Z - there is simply no objective term that enforces diverse policy actions conditioned on these samples. Our second baseline was adapted from DIAYN. DIAYN is formulated as an unsupervised skill generator, rather than a policy generator. However, we believe that it remains one of the technically closest works, and with slight modifications, we attempt to make a comparison between DIAYN and ADAP. First, we highlight some differences between the methods. ADAP uses a KL-divergence based diversity term rather than learning a skill discriminator network. This enables ADAP's policy diversity to be optimized directly through gradient descent with respect to parameters ϕ, rather than be optimized through RL as with the skill diversity of DIAYN. Additionally, the ADAP latent distribution is defined over a continuous sample space, in contrast to the categorical sample space of DIAYN. We tried the standard DIAYN algorithm with categorical sample spaces and unsupervised skill discovery; however, this performed poorly on all of our Farmworld and Markov Soccer experiments. Thus, to place the algorithms on more equal footing, we modify DIAYN to: 1) add extrinsic environmental reward to DIAYN training (this is briefly mentioned in the DIAYN paper itself); 2) use the continuous sample space; and 3) train a skill regressor that minimizes the predicted-latent error, instead of a skill discriminator that outputs latent class probabilities. We describe the new skill regressor in the supplement. We call this method DIAYN*.
Training and Hyperparameters
We train each method for the same number of timesteps (30 million), and generally keep hyperparameters constant across methods. These are described in the supplement.
Adaptation Comparisons
When we apply Algorithm 2 to ADAP, we apply the same algorithm to each of the baselines. We can do this because ADAP and the baselines all share the same input latent distribution Z - the only difference is how well they encode a diverse policy space within Z.
6 Adaptation to Environmental Ablations via Optimizing Z
In nature, differences between species and even within species lend robustness to life as a whole. It becomes less likely that any single perturbation in the environment will break the overall system.
In the same manner, differences between policies can lend robustness to the policy space as a whole.
Experiment
We aim to test how having a diverse policy space allows us to search in latent space for policies that better fit unexpected environmental ablations. Doing so would demonstrate the robustness of a population of policies, and simultaneously provide information about different types of diversity that are learned by G. To this end, we train G on a normal Farmworld environment as shown in Section 4. We then ablate the environment, changing features such as the map size, the location of food sources, and even re-spawn times and food yield. Lastly, we deploy G into the ablated environment and, without changing the parameters ϕ, optimize the latent distribution for policies that are successful in the new environment, using the search procedure of Algorithm 2. Ablations and descriptions are available in Table 1.
Results
Somewhat to our surprise, in each experiment trial, learning G using ADAP created a policy space Π containing ‘species’ that could thrive in nearly every environmental ablation (see Figure 3). The important thing to note is that the development of these species was emergent from the training environment – a product of optimizing G for both policy diversity and reward maximization. How is it possible that ADAP produced a policy space capable of adapting to nearly every ablation? The training environment was relatively abundant, with resources scattered about a large map. Thus, there were many degrees-of-freedom in the rules of survival, and by optimizing for diversity, we found a policy space that filled these degrees-of-freedom while still yielding high reward. While these ablations reflect some of the possible axes of diversity, there are certainly more. For example, an agent's direction of ‘preference’ does not have to be the bottom-right, as in the Far Corner ablation. Indeed, as a sanity check, we tested placing food locations in various other spots on an enlarged map, and found that for every cardinal location, there was a species of agent in G that could exploit that new food location. What came as a surprise was that agents also used their health indicator to diversify: since agents diversify conditional on state, species developed in which agents would prefer to go upwards when their health is high, but downwards when their health is low. This particular agent species was the one that managed to thrive in the Wall Barrier ablation. Similarly, in the Patience ablation, ADAP learned a certain species of agent that waited until its health was low before farming a tower. The Poison Chickens ablation was the one hold-out in which latent optimization on ADAP could not find a profoundly successful species. It is possible that too large a trade-off between diversity and potential reward in the training environment would have been required to learn a policy that ignored half of its potential food sources. We come back to this ablation in the next experiment. Finally, we should note that ADAP beat the Vanilla baseline in all ablations aside from Speed. We hypothesize that this ablation is the closest in distribution to the training environment. Since the Vanilla baseline optimizes solely for expected reward, it makes no diversity trade-offs and performs well in in-distribution environments. As visible from the plots, DIAYN* also did not learn to speciate in a manner that was successful on the majority of ablations.
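As a rough picture of the latent-space selection used throughout this section (Algorithm 2 itself is in the supplement), here is a simplified sketch that keeps the top-scoring latents deterministically rather than probabilistically; `evaluate_policy` and the population sizes are hypothetical placeholders.

```python
import numpy as np

def sample_unit_sphere(n, dim=3, rng=np.random):
    z = rng.normal(size=(n, dim))
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def latent_search(evaluate_policy, generations=10, pop_size=32, keep_frac=0.25,
                  episodes_per_latent=1, rng=np.random):
    """Search the latent space of a frozen generator for high-return latents.

    evaluate_policy(z) -> episodic return of the policy G_phi(z) in the
    (possibly ablated) environment; returns are averaged over several
    episodes when they are noisy, e.g. in Markov Soccer.
    """
    population = sample_unit_sphere(pop_size, rng=rng)
    for _ in range(generations):
        returns = np.array([
            np.mean([evaluate_policy(z) for _ in range(episodes_per_latent)])
            for z in population
        ])
        n_keep = max(1, int(keep_frac * pop_size))
        elite = population[np.argsort(returns)[::-1][:n_keep]]
        # keep the elites and refill the rest of the population with fresh latents
        fresh = sample_unit_sphere(pop_size - n_keep, rng=rng)
        population = np.concatenate([elite, fresh], axis=0)
    final_returns = [evaluate_policy(z) for z in population]
    return population[int(np.argmax(final_returns))]
```

Note that the generator weights ϕ are never touched; only which latents are retained changes, which is what makes this form of adaptation fast.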
7 Measurement of Agent Individuality and Diversity in a Population
A good generative model of policies should be able to represent a multi-modal space of behaviors. That is: different agent policies should be able to act with individuality. Our generative model uses a shared parameter set across all agents, and naively using shared parameters could result in learning just one ‘average’ agent – which is precisely what we wish to avoid.
Niche Specialization Experiment
To test the abilities of our policy generator, we set up the Farmworld environment with a hidden rule specific to this experiment: when an agent spawns, it is able to harness resources from either towers or chickens. However, once it gets health from one unit type, it becomes ‘locked into’ that unit type, and cannot gain health from the other unit type. Information about an agent's ‘locked-into’ state is not provided as part of the agent observation, and since agents have no memory, they would have to look to their latent z to determine their niche. Since there are equal numbers of chickens and towers on our map, a reasonable generative model algorithm should be able to map half the latent space to each of these two specializations, or niches.
Results
So that we can see how well the entire latent space maps to a niche, we report rewards and other metrics in Table 3 without running latent space optimization on ADAP or the baselines. In summary, ADAP consistently learned a more multi-modal policy space than any of the other baselines. Our results also indicate that using a multiplicative model can yield a higher degree of policy space multi-modality, and therefore greater success in this environment. We can see in Table 3 that ADAP (x) is able to attain the highest average agent lifetime. This, however, is not necessarily the most interesting point. ADAP learns a policy generator with the highest mutual information I(T;Z) between an agent "niche" T and the latent distribution Z. Intuitively, this means that ADAP was able to learn a population of agents composed of two clear species – on one hand, agents that focus on chickens, and on the other, agents that focus on towers. Formally, let T be a discrete random variable where p_T(t) is the probability that an agent attacks target t, for t ∈ {chicken, tower}. Then I(T;Z) is high when individual agents are specialized in a niche, and we see diverse niches across our population. This is because I(T;Z) = H(T) − H(T|Z), which is maximized by both increasing H(T) and decreasing H(T|Z). H(T) measures the diversity of niches across all agents in the population, and H(T|Z) measures how rigidly an agent falls into a single niche (i.e. specialization). As an example, suppose agents were highly specialized but not diverse, e.g., all agents were chicken-only attackers. Then H(T) = H(T|Z) = I(T;Z) = 0. On the other hand, suppose that all z ∼ Z yield an agent policy that attacks chickens and towers with equal probability. Then in this case H(T) = H(T|Z) = 1 and I(T;Z) = 0. Intuitively, this means that half of the time agents are wasting timesteps attacking a target that they are unable to even damage! Qualitatively, we have seen that the latter case occurs with the Vanilla and (most seeds of) DIAYN* baselines: notice that their H(T|Z) is significantly higher than that of ADAP.
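To ground these quantities, a minimal sketch of how I(T;Z) = H(T) − H(T|Z) could be estimated from rollout statistics follows; the per-episode grouping by sampled latent and the data format are assumptions made for illustration, not the paper's measurement code.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(latent_ids, targets):
    """Estimate I(T;Z) = H(T) - H(T|Z) from per-episode attack statistics.

    latent_ids: identifier of the latent sampled for each episode.
    targets: attack target observed in that episode, e.g. "chicken" or "tower".
    """
    latent_ids = np.asarray(latent_ids)
    targets = np.asarray(targets)
    classes = np.unique(targets)
    # H(T): entropy of the marginal niche distribution over the whole population
    h_t = entropy([(targets == c).mean() for c in classes])
    # H(T|Z): niche entropy within each latent, weighted by how often it occurs
    h_t_given_z = 0.0
    for z in np.unique(latent_ids):
        mask = latent_ids == z
        h_t_given_z += mask.mean() * entropy([(targets[mask] == c).mean() for c in classes])
    return h_t - h_t_given_z

# Two fully specialized species: H(T) = 1, H(T|Z) = 0, so I(T;Z) = 1
print(mutual_information(["z1", "z1", "z2", "z2"],
                         ["chicken", "chicken", "tower", "tower"]))  # 1.0
```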
For fun, we performed latent distribution optimization on generators trained using the Niche Specialization environment to fit the Poison Chickens environment. One would expect algorithms with low H(T|Z) to fare well, since Algorithm 2 can find an optimized Z* such that p_T(chicken | z ∼ Z*) = 0. Sure enough, we see this result in Figure 7: ADAP (x) is most successful at consistently producing a generative model that can produce policies that not only avoid chickens, but also successfully attack only towers.
8 Adaptation and Self-Play in a Zero-Sum Two-Player Environment
Environment
This experiment uses Markov Soccer, introduced in [5]. Two agents, A and B, play on a gridworld and must ‘carry’ a ball into the opposing goal to score. Agents walk in cardinal directions or stand in place. Possession is randomly initialized, and switches if one agent bumps into the other. Actions of A and B occur on the same timestep, execution order is randomized, and at each timestep the game ends in a draw with some probability ϵ. Markov Soccer is an interesting environment because the best policy for one agent depends on the policy of the other agent. As described in [5], there exists a worst-case-optimal probabilistic policy for Markov Soccer, which maximizes the minimum possible score against any adversary. This strategy tends to be conservative, preferring to act towards a draw where a different policy could have obtained a higher score. On the other hand, non-worst-case-optimal strategies may be less conservative and may achieve very high scores against some opponents, but very low scores against others. Analogous to real soccer, different players have varying abilities and play styles, and a given player p1 may be optimal against p2, but not against p3. If any single policy has its drawbacks, can we instead learn an entire space of diverse policies Π := {π_1, π_2, . . .}, where for any opponent we can select a policy π_i ∈ Π that achieves the maximum score against that opponent? Ideally, this space includes the worst-case-optimal policy, as well as other more aggressive policies. Then, just as a coach might swap out a soccer player, we can mix and match our champion as suited.
Experiment
Can we learn a population of individuals that is holistically strong against all types of opponents? We evaluate adaptability to various adversaries using two methods. First, we test the baselines and our method against a set of hand-coded soccer bots. These bots are designed to represent a wide gamut of strategies, some of which are more exploitable than others. Secondly, we evaluate each G by playing ADAP (x), ADAP (+), Vanilla (x), and Vanilla (+) in a round-robin tournament against each other. All scores are computed as wins minus losses over 1000 simulated games.
Against Hard-Coded Bots: Each bot always starts on the left side, and the learned policy starts on the right side (although the environment is coded such that observations are side-invariant). Bot types fall into three categories: offense (bots start with possession), defense (policy starts with possession), and mixed (random starting possession). See Table 4 for more details.
Round-Robin Against Each Other: We also pit each generative model in a round-robin tournament against the other models. The manner in which we do this is described in the supplement.
Training and Baselines
We use self-play to train both ADAP and the baselines. We use the same Vanilla baseline as described in Section 5, and we omit the DIAYN* baseline for brevity. Note that at no point in the training process did any of our algorithms train against any bots, or against each other.
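A minimal sketch of the wins-minus-losses evaluation just described; `play_episode` is a hypothetical helper returning +1, -1, or 0 for a win, loss, or draw from the evaluated policy's perspective, and the scoring convention is an assumption about how the tallies are aggregated.

```python
def head_to_head_score(play_episode, policy, opponent, n_games=1000):
    """Score `policy` against a fixed opponent as wins minus losses over n_games."""
    outcomes = [play_episode(policy, opponent) for _ in range(n_games)]
    wins = sum(1 for o in outcomes if o > 0)
    losses = sum(1 for o in outcomes if o < 0)
    return wins - losses
```

For ADAP, the same score can also serve as the fitness handed to the latent search sketched earlier, so that the population member best suited to a particular adversary is selected.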
Results
As in the Farmworld adaptability experiment, we see from Figure 8 that ADAP is able to learn a G during the training phase that emergently contains members that are successful against a variety of unexpected adversaries - including naive bots and other policies. Compared to Vanilla, the ADAP policy space generalizes better against all adversaries. Going back to the soccer team example, we were able to select individuals from the ADAP population that were well suited to specific strategies. For example, against the Oscillate 1 adversary, ADAP latent optimization found a member of the population that side-stepped the oscillating adversary simply by moving to the top row, and then down to the goal. Additionally, against the Straight adversary, successful ADAP individuals stole possession by deterministically standing in front of the opponent to block, and then moving around and into the goal. On the other hand, in both of these situations Vanilla could not find individuals that exploited the naive deterministic nature of their opponents. Using ADAP did not just allow us to optimize against naive opponents. ADAP learned the best G in the round-robin tournament, and was the only method that was able to consistently beat our rule-based bot. It is possible that by using ADAP during the self-play training, individuals encountered a wide variety of strategies that bettered overall performance.
9 Limitations
Bad Apples
When using ADAP, not every member of the policy space is going to be an optimal policy. In fact, some generated policies might be bad apples: policies that were incentivized by the diversity regularizer to take actions that were not rewarding. Naturally, some individuals might be better or worse than others. These individuals can be removed by optimizing the latent distribution. However, the bad apples may come with an upside: even though they do not perform well in the current environment, they might happen to perform well in a future ablated environment!
Continuous-Action Space Environments
The results presented so far focus entirely on environments with discrete categorical action spaces, in which we have observed that our diversity regularizer in Equation 1 empirically performs well. However, not all environments in RL use discrete action spaces - continuous action spaces are widely used in RL control tasks. While we believe that our regularizer can work in these environments, we have not rigorously tested it in these settings.
10 Conclusion
We have presented a framework to learn a generative model of policies. Rather than learning just one policy, we aim to find as many high-performing and individually distinct policies as possible, all compressed within the parameters of our generator. Learning a space of policies pays off in an open-ended environment such as Farmworld, in which there may be more than one path to success. We show in Section 6 that we can adapt to ablations by quickly choosing ‘species’ from our learned policy space that are successful in the new environment. We also learn a policy space in a competitive, two-player, zero-sum game in Section 8. Here, no single deterministic policy is optimal against all adversaries. Instead, we show how to train a family of policies that can be naturally adaptable to a wide array of both challenging and naive adversaries. Overall, we hope to show how it can be beneficial in RL to optimize not just for reward, but also for diversity of behavior.
As environments continue to increase in complexity and open-endedness – filled with branching paths to success – it makes sense to learn not just one, but many, solutions.
11 Acknowledgements
This research was supported in part by IBM through the MIT-IBM Watson AI Lab.
1. What is the main contribution of the paper regarding adapting policies to different environments?
2. What are the strengths of the proposed approach, particularly in its simplicity and potential effectiveness?
3. Do you have any concerns or suggestions regarding the paper's references to related work, specifically MAP elites and multitask MDPs?
4. What is unclear in Equation (1) and how can it be clarified?
5. How is the smoothing constant b used in the regularizer, and could you provide more details on this aspect of the method?
6. Are there any typos or unclear sentences in the text that need correction?
Summary Of The Paper Review
Summary Of The Paper
The paper constructs a latent-space, diversity-based family of policies able to adapt to different environments under different circumstances. The idea is to have a relatively low-dimensional set of adaptation parameters which essentially are all that needs to be selected to choose between pre-trained policies.
Review
I do like the premise of the paper, a simple but promising idea. It addresses the issue that training is slow, but sometimes one still needs to adapt quickly to a new situation. The idea of using a low-dimensional latent space to select from otherwise pre-trained policies is nice. The reviewer was wondering why, since they already cited work by Stanley, they didn't refer to MAP elites, which is in spirit even closer (albeit on the EA side of the formalisms) to the present paper. More generally, it might be good to relate to multiobjective optimization, and also to multitask MDPs, to make clear how the present algorithm relates to existing solutions. It is clear that the diversification is not based on the concrete task set, but on diversification in latent space; still, the latent space is a function of the training in given sample spaces.
Eq. (1): what distribution is the inner expectation computed over? The notation as a set is really not clear, and z_i != z_j is weird, because in continuous distributions that probability would be 0. Please make clear what you are doing here.
You talk about a smoothing constant b, but we have no idea what your regularizer looks like. What does the smoothing look like?
line 227: "using the search algorithm described in the 2." - incomplete sentence
line 235: "emergent from the train environment" -> "training"
Figure 5: colors need explaining
line 256: -> training
line 288: not entirely clear what you do here: are you measuring the distance between the two sets of possible policies? Or what else? Perhaps write a formula, to avoid ambiguities?
line 293: the second G_1 should be a G_2
NIPS
Title Adaptable Agent Populations via a Generative Model of Policies Abstract In the natural world, life has found innumerable ways to survive and often thrive. Between and even within species, each individual is in some manner unique, and this diversity lends adaptability and robustness to life. In this work, we aim to learn a space of diverse and high-reward policies in a given environment. To this end, we introduce a generative model of policies for reinforcement learning, which maps a low-dimensional latent space to an agent policy space. Our method enables learning an entire population of agent policies, without requiring the use of separate policy parameters. Just as real world populations can adapt and evolve via natural selection, our method is able to adapt to changes in our environment solely by selecting for policies in latent space. We test our generative model’s capabilities in a variety of environments, including an open-ended grid-world and a two-player soccer environment. Code, visualizations, and additional experiments can be found at https://kennyderek.github.io/adap/. 1 Introduction Quick thought experiment: imagine our world was such that all people acted, thought, and looked exactly the same in every situation. Would we ever have found the influential dissenters that sparked scientific, political, and cultural revolutions? In reinforcement learning (RL), it is common to learn a single policy that fits an environment. However, it is often desirable to instead find an entire array of high performing policies. To this end, we propose learning a generative model of policies. At a high level, we aim to show that purposefully learning a diverse policy space for a given environment can be competitive to learning a single policy, while better encompassing a range of skillful behaviors that are adaptable and robust to changes in the task and environment. We name our method of learning a space of adaptable agent polices: ADAP. Why should we bother with finding more than one policy per environment? We propose two primary reasons. First, RL environments are continually approaching greater levels of open-endedness and complexity. For a given environment, there might be an entire manifold of valid and near-equally high performing strategies. By finding points across this manifold, we avoid ‘having all eggs in one basket,’ granting robustness and adaptability to environmental changes. In the event of a change, we are able to adapt our generated population to select individuals that can still survive given the ablation, much like natural selection drives evolution in the real world. Secondly, using a generative model of policies as a population of agents makes intuitive sense in multi-agent environments, in which different agents should have the capacity to act like they are unique individuals. However, it is common in many multi-agent reinforcement learning settings to deploy the same policy across all agents, such that they are essentially distributed clones. Doing so may reduce the multi-modality of the agent population, resulting in a single ‘average’ agent. Previous work has touched on ideas akin to a generative model of policies. In hierarchical RL, the high-level policy controller can be considered a generator of sub-policies that are ‘options’ [1, 2, 3]. But these methods are designed to find decomposable skills that aid in the construction of just one 35th Conference on Neural Information Processing Systems (NeurIPS 2021). downstream controller policy. 
A core idea of our work is that of quality diversity [4], which aims to optimize a population of agents along the axes of both reward and diversity. Traditional methods often use evolutionary search over a discrete-sized population of separate agents, each with their own policy parameters. This consumes more time and training resources, and limits the number of potential behaviors. Our work integrates the goals of quality diversity into time and memory efficient deep RL by simulating an entire population of agents via a generative model of policies, with diversity bounded only by capacity of the generator. The rest of the paper is organized as follows. First we introduce our generative model of policies and the diversity objective that guides its learning. Next, we explore the potentials of learning a population of agents by ablating environments and then searching for suitable policies, directly in latent space. We primarily study two environments: Markov Soccer [5] and Farmworld. Farmworld is a new environment we have developed for testing diversity in a multi-agent, open-ended gridworld. At the website linked in the abstract, one can find qualitative results of experiments presented in this paper, as well as additional results on toy environments of CartPole [6] and a standard multi-goal environment. 2 Method Let Z be a sample space of n dimensional vectors, and Z be a random variable defined uniformly over Z . Then, we learn a mapping, G : ϕ,Z, from generator weights ϕ and latent distribution Z to a space of policies Π. The generator Gϕ itself is not a policy. It must be conditioned on a draw z ∼ Z in order to define a learned set of behaviors. In this sense, z is a stochastic parameter of Gϕ, and is sampled once at the beginning of each agent episode. In our experiments, Z is the sample space of all three dimensional vectors with magnitude one (i.e. the surface of the unit sphere). Practically, we use the low dimension of three, so that we can perform a key subject of this paper: rapid optimization, or adaptation, of G by changing Z rather than ϕ (fine tuning ϕ would be more typical in literature). We require magnitude one so that there is at least one non-zero element for any z ∼ Z, which we found important for providing signal and stability in the training of G. It is possible that with higher dimensions, this stipulation could be relaxed. Diversity Regularization In order to learn a diverse space of unique policies, we introduce a diversity regularization objective. Since policies define a space of actions taken over different states, we propose that in order for two policies to be distinct, they must have different action distributions given the same state. To this end, we define the objective Ldiv (1): Ldiv(ϕ) = E s∈S [ E zi,zj∼Z exp ( −DKL(πϕ,zi;b(s)∥πϕ,zj ;b(s)) )] (1) in which DKL is the KL-divergence between the two policy action distributions πϕ,zi and πϕ,zj , and b is a smoothing constant over the action distributions. Optimization of G In our experiments, we optimize the diversity objective in an online fashion using gradient descent, in conjunction with a PPO [7] clipped-surrogate objective and an entropy regularization objective. Our full optimization problem is max ϕ LPPO(ϕ)− αLdiv(ϕ) where LPPO is Equation 9 in [7] and α is a coefficient to scale the diversity regularization objective. See Algorithm 1 in the supplement for additional details. 
Adaptation via Optimization in the Latent Space of G By learning an entire space of policies Π, we are able to search our policy space for the highest performing policy, whether dealing with the training environment or an ablated future environment. In contrast to searching over policy parameters through transfer learning or fine-tuning, we are able to quickly search over the low-dimensional latent space (dimensionality 3 in our experiments). In fact, we can quickly adapt back and forth to various situations: the search procedure often takes less than 30 seconds, or 100 episode rollouts, to find any high quality solutions that exist. Over the course of a small number of generations, we evaluate randomly sampled latents, and keep higher performing ones with greater probability. In the event that episodes have a high degree of variablility per run – such as in the Markov Soccer environment – it may be necessary to run several episodes per latent vector and average the returns. Details can be found in Algorithm 2 of the supplement. Model Architecture Similarly to prior work [3], we have found that richer integrations between the latent vector and the observation can yield a more multi-modal policy space. To induce this richer integration, we introduce a multiplicative model denoted "(x)" for latent integration, and compare the results to a baseline of concatenating "(+)" the latent sample to the observation. We describe this architecture in the supplement. 3 Related Work Quality Diversity The evolutionary computing community has developed various quality diversity (QD) algorithms that aim to find a balance of novel and high-performing individuals within a population. Some methods can even be considered policy generators: NEAT and HyperNEAT [8, 9] use an indirect encoding to construct a network architecture. To encourage diversity, these methods use an idea known as fitness sharing: if genotypes are too similar, then they will split reward. While NEAT and HyperNEAT encourage diversity of parameters, other methods encourage diversity of behavior. Novelty Search (NS) [10] learns individuals that have high novelty along some user defined behavioral distance metric. For example, in a maze navigation task, the behavioral characteristic could be the final resting location of the individual, and agents are selected based on how far away they end up from an archive of past individuals. Unfortunately, as shown in [11], the choice of this characteristic can critical, and domain dependent. Additionally, NS focuses mainly on finding novel solutions, and ignores fitness, or reward. NS with Local Competition [12] and MapElites [13] aim to solve this problem by selecting for individuals with high fitness, but only against individuals in the same phenotypic or genotypic region, respectively. There are several prior and concurrent works that aim to connect ideas of quality diversity with deep reinforcement learning. Like quality diversity algorithms, these methods optimize a fixedsize population or archive of policies to be distinct from each other. [14, 15] aim to find a set of policies that yield diverse trajectories. [15] in particular focuses on the application to multi-agent environments and zero-shot coordination. [16] uses a KL-divergence over policies; but a policy’s diversity is optimized over previous SGD updates of itself, thus limiting the potential multi-modality of solutions. 
[17] optimizes for diversity of the total population via maximizing the determinant of a population distance matrix, but works best only with small populations of size three or five. [18] uses a method reminiscent of DIAYN, but introduces ideas to balance quality with diversity. It is especially similar to ADAP in optimizing the latent space to achieve robustness, but only searches over a fixed-size set of latent vectors and focuses on single-agent environments. Other methods have explored indirectly influencing diversity via differing training hyperparameters as in Population-Based Training [19], or using reward randomization as in [20]. Importantly, both classical QD algorithms [10, 12, 13] and most deep RL methods [14, 15, 16, 17, 19, 20] use sets of distinct agent parameters to learn a diverse population. ADAP makes the connection that we can encode unique policies into a latent space (an idea that also appears in a few recent works [2, 3, 21, 18]), and frames learning a diverse population as a generative modelling problem. Additionally, in distinction from classical QD methods that use a non-differential genetic algorithm or evolutionary search for optimization, ADAP is able to directly optimize for diversity and policy credit assignment via gradient descent. Option Discovery for Hierarchical RL The option framework introduced by [1] could be thought of as learning a generator of skills, which are temporal abstractions over actions that can be used by a downstream, higher-level controller. Recent works like DIAYN [2] and others [3, 21] in option discovery learn a fixed set of diverse skills that are discriminable by observed state or trajectory: such as learning to move left, or move right. These skills are generally not meant to be the final agent policy, DIAYN even learns skills without any extrinsic environmental reward. However, these methods are most similar to ADAP in terms of mapping a latent sample to final agent policies. Goal-Conditioned Reinforcement Learning Yet another way to induce diverse policy behaviors is through using goal-conditioned policies [22, 23, 24] that use a family of task-defined value or Q functions or expert trajectories [25] to incentivize diversity. These methods require structure in how to define diversity, such as defining a value function family over states [24]. Multi-Agent Roles Recent works generate specialized agent policies in a multi-agent setting, building on QMIX [26]. ROMA [27] learns agent roles that are not static through agent trajectories, require optimizing several additional objectives, and are learned jointly with other roles via a joint action-value function. Similarly, MAVEN [28] optimizes the mutual information between joint agent actions and a latent variable. While a single latent sample in ADAP encodes a single agent ‘species‘, a latent sample in these works encode how a group of agents should behave together: thus we cannot employ adaptation based on individual selection. 4 Introduction to Farmworld We test our learning G in a new open-ended grid-world environment called Farmworld, that supports multi-agent interaction and partially observable observations. The idea behind Farmworld is simple: agents move about the map to gather food from various resources, such as chickens and towers that spawn in random locations. In out experiments, agents only optimize their own reward: a single agent gets exactly 0.1 reward for each timestep it is alive. Thus, lifetime is directly proportional to reward. 
Agents can live longer by attacking other agents, chickens, and towers: for example, a chickens might take two timesteps of sword hits to yield five timesteps worth of health. To avoid cannibalism in our experiments, we set agents to gain zero health from other agents. Of course, these numbers are configurable to achieve different environment dynamics. Furthermore, Farmworld is a partially-observable environment: agents see only what is in a certain tile radius from their location. In our experiments, the observation is a vector representation of the units and tiles. Additional details of the Farmworld are provided in the supplement. 5 Baselines We use compare the ADAP algorithm to two algorithmic baselines. For each of the baselines, as well as ADAP, we experiment with both concatenation (+) and multiplicative model (x) types, and use consistent observation spaces, action spaces, and latent distributions - so the only difference is the diversity algorithm itself. The first baseline is Vanilla PPO, which we call the "Vanilla" baseline. The only difference between Vanilla and ADAP is that the former does not use the diversity regularization loss in Equation 1. Vanilla policies still receive samples from latent distribution Z - there is simply no objective term that enforces a diverse policy actions conditional on these samples. Our second baseline was adapted from DIAYN. DIAYN is formulated as a unsupervised skill generator, rather than a policy generator. However, we believe that it remains one of the technically closest works, and with slight modifications, we attempt to make a comparison between DIAYN and ADAP. First, we highlight some differences between the methods. ADAP uses a KL-divergence based diversity term rather than learning a skill discriminator network. This enables ADAP’s policy diversity to be optimized directly through gradient descent with respect to parameters ϕ, rather than be optimized through RL as with the skill diversity of DIAYN. Additionally, the ADAP latent distribution is defined over a continuous sample space, in contrast to the categorical sample space of DIAYN. We tried the standard DIAYN algorithm with categorical sample spaces and unsupervised skill discovery, however this performed poorly on all of our Farmworld and Markov Soccer experiments. Thus, to place the algorithms on more equal footing, we modify DIAYN: 1.) add extrinsic environmental reward to DIAYN training (this is briefly mentioned in the DIAYN paper itself) 2.) to use the continuous sample space 3.) train a skill regessor that minimizes predicted latent error, instead of a skill discriminator that outputs latent class probabilities. We describe the new skill regressor in the supplement. We call this method DIAYN*. Training and Hyperparameters We train each method for the same number of timesteps (30 million), and generally keep hyperparameters constant across methods. These are described in the supplement. Adaptation Comparisons When we apply Algorithm 2 to ADAP, we apply the same algorithm to each of the baselines. We can do this because ADAP and baselines all share the same input latent distribution Z - the only difference is how well they encode a diverse policy space within Z. 6 Adaptation to Environmental Ablations via Optimizing Z In nature, differences between species and even within species lend robustness to life as a whole. It becomes less likely that any single perturbation in the environment will break the overall system. 
In the same manner, differences between policies can lend robustness to the policy space as a whole.

Experiment We aim to test how having a diverse policy space allows us to search in latent space for policies that better fit unexpected environmental ablations. Doing so would demonstrate the robustness of a population of policies, and simultaneously provide information about different types of diversity that are learned by G. To this end, we train G on a normal Farmworld environment as shown in Section 4. We then ablate the environment, changing features such as map size and layout, location of food sources, and even re-spawn times and food yield. Lastly, we deploy G into the ablated environment and, without changing the parameters ϕ, optimize the latent distribution for policies that are successful in the new environment, using the search procedure in Algorithm 2. Ablations and descriptions are available in Table 1.

Results Rather to our surprise, in each experiment trial, learning G using ADAP created a policy space Π containing ‘species’ that could thrive in nearly every environmental ablation (see Figure 3). The important thing to note is that the development of these species was emergent from the training environment – a product of optimizing G for both policy diversity and reward maximization. How is it possible that ADAP produced a policy space capable of adapting to nearly every ablation? The training environment was relatively abundant, with resources scattered about a large map. Thus, there were many degrees of freedom in the rules of survival, and by optimizing for diversity, we found a policy space that filled these degrees of freedom while still yielding high reward. While these ablations reflect some of the possible axes of diversity, there are certainly more. For example, an agent's direction of ‘preference’ does not have to be the bottom-right, as in the Far Corner ablation. Indeed, as a sanity check, we tested placing food locations in various other spots on an enlarged map, and found that for every cardinal location, there was a species of agent in G that could exploit that new food location. What came as a surprise was that agents also used their health indicator to diversify: since agents diversify conditional on state, species developed in which agents would prefer to go upwards when their health is high, but downwards when their health is low. This particular agent species was the one that managed to thrive in the Wall Barrier ablation. Similarly, in the Patience ablation, ADAP learned a certain species of agent that waited until its health was low before farming a tower. The Poison Chickens ablation was the one hold-out in which latent optimization on ADAP could not find a profoundly successful species. It is possible that the trade-off between diversity and potential reward in the training environment was too large to learn a policy that ignored half of its potential food sources. We come back to this ablation in the next experiment. Finally, we should note that ADAP beat the Vanilla baseline in all ablations aside from Speed. We hypothesize that this ablation is the closest to the training distribution. Since the Vanilla baseline optimized solely for expected reward, it makes no diversity tradeoffs and performs well in in-distribution environments. As visible from the plots, DIAYN* also did not learn to speciate in a manner that was successful on the majority of ablations.
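The adaptation step touches only the latent distribution, never ϕ. As a minimal sketch of a generation-based search in the spirit of Algorithm 2 (whose exact selection rule and hyperparameters live in the supplement, so the values and the deterministic elite-keeping below are assumptions):

```python
import numpy as np

def sample_unit_latents(n: int, dim: int, rng: np.random.Generator) -> np.ndarray:
    """Uniform samples on the unit sphere (normalize isotropic Gaussian draws)."""
    v = rng.normal(size=(n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def adapt_latents(evaluate, n_generations=10, pop_size=32, n_keep=8, dim=3, seed=0):
    """Search latent space for policies that do well in an ablated environment.

    `evaluate(z)` is assumed to roll out the frozen generator G conditioned on z
    and return an (averaged) episode return. Keeping the top latents each
    generation is a simplified stand-in for the paper's 'keep higher performers
    with greater probability' rule."""
    rng = np.random.default_rng(seed)
    pop = sample_unit_latents(pop_size, dim, rng)
    for _ in range(n_generations):
        returns = np.array([evaluate(z) for z in pop])
        elite = pop[np.argsort(returns)[-n_keep:]]
        pop = np.concatenate([elite, sample_unit_latents(pop_size - n_keep, dim, rng)])
    returns = np.array([evaluate(z) for z in pop])
    return pop[int(np.argmax(returns))]
```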
7 Measurement of Agent Individuality and Diversity in a Population
A good generative model of policies should be able to represent a multi-modal space of behaviors. That is: different agent policies should be able to act with individuality. Our generative model uses a shared parameter set across all agents, and naively using shared parameters could result in learning just one ‘average’ agent – which is precisely what we wish to avoid.

Niche Specialization Experiment To test the abilities of our policy generator, we set up the Farmworld environment with a hidden rule specific to this experiment: when an agent spawns, it is able to harness resources from either towers or chickens. However, once it gets health from one unit type, it becomes ‘locked-into’ that unit type, and cannot gain health from the other unit type. Information about an agent's ‘locked-into’ state is not provided as part of the agent observation, and since agents have no memory, they would have to look to their latent z to determine their niche. Since there are equal numbers of chickens and towers on our map, a reasonable generative model algorithm should be able to map half the latent space to each of these two specializations, or niches.

Results To see how well the entire latent space maps to a niche, we report rewards and other metrics in Table 3 without running latent space optimization on ADAP or the baselines. In summary, ADAP consistently learned a more multi-modal policy space than any of the other baselines. Our results also indicate that using a multiplicative model can yield a higher degree of policy space multi-modality, and therefore greater success in this environment. We can see in Table 3 that ADAP (x) is able to attain the highest average agent lifetime. This, however, is not necessarily the most interesting point. ADAP learns a policy generator with the highest mutual information I(T;Z) between an agent "niche" T and the latent distribution Z. Intuitively, this means that ADAP was able to learn a population of agents composed of two clear species – on one hand, agents that focus on chickens, and on the other, agents that focus on towers. Formally, let T be a discrete random variable where pT(t) is the probability that an agent attacks target t, for t ∈ {chicken, tower}. Then I(T;Z) is high when individual agents are specialized in a niche, and we see diverse niches across our population. This is because I(T;Z) = H(T) − H(T|Z) and is maximized by both increasing H(T) and decreasing H(T|Z). H(T) measures the diversity of niches across all agents in the population, and H(T|Z) measures how rigidly an agent falls into a single niche (i.e. specialization; lower values mean more rigid specialization). As an example, suppose agents were highly specialized but not diverse, e.g., all agents were chicken-only attackers. Then H(T) = H(T|Z) = I(T;Z) = 0. On the other hand, suppose that all z ∼ Z yield an agent policy that attacks chickens and towers with equal probability. Then in this case H(T) = H(T|Z) = 1 and I(T;Z) = 0. Intuitively, this means half of the time agents are wasting timesteps attacking a target that they are unable to even damage! Qualitatively, we have seen that the latter case occurs with the Vanilla and (most seeds of) DIAYN* baselines: notice that their H(T|Z) is significantly higher than that of ADAP. For fun, we performed latent distribution optimization on generators trained using the Niche Specialization environment to fit the Poison Chickens environment.
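A small sketch of how these quantities can be estimated from rollouts: for each sampled latent, count which unit type the resulting agent attacks, then plug the empirical distributions into the definitions above. Only the identity I(T;Z) = H(T) − H(T|Z) and the worked examples come from the text; the counting-based estimator itself is an assumption.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def niche_mutual_information(attack_counts):
    """attack_counts: [n_latents, 2] counts of (chicken, tower) attacks per latent.
    Returns (H(T), H(T|Z), I(T;Z)) in bits, estimated empirically."""
    counts = np.asarray(attack_counts, dtype=float)
    p_t = counts.sum(axis=0) / counts.sum()                 # marginal niche distribution
    per_z = counts / counts.sum(axis=1, keepdims=True)      # p(T | z) for each latent
    weights = counts.sum(axis=1) / counts.sum()
    h_t = entropy(p_t)
    h_t_given_z = float(sum(w * entropy(p) for w, p in zip(weights, per_z)))
    return h_t, h_t_given_z, h_t - h_t_given_z

print(niche_mutual_information([[10, 0], [10, 0]]))  # all chicken-only attackers: (0, 0, 0)
print(niche_mutual_information([[5, 5], [5, 5]]))    # every z attacks both equally: (1, 1, 0)
print(niche_mutual_information([[10, 0], [0, 10]]))  # two specialized, diverse species: (1, 0, 1)
```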
One would expect algorithms with high I(T;Z) to fare well, since Algorithm 2 can find an optimized Z∗ such that pT(chicken | z ∼ Z∗) = 0. Sure enough, we see this result in Figure 7: ADAP (x) is most successful at consistently producing a generative model whose policies not only avoid chickens, but also successfully attack only towers.

8 Adaptation and Self-Play in a Zero-Sum Two-Player Environment
Environment This experiment uses Markov Soccer, introduced in [5]. Two agents, A and B, play on a gridworld and must ‘carry’ a ball into the opposing goal to score. Agents walk in cardinal directions or stand in place. Possession is randomly initialized, and switches if one agent bumps into the other. Actions of A and B occur on the same timestep, execution order is randomized, and each timestep ends in a draw with some ϵ probability. Markov Soccer is an interesting environment because the best policy for one agent depends on the policy of the other agent. As described in [5], there exists a worst-case-optimal probabilistic policy for Markov Soccer, which maximizes the minimum possible score against any adversary. This strategy tends to be conservative, preferring to act towards a draw where a different policy could have obtained a higher score. On the other hand, non-worst-case-optimal strategies may be less conservative and may achieve very high scores against some opponents, but very low scores against others. Analogous to real soccer, different players have varying abilities and play styles, and a given player p1 may be optimal against p2, but not against p3. If any single policy has its drawbacks, can we instead learn an entire space of diverse policies Π := {π1, π2, ...}, where for any opponent, we can select a policy πi ∈ Π that achieves the maximum score against that opponent? Ideally, this space includes the worst-case-optimal policy, as well as other more aggressive policies. Then, just as a coach might swap out a soccer player, we can mix and match our champion as suited.

Experiment Can we learn a population of individuals that is holistically strong against all types of opponents? We evaluate adaptability to various adversaries using two methods. First, we test the baselines and our method against a set of hand-coded soccer bots. These bots are designed to represent a wide gamut of strategies, some of which are more exploitable than others. Secondly, we evaluate each G by playing ADAP (x), ADAP (+), Vanilla (x), and Vanilla (+) in a round-robin tournament against each other. All scores are determined by wins minus losses over 1000 simulated games.

Against Hard-Coded Bots: Each bot always starts on the left side, and the learned policy starts on the right side (although the environment is coded such that observations are side-invariant). Bot types fall into three categories: offense (bots start with possession), defense (policy starts with possession), and mixed (random starting possession). See Table 4 for more details.

Round-Robin Against Each Other: We also pit each generative model in a round-robin tournament against the other models. The manner in which we do this is described in the supplement.

Training and Baselines We use self-play to train both ADAP and the baselines. We use the same Vanilla baseline as described in Section 5, and we omit the DIAYN* baseline for brevity. Note that at no point in the training process did any of our algorithms train against any bots, or against each other.
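A minimal sketch of this evaluation protocol (wins minus losses over repeated games, plus a per-opponent selection over latents); `play_game` and its return convention are assumptions rather than the actual evaluation code:

```python
import numpy as np

def score(z, opponent, play_game, n_games=1000):
    """Wins minus losses for the generator conditioned on latent z against one opponent.
    `play_game(z, opponent)` is assumed to return +1 for a win, -1 for a loss, 0 for a draw."""
    return sum(play_game(z, opponent) for _ in range(n_games))

def best_member_against(opponent, candidate_latents, play_game, n_games=100):
    """Pick the population member (latent) best suited to one specific adversary."""
    scores = [score(z, opponent, play_game, n_games) for z in candidate_latents]
    return candidate_latents[int(np.argmax(scores))]
```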
Results As in the Farmworld adaptability experiment, we see from Figure 8 that ADAP is able to learn a G during the training phase that emergently contains members that are successful against a variety of unexpected adversaries - including naive bots and other policies. Compared to Vanilla, the ADAP policy space generalizes better against all adversaries. Going back to the soccer team example, we were able to select individuals from the ADAP population that were well suited to specific strategies. For example, against the Oscillate 1 adversary, ADAP latent optimization found a member of the population that side-stepped the oscillating adversary simply by moving to the top row, and then down to the goal. Additionally, against the Straight adversary, successful ADAP individuals stole possession by deterministically standing in front of the opponent to block, and then moving around and into the goal. On the other hand, in both of these situations Vanilla could not find individuals that exploited the naive deterministic nature of their opponents. Using ADAP did not just allow us to optimize against naive opponents. ADAP learned the best G in the round-robin tournament, and was the only method that was able to consistently beat our rule-based bot. It is possible that by using ADAP during the self-play training, individuals encountered a wide variety of strategies that improved overall performance.

9 Limitations
Bad Apples When using ADAP, not every member of the policy space is going to be an optimal policy. In fact, some generated policies might be bad apples: policies that were incentivized by the diversity regularizer to take actions that were not rewarding. Naturally, some individuals might be better or worse than others. These individuals can be removed by optimizing the latent distribution. However, the bad apples may come with a plus side. Even though they do not perform well in the current environment, they might happen to perform well in a future ablated environment!

Continuous-Action Space Environments The results presented so far focus entirely on environments with discrete categorical action spaces, in which we have observed that our diversity regularizer in Equation 1 empirically performs well. However, not all environments in RL use discrete action spaces - continuous action spaces are widely used in RL control tasks. While we believe that our regularizer can work in these environments, we have not rigorously tested it in them.

10 Conclusion
We have presented a framework to learn a generative model of policies. Rather than learning just one policy, we aim to find as many high-performing and individually distinct policies as possible, all compressed within the parameters of our generator. Learning a space of policies pays off in an open-ended environment such as Farmworld, in which there may be more than one path to success. We show in Section 6 that we can adapt to ablations by quickly choosing ‘species’ from our learned policy space that are successful in the new environment. We also learn a policy space in a competitive, two-player, zero-sum game in Section 8. Here, no single deterministic policy is optimal against all adversaries. Instead, we show how to train a family of policies that can be naturally adaptable to a wide array of both challenging and naive adversaries. Overall, we hope to show how it can be beneficial in RL to optimize not just for reward, but also for diversity of behavior.
As environments continue to increase in complexity and open-endedness – filled with branching paths to success – it makes sense to learn not just one, but many, solutions.

11 Acknowledgements
This research was supported in part by IBM through the MIT-IBM Watson AI Lab.
1. What is the focus of the paper, and what are the proposed method's strengths and weaknesses?
2. How does the reviewer assess the paper's presentation of results and empirical considerations?
3. What additional experiments or comparisons would help clarify the utility of the method in practice?
4. How does the reviewer interpret the results, specifically regarding the diversity of learned latent policies?
5. What are some suggestions for improving the paper, such as better understanding the impact of modulating the ADAP diversity bonus and latent encoding dimensionality?
Summary Of The Paper Review
Summary Of The Paper
This paper presents ADAP, a method for learning a generative model of diverse policies by conditioning a standard PPO policy on an additional latent z input, such that the KL divergence between the z-conditioned policies for pairs of different z's is maximized. The experiments show that ADAP is able to learn a diverse set of policies.

Review
ADAP is a simple method that seems to produce a diverse set of policies. The experiments are conducted on two interesting environments that are not commonly studied in deep RL. While the method itself seems quite interesting and the experimental results are promising, I believe the paper could be strengthened considerably by improving the presentation of results, as well as filling in a few key missing empirical considerations that would clarify the utility of the method in practice:

In the niche specialization experiment, it seems that ADAP leads to higher mean agent specialization, but there is no indication of whether the specializations themselves are more or less diverse than those produced by DIAYN. Some quantitative measure of this would add confidence to the idea that ADAP is effectively optimizing for diversity.

It would be informative to report how many episodes are required for adapting the pretrained policy by searching over the latent vectors z. A useful comparison here would then be to give Vanilla PPO the same budget of environment interactions for fine-tuning its policy to the new ablated environment. Similarly for adapting to Markov Soccer opponents. Further, since searching over latents can be seen as a form of weight optimization, another fair comparison would be to train vanilla PPO with a dummy latent z concatenated to its input, then perform the same search procedure in Algorithm 2 to adapt the vanilla PPO policy via search over its latent input vector.

It would raise confidence in the results to report results for additional seeds (5-10 seeds).

It is hard to parse the relative diversity of the learned latent policies between DIAYN and ADAP from the current results. Some way of globally summarizing the differences in diversity among the agent populations trained via ADAP and DIAYN would be informative. For example, perhaps training agents in a 2D goal-reaching grid world environment, and showing the average state occupancy of rollouts sampled from each latent population, after various time steps.

It would be valuable to better understand how modulating the weight of the ADAP diversity bonus and the dimensionality of the latent encoding z impacts the transferability of the optimal latent policies to transfer environments.

Additional comments:
It is unclear to me what the dotted lines in Figure 3 represent. Are these standard deviations around the mean?
Was Algorithm 2 also applied to finding the optimal skills from DIAYN for adapting to Farmworld variants in the experiment baselines?
A highly related work should be included in related works: Lupu et al., 2021, http://proceedings.mlr.press/v139/lupu21a.html.

Post-rebuttal update
Based on the authors' clarifications and reporting of results for additional experiment seeds, I have decided to upgrade my score to a 6.
NIPS
1. What is the focus and contribution of the paper on generative models for policies in reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its integration of quality diversity goals?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor comments or suggestions for improvement in the review?
Summary Of The Paper Review
Summary Of The Paper
The authors proposed a generative model of policies, which maps a low-dimensional latent space to an agent policy space to learn a space of diverse and high-reward policies in any given environment (without requiring the use of separate policy parameters). The proposed method is able to adapt to changes in the environment solely by selecting policies in latent space. The experiments evaluated the proposed generative model's capabilities in a variety of environments, including an open-ended grid-world and a two-player soccer environment. The strengths of this paper are as follows: 1) The proposed method integrates the goals of quality diversity into deep RL by simulating an entire population of agents via a generative model of policies. 2) The authors evaluated this method using three different experiments and showed that this method was able to learn a more multi-modal and effective policy space than any of the other baselines.

Review
However, there were some unclear points. First, although the authors describe the novelty of this work in the Related Work section in a fragmented way, there was little description of the technical novelty. In my understanding, it seems to be the diversity regularization and the multiplicative model, but for those, there was no description of the related work in Section 2 (since the ideas are simple, it would be necessary to cite other work). In addition, although the experimental results were better, the theoretical contribution of this work should be clarified. Lastly, as described in the abstract, time and memory efficiency were not theoretically or numerically analyzed.

Minor comments:
Eq. (1): S and s were not defined. If s is a state, actions may also not be defined.
111 "requiring discriminability via state or trajectory impose diversity constraints along a and specific task-dependent axis." Does the author have theoretical grounds for that? Or just understanding from the experimental results?
172 no parenthesis
176 I did not understand why the authors augment the intrinsic reward with extrinsic environmental reward.
NIPS
Title Adaptable Agent Populations via a Generative Model of Policies

Abstract In the natural world, life has found innumerable ways to survive and often thrive. Between and even within species, each individual is in some manner unique, and this diversity lends adaptability and robustness to life. In this work, we aim to learn a space of diverse and high-reward policies in a given environment. To this end, we introduce a generative model of policies for reinforcement learning, which maps a low-dimensional latent space to an agent policy space. Our method enables learning an entire population of agent policies, without requiring the use of separate policy parameters. Just as real world populations can adapt and evolve via natural selection, our method is able to adapt to changes in our environment solely by selecting for policies in latent space. We test our generative model's capabilities in a variety of environments, including an open-ended grid-world and a two-player soccer environment. Code, visualizations, and additional experiments can be found at https://kennyderek.github.io/adap/.

1 Introduction
Quick thought experiment: imagine our world was such that all people acted, thought, and looked exactly the same in every situation. Would we ever have found the influential dissenters that sparked scientific, political, and cultural revolutions? In reinforcement learning (RL), it is common to learn a single policy that fits an environment. However, it is often desirable to instead find an entire array of high performing policies. To this end, we propose learning a generative model of policies. At a high level, we aim to show that purposefully learning a diverse policy space for a given environment can be competitive with learning a single policy, while better encompassing a range of skillful behaviors that are adaptable and robust to changes in the task and environment. We name our method of learning a space of adaptable agent policies: ADAP. Why should we bother with finding more than one policy per environment? We propose two primary reasons. First, RL environments are continually approaching greater levels of open-endedness and complexity. For a given environment, there might be an entire manifold of valid and near-equally high performing strategies. By finding points across this manifold, we avoid ‘having all eggs in one basket,’ granting robustness and adaptability to environmental changes. In the event of a change, we are able to adapt our generated population to select individuals that can still survive given the ablation, much like natural selection drives evolution in the real world. Secondly, using a generative model of policies as a population of agents makes intuitive sense in multi-agent environments, in which different agents should have the capacity to act like they are unique individuals. However, it is common in many multi-agent reinforcement learning settings to deploy the same policy across all agents, such that they are essentially distributed clones. Doing so may reduce the multi-modality of the agent population, resulting in a single ‘average’ agent. Previous work has touched on ideas akin to a generative model of policies. In hierarchical RL, the high-level policy controller can be considered a generator of sub-policies that are ‘options’ [1, 2, 3]. But these methods are designed to find decomposable skills that aid in the construction of just one downstream controller policy.
A core idea of our work is that of quality diversity [4], which aims to optimize a population of agents along the axes of both reward and diversity. Traditional methods often use evolutionary search over a discrete-sized population of separate agents, each with their own policy parameters. This consumes more time and training resources, and limits the number of potential behaviors. Our work integrates the goals of quality diversity into time- and memory-efficient deep RL by simulating an entire population of agents via a generative model of policies, with diversity bounded only by the capacity of the generator. The rest of the paper is organized as follows. First we introduce our generative model of policies and the diversity objective that guides its learning. Next, we explore the potentials of learning a population of agents by ablating environments and then searching for suitable policies, directly in latent space. We primarily study two environments: Markov Soccer [5] and Farmworld. Farmworld is a new environment we have developed for testing diversity in a multi-agent, open-ended gridworld. At the website linked in the abstract, one can find qualitative results of experiments presented in this paper, as well as additional results on toy environments of CartPole [6] and a standard multi-goal environment.

2 Method
Let Z be a sample space of n-dimensional vectors, and Z be a random variable defined uniformly over Z. Then, we learn a mapping G : (ϕ, Z) → Π from generator weights ϕ and latent distribution Z to a space of policies Π. The generator Gϕ itself is not a policy. It must be conditioned on a draw z ∼ Z in order to define a learned set of behaviors. In this sense, z is a stochastic parameter of Gϕ, and is sampled once at the beginning of each agent episode. In our experiments, Z is the sample space of all three-dimensional vectors with magnitude one (i.e. the surface of the unit sphere). Practically, we use the low dimension of three so that we can perform a key subject of this paper: rapid optimization, or adaptation, of G by changing Z rather than ϕ (fine-tuning ϕ would be more typical in the literature). We require magnitude one so that there is at least one non-zero element for any z ∼ Z, which we found important for providing signal and stability in the training of G. It is possible that with higher dimensions, this stipulation could be relaxed.

Diversity Regularization In order to learn a diverse space of unique policies, we introduce a diversity regularization objective. Since policies define a space of actions taken over different states, we propose that in order for two policies to be distinct, they must have different action distributions given the same state. To this end, we define the objective $\mathcal{L}_{div}$ in Equation (1):

$\mathcal{L}_{div}(\phi) = \mathbb{E}_{s \in S}\left[\, \mathbb{E}_{z_i, z_j \sim Z} \exp\left(-D_{KL}\left(\pi_{\phi, z_i; b}(s) \,\|\, \pi_{\phi, z_j; b}(s)\right)\right) \right] \quad (1)$

in which $D_{KL}$ is the KL-divergence between the two policy action distributions $\pi_{\phi, z_i}$ and $\pi_{\phi, z_j}$, and b is a smoothing constant over the action distributions.

Optimization of G In our experiments, we optimize the diversity objective in an online fashion using gradient descent, in conjunction with a PPO [7] clipped-surrogate objective and an entropy regularization objective. Our full optimization problem is

$\max_{\phi}\; \mathcal{L}_{PPO}(\phi) - \alpha \mathcal{L}_{div}(\phi)$

where $\mathcal{L}_{PPO}$ is Equation 9 in [7] and α is a coefficient that scales the diversity regularization objective. See Algorithm 1 in the supplement for additional details.
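To make Equation (1) concrete, here is a minimal PyTorch-style sketch of the diversity regularizer for a categorical policy, combined with the overall objective. The latent sampling, the way the smoothing constant b enters (treated here as a temperature on the logits), and all function names are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def sample_latent(dim: int = 3) -> torch.Tensor:
    # z is drawn uniformly from the unit sphere (normalize a Gaussian draw)
    v = torch.randn(dim)
    return v / v.norm()

def diversity_loss(policy, states, z_i, z_j, b: float = 1.0) -> torch.Tensor:
    """L_div from Eq. (1): expected exp(-KL) between action distributions of the
    same states under two different latent draws. `policy(states, z)` is assumed
    to return action logits of shape [batch, n_actions]."""
    log_p_i = F.log_softmax(policy(states, z_i) / b, dim=-1)
    log_p_j = F.log_softmax(policy(states, z_j) / b, dim=-1)
    kl = (log_p_i.exp() * (log_p_i - log_p_j)).sum(dim=-1)  # KL(pi_{z_i} || pi_{z_j}) per state
    return torch.exp(-kl).mean()

# Overall objective (maximized): L_PPO(phi) - alpha * diversity_loss(...).
# Pushing exp(-KL) down pushes the KL between differently-conditioned policies up,
# i.e. policies conditioned on different z are encouraged to act differently on the same states.
```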
Adaptation via Optimization in the Latent Space of G By learning an entire space of policies Π, we are able to search our policy space for the highest performing policy, whether dealing with the training environment or an ablated future environment. In contrast to searching over policy parameters through transfer learning or fine-tuning, we are able to quickly search over the low-dimensional latent space (dimensionality 3 in our experiments). In fact, we can quickly adapt back and forth to various situations: the search procedure often takes less than 30 seconds, or 100 episode rollouts, to find any high quality solutions that exist. Over the course of a small number of generations, we evaluate randomly sampled latents, and keep higher performing ones with greater probability. In the event that episodes have a high degree of variablility per run – such as in the Markov Soccer environment – it may be necessary to run several episodes per latent vector and average the returns. Details can be found in Algorithm 2 of the supplement. Model Architecture Similarly to prior work [3], we have found that richer integrations between the latent vector and the observation can yield a more multi-modal policy space. To induce this richer integration, we introduce a multiplicative model denoted "(x)" for latent integration, and compare the results to a baseline of concatenating "(+)" the latent sample to the observation. We describe this architecture in the supplement. 3 Related Work Quality Diversity The evolutionary computing community has developed various quality diversity (QD) algorithms that aim to find a balance of novel and high-performing individuals within a population. Some methods can even be considered policy generators: NEAT and HyperNEAT [8, 9] use an indirect encoding to construct a network architecture. To encourage diversity, these methods use an idea known as fitness sharing: if genotypes are too similar, then they will split reward. While NEAT and HyperNEAT encourage diversity of parameters, other methods encourage diversity of behavior. Novelty Search (NS) [10] learns individuals that have high novelty along some user defined behavioral distance metric. For example, in a maze navigation task, the behavioral characteristic could be the final resting location of the individual, and agents are selected based on how far away they end up from an archive of past individuals. Unfortunately, as shown in [11], the choice of this characteristic can critical, and domain dependent. Additionally, NS focuses mainly on finding novel solutions, and ignores fitness, or reward. NS with Local Competition [12] and MapElites [13] aim to solve this problem by selecting for individuals with high fitness, but only against individuals in the same phenotypic or genotypic region, respectively. There are several prior and concurrent works that aim to connect ideas of quality diversity with deep reinforcement learning. Like quality diversity algorithms, these methods optimize a fixedsize population or archive of policies to be distinct from each other. [14, 15] aim to find a set of policies that yield diverse trajectories. [15] in particular focuses on the application to multi-agent environments and zero-shot coordination. [16] uses a KL-divergence over policies; but a policy’s diversity is optimized over previous SGD updates of itself, thus limiting the potential multi-modality of solutions. 
[17] optimizes for diversity of the total population via maximizing the determinant of a population distance matrix, but works best only with small populations of size three or five. [18] uses a method reminiscent of DIAYN, but introduces ideas to balance quality with diversity. It is especially similar to ADAP in optimizing the latent space to achieve robustness, but only searches over a fixed-size set of latent vectors and focuses on single-agent environments. Other methods have explored indirectly influencing diversity via differing training hyperparameters as in Population-Based Training [19], or using reward randomization as in [20]. Importantly, both classical QD algorithms [10, 12, 13] and most deep RL methods [14, 15, 16, 17, 19, 20] use sets of distinct agent parameters to learn a diverse population. ADAP makes the connection that we can encode unique policies into a latent space (an idea that also appears in a few recent works [2, 3, 21, 18]), and frames learning a diverse population as a generative modelling problem. Additionally, in distinction from classical QD methods that use a non-differential genetic algorithm or evolutionary search for optimization, ADAP is able to directly optimize for diversity and policy credit assignment via gradient descent. Option Discovery for Hierarchical RL The option framework introduced by [1] could be thought of as learning a generator of skills, which are temporal abstractions over actions that can be used by a downstream, higher-level controller. Recent works like DIAYN [2] and others [3, 21] in option discovery learn a fixed set of diverse skills that are discriminable by observed state or trajectory: such as learning to move left, or move right. These skills are generally not meant to be the final agent policy, DIAYN even learns skills without any extrinsic environmental reward. However, these methods are most similar to ADAP in terms of mapping a latent sample to final agent policies. Goal-Conditioned Reinforcement Learning Yet another way to induce diverse policy behaviors is through using goal-conditioned policies [22, 23, 24] that use a family of task-defined value or Q functions or expert trajectories [25] to incentivize diversity. These methods require structure in how to define diversity, such as defining a value function family over states [24]. Multi-Agent Roles Recent works generate specialized agent policies in a multi-agent setting, building on QMIX [26]. ROMA [27] learns agent roles that are not static through agent trajectories, require optimizing several additional objectives, and are learned jointly with other roles via a joint action-value function. Similarly, MAVEN [28] optimizes the mutual information between joint agent actions and a latent variable. While a single latent sample in ADAP encodes a single agent ‘species‘, a latent sample in these works encode how a group of agents should behave together: thus we cannot employ adaptation based on individual selection. 4 Introduction to Farmworld We test our learning G in a new open-ended grid-world environment called Farmworld, that supports multi-agent interaction and partially observable observations. The idea behind Farmworld is simple: agents move about the map to gather food from various resources, such as chickens and towers that spawn in random locations. In out experiments, agents only optimize their own reward: a single agent gets exactly 0.1 reward for each timestep it is alive. Thus, lifetime is directly proportional to reward. 
Agents can live longer by attacking other agents, chickens, and towers: for example, a chicken might take two timesteps of sword hits to yield five timesteps' worth of health. To avoid cannibalism in our experiments, we set agents to gain zero health from other agents. Of course, these numbers are configurable to achieve different environment dynamics. Furthermore, Farmworld is a partially-observable environment: agents see only what is in a certain tile radius from their location. In our experiments, the observation is a vector representation of the units and tiles. Additional details of the Farmworld are provided in the supplement. 5 Baselines We compare the ADAP algorithm to two algorithmic baselines. For each of the baselines, as well as ADAP, we experiment with both concatenation (+) and multiplicative model (x) types, and use consistent observation spaces, action spaces, and latent distributions - so the only difference is the diversity algorithm itself. The first baseline is Vanilla PPO, which we call the "Vanilla" baseline. The only difference between Vanilla and ADAP is that the former does not use the diversity regularization loss in Equation 1. Vanilla policies still receive samples from latent distribution Z - there is simply no objective term that enforces diverse policy actions conditional on these samples. Our second baseline was adapted from DIAYN. DIAYN is formulated as an unsupervised skill generator, rather than a policy generator. However, we believe that it remains one of the technically closest works, and with slight modifications, we attempt to make a comparison between DIAYN and ADAP. First, we highlight some differences between the methods. ADAP uses a KL-divergence based diversity term rather than learning a skill discriminator network. This enables ADAP's policy diversity to be optimized directly through gradient descent with respect to parameters ϕ, rather than be optimized through RL as with the skill diversity of DIAYN. Additionally, the ADAP latent distribution is defined over a continuous sample space, in contrast to the categorical sample space of DIAYN. We tried the standard DIAYN algorithm with categorical sample spaces and unsupervised skill discovery; however, this performed poorly on all of our Farmworld and Markov Soccer experiments. Thus, to place the algorithms on more equal footing, we modify DIAYN to: 1) add extrinsic environmental reward to DIAYN training (this is briefly mentioned in the DIAYN paper itself); 2) use a continuous sample space; and 3) train a skill regressor that minimizes predicted latent error, instead of a skill discriminator that outputs latent class probabilities. We describe the new skill regressor in the supplement. We call this method DIAYN*. Training and Hyperparameters We train each method for the same number of timesteps (30 million), and generally keep hyperparameters constant across methods. These are described in the supplement. Adaptation Comparisons When we apply Algorithm 2 to ADAP, we apply the same algorithm to each of the baselines. We can do this because ADAP and baselines all share the same input latent distribution Z - the only difference is how well they encode a diverse policy space within Z. 6 Adaptation to Environmental Ablations via Optimizing Z In nature, differences between species and even within species lend robustness to life as a whole. It becomes less likely that any single perturbation in the environment will break the overall system. 
In the same manner, differences between policies can lend robustness to the policy space as a whole. Experiment We aim to test how having a diverse policy space allows us to search in latent space for policies that better fit unexpected environmental ablations. Doing so would demonstrate the robustness of a population of policies, and simultaneously provide information about different types of diversity that are learned by G. To this end, we train G on a normal Farmworld environment as shown in Section 4. We then ablate the environment, changing features such as map size and layout, location of food sources, and even re-spawn times and food yield. Lastly, we deploy G into the ablated environment and, without changing the parameters ϕ, we optimize the latent distribution for policies that are successful in the new environment, using the search Algorithm 2. Ablations and descriptions are available in Table 1. Results Much to our surprise, in each experiment trial, learning G using ADAP created a policy space Π containing ‘species’ that could thrive in nearly every environmental ablation (see Figure 3). The important thing to note is that the development of these species was emergent from the training environment – a product of optimizing G for both policy diversity and reward maximization. How is it possible that ADAP produced a policy space capable of adapting to nearly every ablation? The training environment was relatively abundant with resources scattered about a large map. Thus, there were many degrees-of-freedom in the rules of survival, and by optimizing for diversity, we found a policy space that filled these degrees-of-freedom while still yielding high reward. While these ablations reflect some of the possible axes of diversity, there are certainly more. For example, an agent’s direction of ‘preference’ does not have to be the bottom-right, as in the Far Corner ablation. Indeed, as a sanity check, we tested placing food locations in various other spots on an enlarged map, and found that for every cardinal location, there was a species of agent in G that could exploit that new food location. What came as a surprise was that agents also used their health indicator to diversify: since agents diversify conditional on state, species developed in which agents would prefer to go upwards when their health is high, but downwards when their health is low. This particular agent species was the one that managed to thrive in the Wall Barrier ablation. Similarly, in the Patience ablation, ADAP learned a certain species of agent that waited until its health was low before farming a tower. The Poison Chickens ablation was the one hold-out in which latent optimization on ADAP could not find a profoundly successful species. It is possible that the trade-off between diversity and potential reward in the training environment was too large to learn a policy that ignored half of its potential food sources. We come back to this ablation in the next experiment. Finally, we should note that ADAP beat the Vanilla baseline in all ablations aside from Speed. We hypothesize that this ablation is the most in-distribution with respect to the training environment. Since the Vanilla baseline optimized solely for expected reward, it makes no diversity tradeoffs and performs well in in-distribution environments. As visible from the plots, DIAYN* also did not learn to speciate in a manner that was successful on the majority of ablations. 
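The adaptation results above come entirely from searching the latent space with the generator weights ϕ frozen. Below is a rough sketch of such a selection loop in the spirit of the search Algorithm 2 referenced above; the evaluate_latent helper, the population size, and the deterministic truncation selection are illustrative simplifications of the probabilistic selection described in the supplement.

```python
import numpy as np

def sample_unit_latents(n, dim=3, rng=None):
    """Sample latent vectors uniformly from the unit sphere (the latent space used by ADAP)."""
    rng = rng or np.random.default_rng()
    z = rng.normal(size=(n, dim))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def latent_search(evaluate_latent, generations=10, pop_size=32, keep_frac=0.25,
                  episodes_per_latent=3, rng=None):
    """Search Z for latents whose induced policies do well in a (possibly ablated) environment.

    evaluate_latent: callable z -> episode return obtained by rolling out G conditioned on z
    """
    rng = rng or np.random.default_rng()
    population = sample_unit_latents(pop_size, rng=rng)
    best_z, best_return = None, -np.inf
    for _ in range(generations):
        # Average a few rollouts per latent to reduce per-episode variance.
        returns = np.array([
            np.mean([evaluate_latent(z) for _ in range(episodes_per_latent)])
            for z in population
        ])
        order = np.argsort(returns)[::-1]
        if returns[order[0]] > best_return:
            best_return, best_z = returns[order[0]], population[order[0]]
        # Keep the top fraction of latents and refill with fresh random samples.
        elite = population[order[:max(1, int(keep_frac * pop_size))]]
        fresh = sample_unit_latents(pop_size - len(elite), rng=rng)
        population = np.concatenate([elite, fresh], axis=0)
    return best_z, best_return
```

Because the search only touches the three-dimensional latent and never the generator weights, it can be re-run cheaply for each new ablation.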
7 Measurement of Agent Individuality and Diversity in a Population A good generative model of policies should be able to represent a multi-modal space of behaviors. That is: different agent policies should be able to act with individuality. Our generative model uses a shared parameter set across all agents, and naively using shared parameters could result in learning just one ‘average’ agent – which is precisely what we wish to avoid. Niche Specialization Experiment To test the abilities of our policy generator, we set up the Farmworld environment with a hidden rule specific to this experiment: when an agent spawns, it is able to harness resources from either towers or chickens. However, once it gets health from one unit type, it becomes ‘locked-into’ that unit type, and cannot gain health from the other unit type. Information about an agent’s ‘locked-into’ state is not provided as part of the agent observation, and since agents have no memory, they would have to look to their latent z to determine their niche. Since there are equal numbers of chickens and towers on our map, a reasonable generative model algorithm should be able to map half the latent space to each of these two specializations, or niches. Results To see how well the entire latent space maps to a niche, we report rewards and other metrics in Table 3 without running latent space optimization on ADAP or baselines. In summary, ADAP consistently learned a more multi-modal policy space than any of the other baselines. Our results also indicate that using a multiplicative model can yield a higher degree of policy space multi-modality, and therefore greater success in this environment. We can see in Table 3 that ADAP (x) is able to attain the highest average agent lifetime. This, however, is not necessarily the most interesting point. ADAP learns a policy generator with the highest mutual information I(T;Z) between an agent "niche" T and the latent distribution Z. Intuitively, this means that ADAP was able to learn a population of agents that were composed of two clear species – on one hand: agents that focus on chickens, and on the other: agents that focus on towers. Formally, let T be a discrete random variable where p_T(t) is the probability that an agent attacks target t, for t ∈ {chicken, tower}. Then I(T;Z) is high when individual agents are specialized in a niche, and we see diverse niches across our population. This is because I(T;Z) = H(T) − H(T|Z) and is maximized by both increasing H(T) and decreasing H(T|Z). H(T) measures the diversity of niches across all agents in the population, and H(T|Z) measures how rigidly an agent falls into a single niche (lower means more specialized). As an example, suppose agents were highly specialized but not diverse, e.g., all agents were chicken-only attackers. Then H(T) = H(T|Z) = I(T;Z) = 0. On the other hand, suppose that all z ∼ Z yield an agent policy that attacks chickens and towers with equal probability. Then in this case H(T) = H(T|Z) = 1 and I(T;Z) = 0. Intuitively, this means half of the time agents are wasting timesteps attacking a target that they are unable to even damage! Qualitatively, we have seen that the latter case occurs with the Vanilla and (most seeds of) DIAYN* baselines: notice that their H(T|Z) is significantly higher than that of ADAP. For fun, we performed latent distribution optimization on generators trained using the Niche Specialization environment to fit the Poison Chickens environment. 
One would expect algorithms with high I(T;Z) to fare well, since Algorithm 2 can find an optimized Z∗ such that p_T(chicken | z ∼ Z∗) = 0. Sure enough, we see this result in Figure 7: ADAP (x) is most successful at consistently producing a generative model whose policies not only avoid chickens, but also successfully attack only towers. 8 Adaptation and Self-Play in a Zero-Sum Two-Player Environment Environment This experiment uses Markov Soccer, introduced in [5]. Two agents, A and B, play on a gridworld and must ‘carry’ a ball into the opposing goal to score. Agents walk in cardinal directions or stand in place. Possession is randomly initialized, and switches if one agent bumps into the other. Actions of A and B occur on the same timestep, execution order is randomized, and at each timestep the game ends in a draw with some probability ϵ. Markov Soccer is an interesting environment, because the best policy for one agent depends on the policy of the other agent. As described in [5], there exists a worst-case-optimal probabilistic policy for Markov Soccer, which maximizes the minimum possible score against any adversary. This strategy tends to be conservative, preferring to act towards a draw where a different policy could have obtained a higher score. On the other hand, non-worst-case-optimal strategies may be less conservative and may achieve very high scores against some opponents, but very low scores against others. Analogous to real soccer, different players have varying abilities and play styles, and a given player p1 may be optimal against p2, but not against p3. If any single policy has its drawbacks, can we instead learn an entire space of diverse policies Π := {π1, π2, . . .}, where for any opponent, we can select a policy πi ∈ Π that achieves the maximum score against that opponent? Ideally, this space includes the worst-case-optimal policy, as well as other more aggressive policies. Then, just as a coach might swap out a soccer player, we can mix and match our champion as suited. Experiment Can we learn a population of individuals that is holistically strong against all types of opponents? We evaluate adaptability to various adversaries using two methods. First, we test baselines and our method against a set of hand-coded soccer bots. These bots are designed to represent a wide gamut of strategies, some of which are more exploitable than others. Secondly, we evaluate each G by playing ADAP (x), ADAP (+), Vanilla (x), and Vanilla (+) in a round-robin tournament against each other. All scores are determined by wins minus losses over 1000 simulated games. Against Hard-Coded Bots: Each bot always starts on the left side, and the learned policy starts on the right side (although the environment is coded such that observations are side-invariant). Bot types fall into three categories: offense (bots start with possession), defense (policy starts with possession), and mixed (random starting possession). See Table 4 for more details. Round-Robin Against Each Other: We also pit each generative model in a round-robin tournament against the other models. The manner in which we do this is described in the supplement. Training and Baselines We use self-play to train both ADAP and baselines. We use the same Vanilla baseline as described in Section 5, and we omit the DIAYN* baseline for brevity. Note that at no point in the training process did any of our algorithms train against any bots, or against each other. 
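For concreteness, the head-to-head scores reported below (wins minus losses over many simulated games) and the per-opponent selection of a population member can be sketched as follows; play_game and the generator's policy_for interface are hypothetical placeholders rather than the actual tournament code.

```python
def head_to_head_score(policy_a, policy_b, play_game, n_games=1000):
    """Wins minus losses for policy_a against policy_b.

    play_game: callable (policy_a, policy_b) -> +1 if A wins, -1 if B wins, 0 for a draw
    """
    return sum(play_game(policy_a, policy_b) for _ in range(n_games))

def best_member_against(generator, opponent, play_game, candidate_latents, n_eval_games=100):
    """Pick the member of the generated population best suited to a given opponent."""
    scored = []
    for z in candidate_latents:
        policy = generator.policy_for(z)  # condition G on latent z (assumed interface)
        scored.append((head_to_head_score(policy, opponent, play_game, n_eval_games), z))
    best_score, best_z = max(scored, key=lambda item: item[0])
    return generator.policy_for(best_z), best_score
```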
Results As in the Farmworld adaptability experiment, we see from Figure 8 that ADAP is able to learn a G during the train phase that emergently contains members that are successful against a variety of unexpected adversaries - including naive bots and other policies. Compared to Vanilla, the ADAP policy space generalizes better against all adversaries. Going back to the soccer team example, we were able to select individuals from the ADAP population that were well suited for a specific strategies. For example, against the Oscillate 1 adversary, ADAP latent optimization found a member of the population that side-stepped the oscillating adversary simply by moving to the top row, and then down to the goal. Additionally, against the Straight adversary, successful ADAP individuals stole possession by deterministically standing in-front of the opponent to block, and then moving around and into the goal. On the other hand in both of these situations, Vanilla could not find individuals that exploited the naive deterministic nature of their opponents. Using ADAP did not just allow us to optimize against naive opponents. ADAP learned the best G in the round-robin tournament, and was the only method that was able to consistently beat our rule-based bot. It is possible that by using ADAP during the self-play training, individuals encountered a wide variety of strategies that bettered overall performance. 9 Limitations Bad Apples When using ADAP, not every member of the policy space is going to be an optimal policy. In fact, some generated policies might be bad apples: policies that were incentivized by the diversity regularizer to take actions that were not rewarding. Naturally, some individuals might be better or worse than others. These individuals can be removed by optimizing the latent distribution. However, the bad apples may come with a plus side. Even though they do not perform well in the current environment, they might happen to perform well in a future ablated environment! Continuous-Action Space Environments The results presented so far focus entirely on environments with discrete categorical action spaces, in which we have observed that our diversity regularizer in Equation 1 empirically performs well. However, not all environments in RL use discrete action spaces - continuous action spaces are widely used in RL control tasks. While we believe that our regularizer can work in these environments, we have not rigorously tested in these environments. 10 Conclusion We have presented a framework to learn a generative model of policies. Rather than learning just one policy, we aim to find as many high-performing and individually distinct policies as possible, all compressed within the parameters of our generator. Learning a space of policies pays off in an open-ended environment such as Farmworld, in which there may be more than one path to success. We show in Section 6 that we can adapt to ablations by quickly choosing ‘species’ from our learned policy space that are successful in the new environment. We also learn a policy space in a competitive, two-player, zero-sum game in Section 8. Here, no single deterministic policy is optimal against all adversaries. Instead, we show how to train a family of policies that can be naturally adaptable to a wide array of both challenging and naive adversaries. Overall, we hope to show how it can be beneficial in RL to optimize not just for reward, but also for diversity of behavior. 
As environments continue to increase in complexity and open-endedness – filled with branching paths to success – it makes sense to learn not just one, but many, solutions. 11 Acknowledgements This research was supported in part by IBM through the MIT-IBM Watson AI Lab.
1. What is the focus of the paper regarding training generative models for diverse agent policies? 2. What are the strengths and weaknesses of the proposed approach compared to prior works like DIAYN? 3. How does the reviewer assess the clarity and quality of the writing in the paper? 4. What are some minor questions or typos pointed out by the reviewer?
Summary Of The Paper Review
Summary Of The Paper This paper proposes to train a generative model for entire populations of maximally diverse agents, from which one specific individual policy can quickly be selected at deployment time through a fast search process. Policies are represented as networks augmented with a low-dimensional latent variable z, randomly sampled at agent initialization. Thus each trained network is actually a generative model, from which an infinity of policies can be generated by sampling over z. Crucially, the training procedure encourages the fixed weights of the network to not only obtain good performance for any given random z, but also to produce maximally different policies for any two different z. Thus the method is a quality diversity (QD) method (though the authors seem ambiguous on the subject). This method is compared with a previously published method (DIAYN), and with itself but without the diversity objective, and is found superior to both on various gridworld tasks, both in terms of expected performance and diversity of generated agents. Review I find the method interesting and somewhat novel (see below). Building an infinite, continuous space of diverse but efficient agents, through which one can quickly search (and thus adapt) at deployment time, as opposed to the discrete populations typically maintained in QD algorithms, seems like a significant advantage. One possible concern is insufficient relation with previous work. Bluntly: Am I correct that this method is basically DIAYN with continuous 3D z and a different, simpler diversity objective? If not, why? Note that this would not be a deal-breaker, since DIAYN is already used as a baseline and found less efficient for the tasks selected here. But somehow it's not mentioned in the "related work" section! It seems important to explain how this method relates to DIAYN and how it differs from it (e.g. in objectives, motivation, etc.), in addition to the results. The paper is confusingly written and some passages were unclear: Most importantly, in p. 5, the description of baselines (especially lines 171-173) is extremely obscure and it took me a while to understand (?) what was meant. I suggest replacing the start of l. 173 with the following: "the Vanilla baseline also includes a latent z; the only difference between 'Vanilla PPO' and ADAP is that the former does not use the diversity regularization (Eq. 2)". If this is incorrect, an equivalent explanation of what exactly Vanilla PPO means should be provided. Also: "DIAYN* uses a continuous distribution" - over what ?? (I suppose it should be over "skills", which are very similar to the latent z). The farmworld environment should be described a little bit more in the main text, e.g. what's the difference between towers and chickens, what does "attacking" another agent mean, etc. Just pointing to the supplements is not enough. In p.8, "G1 vs G1" ? What is pi_G1,z1 ? Also, this evaluation method seems obviously asymmetric: G1 is forced to produce a generalist, while G2 can choose an agent that specifically exploits G1, presumably lending an advantage to G2. How is this asymmetry handled? Whenever the z is optimized, it should be mentioned at least briefly how, in the main text. Minor points: In "related work" sections: The proposed method is very much a quality diversity method, by definition.Probability distributions over actions are a behavioral characterization, and KL divergence between them is a novelty metric. 
It may be different from existing QD methods, but it's still one. p.2, line 60: does (1) mean Equation 1? It should be spelt out. Typos: p. 4, l. 163: missing "to". p. 5, l. 172: missing ")". p. 6, l. 227: what's "the 2"?
NIPS
Title STNDT: Modeling Neural Population Activity with Spatiotemporal Transformers Abstract Modeling neural population dynamics underlying noisy single-trial spiking activities is essential for relating neural observation and behavior. A recent non-recurrent method Neural Data Transformers (NDT) has shown great success in capturing neural dynamics with low inference latency without an explicit dynamical model. However, NDT focuses on modeling the temporal evolution of the population activity while neglecting the rich covariation between individual neurons. In this paper we introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons in the population across time and space to uncover their underlying firing rates. In addition, we propose a contrastive learning loss that works in accordance with mask modeling objective to further improve the predictive performance. We show that our model achieves state-of-the-art performance on ensemble level in estimating neural activities across four neural datasets, demonstrating its capability to capture autonomous and non-autonomous dynamics spanning different cortical regions while being completely agnostic to the specific behaviors at hand. Furthermore, STNDT spatial attention mechanism reveals consistently important subsets of neurons that play a vital role in driving the response of the entire population, providing interpretability and key insights into how the population of neurons performs computation.1 1 Introduction One of the most prominent questions in systems neuroscience is how neurons perform computations that give rise to behaviors. Recent evidence suggests that computation in the brain could be governed at the population level [1, 2]. Population of neurons are proposed to obey an internal dynamical rule that drives their activities over time [3, 4]. Inferring these dynamics on a single trial basis is crucial for understanding the relationship between neural population responses and behavior, subsequently enabling the development of robust decoding schemes with wide applicability in brain-computer interfaces (BCI) [5–7]. However, modeling population dynamics on single trials is challenging due to the stochasticity of individual neurons making their spiking activity vary from trial to trial even when they are subject to identical stimuli or recorded under repeated behavior conditions. A direct approach to reduce the trial-to-trial variability of neural responses could be to average responses over repeated trials of the same behavior [8, 9], to convolve the neural response with a Gaussian kernel [10], or in general, to define a variety of neural activity measures [11]. However, more success was found in approaches that explicitly model neural responses as a dynamical system, including methods treating the population dynamics as being linear [12, 13], switched linear [14], non-linear [15, 16], or reduced projected nonlinear models [11]. Recent approaches leveraging 1Code is available at https://github.com/shlizee/STNDT 36th Conference on Neural Information Processing Systems (NeurIPS 2022). recurrent neural networks (RNN) have shown promising progress in modeling distinct components of a dynamical system - neural latent states, initial conditions and external inputs - on a momentto-moment basis [15, 17, 18]. 
These sequential methods rely on continuous processing of neural inputs at successive timesteps, causing latency that hampers applicability in real-time decoding of neural signals. Consequently to RNN-based approaches, Neural Data Transformer (NDT) [16] was proposed as a non-recurrent approach to improve inference speed by leveraging the transformers architecture which learns and predicts momentary inputs in parallel [19]. While successful, NDT has only focused on modeling the relationship of neural population activity between timesteps while ignoring the rich covariation among individual neurons. Neurons in a population have been shown to have heterogeneous tuning profiles where each neuron has a different level of preference to a particular muscle movement direction [20, 21]. Neuron pairs also exhibit certain degree of correlation in terms of trial-to-trial variability (noise correlation) that affects the ability to decode the behaviors they represent [2, 22]. These spatial correlations characterize the amount of information that can be encoded in the neural population [22], necessitating the need to model the neural population activity across both time and space dimensions. In this work, we propose to incorporate the information distributed along the spatial dimension to improve the learning of neural population dynamics, and introduce SpatioTemporal Neural Data Transformer, an architecture based on Neural Data Transformer which explicitly learns both the spatial covariation between individual neurons and the temporal progression of the entire neural population. We summarize our main contributions as follows: • We introduce STNDT which allows the transformer to learn both the spatial coordination between neurons and the temporal progression of the population activity by letting neurons attend to each other while also attending over temporal instances. • We propose a contrastive training scheme, complementary to the mask modeling objective, to ensure the robustness of model prediction against induced noise augmentations. • We validate our model’s performance on four neural datasets in the publicly available Neural Latents Benchmark suite [23] and show that ensemble variants of our model outperforms other state-of-the-art methods, demonstrating its capability to model autonomous and non-autonomous neural dynamics in various brain regions while being agnostic to external behavior task structures. • We show that the spatial attention, a feature unique to STNDT, identifies consistently important subsets of neurons that play an essential role in driving the response of the entire population. This exclusive attribute of STNDT provides interpretability and key insights into how the neural population distributes the computation workload among the neurons. 2 Related Work Modeling spatial covariation in neural population: Neurons act as an orchestrated system which collectively encodes behaviors in a distributed and redundant manner. Many previous works have studied and incorporated neural variability across neurons to closely match firing statistics observed in multi-channel neural recordings [24–30]. [25] simulated population responses within a Dichotomized Gaussian framework and solved for signal and noise correlations numerically. [26, 27] developed Generative Adversarial Networks that were able to capture pairwise correlations among the neurons and generate realistic firing patterns. 
[28–30] modeled the population responses as being generated from a latent variable with learnable covariance matrix reflecting covariability among the neurons. While these methods resemble our work in the overarching motivation of capturing interactions among neurons, they rely on the knowledge of the respective stimuli/conditions that the trials belong to when modeling the interaction. On the other hand, STNDT is trained in an unsupervised manner and learns the rich covariation among neurons encompassing all recorded behaviors without access to any external observation apart from the population spiking activity. In addition, while the goal of aforementioned methods is to generate realistic firing activities associated with induced stimuli, oftentimes with some assumptions regarding their statistics (e.g. noise correlation is shared across time bins and trials), STNDT aims to uncover the denoised firing patterns behind the noisy single-trial spiking activity and does not depend on any prior assumptions regarding their firing statistics. Transformers for modeling spatiotemporal data: Transformers were initially developed to model the relationship between words in a sentence, which can be thought of as a temporal progression of a sequence of tokens. Recent works have leveraged the self-attention mechanism in transformers to model spatiotemporal data types where there exist an additional interacting dimensions possessing distinct dynamics, such as trajectories of traffic agents [31–33], dynamic scene graph of video [34], or 3D human motion [35]. However, in these works the spatial interaction at each timestep and the temporal dynamics for each entity are captured independently, treating the other dimension as the batch dimension at each attention block. In contrast, STNDT interleaves spatial and temporal attention in a unified framework, using spatial attention to re-weight temporal features and enabling direct study of each individual neuron’s role in driving the population dynamics. Interpretability of self-attention mechanism: Several approaches have been proposed to probe the inner workings of black-box deep learning models [36–38]. Unlike our work, these approaches attempted to attribute importance of visual inputs to the model prediction in a supervised setting and did not take into account interaction between input features. For attention-based models, the weights of attention matrix have been used as a tool to provide certain level of interpretability [39–42]. The interpretability is built upon the fact that attention weights signify how much influence other inputs have on a particular input in deciding its final outcome in a self-supervision manner. This influence might align with some human interpretable meaning, such as linguistic patterns [43]. In our work, we further leverage attention weights to gain insights into the interaction of neurons from multi-channel neural recordings. 3 Methods Problem formulation: Single-trial spiking activity of a neural population can be represented as a spatiotemporal matrix X ∈ NT×N , where each column Xi ∈ NT is the time series of one neuron, T is the number of time bins for each trial, and N is the number of neurons in the population. Each element Xtn in the matrix is the number of action potentials (spikes) that neuron n fires within the time bin t. Spike counts are assumed to be samples of an inhomogeneous Poisson process P (λ(t, n)) where λ(t, n) is the underlying true firing rate of neuron n at time t. 
The matrix Y ∈ R^{T×N} containing λ(t, n) fully represents the dynamics of the neural population and explains the observable spiking data of the respective trial. We propose to learn the mapping ϕ(X; W) : X → Y by the Spatiotemporal Transformer with the set of weights W. Spatiotemporal Neural Data Transformer: At the core of the transformer architecture is the multihead attention mechanism, where feature vectors learn to calibrate the influence of other feature vectors in their transformation. Spike trains are embedded into feature matrices X̃ with added sinusoidal positional encoding to preserve order information, as initially proposed in [19]. We employed separate embeddings to encode positions in each temporal and spatial dimension individually, resulting in two distinct feature embeddings X̃_T = Emb(X) + P_T and X̃_S = Emb(X⊤) + P_S. A set of three matrices W_T^Q, W_T^K, W_T^V ∈ R^{N×N} are learned to transform the T N-dimensional embedding vectors X̃_T = {x̃_1, x̃_2, ..., x̃_T} to queries Q_T = X̃_T W_T^Q, keys K_T = X̃_T W_T^K and values V_T = X̃_T W_T^V, upon which the latent variable Z_T is computed as: Z_T = Attention(Q_T, K_T, V_T) = F( softmax( Q_T K_T^⊤ / √N ) V_T )   (1) The product Q_T K_T^⊤ represents the attention each x̃_i pays to all other x̃_j and determines how much influence their values v_j have on its latent output z_i. F is the sequence of concatenating multiple heads and feeding through a feedforward network with ReLU activation [19]. We used 2 heads for all reported models. Implementations of transformers in popular applications such as the natural language processing literature consider each feature vector x_i as an N-dimensional token in a sequence, equivalent to a word in a sentence. Elements in the N-dimensional vector therefore serve as a convenient numerical representation and do not have inherent relationships among them. The attention mechanism thus only models the relationship between tokens in a sequence. In our application, each feature vector x_i is a collection of firing activities of N physical neurons among which there exists an interrelation, as the neuronal population acts as a coordinated structure with complex interdependencies rather than standalone individuals. We therefore propose to model both the temporal relationship - the evolution of neural activities - and the spatial relationship - covariability of neurons - by learning two separate multihead attention blocks (Figure 1). The temporal latent state Z_T is computed with the temporal attention block as in Equation 1. In parallel, the spatial attention block operates on the spatial embedding X̃_S and learns an attention weight matrix signifying the relationship between neurons: A_S = softmax( Q_S K_S^⊤ / √T )   (2) where Q_S = X̃_S W_S^Q and K_S = X̃_S W_S^K. This A_S matrix is then multiplied with the transpose of the temporal latent state Z_T to incorporate the influence of spatial attention on the final spatiotemporal latent state Z_ST: Z_ST = F( A_S Z_T^⊤ )   (3) For stable training, as in [19] we used layer normalization before X̃_T, X̃_S, A_S Z_T^⊤ and the feedforward layers. Residual connections are also employed around the temporal attention, the feedforward layers and A_S Z_T^⊤. Mask modeling and contrastive losses: Similar to [16], we train the spatiotemporal transformer in an unsupervised way with BERT’s mask modeling objective [44]. 
During training, a random subset of spike bins along both spatial and temporal axes of the input X are masked (zeroed out or altered), and the transformer is asked to reconstruct the log firing rate at the masked bins such that the Poisson negative log likelihood is minimized: L_mask = Σ_{i=1}^{N} Σ_{j=1}^{T} [ exp(z̃_{ij}) − x̃_{ij} z̃_{ij} ]   (4) where z̃_{ij} and x̃_{ij} are the log output firing rate and the input spike count of neuron i at timestep j, and the sum runs over masked locations ij. Neural dynamics have been shown to be embedded in a low-dimensional space, i.e. model predictions should be fairly consistent when a smaller subset of neurons is used compared to when the entire population is taken into account. Furthermore, in stereotyped behaviors often found in neuroscience experiments, trials with the same condition should yield similar output firing rate profiles. Therefore, to enhance the robustness of model predictions to neural firing variability, we further constrain model firing rate outputs with a contrastive loss, such that different augmentations of the same trial input remain close to each other and stay distant from other trial inputs. We adopt the NT-XEnt contrastive loss introduced in [45]: L_contrastive = Σ_{ij} l_{ij} = Σ_{ij} −log[ exp(sim(z_i, z_j)/τ) / Σ_{k=1}^{2N} 1_{[k≠i]} exp(sim(z_i, z_k)/τ) ]   (5) where sim(u, v) = u⊤v/(∥u∥∥v∥) is the cosine similarity between two predictions u and v on two different augmentations of input x, and τ is the temperature parameter. Transformations such as dropping out neurons and jittering samples in time have been used to create different views of neural data [46]. In our work, we define the augmentation transformation as random dropout and alteration of spike counts at random elements in the original input matrix X, similar to how masking is done, i.e. zero out or change spike counts to random integers at random neurons and timesteps. See the Appendix for details on the probabilities used to create these augmentations. Bayesian hyperparameter tuning: We follow [47] in using Bayesian optimization for hyperparameter tuning. We observe that the primary metric, co-smoothing bits/spike (co-bps), is not well correlated with the mask loss (see Figure 1 in the Appendix), while co-bps, vel R2, psth R2 and fp-bps are more pairwise correlated. Therefore, we run Bayesian optimization to optimize co-bps for M models, then select the best N models as ranked by validation co-bps, and ensemble them by taking the mean of the predicted rates of these N models. 
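To illustrate how the spatial and temporal attention of Equations (1)-(3) fit together, here is a simplified single-head PyTorch-style sketch that computes the temporal latent Z_T, the neuron-to-neuron attention A_S, and the combined spatiotemporal latent. Layer normalization, residual connections, the multi-head/feedforward operator F, masking, and the read-out to Poisson rates are omitted; these simplifications are assumptions for exposition, not the exact STNDT architecture.

```python
import math
import torch
import torch.nn as nn

class SpatioTemporalAttentionSketch(nn.Module):
    """Single-head sketch of the STNDT attention block (Eqs. 1-3, simplified)."""

    def __init__(self, n_neurons, n_timesteps):
        super().__init__()
        # Temporal attention: T tokens of dimension N.
        self.w_qt = nn.Linear(n_neurons, n_neurons, bias=False)
        self.w_kt = nn.Linear(n_neurons, n_neurons, bias=False)
        self.w_vt = nn.Linear(n_neurons, n_neurons, bias=False)
        # Spatial attention: N tokens of dimension T.
        self.w_qs = nn.Linear(n_timesteps, n_timesteps, bias=False)
        self.w_ks = nn.Linear(n_timesteps, n_timesteps, bias=False)

    def forward(self, x_temporal, x_spatial):
        # x_temporal: [B, T, N] embedded spikes with temporal positional encoding
        # x_spatial:  [B, N, T] embedded (transposed) spikes with spatial positional encoding
        q_t, k_t, v_t = self.w_qt(x_temporal), self.w_kt(x_temporal), self.w_vt(x_temporal)
        n = q_t.shape[-1]
        z_t = torch.softmax(q_t @ k_t.transpose(-2, -1) / math.sqrt(n), dim=-1) @ v_t  # Eq. (1) without F
        q_s, k_s = self.w_qs(x_spatial), self.w_ks(x_spatial)
        t = q_s.shape[-1]
        a_s = torch.softmax(q_s @ k_s.transpose(-2, -1) / math.sqrt(t), dim=-1)  # Eq. (2), shape [B, N, N]
        z_st = a_s @ z_t.transpose(-2, -1)  # Eq. (3) without F, shape [B, N, T]
        return z_st, a_s
```

The masked Poisson negative log likelihood of Equation (4) and the NT-XEnt term of Equation (5) would then be applied to the firing rates decoded from z_st; a_s is the matrix examined later when identifying heavily attended neurons.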
See [23] for further details of the evaluation strategy and how the metrics are calculated. • Co-smoothing (co-bps): the primary metric, measuring the ability of the model to predict activity of held-out neurons it has not seen during training. Co-bps is tied to the goodness of the mask loss evaluated for held-out neurons. • Behavior decoding (vel R2 or tp-corr): measures how well the model's predicted firing rates can be used to decode behavior (the velocity of the primate's hand in the MC_Maze and Area2_Bump datasets, or the correlation between neural speed and the time between the Set cue and the Go response in the DMFC_RSG dataset). • Match to peri-stimulus time histogram (psth R2): indicates how well predicted firing rates match the peri-stimulus time histogram in repeated, stereotyped task structures. • Forward prediction (fp-bps): measures the model's ability to predict unseen future activity of the neural population. It is computed in a similar manner to co-bps but on the held-out time points of all neurons. Baselines: We compare STNDT against the following baselines, all of which have been evaluated using the same held-out test split. • Smoothing [23]: A simple method where a Gaussian kernel is convolved with held-in spikes to produce smoothed held-in firing rates. Then a Poisson Generalized Linear Model (Poisson GLM) is fitted from the held-in smoothed rates to held-out rates. • GPFA [10]: extracts population latent states as a smooth and low-dimensional evolution by combining smoothing and dimension reduction in a common probabilistic framework. • SLDS [14]: models neural dynamics as a switching linear dynamical system, which breaks down nonlinear data into sequences of simpler dynamical modes. • AutoLFADS [17]: models population activity as a non-linear dynamical system with bi-directional recurrent neural networks at the core and a scalable framework of hyperparameter tuning. • MINT [48]: an interpretable decoding algorithm that exploits the sparsity and stereotypy of neural activity to interpolate neural states using a library of canonical neural trajectories. • iLQR-VAE [49]: improves upon LFADS with the iterative linear quadratic regulator algorithm, an optimization-based recognition model that replaces the RNN as the inference network. • NDT [16]: leverages the transformer architecture with some adaptation to neural data to model the temporal progression of neural activity across time. AESMTE1 is the best single model and AESMTE3 is the best ensemble of multiple models found as a result of Bayesian hyperparameter tuning [47]. 4.1 Spatiotemporal transformer achieves state-of-the-art performance in modeling autonomous dynamics We first tested STNDT on recordings of dorsal premotor (PMd) and motor cortex (M1) of a monkey performing a delayed reaching task (MC_Maze dataset) to evaluate the ability of STNDT to uncover single-trial population dynamics in a highly structured behavior. The dataset has been studied extensively in previous work [15–17]. It consists of 2869 trials of a monkey performing a center-out reaching task in a maze with obstructing barriers, comprising 108 different conditions for straight and curved reaching trajectories. The monkey is trained to hold the cursor at the center while the target is presented and only move the cursor to reach the target after a ‘Go’ cue. The neural dynamics during the preparation and execution periods are well modeled as an autonomous dynamical system [15]. 
We observed that by explicitly modeling spatial interaction, STNDT outperformed other state-of-the-art methods and improved NDT’s ability to model autonomous single-trial dynamics as measured by the negative log likelihood of unobserved neural activity. The single STNDT model improved the Poisson log likelihood of both held-out neurons (co-bps) and held-out timesteps (fp-bps). The performance is further increased by aggregating multiple STNDT models, as shown in Table 1 and Figure 2A. Since MC_Maze features repeated trials, the predictions of any latent variable model should uncover stereotypical patterns of neuronal responses for trials belonging to the same condition. Therefore, we computed the PSTH, which is the average of the neural population response across trials of the same condition, and measured the R2 match of model predictions to this PSTH. We observed that with the help of spatial modeling and the contrastive loss, STNDT boosts NDT’s ability to recover this stereotyped firing pattern (Table 1). We show in Figure 2C several responses of example neurons. STNDT’s firing rate predictions for trials under the same condition exhibit a consistent, stable PSTH, as desired. These predicted rates also decode behaviors accurately when mapped to hand velocity via a linear regression model (Table 1, Figure 2B). 4.2 Spatiotemporal transformer improves inference of non-autonomous neural dynamics underlying naturalistic behaviors There is much interest in systems neuroscience in studying neural dynamics in unconstrained, naturalistic behaviors, as it is crucial for developing ubiquitous BCI decoders. We evaluated STNDT’s applicability to this setting via recordings in primary motor cortex during a self-paced reaching task (MC_RTT dataset) [23, 50]. Unlike in the MC_Maze dataset, the monkey in this task continuously acquires targets which appear randomly in an 8x8 grid without preparatory periods, resulting in a wide variety of hand trajectories and trial lengths. We observe that STNDT achieves SOTA performance on the primary metric co-bps and performs on par with NDT on the remaining metrics, while maintaining a more robust performance against random initializations of model weights (Table 1 and Appendix). 4.3 Spatiotemporal transformer better captures input-driven dynamics underlying sensory processes We next tested STNDT in a setting where unexpected input perturbations affect the neural dynamics in somatosensory cortex, to probe whether STNDT can leverage spatial interaction to improve modeling of non-autonomous dynamics in this brain region. The Area2_Bump dataset consists of recordings from Area 2, which was shown in previous works to be driven by mechanical perturbation to the arm and to contain information about whole-arm kinematics [23, 51]. The task comprises active and passive trials with a center hold period at the start. During active trials, the monkey performs a classic center-out reaching task. In passive trials, a force is applied on the monkey’s hand in a random direction via a manipulandum, after which the monkey has to return to the center target and proceed with the task as in active trials. Despite the relatively small scale of the dataset, STNDT brings about further improvements to NDT performance in terms of co-bps and psth-R2, on both single and ensemble levels. 
4.4 Spatiotemporal transformer enhances prediction of neural population activity during a cognitive task Dorsomedial frontal cortex (DMFC) is believed to serve as an intermediate layer between low-level sensory and motor areas, and to possess a distinct confluence of internal dynamics and inputs [52, 53]. We are therefore interested to see whether characterizing the spatial relationship alongside the temporal relationship and incorporating the contrastive loss could help STNDT better model the dynamics in this brain region. We tested STNDT on the DMFC_RSG dataset [23, 53], consisting of recordings from a rhesus macaque performing a time-interval reproduction task. The monkey is presented with two stimuli, ‘Ready’ and ‘Set’, separated by a specific time interval ts, while fixating its eyes and holding the joystick at the center position. It then has to execute a ‘Go’ response by either an eye saccade or joystick movement such that the time interval tp between its response and the ‘Set’ cue is sufficiently close to ts. STNDT successfully captures the dynamics in this cognitive task, outperforming NDT by a large margin across co-bps, psth-R2 and fp-bps on both the single and ensemble levels (Table 2). 4.5 Spatial attention mechanism identifies important subsets of neurons driving the population dynamics In Figure 3, we visualize spatial attention weights obtained from STNDT on the MC_Maze dataset in the first and last attention layers. Attention maps for the remaining datasets are provided in the Appendix. Interestingly, spatial attention shows that in early layers, only a small subset of neurons in the population is consistently attended to by all neurons. The spatial attention tends to disperse as the model goes to deeper layers. Strikingly, the subset of heavily-attended neurons stays relatively identical across different trials, hinting that these neurons might play a crucial role in driving the population response to the behavior task. We further tested this hypothesis by incrementally dropping the neurons heavily attended to (i.e. zeroing out their spiking activity input to the model) in descending order of their attention weights identified in the first layer. We observed that dropping these important neurons identified by STNDT caused a significant decline in the model performance (Figure 4). The performance decline was significantly larger than when the same number of random neurons were dropped. To rule out the possibility that dropping neurons only has an adverse effect on the spatial attention module, with that effect propagating to the subsequent modules and indirectly impacting the performance of the overall STNDT pipeline, we repeated the experiment on the vanilla NDT model which, unlike STNDT, lacks a spatial attention structure. Interestingly, we observed the same performance deterioration when we dropped the spiking activity of STNDT-identified important neurons and asked a pretrained vanilla NDT to make inference on the resulting inputs. This finding suggests that the importance of the neurons that only STNDT can identify might generalize to other latent variable models: without input from these neurons, some latent variable models might not function optimally. We provide additional results from similar analyses on GPFA and Smoothing models in the Appendix. 
We further examine whether important neurons were selected by the spatial attention mechanism based on some criteria more sophisticated than simple firing statistics, as more active neurons tend to have higher signal-to-noise ratio and might encode more useful information with regard to behaviors. We find that the important neurons are not the ones with the highest spike counts or the least variability in spiking activity. In fact, attention weights of a neuron do not correlate or only correlate weakly to its firing activity statistics, as we show in Table 3 the Pearson’s correlation of a neuron’s attention weight with the mean and variance of its spiking activity. All correlation values have p-value < 1e-4. These results indicate that STNDT’s spatial attention has picked up on meaningful population features that are more significant than firing statistics of the neurons. 4.6 Ablation Study: Contrastive loss encourages consistency of model prediction and improves performance We conduct an ablation study to assess the effectiveness of contrastive loss on the overall performance of STNDT. Tables 4 and 5 report how the model scores on different metrics across all four datasets on the single and ensemble levels. In general, we observe that having contrastive loss further improves the performance of STNDT on predicting neural activity of heldout neurons (co-bps) and heldout timesteps (fp-bps). The contribution of contrastive loss is most eminent on MC_Maze dataset. 5 Discussion In this paper we presented STNDT, a novel architecture based upon NDT [16] that explicitly learns the covariation among individual neurons in the population alongside the momentary evolution of the population spiking activity in order to infer the underlying firing rates behind highly variable single-trial spike trains. By incorporating self-attention along both spatial and temporal dimensions as well as a contrastive loss, STNDT enhances NDT’s ability to model dynamics spanning a variety of tasks and brain regions, most notably by the accurate prediction of activity for unseen neurons (co-bps). Although STNDT does not consistently outperform NDT on other secondary metrics, we show in the Appendix that STNDT is more robust to random initializations and performs better than NDT on average across random seeds. Moreover, the improvement STNDT contributes on co-bps is the direct reflection of the spatial attention’s success. Since the spatial attention module aims to learn the relationship between all (observed and unobserved) neurons at training time, it will leverage this information to infer activities of unobserved neurons based on those of observed neurons at testing time, which is exactly what co-bps measures. Finally, the novel spatial attention mechanism unique to STNDT brings about valuable interpretability as it discovers influential subsets of neurons whose activities contain salient information about the response of the entire neural population without which some latent variable models might not function optimally. Acknowledgment: This work was supported in part by National Science Foundation grant OAC-2117997 and Washington Research Fund to ES. Authors also acknowledge the partial support by the Departments of Electrical Computer Engineering (TL and ES), Applied Mathematics (ES), the Center of Computational Neuroscience (ES), and the eScience Center (ES) at the University of Washington.
1. What are the strengths and weaknesses of the paper regarding the proposed approach's contribution to capturing correlations between neurons? 2. How does the reviewer assess the paper's discussion and comparisons with previous works that have addressed capturing noise correlations in neural population activity? 3. What are the concerns regarding the interpretability of attention weights, and how do they compare to attribution methods used in other deep neural network architectures? 4. What information is lacking regarding training costs, computational and sample efficiency of the method, and how does it impact the judgment of its benefits? 5. Minor comments include suggestions for improving figure labels and providing intuition behind chosen metrics.
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper introduces a spatio-temporal neural transformer: a transformer-based generative model of neural population activity that captures correlations between neurons, in addition to stimulus-driven and temporal correlations in the activity. Strengths And Weaknesses Overall the paper is well-written, and the methods and results are presented clearly. In particular, this seems to be an interesting use-case for transformers, and the ablation study with the contrastive loss is also interesting. However, there are some major concerns regarding the paper: There is a substantial body of work concerning capturing correlations between neurons (many of which also concurrently capture stimulus-driven / temporal variability in the population activity) -- it would be good to have a discussion of these papers, and also comparisons of STNDT against some of them. A non-exhaustive list includes: Schneidman et al (2006): https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC1785327&blobtype=pdf Macke et al. (2011): http://www.gatsby.ucl.ac.uk/~maneesh/papers/macke-etal-2011-nips-preprint.pdf Lyamzin et al. (2010): http://journal.frontiersin.org/article/10.3389/fncom.2010.00144/abstract Molano-Mazon et al. (2018): http://arxiv.org/abs/1803.00338 Ramesh et al (2019): https://openreview.net/forum?id=S1xxRoLKLH Bashiri et al. (2021): https://openreview.net/pdf?id=1yeYYtLqq7K While STNDT certainly achieves better performance compared to other methods, it is not clear what scientific insight can be gained from using a transformer. While sections 3.3 and 3.4 describe how attention weights can be used to find sub-networks of "important" neurons and model consistency, it is not clear whether these weights are any more interpretable than, say, regression weights, or those computed on CNNs with attribution methods. Indeed, attribution methods such as GradCAM (https://arxiv.org/abs/1610.02391) and axiomatic attribution (https://arxiv.org/abs/1703.01365) have been used in conjunction with other deep neural network architectures trained on neural population activity in precisely the same manner described here, with arguably similar success (e.g. Maheswaranathan et al. 2018: https://www.biorxiv.org/content/early/2018/06/08/340943). It would also be nice to have an estimate of the training costs, computational and sample efficiency of STNDT in comparison with other methods -- otherwise it is hard to truly judge the benefits of using this method over others for datasets beyond those described in the paper. Questions It would be good to have a discussion of methods that have been developed to capture noise correlations in neural population activity. It would also be good to have a comparison of STNDT to at least one of the methods as well: e.g. Lyamzin et al. (2010). It would be good to have a more thorough (or at least more measured) discussion of the interpretability of attention weights. Statements such as in line 222: "The interpretability...final outcome."; in line 241 "This finding suggests...not function optimally" and in line 262 "Finally, the novel...not function optimally" may not be warranted when similar analyses on other network architectures exist, with arguably similar results. It would also be good to have some further analysis on the subset of "important" neurons identified by the attention weights: for example, are these the neurons with the highest firing rates, or the least variability in firing rates? 
It would be good to have information about training costs, computational and sample efficiency of the method. Performance of the different methods is compared based on several metrics: it would be good to have some intuition on why these particular metrics were chosen, and also how they are computed. Minor comments: Figures 1-4 have grey lines around the border that don’t necessarily overlap with the division of subpanels. The colourbars in Figure 4 have no label, and the colours are hard to distinguish against the dark background in the plots. Limitations Overall, although the paper presents an interesting use case of transformers, the value they add over existing methods is not clear. In particular, it is not clear whether the transformers truly are beneficial without a comparison to other methods explicitly set up to capture noise correlations in neural population data. Furthermore, this paper argues for the benefits of STNDT based on the interpretability of the attention weights: however, the analysis does not convincingly show (a) whether the attention weights truly pick up on data features more significant than, say, the firing rates of the neurons, or (b) whether they are any more interpretable than weights obtained from linear regression or attribution methods. Finally, without information about the computational costs of this method, it is hard to judge its value, even if it outperforms other methods on the Neural Latents Benchmark tasks. Taken together, there needs to be more analysis to show whether STNDT can provide more scientific insight and add more value to generative modeling of neural population data over existing methods.
NIPS
Title STNDT: Modeling Neural Population Activity with Spatiotemporal Transformers Abstract Modeling neural population dynamics underlying noisy single-trial spiking activities is essential for relating neural observation and behavior. A recent non-recurrent method Neural Data Transformers (NDT) has shown great success in capturing neural dynamics with low inference latency without an explicit dynamical model. However, NDT focuses on modeling the temporal evolution of the population activity while neglecting the rich covariation between individual neurons. In this paper we introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons in the population across time and space to uncover their underlying firing rates. In addition, we propose a contrastive learning loss that works in accordance with mask modeling objective to further improve the predictive performance. We show that our model achieves state-of-the-art performance on ensemble level in estimating neural activities across four neural datasets, demonstrating its capability to capture autonomous and non-autonomous dynamics spanning different cortical regions while being completely agnostic to the specific behaviors at hand. Furthermore, STNDT spatial attention mechanism reveals consistently important subsets of neurons that play a vital role in driving the response of the entire population, providing interpretability and key insights into how the population of neurons performs computation.1 1 Introduction One of the most prominent questions in systems neuroscience is how neurons perform computations that give rise to behaviors. Recent evidence suggests that computation in the brain could be governed at the population level [1, 2]. Population of neurons are proposed to obey an internal dynamical rule that drives their activities over time [3, 4]. Inferring these dynamics on a single trial basis is crucial for understanding the relationship between neural population responses and behavior, subsequently enabling the development of robust decoding schemes with wide applicability in brain-computer interfaces (BCI) [5–7]. However, modeling population dynamics on single trials is challenging due to the stochasticity of individual neurons making their spiking activity vary from trial to trial even when they are subject to identical stimuli or recorded under repeated behavior conditions. A direct approach to reduce the trial-to-trial variability of neural responses could be to average responses over repeated trials of the same behavior [8, 9], to convolve the neural response with a Gaussian kernel [10], or in general, to define a variety of neural activity measures [11]. However, more success was found in approaches that explicitly model neural responses as a dynamical system, including methods treating the population dynamics as being linear [12, 13], switched linear [14], non-linear [15, 16], or reduced projected nonlinear models [11]. Recent approaches leveraging 1Code is available at https://github.com/shlizee/STNDT 36th Conference on Neural Information Processing Systems (NeurIPS 2022). recurrent neural networks (RNN) have shown promising progress in modeling distinct components of a dynamical system - neural latent states, initial conditions and external inputs - on a momentto-moment basis [15, 17, 18]. 
These sequential methods rely on continuous processing of neural inputs at successive timesteps, causing latency that hampers applicability in real-time decoding of neural signals. As an alternative to RNN-based approaches, the Neural Data Transformer (NDT) [16] was proposed as a non-recurrent method that improves inference speed by leveraging the transformer architecture, which learns and predicts momentary inputs in parallel [19]. While successful, NDT has only focused on modeling the relationship of neural population activity between timesteps while ignoring the rich covariation among individual neurons. Neurons in a population have been shown to have heterogeneous tuning profiles where each neuron has a different level of preference for a particular muscle movement direction [20, 21]. Neuron pairs also exhibit a certain degree of correlation in terms of trial-to-trial variability (noise correlation) that affects the ability to decode the behaviors they represent [2, 22]. These spatial correlations characterize the amount of information that can be encoded in the neural population [22], necessitating modeling of the neural population activity across both the time and space dimensions. In this work, we propose to incorporate the information distributed along the spatial dimension to improve the learning of neural population dynamics, and introduce the SpatioTemporal Neural Data Transformer, an architecture based on the Neural Data Transformer which explicitly learns both the spatial covariation between individual neurons and the temporal progression of the entire neural population. We summarize our main contributions as follows: • We introduce STNDT, which allows the transformer to learn both the spatial coordination between neurons and the temporal progression of the population activity by letting neurons attend to each other while also attending over temporal instances. • We propose a contrastive training scheme, complementary to the mask modeling objective, to ensure the robustness of model prediction against induced noise augmentations. • We validate our model’s performance on four neural datasets in the publicly available Neural Latents Benchmark suite [23] and show that ensemble variants of our model outperform other state-of-the-art methods, demonstrating its capability to model autonomous and non-autonomous neural dynamics in various brain regions while being agnostic to external behavior task structures. • We show that the spatial attention, a feature unique to STNDT, identifies consistently important subsets of neurons that play an essential role in driving the response of the entire population. This exclusive attribute of STNDT provides interpretability and key insights into how the neural population distributes the computation workload among the neurons. 2 Related Work Modeling spatial covariation in neural population: Neurons act as an orchestrated system which collectively encodes behaviors in a distributed and redundant manner. Many previous works have studied and incorporated neural variability across neurons to closely match firing statistics observed in multi-channel neural recordings [24–30]. [25] simulated population responses within a Dichotomized Gaussian framework and solved for signal and noise correlations numerically. [26, 27] developed Generative Adversarial Networks that were able to capture pairwise correlations among the neurons and generate realistic firing patterns. 
[28–30] modeled the population responses as being generated from a latent variable with learnable covariance matrix reflecting covariability among the neurons. While these methods resemble our work in the overarching motivation of capturing interactions among neurons, they rely on the knowledge of the respective stimuli/conditions that the trials belong to when modeling the interaction. On the other hand, STNDT is trained in an unsupervised manner and learns the rich covariation among neurons encompassing all recorded behaviors without access to any external observation apart from the population spiking activity. In addition, while the goal of aforementioned methods is to generate realistic firing activities associated with induced stimuli, oftentimes with some assumptions regarding their statistics (e.g. noise correlation is shared across time bins and trials), STNDT aims to uncover the denoised firing patterns behind the noisy single-trial spiking activity and does not depend on any prior assumptions regarding their firing statistics. Transformers for modeling spatiotemporal data: Transformers were initially developed to model the relationship between words in a sentence, which can be thought of as a temporal progression of a sequence of tokens. Recent works have leveraged the self-attention mechanism in transformers to model spatiotemporal data types where there exist an additional interacting dimensions possessing distinct dynamics, such as trajectories of traffic agents [31–33], dynamic scene graph of video [34], or 3D human motion [35]. However, in these works the spatial interaction at each timestep and the temporal dynamics for each entity are captured independently, treating the other dimension as the batch dimension at each attention block. In contrast, STNDT interleaves spatial and temporal attention in a unified framework, using spatial attention to re-weight temporal features and enabling direct study of each individual neuron’s role in driving the population dynamics. Interpretability of self-attention mechanism: Several approaches have been proposed to probe the inner workings of black-box deep learning models [36–38]. Unlike our work, these approaches attempted to attribute importance of visual inputs to the model prediction in a supervised setting and did not take into account interaction between input features. For attention-based models, the weights of attention matrix have been used as a tool to provide certain level of interpretability [39–42]. The interpretability is built upon the fact that attention weights signify how much influence other inputs have on a particular input in deciding its final outcome in a self-supervision manner. This influence might align with some human interpretable meaning, such as linguistic patterns [43]. In our work, we further leverage attention weights to gain insights into the interaction of neurons from multi-channel neural recordings. 3 Methods Problem formulation: Single-trial spiking activity of a neural population can be represented as a spatiotemporal matrix X ∈ NT×N , where each column Xi ∈ NT is the time series of one neuron, T is the number of time bins for each trial, and N is the number of neurons in the population. Each element Xtn in the matrix is the number of action potentials (spikes) that neuron n fires within the time bin t. Spike counts are assumed to be samples of an inhomogeneous Poisson process P (λ(t, n)) where λ(t, n) is the underlying true firing rate of neuron n at time t. 
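As a brief illustration of the observation model just described, the following sketch draws spike counts from an inhomogeneous Poisson process given a toy rate matrix. The latent trajectory and loading weights are invented purely for the example and are not part of the paper.

import numpy as np

# Toy illustration of the Poisson observation model: counts X ~ Poisson(Y), with Y the
# (T x N) matrix of per-bin firing rates that a model such as STNDT tries to recover.
rng = np.random.default_rng(0)
T, N = 100, 50
latent = np.sin(np.linspace(0, 2 * np.pi, T))[:, None]      # toy 1-D latent trajectory
loading = rng.uniform(0.5, 2.0, size=(1, N))                 # toy per-neuron coupling
Y = np.exp(latent * loading - 1.0)                           # positive rates via exponential link
X = rng.poisson(Y)                                           # observed single-trial spike counts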
The matrix $Y \in \mathbb{R}^{T \times N}$ containing $\lambda(t, n)$ fully represents the dynamics of the neural population and explains the observable spiking data of the respective trial. We propose to learn the mapping $\phi(X; W): X \rightarrow Y$ by the Spatiotemporal Transformer with the set of weights $W$. Spatiotemporal Neural Data Transformer: At the core of the transformer architecture is the multihead attention mechanism, where feature vectors learn to calibrate the influence of other feature vectors in their transformation. Spike trains are embedded into feature matrices $\tilde{X}$ with added sinusoidal positional encoding to preserve order information as initially proposed in [19]. We employed separate embeddings to encode positions in each temporal and spatial dimension individually, resulting in two distinct feature embeddings $\tilde{X}_T = \mathrm{Emb}(X) + P_T$ and $\tilde{X}_S = \mathrm{Emb}(X^\top) + P_S$. A set of three matrices $W^Q_T, W^K_T, W^V_T \in \mathbb{R}^{N \times N}$ are learned to transform the $T$ $N$-dimensional embeddings $\tilde{X}_T = \{\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_T\}$ to queries $Q_T = \tilde{X}_T W^Q_T$, keys $K_T = \tilde{X}_T W^K_T$ and values $V_T = \tilde{X}_T W^V_T$, upon which latent variable $Z_T$ is computed as: $Z_T = \mathrm{Attention}(Q_T, K_T, V_T) = \mathcal{F}\left(\mathrm{softmax}\left(\frac{Q_T K_T^\top}{\sqrt{N}}\right) V_T\right)$ (1) The outer product $Q_T K_T^\top$ represents the attention each $x_i$ pays to all other $x_j$ and determines how much influence their values $v_j$ have on its latent output $z_i$. $\mathcal{F}$ is the sequence of concatenating multiple heads and feeding through a feedforward network with ReLU activation [19]. We used 2 heads for all reported models. Implementations of transformers in popular applications such as in natural language processing literature consider each feature vector $x_i$ as an $N$-dimensional token in a sequence, equivalent to a word in a sentence. Elements in the $N$-dimensional vector therefore serve as a convenient numerical representation and do not have inherent relationships among them. The attention mechanism thus only models the relationship between tokens in a sequence. In our application, each feature vector $x_i$ is a collection of firing activities of $N$ physical neurons among which there exists an interrelation as the neuronal population acts as a coordinated structure with complex interdependencies rather than standalone individuals. We therefore propose to model both the temporal relationship - the evolution of neural activities - and the spatial relationship - covariability of neurons - by learning two separate multihead attention blocks (Figure 1). The temporal latent state $Z_T$ is computed with the temporal attention block as in Equation 1. In parallel, the spatial attention block operates on the spatial embedding $\tilde{X}_S$ and learns an attention weights matrix signifying the relationship between neurons: $A_S = \mathrm{softmax}\left(\frac{Q_S K_S^\top}{\sqrt{T}}\right)$ (2) where $Q_S = \tilde{X}_S W^Q_S$ and $K_S = \tilde{X}_S W^K_S$. This $A_S$ matrix is then multiplied with the transpose of the temporal latent state $Z_T$ to incorporate the influence of spatial attention on the final spatiotemporal latent state $Z_{ST}$: $Z_{ST} = \mathcal{F}(A_S Z_T^\top)$ (3) For stable training, as in [19] we used layer normalization before $\tilde{X}_T$, $\tilde{X}_S$, $A_S Z_T^\top$ and feedforward layers. Residual connections are also employed around temporal attention, feedforward layers and $A_S Z_T^\top$. Mask modeling and contrastive losses: Similar to [16], we train the spatiotemporal transformer in an unsupervised way with BERT’s mask modeling objective [44]. 
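Before turning to the training objectives, the interplay of Equations 1-3 can be sketched as follows. This is a single-head illustration that omits the embedding, layer normalization, residual connections and the feedforward map F, and it is not the authors' implementation; weight-matrix names are illustrative.

import torch

def spatiotemporal_attention(x, WqT, WkT, WvT, WqS, WkS):
    # x: (T, N) embedded single-trial activity; temporal weights are (N, N) and
    # spatial weights are (T, T), following Equations 1-3 (single head, no F).
    T, N = x.shape
    # Temporal attention over T tokens of dimension N (Equation 1).
    Qt, Kt, Vt = x @ WqT, x @ WkT, x @ WvT
    Zt = torch.softmax(Qt @ Kt.T / N ** 0.5, dim=-1) @ Vt     # (T, N) temporal latent state
    # Spatial attention over N tokens of dimension T (Equation 2).
    xs = x.T                                                   # (N, T) spatial view
    Qs, Ks = xs @ WqS, xs @ WkS
    As = torch.softmax(Qs @ Ks.T / T ** 0.5, dim=-1)           # (N, N) neuron-to-neuron attention
    # Spatial attention re-weights the temporal latent state (Equation 3).
    Zst = As @ Zt.T                                            # (N, T) spatiotemporal latent state
    return Zst, As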
During training, a random subset of spike bins along both the spatial and temporal axes of the input $X$ are masked (zeroed out or altered) and the transformer is asked to reconstruct the log firing rate at the masked bins such that the Poisson negative log likelihood is minimized: $\mathcal{L}_{\mathrm{mask}} = \sum_{i=1}^{N} \sum_{j=1}^{T} \exp(\tilde{z}_{ij}) - \tilde{x}_{ij}\,\tilde{z}_{ij}$ (4) where $\tilde{z}_{ij}$ and $\tilde{x}_{ij}$ are the log output firing rate and the input spike count of neuron $i$ at timestep $j$ if location $ij$ is masked. Neural dynamics are shown to be embedded in a low-dimensional space, i.e. model prediction should be fairly consistent when a smaller subset of neurons is used compared to when the entire population is taken into account. Furthermore, in stereotyped behaviors often found in neuroscience experiments, trials with the same condition should yield similar output firing rate profiles. Therefore, to enhance robustness of model prediction to neural firing variability we further constrain the model firing rate outputs by a contrastive loss, such that different augmentations of the same trial input remain close to each other and stay distant from other trial inputs. We adopt the NT-Xent contrastive loss introduced in [45]: $\mathcal{L}_{\mathrm{contrastive}} = \sum_{ij} l_{ij} = \sum_{ij} -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{k \neq i} \exp(\mathrm{sim}(z_i, z_k)/\tau)}$ (5) where $\mathrm{sim}(u, v) = u^\top v / (\|u\|\,\|v\|)$ is the cosine similarity between two predictions $u$ and $v$ on two different augmentations of input $x$ and $\tau$ is the temperature parameter. Transformations such as dropping out neurons and jittering samples in time have been used to create different views of neural data [46]. In our work, we define the augmentation transformation as random dropout and alteration of spike counts at random elements in the original input matrix $X$, similar to how masking is done, i.e. zero out or change spike counts to random integers at random neurons and timesteps. See the Appendix for details on the probabilities used to create these augmentations. Bayesian hyperparameter tuning: We follow [47] in using Bayesian optimization for hyperparameter tuning. We observe that the primary metric, co-smoothing bits/spike (co-bps), is not well correlated with the mask loss (see Figure 1 in the Appendix), while co-bps, vel R2, psth R2 and fp-bps are more pairwise correlated. Therefore, we run Bayesian optimization to optimize co-bps for M models, then select the best N models as ranked by validation co-bps, and ensemble them by taking the mean of the predicted rates of these N models. 4 Experiments and results Datasets and evaluation metrics: We evaluate our model performance on four neural datasets in the publicly available Neural Latents Benchmark [23]: MC_Maze, MC_RTT, Area2_Bump, and DMFC_RSG. The four datasets cover autonomous and non-autonomous neural population dynamics recorded from rhesus macaques in a variety of behavioral tasks (delayed reaching, self-paced reaching, reaching with perturbation, time interval reproduction) spanning multiple brain regions (primary motor cortex, dorsal premotor cortex, somatosensory cortex, dorso-medial frontal cortex). The diverse scenarios and systems offer a comprehensive evaluation of a latent variable model and serve as a standardized benchmark for comparison between different modeling approaches. We use different metrics to measure the performance of our model depending on the particular behavior task of each dataset, following the standard evaluation pipeline in [23]. We evaluate and report our model performance on the hidden test split held by NLB to have a fair comparison with other state-of-the-art (SOTA) methods. 
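For concreteness, a minimal sketch of the two training objectives in Equations 4 and 5 is given below. The batching, masking probabilities and augmentation pipeline are omitted, and the variable names are illustrative rather than taken from the released code.

import torch
import torch.nn.functional as F

def masked_poisson_nll(log_rates, spikes, mask):
    # Equation 4: Poisson negative log likelihood, evaluated only at masked bins
    # (the constant log-factorial term is dropped since it does not affect optimization).
    nll = torch.exp(log_rates) - spikes * log_rates
    return nll[mask].sum()

def nt_xent(z1, z2, tau=0.1):
    # Equation 5: NT-Xent loss; z1 and z2 are (B, D) predictions on two augmentations
    # of the same B trials, and matching rows form the positive pairs.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)
    sim = z @ z.T / tau
    sim.fill_diagonal_(float('-inf'))                          # exclude self-similarity
    B = z1.shape[0]
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)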
See [23] for further details of the evaluation strategy and how the metrics are calculated. • Co-smoothing (co-bps): the primary metric, measuring the ability of the model to predict activity of held-out neurons it has not seen during training. Co-bps is tied to the goodness of the mask loss evaluated for held-out neurons. • Behavior decoding (vel R2 or tp-corr): measures how well the model’s predicted firing rates can be used to decode behavior (the velocity of the primate’s hand in the case of the MC_Maze and Area2_Bump datasets, or the correlation between neural speed and the time between the Set cue and Go response in the DMFC_RSG dataset). • Match to peri-stimulus time histogram (psth R2): indicates how well predicted firing rates match the peri-stimulus time histogram in repeated, stereotyped task structures. • Forward prediction (fp-bps): measures the model’s ability to predict unseen future activity of the neural population. It is computed in a similar manner to co-bps but on the held-out time points of all neurons. Baselines: We compare STNDT against the following baselines, all of which have been evaluated using the same held-out test split. • Smoothing [23]: a simple method where a Gaussian kernel is convolved with held-in spikes to produce smoothed held-in firing rates. Then a Poisson Generalized Linear Model (Poisson GLM) is fitted from the held-in smoothed rates to held-out rates. • GPFA [10]: extracts population latent states as a smooth and low-dimensional evolution by combining smoothing and dimension reduction in a common probabilistic framework. • SLDS [14]: models neural dynamics as a switching linear dynamical system, which breaks down nonlinear data into sequences of simpler dynamical modes. • AutoLFADS [17]: models population activity as a non-linear dynamical system with bi-directional recurrent neural networks at the core and a scalable framework of hyperparameter tuning. • MINT [48]: an interpretable decoding algorithm that exploits the sparsity and stereotypy of neural activity to interpolate neural states using a library of canonical neural trajectories. • iLQR-VAE [49]: improves upon LFADS with the iterative linear quadratic regulator algorithm, an optimization-based recognition model to replace the RNN as the inference network. • NDT [16]: leverages the transformer architecture with some adaptation to neural data to model the temporal progression of neural activity across time. AESMTE1 is the best single model and AESMTE3 is the best ensemble of multiple models found as a result of Bayesian hyperparameter tuning [47]. 4.1 Spatiotemporal transformer achieves state-of-the-art performance in modeling autonomous dynamics We first tested STNDT on recordings of dorsal premotor (PMd) and motor cortex (M1) of a monkey performing a delayed reaching task (MC_Maze dataset) to evaluate the ability of STNDT to uncover single-trial population dynamics in a highly structured behavior. The dataset has been studied extensively in previous work [15–17]. It consists of 2869 trials of the monkey performing a center-out reaching task in a maze with obstructing barriers, comprising 108 different conditions for straight and curved reaching trajectories. The monkey is trained to hold the cursor at the center while the target is presented and to only move the cursor to reach the target after a ‘Go’ cue. The neural dynamics during the preparation and execution periods are well modeled as an autonomous dynamical system [15]. 
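The primary co-smoothing metric listed above can be sketched roughly as follows, assuming the standard bits-per-spike formulation in which the Poisson log likelihood of held-out neurons under the predicted rates is compared against a null model that predicts each neuron's mean rate; the exact evaluation code lives in the benchmark [23], and this is only an editorial approximation.

import numpy as np

def bits_per_spike(rates, spikes):
    # rates, spikes: (trials, T, N_heldout); rough sketch of co-bps, not the NLB code.
    eps = 1e-9
    ll_model = np.sum(spikes * np.log(rates + eps) - rates)
    null = np.broadcast_to(spikes.mean(axis=(0, 1)), rates.shape)   # mean-rate null model
    ll_null = np.sum(spikes * np.log(null + eps) - null)
    return (ll_model - ll_null) / (spikes.sum() * np.log(2))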
We observed that by explicitly modeling spatial interaction, STNDT outperformed other state-of-the-art methods and improved NDT’s ability to model autonomous single-trial dynamics as measured by the negative log likelihood of unobserved neural activity. The single STNDT model improved both the Poisson log likelihood of heldout neurons (co-bps) and heldout timesteps (fp-bps). The performance is further increased by aggregating multiple STNDT models, as shown in Table 1 and Figure 2A. Since MC_Maze features repeated trials, the prediction of any latent variable model should uncover stereotypical patterns of neuronal responses for trials belonging to the same condition. Therefore, we computed the PSTH, which is the average neural population response across trials of the same condition, and measured the R2 match of the model prediction to this PSTH. We observed that with the help of spatial modeling and the contrastive loss, STNDT boosts NDT’s ability to recover this stereotyped firing pattern (Table 1). We show in Figure 2C several responses of example neurons. STNDT’s firing rate predictions for trials under the same condition exhibit a consistent, stable PSTH, as desired. These predicted rates also decode behaviors accurately when mapped to hand velocity via a linear regression model (Table 1, Figure 2B). 4.2 Spatiotemporal transformer improves inference of non-autonomous neural dynamics underlying naturalistic behaviors There is much interest in systems neuroscience in studying neural dynamics in unconstrained, naturalistic behaviors, as it is crucial for developing ubiquitous BCI decoders. We evaluated STNDT’s applicability to this setting via recordings in primary motor cortex during a self-paced reaching task (MC_RTT dataset) [23, 50]. Unlike in the MC_Maze dataset, the monkey in this task continuously acquires targets that appear randomly in an 8x8 grid without preparatory periods, resulting in a wide variety of hand trajectories and trial lengths. We observe that STNDT achieves SOTA performance on the primary metric co-bps and performs on par with NDT on the remaining metrics, while maintaining a more robust performance against random initializations of model weights (Table 1 and Appendix). 4.3 Spatiotemporal transformer better captures input-driven dynamics underlying sensory processes We next tested STNDT in a setting where unexpected input perturbations affect the neural dynamics in somatosensory cortex to probe whether STNDT can leverage spatial interaction to improve modeling of non-autonomous dynamics in this brain region. The Area2_Bump dataset consists of recordings from Area 2, which was shown in previous works to be driven by mechanical perturbation to the arm and to contain information about whole-arm kinematics [23, 51]. The task comprises active and passive trials with a center hold period at the start. During active trials, the monkey performs a classic center-out reaching task. In passive trials, a force is applied to the monkey’s hand in a random direction via a manipulandum, after which the monkey has to return to the center target and proceed with the task as in active trials. Despite the relatively small scale of the dataset, STNDT brings about further improvements to NDT performance in terms of co-bps and psth-R2, on both single and ensemble levels. 
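A simplified sketch of the PSTH comparison used above is shown below. It averages predicted rates and observed spikes within each condition and reports a pooled R2, whereas the benchmark applies additional smoothing and per-neuron handling [23]; it is an illustration, not the evaluation code.

import numpy as np

def psth_r2(pred_rates, spikes, conditions):
    # pred_rates, spikes: (trials, T, N); conditions: (trials,) integer condition labels.
    pred_psth, true_psth = [], []
    for c in np.unique(conditions):
        idx = conditions == c
        pred_psth.append(pred_rates[idx].mean(axis=0))    # condition-averaged prediction
        true_psth.append(spikes[idx].mean(axis=0))        # empirical PSTH from spike counts
    pred_psth = np.concatenate(pred_psth, axis=0)
    true_psth = np.concatenate(true_psth, axis=0)
    ss_res = np.sum((true_psth - pred_psth) ** 2)
    ss_tot = np.sum((true_psth - true_psth.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot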
4.4 Spatiotemporal transformer enhances prediction of neural population activity during cognitive task Dorsomedial frontal cortex (DMFC) is believed to serve as an intermediate layer between low-level sensory and motor areas, and possess distinct confluence of internal dynamics and inputs [52, 53]. We are therefore interested to see if characterizing spatial relationship alongside temporal relationship and incorporating contrastive loss could help STNDT better model the dynamics in this brain region. We tested STNDT on the DMFC_RSG dataset [23, 53] consisting of recordings from a rhesus macaque performing a time-interval reproduction task. The monkey is presented two ‘Ready’ and ‘Set’ stimuli separated by a specific time interval ts while fixating eye and hold the joystick at the center position. It then has to execute a ‘Go’ response by either an eye saccade or joystick movement such that the time interval tp between its reponse and the ‘Set’ cue is sufficiently close to ts. STNDT successfully captures the dynamics in this cognitive task, outperforming NDT by a large margin across co-bps, psth-R2 and fp-bps on both single and ensemble level (Table 2). 4.5 Spatial attention mechanism identifies important subsets of neurons driving the population dynamics In Figure 3, we visualize spatial attention weights obtained from STNDT on the MC_Maze dataset in the first and last attention layers. Attention map for remaining datasets are provided in Appendix. Interestingly, spatial attention shows that in early layers, only a small subsets of neurons in the population are consistently attended to by all neurons. The spatial attention tends to disperse as the model goes to deeper layers. Strikingly, the subset of heavily-attended neurons stays relatively identical across different trials, hinting that these neurons might play a crucial role in driving the population response to the behavior task. We further tested this hypothesis by incrementally dropping the neurons heavily attended to (i.e. zeroing out their spiking activity input to the model) in a descending order of their attention weights identified in the first layer. We observed that dropping these important neurons identified by STNDT caused a significant decline in the model performance (Figure 4). The performance decline was significantly more than the case where the same number of random neurons are dropped. To rule out the possible case that dropping neurons only has adverse effect on the spatial attention module but that effect propagates to the subsequent modules and indirectly impacts the performance of the overall STNDT pipeline, we repeated the experiment on the vanilla NDT model which, unlike STNDT, lacks a spatial attention structure. Interestingly, we observed the same performance deterioration when we dropped the spiking activity of STNDTidentified important neurons and asked a pretrained vanilla NDT to make inference on the resulting inputs. This finding suggests that the impact of the important neurons that only STNDT can identify might potentially generalize to other latent variable models that without input from these neurons, some latent variable models might not function optimally. We provide additional results from similar analyses on GPFA and Smoothing models in the Appendix. 
We further examine whether important neurons were selected by the spatial attention mechanism based on some criteria more sophisticated than simple firing statistics, as more active neurons tend to have higher signal-to-noise ratio and might encode more useful information with regard to behaviors. We find that the important neurons are not the ones with the highest spike counts or the least variability in spiking activity. In fact, attention weights of a neuron do not correlate or only correlate weakly to its firing activity statistics, as we show in Table 3 the Pearson’s correlation of a neuron’s attention weight with the mean and variance of its spiking activity. All correlation values have p-value < 1e-4. These results indicate that STNDT’s spatial attention has picked up on meaningful population features that are more significant than firing statistics of the neurons. 4.6 Ablation Study: Contrastive loss encourages consistency of model prediction and improves performance We conduct an ablation study to assess the effectiveness of contrastive loss on the overall performance of STNDT. Tables 4 and 5 report how the model scores on different metrics across all four datasets on the single and ensemble levels. In general, we observe that having contrastive loss further improves the performance of STNDT on predicting neural activity of heldout neurons (co-bps) and heldout timesteps (fp-bps). The contribution of contrastive loss is most eminent on MC_Maze dataset. 5 Discussion In this paper we presented STNDT, a novel architecture based upon NDT [16] that explicitly learns the covariation among individual neurons in the population alongside the momentary evolution of the population spiking activity in order to infer the underlying firing rates behind highly variable single-trial spike trains. By incorporating self-attention along both spatial and temporal dimensions as well as a contrastive loss, STNDT enhances NDT’s ability to model dynamics spanning a variety of tasks and brain regions, most notably by the accurate prediction of activity for unseen neurons (co-bps). Although STNDT does not consistently outperform NDT on other secondary metrics, we show in the Appendix that STNDT is more robust to random initializations and performs better than NDT on average across random seeds. Moreover, the improvement STNDT contributes on co-bps is the direct reflection of the spatial attention’s success. Since the spatial attention module aims to learn the relationship between all (observed and unobserved) neurons at training time, it will leverage this information to infer activities of unobserved neurons based on those of observed neurons at testing time, which is exactly what co-bps measures. Finally, the novel spatial attention mechanism unique to STNDT brings about valuable interpretability as it discovers influential subsets of neurons whose activities contain salient information about the response of the entire neural population without which some latent variable models might not function optimally. Acknowledgment: This work was supported in part by National Science Foundation grant OAC-2117997 and Washington Research Fund to ES. Authors also acknowledge the partial support by the Departments of Electrical Computer Engineering (TL and ES), Applied Mathematics (ES), the Center of Computational Neuroscience (ES), and the eScience Center (ES) at the University of Washington.
1. What is the focus and contribution of the paper on neural spike train modeling? 2. What are the strengths of the proposed transformer architecture, particularly in terms of its ability to consider both spatial and temporal aspects of neural activities? 3. What are the weaknesses of the paper regarding some missing architectural details? 4. Do you have any concerns or questions regarding the evaluation metrics used in the paper, specifically the emphasis on co-bps? 5. Are there any limitations to the proposed approach that should be considered?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents a transformer architecture for modelling neural spike trains. The model considers both the spatial (dependency between the activity of different neurons) and temporal aspects of neural activities. The previous model based on transformers only considers the temporal aspect. The method is evaluated on four benchmark datasets and compared with several baselines, indicating the model's effectiveness. Strengths And Weaknesses Strengths: The proposed architecture is intuitive and interesting. It also addressed an important shortcoming with the previous transformer model. The focus on interpretability is interesting and can potentially lead to important discoveries in neuroscience about the role of each brain region. The paper is well presented and easy to follow. Weaknesses: Some of the architectural details are missing from the paper. For example, how many heads were used? What was the exact architecture for each layer? How is the test performance calculated? Is it based on masked inputs on test data? Questions Among the four assessed metrics, the model is the best one, mainly from the aspect of co-bps; is this a consequence of hyper-parameters being tuned based on co-bps? Are the hyper-parameters of the baseline models also tuned based on co-bps? Limitations N/A
NIPS
Title STNDT: Modeling Neural Population Activity with Spatiotemporal Transformers Abstract Modeling neural population dynamics underlying noisy single-trial spiking activities is essential for relating neural observation and behavior. A recent non-recurrent method Neural Data Transformers (NDT) has shown great success in capturing neural dynamics with low inference latency without an explicit dynamical model. However, NDT focuses on modeling the temporal evolution of the population activity while neglecting the rich covariation between individual neurons. In this paper we introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons in the population across time and space to uncover their underlying firing rates. In addition, we propose a contrastive learning loss that works in accordance with mask modeling objective to further improve the predictive performance. We show that our model achieves state-of-the-art performance on ensemble level in estimating neural activities across four neural datasets, demonstrating its capability to capture autonomous and non-autonomous dynamics spanning different cortical regions while being completely agnostic to the specific behaviors at hand. Furthermore, STNDT spatial attention mechanism reveals consistently important subsets of neurons that play a vital role in driving the response of the entire population, providing interpretability and key insights into how the population of neurons performs computation.1 1 Introduction One of the most prominent questions in systems neuroscience is how neurons perform computations that give rise to behaviors. Recent evidence suggests that computation in the brain could be governed at the population level [1, 2]. Population of neurons are proposed to obey an internal dynamical rule that drives their activities over time [3, 4]. Inferring these dynamics on a single trial basis is crucial for understanding the relationship between neural population responses and behavior, subsequently enabling the development of robust decoding schemes with wide applicability in brain-computer interfaces (BCI) [5–7]. However, modeling population dynamics on single trials is challenging due to the stochasticity of individual neurons making their spiking activity vary from trial to trial even when they are subject to identical stimuli or recorded under repeated behavior conditions. A direct approach to reduce the trial-to-trial variability of neural responses could be to average responses over repeated trials of the same behavior [8, 9], to convolve the neural response with a Gaussian kernel [10], or in general, to define a variety of neural activity measures [11]. However, more success was found in approaches that explicitly model neural responses as a dynamical system, including methods treating the population dynamics as being linear [12, 13], switched linear [14], non-linear [15, 16], or reduced projected nonlinear models [11]. Recent approaches leveraging 1Code is available at https://github.com/shlizee/STNDT 36th Conference on Neural Information Processing Systems (NeurIPS 2022). recurrent neural networks (RNN) have shown promising progress in modeling distinct components of a dynamical system - neural latent states, initial conditions and external inputs - on a momentto-moment basis [15, 17, 18]. 
These sequential methods rely on continuous processing of neural inputs at successive timesteps, causing latency that hampers applicability in real-time decoding of neural signals. Consequently to RNN-based approaches, Neural Data Transformer (NDT) [16] was proposed as a non-recurrent approach to improve inference speed by leveraging the transformers architecture which learns and predicts momentary inputs in parallel [19]. While successful, NDT has only focused on modeling the relationship of neural population activity between timesteps while ignoring the rich covariation among individual neurons. Neurons in a population have been shown to have heterogeneous tuning profiles where each neuron has a different level of preference to a particular muscle movement direction [20, 21]. Neuron pairs also exhibit certain degree of correlation in terms of trial-to-trial variability (noise correlation) that affects the ability to decode the behaviors they represent [2, 22]. These spatial correlations characterize the amount of information that can be encoded in the neural population [22], necessitating the need to model the neural population activity across both time and space dimensions. In this work, we propose to incorporate the information distributed along the spatial dimension to improve the learning of neural population dynamics, and introduce SpatioTemporal Neural Data Transformer, an architecture based on Neural Data Transformer which explicitly learns both the spatial covariation between individual neurons and the temporal progression of the entire neural population. We summarize our main contributions as follows: • We introduce STNDT which allows the transformer to learn both the spatial coordination between neurons and the temporal progression of the population activity by letting neurons attend to each other while also attending over temporal instances. • We propose a contrastive training scheme, complementary to the mask modeling objective, to ensure the robustness of model prediction against induced noise augmentations. • We validate our model’s performance on four neural datasets in the publicly available Neural Latents Benchmark suite [23] and show that ensemble variants of our model outperforms other state-of-the-art methods, demonstrating its capability to model autonomous and non-autonomous neural dynamics in various brain regions while being agnostic to external behavior task structures. • We show that the spatial attention, a feature unique to STNDT, identifies consistently important subsets of neurons that play an essential role in driving the response of the entire population. This exclusive attribute of STNDT provides interpretability and key insights into how the neural population distributes the computation workload among the neurons. 2 Related Work Modeling spatial covariation in neural population: Neurons act as an orchestrated system which collectively encodes behaviors in a distributed and redundant manner. Many previous works have studied and incorporated neural variability across neurons to closely match firing statistics observed in multi-channel neural recordings [24–30]. [25] simulated population responses within a Dichotomized Gaussian framework and solved for signal and noise correlations numerically. [26, 27] developed Generative Adversarial Networks that were able to capture pairwise correlations among the neurons and generate realistic firing patterns. 
[28–30] modeled the population responses as being generated from a latent variable with learnable covariance matrix reflecting covariability among the neurons. While these methods resemble our work in the overarching motivation of capturing interactions among neurons, they rely on the knowledge of the respective stimuli/conditions that the trials belong to when modeling the interaction. On the other hand, STNDT is trained in an unsupervised manner and learns the rich covariation among neurons encompassing all recorded behaviors without access to any external observation apart from the population spiking activity. In addition, while the goal of aforementioned methods is to generate realistic firing activities associated with induced stimuli, oftentimes with some assumptions regarding their statistics (e.g. noise correlation is shared across time bins and trials), STNDT aims to uncover the denoised firing patterns behind the noisy single-trial spiking activity and does not depend on any prior assumptions regarding their firing statistics. Transformers for modeling spatiotemporal data: Transformers were initially developed to model the relationship between words in a sentence, which can be thought of as a temporal progression of a sequence of tokens. Recent works have leveraged the self-attention mechanism in transformers to model spatiotemporal data types where there exist an additional interacting dimensions possessing distinct dynamics, such as trajectories of traffic agents [31–33], dynamic scene graph of video [34], or 3D human motion [35]. However, in these works the spatial interaction at each timestep and the temporal dynamics for each entity are captured independently, treating the other dimension as the batch dimension at each attention block. In contrast, STNDT interleaves spatial and temporal attention in a unified framework, using spatial attention to re-weight temporal features and enabling direct study of each individual neuron’s role in driving the population dynamics. Interpretability of self-attention mechanism: Several approaches have been proposed to probe the inner workings of black-box deep learning models [36–38]. Unlike our work, these approaches attempted to attribute importance of visual inputs to the model prediction in a supervised setting and did not take into account interaction between input features. For attention-based models, the weights of attention matrix have been used as a tool to provide certain level of interpretability [39–42]. The interpretability is built upon the fact that attention weights signify how much influence other inputs have on a particular input in deciding its final outcome in a self-supervision manner. This influence might align with some human interpretable meaning, such as linguistic patterns [43]. In our work, we further leverage attention weights to gain insights into the interaction of neurons from multi-channel neural recordings. 3 Methods Problem formulation: Single-trial spiking activity of a neural population can be represented as a spatiotemporal matrix X ∈ NT×N , where each column Xi ∈ NT is the time series of one neuron, T is the number of time bins for each trial, and N is the number of neurons in the population. Each element Xtn in the matrix is the number of action potentials (spikes) that neuron n fires within the time bin t. Spike counts are assumed to be samples of an inhomogeneous Poisson process P (λ(t, n)) where λ(t, n) is the underlying true firing rate of neuron n at time t. 
The matrix Y ∈ RT×N containing λ(t, n) fully represents the dynamics of the neural population and explains the observable spiking data of the respective trial. We propose to learn the mapping ϕ(X;W ) : X → Y by the Spatiotemporal Transformer with the set of weights W . Spatiotemporal Neural Data Transformer: At the core of the transformer architecture is the multihead attention mechanism, where feature vectors learn to calibrate the influence of other feature vectors in their transformation. Spike trains are embedded into feature matrices X̃ with added sinusoidal positional encoding to preserve order information as initially proposed in [19]. We employed separate embeddings to encode positions in each temporal and spatial dimension individually, resulting in two distinct feature embeddings X̃T = Emb(X) + PT and X̃S = Emb(X⊤) + PS . A set of three matrices WQT , W K T , W V T ∈ RN×N are learned to transform T N -dimensional embedding X̃T = {x̃1, x̃2, ..., x̃T } to queries QT = X̃TWQT , keys KT = X̃TWKT and values VT = X̃TW V T , upon which latent variable ZT is computed as: ZT = Attention(QT ,KT , VT ) = F ( softmax ( QTK ⊤ T√ N ) VT ) (1) The outer product of QTK⊤T represents the attention each xi pays to all other xj and determines how much influence their values vj have on its latent output zi. F is the sequence of concatenating multiple heads and feeding through a feedforward network with ReLU activation [19]. We used 2 heads for all reported models. Implementations of transformers in popular applications such as in natural language processing literature consider each feature vector xi as an N -dimensional token in a sequence, equivalent to a word in a sentence. Elements in the N -dimensional vector therefore serve as a convenient numerical representation and do not have inherent relationships among them. The attention mechanism thus only models the relationship between tokens in a sequence. In our application, each feature vector xi is a collection of firing activities of N physical neurons among which there exists an interrelation as neuronal population acts as a coordinated structure with complex interdependencies rather than standalone individuals. We therefore propose to model both the temporal relationship - the evolution of neural activities - and the spatial relationship - covariability of neurons - by learning two separate multihead attention blocks (Figure 1). The temporal latent state ZT is computed with temporal attention block as in Equation 1. In parallel, spatial attention block operates on the spatial embedding X̃S and learns an attention weights matrix signifying the relationship between neurons: AS = softmax ( QSK ⊤ S√ T ) (2) where QS = X̃SW Q S and KS = X̃SW K S . This AS matrix is then multiplied with the transpose of temporal latent state ZT to incorporate the influence of spatial attention on the final spatiotemporal latent state ZST : ZST = F(ASZ⊤T ) (3) For stable training, as in [19] we used layer normalization before X̃T , X̃S , ASZ⊤T and feedforward layers. Residual connections are also employed around temporal attention, feedforward layers and ASZ ⊤ T . Mask modeling and contrastive losses: Similar to [16], we train the spatiotemporal transformer in an unsupervised way with BERT’s mask modeling objective [44]. 
During training, a random subset of spike bins along both spatial and temporal axes of input X are masked (zero-ed out or altered) and the transformer is asked to reconstruct the log firing rate at the masked bins such that the Poisson negative log likelihood is minimized: Lmask = N∑ i=1 T∑ j=1 exp(z̃ij)− x̃ij z̃ij (4) where z̃ij and x̃ij are the log output firing rate and input spike of neuron i at timestep j if location ij is masked. Neural dynamics are shown to be embedded in a low-dimensional space, i.e. model prediction should be fairly consistent when a smaller subset of neurons are used compared to when the entire population is taken into account. Furthermore, in stereotyped behaviors often found in neuroscience experiments, trials with the same condition should yield similar output firing rate profiles. Therefore, to enhance robustness of model prediction to neural firing variability we further constrain model firing rate outputs by a contrastive loss, such that different augmentations of the same trial input remain closer to each other and stay distant to other trial inputs. We adopt the NT-XEnt contrastive loss introduced in [45]: Lcontrastive = ∑ ij lij = ∑ ij −log exp(sim(zi, zj)/τ)∑2N k=1 1k ̸=iexp(sim(zi, zk)/τ) (5) where sim(u, v) = u⊤v/(∥u∥∥v∥) is the cosine similarity between two predictions u and v on two different augmentations of input x and τ is the temperature parameter. Transformations such as dropping out neurons and jittering samples in time have been used to create different views of neural data [46]. In our work, we define the augmentation transformation as random dropout and alteration of spike counts at random elements in the original input matrix X , similar to how masking is done, i.e. zero out or change spike counts to random integers at random neurons and timesteps. See Appendix for details on probabilities used to create these augmentations. Bayesian hyperparameter tuning: We follow [47] to use Bayesian optimization for hyperparameters tuning. We observe that the primary metrics co-smoothing bits/spike (co-bps) are not well correlated with the mask loss (see Figure 1 in the Appendix , while co-bps, vel R2, psth R2 and fp-bps are more pairwise correlated. Therefore, we run Bayesian optimization to optimize co-bps for M models then select the best N models as ranked by validation co-bps, and ensemble them by taking the mean of the predicted rates of these N models. 4 Experiments and results Datasets and evaluation metrics: We evaluate our model performance on four neural datasets in the publicly available Neural Latents Benchmark [23]: MC_Maze, MC_RTT, Area2_Bump, and DMFC_RSG. The 4 datasets cover autonomous and non-autonomous neural population dynamics recorded on rhesus macaques in a variety of behavioral tasks (delayed reaching, self-paced reaching, reaching with perturbation, time interval reproduction) spanning multiple brain regions (primary motor cortex, dorsal premotor cortex, somatosensory cortex, dorso-medial frontal cortex). The diverse scenarios and systems offer comprehensive evaluation of a latent variable model and serve as a standardized benchmark for comparison between different modeling approaches. We use different metrics to measure performance of our model depending on the particular behavior task of each dataset, following the standard evaluation pipeline in [23]. We evaluate and report our model performance on the hidden test split held by NLB to have a fair comparison with other state-of-the-art (SOTA) methods. 
See [23] for further details of evaluation strategy and how the metrics are calculated. • Co-smoothing (co-bps): the primary metric, measuring the ability of the model to predict activity of held-out neurons it has not seen during training. Co-bps is tied to the goodness of mask loss evaluated for held-out neurons. • Behavior decoding (vel R2 or tp-corr): measures how useful the model firing rates prediction can be used to decode behavior (the velocity of primate’s hand in the cases of MC_Maze and Areas_Bump datasets, or the correlation between neural speed and time between Set cue and Go response in DMFC_RSG dataset). • Match to peri-stimulus time histogram (psth R2): indicates how well predicted firing rates match the peri-stimuls time histogram in repeated, stereotyped task structures. • Forward prediction (fp-bps): measures model’s ability to predict unseen future activity of the neural population. It is computed in the similar manner as co-bps but on the held-out time points of all neurons. Baselines: We compare STNDT against the following baselines, all of which have been evaluated using the same held-out test split. • Smoothing [23]: A simple method where a Gaussian kernel is convolved with held-in spikes to produce smoothed held-in firing rates. Then a Poisson Generalized Linear Model (Poisson GLM) is fitted from the held-in smoothed rates to held-out rates. • GPFA [10]: extracts population latent states as a smooth and low dimensional evolution by combining smoothing and dimension reduction in a common probabilistic framework. • SLDS [14]: models neural dynamics as a switching linear dynamical system, which breaks down nonlinear data into sequences of simpler dynamical modes. • AutoLFADS [17]: models population activity as a non-linear dynamical system with bi-directional recurrent neural networks at the core and a scalable framework of hyperparameter tuning. • MINT [48]: an interpretable decode algorithm that exploits the sparsity and stereotypy of neural activity to interpolate neural states using a library of canonical neural trajectories. • iLQR-VAE [49]: improves upon LFADS with iterative linear quadratic regulator algorithm, an optimization-based recognition model to replace RNN as the inference network. • NDT [16]: leverages transformer architecture with some adaption to neural data to model temporal progression of neural activity across time. AESMTE1 is the best single model and AESMTE3 is the best emsemble of multiple models found as a result of Bayesian hyperparameter tuning [47]. 4.1 Spatiotemporal transformer achieves state-of-the-art performance in modeling autonomous dynamics We first tested STNDT on recordings of dorsal premotor (PMd) and motor cortex (M1) of a monkey performing a delayed reaching task (MC_Maze dataset) to evaluate the ability of STNDT to uncover single-trial population dynamics in a highly structured behavior. The dataset has been studied extensively in previous work [15–17]. It consists of 2869 trials of monkey performing a center-out reaching task in a maze with obstructing barriers, composing 108 different conditions for straight and curved reaching trajectories. The monkey is trained to hold the cursor at the center while the target is presented and only move the cursor to reach the target after a ‘Go’ cue. The neural dynamics during the preparation and execution periods is well modeled as an autonomous dynamical system [15]. 
We observed that by explicitly modeling spatial interaction, STNDT outperformed other state-of-the-art methods and improved NDT's ability to model autonomous single-trial dynamics, as measured by the negative log likelihood of unobserved neural activity. The single STNDT model improved both the Poisson log likelihood of held-out neurons (co-bps) and held-out timesteps (fp-bps). The performance is further increased by aggregating multiple STNDT models, as shown in Table 1 and Figure 2A. Since MC_Maze features repeated trials, the predictions of any latent variable model should uncover stereotypical patterns of neuronal responses for trials belonging to the same condition. Therefore, we computed the PSTH, which is the average of the neural population response across trials of the same condition, and measured the R2 match of model predictions to this PSTH. We observed that with the help of spatial modeling and the contrastive loss, STNDT boosts NDT's ability to recover this stereotyped firing pattern (Table 1). We show in Figure 2C the responses of several example neurons. STNDT's firing rate predictions for trials under the same condition exhibit a consistent, stable PSTH, as desired. These predicted rates also decode behaviors accurately when mapped to hand velocity via a linear regression model (Table 1, Figure 2B).
4.2 Spatiotemporal transformer improves inference of non-autonomous neural dynamics underlying naturalistic behaviors
There is much interest in systems neuroscience in studying neural dynamics during unconstrained, naturalistic behaviors, as this is crucial for developing ubiquitous BCI decoders. We evaluated STNDT's applicability to this setting via recordings in primary motor cortex during a self-paced reaching task (MC_RTT dataset) [23, 50]. Unlike the MC_Maze dataset, the monkey in this task continuously acquires targets that appear randomly in an 8x8 grid without preparatory periods, resulting in a wide variety of hand trajectories and trial lengths. We observe that STNDT achieves SOTA performance on the primary metric co-bps and performs on par with NDT on the remaining metrics, while maintaining more robust performance against random initializations of model weights (Table 1 and Appendix).
4.3 Spatiotemporal transformer better captures input-driven dynamics underlying sensory processes
We next tested STNDT in a setting where unexpected input perturbations affect the neural dynamics in somatosensory cortex, to probe whether STNDT can leverage spatial interaction to improve modeling of non-autonomous dynamics in this brain region. The Area2_Bump dataset consists of recordings from Area 2, which was shown in previous work to be driven by mechanical perturbations to the arm and to contain information about whole-arm kinematics [23, 51]. The task comprises active and passive trials with a center hold period at the start. During active trials, the monkey performs a classic center-out reaching task. In passive trials, a force is applied to the monkey's hand in a random direction via a manipulandum, after which the monkey has to return to the center target and proceed with the task as in active trials. Despite the relatively small scale of the dataset, STNDT brings further improvements to NDT performance in terms of co-bps and psth-R2, at both the single and ensemble levels.
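As a reference for how the PSTH-matching metric used in these experiments can be computed, the sketch below averages responses over trials of the same condition and scores the predicted PSTH against the empirical one. It is a simplified illustration with our own variable names; the benchmark's exact normalization and per-neuron handling may differ.

```python
import numpy as np

def psth_r2(pred_rates, spikes, condition_ids):
    # pred_rates, spikes: (trials, N, T) arrays; condition_ids: (trials,) labels.
    # Compare trial-averaged predicted rates to the empirical PSTH, pooled over neurons/time.
    preds, truths = [], []
    for c in np.unique(condition_ids):
        idx = condition_ids == c
        preds.append(pred_rates[idx].mean(axis=0).ravel())   # predicted PSTH for condition c
        truths.append(spikes[idx].mean(axis=0).ravel())      # empirical PSTH for condition c
    preds, truths = np.concatenate(preds), np.concatenate(truths)
    ss_res = np.sum((truths - preds) ** 2)
    ss_tot = np.sum((truths - truths.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```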
4.4 Spatiotemporal transformer enhances prediction of neural population activity during a cognitive task
Dorsomedial frontal cortex (DMFC) is believed to serve as an intermediate layer between low-level sensory and motor areas, and possesses a distinct confluence of internal dynamics and inputs [52, 53]. We are therefore interested to see if characterizing spatial relationships alongside temporal relationships and incorporating the contrastive loss could help STNDT better model the dynamics in this brain region. We tested STNDT on the DMFC_RSG dataset [23, 53], consisting of recordings from a rhesus macaque performing a time-interval reproduction task. The monkey is presented with two stimuli, ‘Ready’ and ‘Set’, separated by a specific time interval ts, while fixating its eyes and holding the joystick at the center position. It then has to execute a ‘Go’ response by either an eye saccade or a joystick movement such that the time interval tp between its response and the ‘Set’ cue is sufficiently close to ts. STNDT successfully captures the dynamics in this cognitive task, outperforming NDT by a large margin across co-bps, psth-R2 and fp-bps at both the single and ensemble levels (Table 2).
4.5 Spatial attention mechanism identifies important subsets of neurons driving the population dynamics
In Figure 3, we visualize spatial attention weights obtained from STNDT on the MC_Maze dataset in the first and last attention layers. Attention maps for the remaining datasets are provided in the Appendix. Interestingly, the spatial attention shows that in early layers, only a small subset of neurons in the population is consistently attended to by all neurons. The spatial attention tends to disperse as the model goes to deeper layers. Strikingly, the subset of heavily-attended neurons stays largely the same across different trials, hinting that these neurons might play a crucial role in driving the population response in the behavioral task. We further tested this hypothesis by incrementally dropping the most heavily attended neurons (i.e., zeroing out their spiking activity input to the model) in descending order of their attention weights identified in the first layer. We observed that dropping these important neurons identified by STNDT caused a significant decline in model performance (Figure 4). The decline was significantly larger than when the same number of random neurons was dropped. To rule out the possibility that dropping neurons only adversely affects the spatial attention module, with that effect propagating to the subsequent modules and indirectly impacting the performance of the overall STNDT pipeline, we repeated the experiment on the vanilla NDT model which, unlike STNDT, lacks a spatial attention structure. Interestingly, we observed the same performance deterioration when we dropped the spiking activity of STNDT-identified important neurons and asked a pretrained vanilla NDT to run inference on the resulting inputs. This finding suggests that the importance of these neurons, which only STNDT can identify, might generalize to other latent variable models: without input from these neurons, some latent variable models might not function optimally. We provide additional results from similar analyses on GPFA and Smoothing models in the Appendix.
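The neuron-dropping analysis described above can be sketched as follows. Here attention_weights is assumed to be a per-neuron score obtained by averaging the spatial attention a neuron receives in the first layer, and all function names are ours rather than the authors'.

```python
import numpy as np

def drop_important_neurons(spikes, attention_weights, n_drop):
    # Zero out the spiking activity of the n_drop most heavily attended neurons.
    # spikes: (N, T) spike count matrix; attention_weights: (N,) per-neuron scores.
    order = np.argsort(attention_weights)[::-1]      # most-attended neurons first
    ablated = spikes.copy()
    ablated[order[:n_drop], :] = 0
    return ablated

def drop_random_neurons(spikes, n_drop, rng):
    # Control condition: zero out the same number of randomly chosen neurons.
    idx = rng.choice(spikes.shape[0], size=n_drop, replace=False)
    ablated = spikes.copy()
    ablated[idx, :] = 0
    return ablated
```

Evaluating a trained model on the two ablated inputs and comparing the resulting co-bps reproduces the importance-versus-random comparison of Figure 4.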
We further examine whether important neurons were selected by the spatial attention mechanism based on criteria more sophisticated than simple firing statistics, since more active neurons tend to have a higher signal-to-noise ratio and might encode more useful information with regard to behavior. We find that the important neurons are not the ones with the highest spike counts or the least variability in spiking activity. In fact, a neuron's attention weight does not correlate, or correlates only weakly, with its firing statistics: Table 3 reports the Pearson correlation of a neuron's attention weight with the mean and variance of its spiking activity. All correlation values have p-value < 1e-4. These results indicate that STNDT's spatial attention has picked up on meaningful population features that go beyond the firing statistics of individual neurons.
4.6 Ablation Study: Contrastive loss encourages consistency of model prediction and improves performance
We conduct an ablation study to assess the effect of the contrastive loss on the overall performance of STNDT. Tables 4 and 5 report how the model scores on different metrics across all four datasets at the single and ensemble levels. In general, we observe that the contrastive loss further improves the performance of STNDT in predicting the neural activity of held-out neurons (co-bps) and held-out timesteps (fp-bps). The contribution of the contrastive loss is most evident on the MC_Maze dataset.
5 Discussion
In this paper we presented STNDT, a novel architecture based upon NDT [16] that explicitly learns the covariation among individual neurons in the population alongside the momentary evolution of the population spiking activity in order to infer the underlying firing rates behind highly variable single-trial spike trains. By incorporating self-attention along both spatial and temporal dimensions as well as a contrastive loss, STNDT enhances NDT's ability to model dynamics spanning a variety of tasks and brain regions, most notably through the accurate prediction of activity for unseen neurons (co-bps). Although STNDT does not consistently outperform NDT on the other, secondary metrics, we show in the Appendix that STNDT is more robust to random initializations and performs better than NDT on average across random seeds. Moreover, the improvement STNDT contributes on co-bps is a direct reflection of the spatial attention's success. Since the spatial attention module aims to learn the relationship between all (observed and unobserved) neurons at training time, it can leverage this information to infer the activities of unobserved neurons from those of observed neurons at test time, which is exactly what co-bps measures. Finally, the spatial attention mechanism unique to STNDT brings valuable interpretability: it discovers influential subsets of neurons whose activities contain salient information about the response of the entire neural population, and without which some latent variable models might not function optimally.
Acknowledgment: This work was supported in part by National Science Foundation grant OAC-2117997 and the Washington Research Fund to ES. The authors also acknowledge partial support from the Departments of Electrical and Computer Engineering (TL and ES) and Applied Mathematics (ES), the Center of Computational Neuroscience (ES), and the eScience Center (ES) at the University of Washington.
1. Can the author provide more information on how the attention mechanism and contrastive objective improve the model's performance?
2. How does the model use positional encodings, and how does it affect the attention mechanism?
3. Can the author explain in more detail how masking is performed in the model?
4. What is the dropout probability used in the contrastive loss, and how was it chosen?
5. Can the author provide more insight into what the attention maps show and what insights can be gained from them?
6. Why does the performance drop when increasing the ensemble size beyond approx. 8 models?
7. How many individual models are used to form one ensemble?
8. Can the author describe the two other datasets used in the study?
9. Can the author clarify how the dimensions of X and X^T are defined in Fig. 1?
10. Can the author make explicit in the Methods section and Fig. 1 that the model consists of four layers of the transformer block?
11. Can the author introduce the abbreviation NLB earlier in the text?
12. Can the author indicate which metrics correspond to the log-likelihood used for model training?
13. Can the author plot the ground truth and predicted traces in the same plot for better comparability in Fig. 3C?
14. Can the author provide more details on the meaning of the part of the sentence starting "as well as the discovery..." in L 258?
15. Can the author reword or restructure the paragraph starting at l. 264 to improve its readability?
Summary Of The Paper
This paper addresses the task of modelling noisy single-trial spiking activities and extends the Neural Data Transformer (NDT) model by including an attention mechanism between neurons that treats each neuron as a token. Additionally, the authors add the SimCLR contrastive learning objective to the model, trying to improve generalization. The paper investigates the new model's performance on spiking neuron activity from four publicly available motor control datasets.
Strengths And Weaknesses
Strengths
Research question is important and the presented modeling idea makes sense
The paper is well written and easy to understand
The motivation in the Introduction section as well as the comparison to related work is decent
Compares the model to a variety of baselines + metrics
Weaknesses
New attention mechanism does not lead to measurable improvement
Contrastive objective does not lead to measurable improvement
Unclear how attention maps provide "interpretability"
Some of the methods not clearly described (augmentation, masking, positional encodings?)
Questions
Tables 1 + 2: I do not see an overall improvement by your additional attention model over AESMTE1/3. The metrics are above the baseline as frequently as they are below it, suggesting that we're looking at random fluctuations. Am I missing something?
Tables 3 + 4: Again, the ablation study seems to suggest to me that the contrastive objective does not actually improve the model. Am I missing something?
Does the model use positional encodings? It seems like that would be a useful thing to encode properties of the tokens such as neuron identity. Otherwise the tokens are permutation invariant, which means the attention mechanism has to infer the token identity from the current spatial (or temporal) pattern of spikes, which probably limits the model quite substantially (and unnecessarily). Please explain whether you used positional encodings and, if so, how or, if not, why.
L 102ff.: Please describe in more detail how exactly you perform the masking. Do you drop individual entries X_ij or entire rows and/or columns?
L 119: Please expand on how exactly you alter the spike counts for the contrastive loss and provide the dropout probability.
Fig 4: It is not clear to me what the attention maps show or what insights we can gain from them. I suspect the "important" neurons in layer 1 are just the ones with the most spikes, which form good basis functions for filling in since their response vectors are (a) dense and (b) tend to have a higher signal-to-noise ratio due to the (very approximate) Poisson statistics of spike trains.
Details
Fig 3A: Why does performance drop if you increase the ensemble size beyond approx. 8 models? Please also state how many individual models you use to form one ensemble.
Paragraph at l. 123: I think it is reasonable to focus on co-bps for the Bayesian hyperparameter search. At the same time I do think that the mask loss – which is probably the Poisson log-likelihood? – correlates with the 4 evaluation metrics you compare it to in Fig 2A. If they did not correlate at all, it would be unclear how training the model with the log-likelihood objective could improve the model's performance on an uncorrelated evaluation metric.
You describe two out of the four datasets you used in Sec 3.1 and 3.2 in detail. I think your paper would improve if you would additionally describe the two other datasets you used.
You might be able to cut down on the level of detail you provide in 3.1 and 3.2 in case the page limit is a problem.
Fig 1: The dimensions of X and X^T are stated incorrectly. According to line 69 X is N x T, and the spike rasters in Fig. 1 suggest the same. Accordingly, the size of the temporal attention matrix in the figure should be 5 x 5 and the spatial one 3 x 3. For X and X^T the columns should be colored, not the rows.
The model is well described, although you could make explicit in the Methods section and maybe Fig. 1 that the model is made of 4 layers of the transformer block depicted in Fig. 1.
L 107: you could make clear that \tilde{z} is the "output log firing rate".
L 131: Introduce the abbreviation NLB, which is used later.
L 125 and l 150: you could indicate here that these two metrics correspond to the log-likelihood you use for model training, to improve clarity.
Fig 3C: This figure might improve if you plot the ground truth and the predicted traces in one plot for better comparability. Please add a description of what the different colors mean (different examples?).
L 226: "across 4 attention layers" is not quite correct, as you only depict layers 1 and 4. As a matter of fact, it would be interesting to see the same plots for layers 2 and 3.
L 258: I do not understand the meaning of the part of the sentence starting "as well as the discovery…".
The paragraph starting at l. 264 is hard to comprehend as the sentences are a bit long and complicated to read. This paragraph's readability could improve a lot by rewording / restructuring it.
Limitations
Yes
NIPS
Title
Cryptographic Hardness of Learning Halfspaces with Massart Noise
Abstract
We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples (x, y) ∈ R^N × {±1}, where the distribution of x is arbitrary and the label y is a Massart corruption of f(x), for an unknown halfspace f : R^N → {±1}, with flipping probability η(x) ≤ η < 1/2. The goal of the learner is to compute a hypothesis with small 0-1 error. Our main result is the first computational hardness result for this learning problem. Specifically, assuming the (widely believed) subexponential-time hardness of the Learning with Errors (LWE) problem, we show that no polynomial-time Massart halfspace learner can achieve error better than Ω(η), even if the optimal 0-1 error is small, namely OPT = 2^{−log^c(N)} for any universal constant c ∈ (0, 1). Prior work had provided qualitatively similar evidence of hardness in the Statistical Query model. Our computational hardness result essentially resolves the polynomial PAC learnability of Massart halfspaces, by showing that known efficient learning algorithms for the problem are nearly best possible.
1 Introduction
A halfspace or linear threshold function (LTF) is any function hw,t : R^N → {±1} of the form hw,t(x) := sign(⟨w,x⟩ − t), where the vector w ∈ R^N is called the weight vector, t ∈ R is called the threshold, and sign : R → {±1} is defined by sign(t) = 1 if t ≥ 0 and sign(t) = −1 otherwise. Halfspaces are a central concept class in machine learning, extensively investigated since the 1950s [Ros58, Nov62, MP68]. Here we study the computational complexity of learning halfspaces in Valiant’s (distribution independent) PAC model [Val84], when the labels have been corrupted by Massart noise [MN06]. We define the Massart noise model below.
Definition 1.1 (Massart Noise). We say that a joint distribution D of labeled examples (x, y), supported on R^N × {±1}, satisfies the Massart noise condition with noise parameter η ∈ [0, 1/2) with respect to a concept class C of Boolean-valued functions on R^N if there is a concept c ∈ C such that for all x0 ∈ R^N we have that η(x0) def= Pr_{(x,y)∼D}[c(x) ̸= y | x = x0] ≤ η.
The Massart PAC learning problem for the concept class C is the following: Given i.i.d. samples from a Massart distribution D, as in Definition 1.1, the goal is to output a hypothesis with small 0-1 error. In this work, we study the computational complexity of the Massart PAC learning problem, when the underlying concept class C is the class of halfspaces on R^N. In its above form, the Massart noise model was defined in [MN06]. An essentially equivalent noise model had been defined in the 80s by Sloan and Rivest [Slo88, RS94, Slo96], and a very similar definition had been considered even earlier by Vapnik [Vap82]. The Massart model is a classical semi-random noise model that is more realistic than Random Classification Noise (RCN). In contrast to RCN, Massart noise allows for variations in misclassification rates (without a priori knowledge of which inputs are more likely to be misclassified).
Asymmetric misclassification rates arise in a number of applications, including in human annotation noise [BK09]. Consequently, learning algorithms that can tolerate Massart noise are less brittle than those that depend on the uniformity of RCN. The agnostic model [Hau92, KSS94], where the noise can be fully adversarial, is of course even more robust; unfortunately, it is computationally hard to obtain agnostic learners with any non-trivial guarantees, even for basic settings. We now return to the class of halfspaces, which is the focus of this work. We recall that PAC learning halfspaces with RCN is known to be solvable in polynomial time (to any desired accuracy) [BFKV96]. On the other hand, agnostic PAC learning of halfspaces is known to be computationally hard (even for weak learning) [GR06, FGKP06, Dan16]. The computational task of PAC learning halfspaces corrupted by Massart noise is a classical problem in machine learning theory that has been posed by several authors since the 1980s [Slo88, Coh97, Blu03]. Until recently, no progress had been made on the efficient PAC learnability of Massart halfspaces. [DGT19] made the first algorithmic progress on this problem: they gave a poly(N, 1/ϵ)-time learning algorithm with error guarantee of η + ϵ. Subsequent work made a number of refinements to this algorithmic result, including giving an efficient proper learner [CKMY20] and developing an efficient learner with strongly polynomial sample complexity [DKT21]. In a related direction, [DIK+21] gave an efficient boosting algorithm achieving error η + ϵ for any concept class, assuming the existence of a weak learner for the class. The error bound of η can be very far from the information-theoretically optimal error OPT, where OPT = RLTF(D) ≤ η. Indeed, known polynomial-time algorithms only guarantee error ≈ η even if OPT is very small, i.e., OPT ≪ η. This prompts the following question: Question 1.1. Is there an efficient learning algorithm for Massart halfspaces with a relative error guarantee? Specifically, if OPT ≪ η, is it possible to achieve error significantly better than η? Our main result (Theorem 1.2) answers this question in the negative, assuming the subexponential-time hardness of the classical Learning with Errors (LWE) problem (Assumption 2.4). In other words, we essentially resolve the efficient PAC learnability of Massart halfspaces, under a widely-believed cryptographic assumption.
1.1 Our Results
Before we state our main result, we recall the setup of the Learning with Errors (LWE) problem. In the LWE problem, we are given samples (x1, y1), . . . , (xm, ym) and the goal is to distinguish between the following two cases: (i) Each xi is drawn uniformly at random (u.a.r.) from Znq, and there is a hidden secret vector s ∈ Znq such that yi = ⟨xi, s⟩ + zi, where zi ∈ Zq is discrete Gaussian noise (independent of xi); (ii) Each xi and each yi are independent and are sampled u.a.r. from Znq and Zq respectively. Formal definitions of LWE (Definition 2.3) and related distributions, together with the precise computational hardness assumption (Assumption 2.4) we rely on, are given in Section 2. Our main result can now be stated as follows:
Theorem 1.2 (Informal Main Theorem). Assume that LWE cannot be solved in 2^{n^{1−Ω(1)}} time. Then, for any constant ζ > 0, there is no polynomial-time learning algorithm for Massart halfspaces on RN that can output a hypothesis with 0-1 error smaller than Ω(η), even when OPT ≤ 2^{−log^{1−ζ} N} and the Massart noise parameter η is a small positive constant.
The reader is also referred to Theorem D.1 in the Appendix for a more detailed formal statement. Theorem 1.2 is the first computational hardness result for PAC learning halfspaces (and, in fact, any non-trivial concept class) in the presence of Massart noise. Our result rules out even improper PAC learning, where the learner is allowed to output any polynomially evaluatable hypothesis. As a corollary, it follows that the algorithm given in [DGT19] is essentially the best possible, even when assuming that OPT is almost inverse polynomially small (in the dimension N). We also remark that this latter assumption is nearly the best possible: if OPT is o(ϵ/N), then we can just draw Ω(N/ϵ) samples and output any halfspace that agrees with these samples. We note that a line of work has established qualitatively similar hardness in the Statistical Query (SQ) model [Kea98] — a natural, yet restricted, model of computation. Specifically, [CKMY20] established a super-polynomial SQ lower bound for learning within error of OPT + o(1). Subsequently, [DK22] gave a near-optimal super-polynomial SQ lower bound: their result rules out the existence of efficient SQ algorithms that achieve error better than Ω(η), even if OPT = 2^{−log^{1−ζ} N}. Building on the techniques of [DK22], more recent work [NT22] established an SQ lower bound for learning to error better than η, even if OPT = 2^{−log^{1−ζ} N} — matching the guarantees of known algorithms exactly. While the SQ model is quite broad, it is also restricted. That is, the aforementioned prior results do not have any implications for the class of all polynomial-time algorithms. Interestingly, as we explain in the discussion that follows, our computational hardness reduction is inspired by the SQ-hard instances constructed in [DK22].
1.2 Brief Technical Overview
Here we give a high-level overview of our approach. Our reduction proceeds in two steps. The first is to reduce the standard LWE problem (as described above) to a different “continuous” LWE problem more suitable for our purposes. In particular, we consider the problem where the x samples are taken uniformly from Rn/Zn, y is either taken to be an independent random element of R/Z or is taken to be ⟨x, s⟩ mod 1 plus a small amount of (continuous) Gaussian noise, where s is some unknown vector in {±1}n. This reduction follows from existing techniques [Mic18a, GVV22]. The second step — which is the main technical contribution of our work — is reducing this continuous LWE problem to that of learning halfspaces with Massart noise. The basic idea is to perform a rejection sampling procedure that allows us to take LWE samples (x, y) and produce some new samples (x̃, ỹ). We will do this so that if y is independent of x, then ỹ is (nearly) independent of x̃; but if y = ⟨x, s⟩ + noise, then ỹ is a halfspace of x̃ with a small amount of Massart noise. An algorithm capable of learning halfspaces with Massart noise (with appropriate parameters) would be able to distinguish these cases by learning a hypothesis h and then looking at the probability that h(x̃) ̸= ỹ. In the case where ỹ was a halfspace with noise, this would necessarily be small; but in the case where x̃ and ỹ were independent, it could not be. In order to manage this reduction, we will attempt to produce a distribution (x̃, ỹ) similar to the SQ-hard instances of Massart halfspaces constructed in [DK22].
These instances can best be thought of as instances of a random variable (x′, y′) in Rn × {±1}, where y′ is given by a low-degree polynomial threshold function (PTF) of x′ with a small amount of Massart noise. Then, letting x̃ be the Veronese map applied to x′, we see that any low-degree polynomial in x′ is a linear function of x̃, and so ỹ = y′ is an LTF of x̃ plus a small amount of Massart noise. As for how the distribution over (x′, y′) is constructed in [DK22], the conditional distributions of x′ on y′ = 1 and on y′ = −1 are, essentially, carefully chosen mixtures of discrete Gaussians in the v-direction (for some randomly chosen unit vector v), and independent standard Gaussians in the orthogonal directions. Our goal will be to find a way to perform rejection sampling on the distribution (x, y) to produce a distribution of this form. In pursuit of this, for some small real number b and some a ∈ [0, b), we let x′ be a random Gaussian subject to x′ ≡ bx (mod b) (in the coordinate-wise sense) conditioned on by ≡ a (mod b). We note that if we ignore the noise in the definition of y, this implies that ⟨x′, s⟩ ≡ ⟨bx, s⟩ ≡ b⟨x, s⟩ ≡ by ≡ a (mod b) (recalling that s ∈ {±1}n). In fact, it is not hard to see that the resulting distribution on x′ is close to a standard Gaussian conditioned on ⟨x′, s⟩ ≡ a (mod b). In other words, x′ is close to a discrete Gaussian with spacing b/∥s∥2 and offset a/∥s∥2 in the s-direction, and an independent standard Gaussian in orthogonal directions. Furthermore, this x′ can be obtained from (x, y) samples by rejection sampling: taking many samples until one is found with by ≡ a (mod b), and then returning a random x′ with x′ ≡ bx (mod b). By taking an appropriate mixture of these distributions, we can manufacture a distribution close to the hard instances in [DK22]. This intuition is explained in detail in Section 3.1; see Lemma 3.3. (We note that Lemma 3.3 is included only for the purposes of intuition; it is a simpler version of Lemma 3.5, which is one of the main lemmas used to prove our main theorem.) Unfortunately, as will be discussed in Section 3.2, applying this construction directly does not quite work. This is because the small noise in the definition of y leads to a small amount of noise in the final values of ⟨x′, s⟩. This gives us distributions that are fairly similar to the hard instances of [DK22], but leads to small regions of values for u, where the following condition holds: Pr(y′ = +1 | x′ = u) = Pr(y′ = −1 | x′ = u). Unfortunately, the latter condition cannot hold if y′ is a function of x′ with Massart noise. In order to fix this issue, we need to modify the construction by carving intervals out of the support of x′ conditioned on y′ = −1, in order to eliminate these mixed regions. This procedure is discussed in detail in Section 3.3.2.
1.3 Additional Related Work
There have also been several recent works showing reductions from LWE or lattice problems to other learning problems. Concurrent and independent work to ours [Tie22] showed hardness of weakly agnostically learning halfspaces, based on a worst-case lattice problem (via a reduction from “continuous” LWE). Two recent works obtained hardness for the unsupervised problem of learning mixtures of Gaussians (GMMs), assuming hardness of (variants of) the LWE problem. Specifically, [BRST21] defined a continuous version of LWE (whose hardness they established) and reduced it to the problem of learning GMMs.
More recently, [GVV22] obtained a direct reduction from LWE to a (different) continuous version of LWE, and leveraged this connection to obtain quantitatively stronger hardness for learning GMMs. It is worth noting that for the purposes of our reduction, we require as a starting point a continuous version of LWE that differs from the one defined in [BRST21]. Specifically, we require that the distribution on x is uniform on [0, 1]^n (instead of a Gaussian, as in [BRST21]) and the secret vector is binary. The hardness of this continuous version essentially follows from [Mic18b, GVV22].
2 Preliminaries
For x, s ∈ Rn with s ̸= 0, let x^s def= ⟨x, s⟩/∥s∥2 be the length of the projection of x in the s direction, and x⊥s ∈ Rn−1 be the projection of x on the orthogonal complement of s. (More precisely, let B⊥s ∈ Rn×(n−1) be the matrix whose columns form an (arbitrary) orthonormal basis for the orthogonal complement of s, and let x⊥s def= (B⊥s)^T x.) For f, g : U → R, we write f(u) ∝ g(u) if there is c ∈ R such that f(u) = cg(u) for all u ∈ U. We use X ∼ D to denote a random variable X with distribution D. We use PD or PX for the corresponding probability mass function (pmf) or density function (pdf), and PrD or PrX for the measure function of the distribution. We use DX to denote the distribution of the random variable X. For S ⊆ Rn, we will use λ(S) to denote the n-dimensional volume of S. Let U(S) denote the uniform distribution on S. For a distribution D on Rn and S ⊆ Rn, we denote by D | S the conditional distribution of X ∼ D given X ∈ S. Let Ds (resp. D⊥s) be the distribution of x^s (resp. x⊥s), where x ∼ D. For distributions D1, D2, we use D1 + D2 to denote the pseudo-distribution with measure function PrD1+D2(A) = PrD1(A) + PrD2(A). For a ∈ R, let aD denote the pseudo-distribution with measure function aPrD. On the other hand, let a ◦ D denote the distribution of aX, where X ∼ D. We use D1 ⋆ D2 to denote the convolution of distributions D1, D2. We will use LTFN for the class of halfspaces on RN; when N is clear from the context, we may discard it and simply write LTF. For q ∈ N, we use Zq def= {0, 1, · · · , q − 1} and Rq def= [0, q). We use modq : Rn 7→ Rnq to denote the function that applies modq(x) on each coordinate of x. We use DNRn,σ to denote the n-dimensional Gaussian distribution with mean 0 and covariance matrix σ^2/(2π) · In, and use DNσ as shorthand for DNR,σ. In some cases, we will use N(0, In) for the standard (i.e., zero mean and identity covariance) multivariate Gaussian.
Definition 2.1 (Partially Supported Gaussian Distribution). For σ ∈ R+ and x ∈ Rn, let ρσ(x) def= σ^{−n} exp(−π(∥x∥2/σ)^2). For any countable set S ⊆ Rn, we let ρσ(S) def= Σ_{x∈S} ρσ(x), and let DNS,σ be the distribution supported on S with pmf PDNS,σ(x) = ρσ(x)/ρσ(S).
Definition 2.2 (Discrete Gaussian). For T ∈ R+, y ∈ R and σ ∈ R+, we define the “T-spaced, y-offset discrete Gaussian distribution with σ scale” to be the distribution DNTZ+y,σ.
Learning with Errors (LWE) We use the following definition of LWE, which allows for flexible distributions of samples, secrets, and noises. Here m is the number of samples, n is the dimension, and q is the ring size.
Definition 2.3 (Generic LWE). Let m, n, q ∈ N, and let Dsample, Dsecret, Dnoise be distributions on Rn, Rn, R respectively. In the LWE(m, Dsample, Dsecret, Dnoise, modq) problem, we are given m independent samples (x, y) and want to distinguish between the following two cases: (i) Alternative
Then, each sample is generated by taking x ∼ Dsample, z ∼ Dnoise, and letting y = modq(⟨x, s⟩+ z); and (ii) Null hypothesis: x, y are independent and each has the same marginal distribution as above. When a distribution in LWE is uniform over some set S, we may abbreviate U(S) merely as S. Note that LWE(m,Znq ,Znq , DNZ,σ,modq) to the classical LWE problem. Computational Hardness Assumption for LWE As alluded to earlier, the assumption for our hardness result is the hardness of the (classic) LWE problem, with the parameters stated below. Assumption 2.4 (Standard LWE Assumption (see, e.g., [LP11])). Let c > 0 be a sufficiently large constant. For any constant β ∈ (0, 1), κ ∈ N, LWE(2O(nβ),Znq ,Znq , DNZ,σ,modq) with q ≤ nκ and σ = c √ n cannot be solved in 2O(n β) time with 2−O(n β) advantage. We recall that [Reg09, Pei09] gave a polynomial-time quantum reduction from approximating (the decision version of) the Shortest Vector Problem (GapSVP) to LWE (with similar n, q, σ parameters). Our hardness assumption is the widely believed sub-exponential hardness of LWE. We note that the fastest known algorithm for GapSVP takes 2O(n) time [ALNS20]. Thus, refuting the conjecture would be a major breakthrough. A similar assumption was also used in [GVV22] to establish computational hardness of learning Gaussian mixtures. Our use of a sub-exponential hardness of LWE is not a coincidence; see Section 4. As mentioned earlier, we will use a different variant of LWE, where the sample is from Rn1 , the secret is from {±1}n, and the noise is drawn from a continuous Gaussian distribution. The hardness of this variant is stated below. The proof, which follows from [Mic18a, GVV22], is deferred to Appendix B. Lemma 2.5. Under Assumption 2.4, for any β ∈ (0, 1) and γ ∈ R+, there is no 2O(n β) time algorithm to solve LWE ( 2O(n β),Rn1 , {±1}n, DNO(n−γ),mod1 ) with 2−O(n β) advantage. Decisional Massart Halfspace Problem For a distribution D on labeled examples and a concept class C, we let RC(D) def = minh∈C Pr(x,y)∼D[h(x) ̸= y] be the error of the best classifier in C with respect to D. We will prove hardness for the following decision version of learning Massart halfspaces. This will directly imply hardness for the corresponding learning (search) problem. Definition 2.6 (Testing Halfspaces with Massart Noise). For n,N ∈ N, η,OPT ∈ (0, 1/2), let Massart(m,N, η,OPT) denote the problem of distinguishing, given m i.i.d. samples from D on RN × {±1}, between the following two cases: (i) Alternative hypothesis: D satisfies the Massart halfspace condition with noise parameter η and RLTF(D) ≤ OPT; and (ii) Null hypothesis: the Bayes optimal classifier has cη error, where c > 0 is a sufficiently small universal constant. 3 Reduction from LWE to Learning Massart Halfspaces In this section, we establish Theorem 1.2. Some intermediate technical lemmas have been deferred to the Appendix C. Our starting point is the problem LWE(m,Rn1 , {±1}n, DNσ ,mod1). Note that, by Lemma 2.5, Assumption 2.4 implies the hardness of LWE(m,Rn1 , {±1}n, DNσ ,mod1). We will reduce this variant of LWE to the decision/testing version of Massart halfspaces (Definition 2.6). Our reduction will employ multiple underlying parameters, which are required to satisfy a set of conditions. For convenience, we list these conditions below. Condition 3.1. 
Let n, m, m′ ∈ N, t, ϵ, σ ∈ R+, δ ∈ (0, 1) satisfy: (i) t/ϵ is a sufficiently large even integer, (ii) σ ≤ √n, (iii) 1/(t√n) ≥ √(c log(n/δ)), where c is a sufficiently large universal constant, (iv) (c′ϵ/(c′′tσ))^2 ≥ log(m′/δ), where c′ > 0 is a sufficiently small universal constant and c′′ > 0 is a sufficiently large universal constant.
The main theorem of this work is stated below.
Theorem 3.2. Let n, m, m′ ∈ N, t, ϵ, σ ∈ R+, ϵ′, δ ∈ (0, 1) satisfy Condition 3.1 and η < 1/2. Moreover, assume that m′ = c(ϵ/t)m, where c > 0 is a sufficiently small universal constant and m(ϵ/t)^2 is sufficiently large, and N = (n + 1)^d, where d/(t/ϵ) is sufficiently large. Suppose that there is no T + poly(m, N, log(1/δ))-time algorithm for solving LWE(m, Rn1, {±1}n, DNσ, mod1) with ϵ′ − O(δ) advantage. Then there is no T-time algorithm for solving Massart(m′, N, η, OPT) with 2ϵ′ advantage, where OPT = exp(−Ω(t^4/ϵ^2)).
Note that Theorem 3.2, combined with Lemma 2.5, can be easily used to prove Theorem 1.2 (e.g., by plugging in t = n^{−0.5−Θ(ζ)}, ϵ = Θ(n^{−1.5}) in the above statement); see Appendix D. As such, we devote the remainder of the body of this paper to giving an overview of the proof of Theorem 3.2.
High-level Overview The starting point of our computational hardness reduction is the family of SQ-hard instances obtained in [DK22]. At a high level, these instances are constructed using mixtures of “hidden direction” discrete Gaussian distributions, i.e., distributions that are discrete Gaussians in a hidden direction and continuous Gaussians on the orthogonal directions. In Section 3.1, we note that by using an appropriate rejection sampling procedure on the LWE samples (drawn from the alternative hypothesis), we obtain a distribution very similar to the “hidden direction discrete Gaussian”. A crucial difference in our setting is the existence of a small amount of additional “noise”. A natural attempt is to replace the discrete Gaussians in [DK22] with the noisy ones obtained from our rejection sampling procedure. This produces problems similar to the hard instances from [DK22]. Unfortunately, the extra noise in our construction means that the naive version of this construction will not work; even with small amounts of noise, the resulting distributions will not satisfy the assumptions of a PTF with Massart noise. In Section 3.2, we elaborate on this issue and the modifications we need to make to our construction in order to overcome it. In Section 3.3, we provide the complete construction of our Massart PTF hard instance.
Overview of the [DK22] SQ-hard Construction [DK22] showed SQ-hardness for the following hypothesis testing version of the problem (which implies hardness for the learning problem): For an input distribution D on Rn × {±1}, distinguish between the cases where D is a specific distribution Dnull in which x and y are independent or where D belongs to a class of alternative hypothesis distributions Dalternative. In particular, for D ∈ Dalternative, y will be given by a low-degree PTF in x with a small amount of Massart noise. As we will be trying to reproduce it, it is important for us to understand this alternative hypothesis distribution. Each distribution in Dalternative is parameterized by a hidden direction s ∈ Sn−1. We will denote the corresponding distribution by Ds. Ds is constructed so that x⊥s ∼ DNRn−1,1 is independent of x^s and y. This means that we can specify Ds by describing the simpler distribution of (x^s, y) ∈ R × {±1}. For (x^s, y), we have that y = +1 with probability 1 − η.
The distributions of x^s conditioned on y = ±1 are defined to be mixtures of discrete Gaussians as follows:

D_{x^s|(y=+1)} = (1/ϵ) ∫_0^ϵ DN_{u+(t+u)Z, 1} du  and  D_{x^s|(y=−1)} = (1/ϵ) ∫_{t/2}^{t/2+ϵ} DN_{u+(t+u−t/2)Z, 1} du .    (1)

As we noted, both x^s | (y = +1) and x^s | (y = −1) are mixtures of discrete Gaussians. Combining this with the fact that x⊥s ∼ N(0, In−1), this indicates that x | (y = +1) and x | (y = −1) are mixtures of “hidden direction discrete Gaussians” — with different spacing and offset for their support on the hidden direction. These conditional distributions were carefully selected to ensure that y is a Massart PTF of x with small error. To see why this is, notice that the support of x^s | (y = +1) is ⋃_{i∈Z} [it, it + (i+1)ϵ], while the support of x^s | (y = −1) is ⋃_{i∈Z} [it + t/2, it + t/2 + (i+1)ϵ]; both supports are unions of intervals. Consider the implications of this for three different ranges of x^s:
1. For x^s ∈ [−t^2/(2ϵ), t^2/(2ϵ)], the intervals have lengths in [0, t/2]; thus, the +1 intervals and the −1 intervals do not overlap at all.
2. For x^s ∈ [−t^2/ϵ, −t^2/(2ϵ)) ∪ (t^2/(2ϵ), t^2/ϵ], the intervals have lengths in [t/2, t]; thus, the +1 intervals and the −1 intervals overlap, so that their union covers the space. We note that in this case there are gaps between the +1 intervals; specifically, there are at most O(t/ϵ) such gaps.
3. For x^s ∈ (−∞, −t^2/ϵ) ∪ (t^2/ϵ, ∞), the intervals have lengths in [t, ∞), so the +1 intervals cover the space by themselves.
Consider the degree-O(t/ϵ) PTF sign(p(x)) such that sign(p(x)) = +1 iff x^s ∈ ⋃_{i∈Z} [it, it + (i+1)ϵ]. In particular, sign(p(x)) = 1 for x in the support of the conditional distribution on y = 1. Note that the PTF sign(p(x)) has zero error in the first case; thus, its total 0-1 error is at most exp(−Ω((t^2/ϵ)^2)). Moreover, since the probability of y = 1 is substantially larger than the probability of y = −1, it is not hard to see that for any x with sign(p(x)) = 1, Pr[y = 1 | x = x] > 1 − O(η). This implies that y is given by sign(p(x)) with Massart noise O(η).
3.1 Basic Rejection Sampling Procedure
In this subsection, we show that by performing rejection sampling on LWE samples, one can obtain a distribution similar to the “hidden direction discrete Gaussian”. For the sake of intuition, we start with the following simple lemma. The lemma essentially states that doing rejection sampling on LWE samples gives a distribution with the following properties: On the hidden direction s, the distribution is pointwise close to the convolutional sum of a discrete Gaussian and a continuous Gaussian noise. Moreover, on all the other directions ⊥ s, the distribution is nearly independent of its value on s, in the sense that conditioning on any value on s, the distribution on ⊥ s stays pointwise close to a Gaussian. Note that this distribution closely resembles the “hidden direction discrete Gaussian” in [DK22].
Lemma 3.3. Let (x, y) be a sample of LWE(m, Rn1, {±1}n, DNσ, mod1) from the alternative hypothesis case, let y′ be any constant in [0, 1), and let x′ ∼ (1/σscale) ◦ DN_{x+Zn, σscale} | (y = y′). Then we have the following: (i) For x′^s, we have that for any u ∈ R it holds that P_{x′^s}(u) = (1 ± O(δ)) P_{D′ ⋆ DN_{σnoise}}(u), where D′ = DN_{T(y′+Z), σsignal}, T = S_R/(n^{1/2} σscale), σsignal = √S_R, σnoise = √(1 − S_R), and S_R = σ^2_scale/(σ^2_scale + σ^2/n); (ii) x′⊥s is “nearly independent” of x′^s, namely for any l ∈ R and u ∈ Rn−1 we have that P_{x′⊥s | x′^s = l}(u) = (1 ± O(δ)) P_{DN_{Rn−1, 1}}(u).
Lemma 3.3 is a special case of Lemma 3.5, which is one of the main lemmas required for our proof.
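For intuition only, the following numpy sketch mimics the basic rejection step behind Lemma 3.3 (not the full Algorithm 1 used later): it draws LWE-style samples until by is approximately congruent to a modulo b, and then outputs a standard Gaussian conditioned on x′ ≡ bx (mod b) coordinate-wise. The tolerance tol, the truncation k_range, and all names are our assumptions; the measure-zero acceptance event is replaced by a small window.

```python
import numpy as np

rng = np.random.default_rng(0)

def lwe_sample(s, sigma):
    # One LWE-style sample: x uniform on [0,1)^n, y = <x, s> + noise (mod 1).
    x = rng.random(len(s))
    y = (x @ s + rng.normal(0.0, sigma)) % 1.0
    return x, y

def conditioned_gaussian(offsets, b, k_range=50):
    # Sample each coordinate from a unit Gaussian restricted to offsets + b*Z
    # (a truncated discrete Gaussian; k_range controls the truncation).
    ks = np.arange(-k_range, k_range + 1)
    out = np.empty_like(offsets)
    for i, o in enumerate(offsets):
        support = o + b * ks
        w = np.exp(-np.pi * support ** 2)
        out[i] = rng.choice(support, p=w / w.sum())
    return out

def rejection_sample(s, sigma, b, a, tol=1e-2, max_tries=100000):
    # Keep drawing samples until b*y is (approximately) congruent to a mod b,
    # then return x' drawn as a Gaussian conditioned on x' = b*x (mod b) coordinate-wise.
    for _ in range(max_tries):
        x, y = lwe_sample(s, sigma)
        if abs((b * y - a + b / 2) % b - b / 2) < tol:   # b*y ≈ a (mod b)
            return conditioned_gaussian(b * x, b)
    raise RuntimeError("no accepted sample")
```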
We note that the distribution of x′ obtained from the above rejection sampling is very similar to the “hidden direction discrete Gaussian” used in [DK22]. The key differences are as follows: (i) on the hidden direction, x′s is close to a discrete Gaussian plus extra Gaussian noise (instead of simply being a discrete Gaussian), (ii) x′⊥s and x′s are not perfectly independent. More importantly, by taking different values for y′ and σscale, we can obtain distributions with the same hidden direction, but their discrete Gaussian on the hidden direction has different spacing (T ) and offset (y′). To obtain a computational hardness reduction, our goal will be to simulate the instances from [DK22] by replacing the hidden direction discrete Gaussians with the noisy versions that we obtain from this rejection sampling. We next discuss this procedure and see why a naive implementation of it does not produce a PTF with Massart noise. 3.2 Intuition for the Hard Instance The natural thing to try is to simulate the conditional distributions from [DK22] by replacing the hidden direction discrete Gaussian terms in (1) with similar distributions obtained from rejection sampling. In particular, Lemma 3.3 says that we can obtain a distribution which is close to this hidden direction Gaussian plus a small amount of Gaussian noise. Unfortunately, this extra noise will cause problems for our construction. Recall that the support of xs | (y = +1) was ⋃ i∈Z [it, it+ (i+ 1)ϵ], and the support of xs | (y = −1) was ⋃ i∈Z [it+ t/2, it+ t/2 + (i+ 1)ϵ] for [DK22]. With the extra noise, there is a decaying density tail in both sides of each [it, it + (i + 1)ϵ] interval in the support of xs | (y = +1). The same holds for each interval in the support of xs | (y = −1). Recalling the three cases of these intervals discussed earlier, this leads to the following issue. In the second case, the intervals have length within [t/2, t]; thus, the intervals [it, it+ (i+ 1)ϵ] and [it+ t/2, it+ t/2 + (i+ 1)ϵ] overlap, i.e., it + (i + 1)ϵ ≥ it + t/2. On the right side of [it, it + (i + 1)ϵ], in the support of xs | (y = −1), there is a small region of values for u, where Pr[y′ = +1 | xs = u] = Pr[y′ = −1 | xs = u]. This causes the labels y = +1 and y = −1 to be equally likely over that small region, violating the Massart condition. (We note that for the first case, there is also this kind of small region that Pr[y′ = +1 | xs = u] = Pr[y′ = −1 | xs = u] caused by the noise tail. However, the probability density of this region is negligibly small, as we will later see in Lemma 3.9.) We can address this by carving out empty spaces in the [it+ t/2, it+ t/2 + (i+ 1)ϵ] intervals for xs | (y = −1), so that these decaying parts can fit into. Since this only needs to be done for intervals of Case 2, at most O(t/ϵ) many such slots are needed. It should be noted that no finite slot will totally prevent this from occurring. However, we only need the slot to be wide enough so that the decay of the error implies that there is negligible mass in the overlap (which can be treated as an error). We also need to discuss another technical detail. In the last section, we defined the rejection sampling process as taking (1/σscale) ◦ DNx+Zn,σscale | (y = y ′), where we can control the offset by y′ and spacing by σscale (Lemma 3.3). This distribution is effectively a noisy version of a discrete Gaussian. Therefore, we can produce a noisy version of the hard instances of [DK22] by taking a mixture of these noisy discrete Gaussians. 
Unfortunately the noise rate of one of these instances will be σnoise. This quantity depends on the spacing T of the discrete Gaussian, which varies across the mixture we would like to take. This inconsistent noise rate is inconvenient for our analysis. However, we can fix the issue by adding extra noise artificially to each of the discrete Gaussians in our mixture, so that they will all have a uniform noise rate σnoise; see Algorithm 1 and Lemma 3.5. The last bit of technical detail is that instead of doing the rejection for y = y′, which has 0 acceptance probability, we will only reject if y is not corresponding to any discrete Gaussian we need. Then we do another rejection to make sure that the magnitude of discrete Gaussians in the mixture is correct. In the next subsection, we introduce the complete rejection sampling method. 3.3 The Full Hard Instance Construction We first introduce the complete rejection algorithm, and then explain how the hard instance is produced using it. Below we provide proof overviews; omitted proofs can be found in Appendix C. 3.3.1 The Complete Rejection Algorithm The rejection sampling algorithm is the following. The sampling process produces the noisy variant of the distribution which, for some carefully selected set B ⊆ [0, 1], has PDF function 1 λ(B) ∫ B DNk+(t+k−ψ)Z,1dk in the hidden direction, as we will see in Lemma 3.5. Algorithm 1 Rejection Sampling Algorithm Inputs: A sample (x, y) ∈ Rn1 × R1 and the input parameters are t, ϵ, ψ ∈ R>0, where ψ + ϵ ≤ t, B ⊆ [ψ,ψ + ϵ], δ ∈ (0, 1). In addition, the parameters satisfy items (i)-(iii) of Condition 3.1. Output: REJECT or a sample x′ ∈ Rn. 1. Reject unless there is a k ∈ B such that y = kt+k−ψ . 2. Furthermore, reject with probability 1− t 2 (t+k−ψ)2 . 3. Let SR = 1 − 4(t + ϵ)2σ2, σscale = SR(t+k−ψ)√n and σadd = √ (1−SR)σ2scale−SR(σ/ √ n)2 SR . Then, sample independent noise xadd ∼ DNRn,σadd and output x ′ ∼ (1/σscale) ◦DNx+xadd+Zn,σscale . Notice that the parameter SR does not depend on y, whereas σscale, σadd do depend on y. For convenience, let us use the following notation for the output distributions. Definition 3.4 (Output Distribution of Rejection Sampling). Let Dalternativet,ϵ,ψ,B,δ be the distributions of x′ produced by Algorithm 1 (conditioned that the algorithm accepts) given that (x, y) are sampled as follows: let x ∼ U(Rn1 ), z ∼ DNσ , and then let y = mod1(⟨x, s⟩+ z), where s ∈ {±1}n is the secret. Furthermore, let Dnullt,ϵ,ψ,B,δ be a similar distribution, but when x ∼ U(Rn1 ), y ∼ U(R1) are independent. Note that Dalternativet,ϵ,ψ,B,δ depends on s, but we do not explicitly denote this in our notation. Alternative Hypothesis Analysis The main properties of Dalternativet,ϵ,ψ,B,δ are summarized in the following lemma. Essentially, the lemma states that for this distribution Dalternativet,ϵ,ψ,B,δ , the marginal distribution on the hidden direction s is pointwise close to the convolution sum of D′ and a Gaussian noise, where D′ is a linear combination of discrete Gaussians. Moreover, on all the other directions ⊥ s, the distribution is nearly independent of its value on s, in the sense that conditioning on any value on s, the distribution on ⊥ s always stays pointwise close to a Gaussian. Lemma 3.5. Let x′ ∼ Dalternativet,ϵ,ψ,B,δ . Then we have the following: (i) For x′s, we have that for any u ∈ R, Px′s(u) = (1 ± O(δ))PD′⋆DNσnoise (u) , where D ′ = 1λ(B) ∫ B DNk+(t+k−ψ)Z,σsignaldk , σsignal = √ SR, and σnoise = √ 1− SR = 2(t + ϵ)σ. 
(SR is defined in Algorithm 1), (ii) x′⊥s is “nearly independent” of x′s; namely, for any l ∈ R and u ∈ Rn−1, we have that Px′⊥s|x′s=l(u) = (1±O(δ))PDN Rn−1,1 (u) . Null Hypothesis Analysis For Dnullt,ϵ,ψ,B,δ , we can show that it is pointwise close to DNRn,1: Lemma 3.6. For any u ∈ Rn, we have that PDnullt,ϵ,ψ,B,δ(u) = (1±O(δ))PDNRn,1(u) . 3.3.2 The Reduction Algorithm With the rejection sampling algorithm (Algorithm 1) at our disposal, we can now give the full construction of the hard instance. We use Dt,ϵ,ψ+,B+,δ for x | y = +1, Dt,ϵ,ψ−,B−,δ for x | y = −1 (with a carefully chosen pair of (B+, ψ+) and (B−, ψ−), as we discussed in Section 3.2), and take a proper marginal distribution of y to build a joint distribution of (x, y). We introduce a reduction algorithm that, given samples from our LWE problem (either from the null or the alternative hypothesis), produces i.i.d. samples (x, y) from a joint distribution with the following properties: 1. If the input LWE problem is the null hypothesis, then x | y = +1 and x | y = −1 are close in total variation distance. Therefore, no hypothesis for predicting y in terms of x can do much better than the best constant hypothesis. 2. If the input LWE problem is the alternative hypothesis, then the joint distribution of (x, y) we build is close to a distribution D that satisfies O(η) Massart condition with respect to a degree-O(t/ϵ) PTF, and there is a degree-O(t/ϵ) PTF with small error on D. We formalize the idea from Section 3.2 here. For x | y = +1, we will use ψ+ def = 0 and B+ def = [0, ϵ]. For x | y = −1, we take ψ− def = t/2, which is also the same as [DK22]; but instead of taking B− def = [t/2, t/2 + ϵ], we will need to carve out the slots on B−. First, we define the mapping g : R− [−1.5t, 0.5t] 7→ [0.5t, t], as follows: for i ∈ Z and b ∈ Rt, we have that g(it+ t/2 + b) def = { b i+1 + t/2 if i ≥ 0; b−t i+2 + t/2 if i < 0. This function maps a location it+ t/2 + b to the corresponding place we need to carve out on B−, which is defined in Algorithm 2. These intervals are chosen so that the decaying density part of +1 can fit in, as we discussed in Section 3.2. Now we introduce the algorithm that reduces LWE to learning Massart PTFs. We similarly define the output distributions of the algorithms in the two cases as follows: Definition 3.7. Let DalternativePTF be mixture of Dalternativet,ϵ,ψ+,B+,δ and D alternative t,ϵ,ψ−,B−,δ with +1 and −1 labels and weights 1−η and η respectively. Similarly, letDnullPTF be mixture ofDnullt,ϵ,ψ+,B+,δ andD null t,ϵ,ψ−,B−,δ with +1 and −1 labels and weights 1− η and η respectively. The following observation is immediate from the algorithm. Observation 3.8. In the alternative (resp. null) hypothesis case, the output distribution of Algorithm 2, conditioned on not failing, is the same as m′ i.i.d. samples drawn from DalternativePTF (resp. D null PTF). Alternative Hypothesis Analysis We prove that there exists a degree-O(t/ϵ) PTF such that DalternativePTF is close to (in total variation distance) satisfying the O(η) Massart noise condition with respect to this PTF, and this PTF has small error with respect to DalternativePTF . Lemma 3.9. DalternativePTF is O(δ/m′) close in total variation distance to a distribution Dtruncated such that there is a degree-O(t/ϵ) PTF sign(p(x)) that: (i) E(x,y)∼Dtruncated [sign(p(x)) ̸= y] ≤ exp(−Ω(t4/ϵ2)), (ii) Dtruncated satisfies the O(η) Massart noise condition with respect to sign(p(x)). 
Null Hypothesis Analysis The reader is referred to Lemma C.8 in Appendix C for the null hypothesis analysis. Algorithm 2 Reducing LWE to Learning PTFs with Massart Noise Inputs: m samples from an instance of LWE(m,Rn1 , {±1}n,Nσ,mod1). The input parameters are m′ ∈ N, t, ϵ ∈ R>0, δ ∈ (0, 1), and η > 0 a sufficiently small value. In addition, the parameters satisfy Condition 3.1. Output: m′ many samples in Rn × {±1} or FAIL. 1. We take ψ+ = 0, B+ = [0, ϵ], ψ− = t/2 and B− def = [t/2, t/2 + ϵ]− t ϵ−1⋃ i= t2ϵ−1 g([it− 2c′ϵ, it])− t ϵ−1⋃ i= t2ϵ−1 g([it+ (i+ 1)ϵ, it+ (i+ 1)ϵ+ 2c′ϵ]) − − t2ϵ−1⋃ i=− tϵ−1 g([it+ (i+ 1)ϵ− 2c′ϵ, it+ (i+ 1)ϵ])− − t2ϵ−1⋃ i=− tϵ−1 g([it, it+ 2c′ϵ]) . 2. Repeat the following m′ times. If at any point the algorithm attempts to use more than m LWE samples from the input, then output FAIL. (a) With probability 1 − η, repeat the following until Algorithm 1 accepts and output x′: run Algorithm 1 with the next unused LWE sample from the input and parameters t, ϵ, ψ = ψ+, B = B+, δ. Add (x′,+1) to the output samples. (b) With probability η, repeat the following until Algorithm 1 accepts and output x′: run Algorithm 1 with the next unused LWE sample from the input and parameters t, ϵ, ψ = ψ−, B = B−, δ. Add (x′,−1) to the output samples. Putting Everything Together Having reduced LWE to learning Massart PTFs, we can apply a Veronese mapping on the samples; this PTF becomes an LTF on the Veronese mapping. Since we use degree-O(t/ϵ) Veronese mapping, the dimension for the Massart LTF problem is N = (n+ 1)O(t/ϵ). 4 Discussion Our result rules out the existence of polynomial time algorithms achieving error smaller than Ω(η), where η is the upper bound on the noise rate, even of the optimal error is very small, assuming the subexponential time hardness of LWE. A technical open question is whether the constant factor in the Ω(η)-term of our lower bound can be improved to the value C = 1; this would match known algorithms exactly. (As mentioned in the introduction, such a sharp lower bound has been recently established in the SQ model [NT22], improving on [DK22].) It is also worth noting that our reduction rules out polynomial-time algorithms, but does not rule out, e.g., subexponential or even quasipolynomial time algorithms with improved error guarantees. We believe that obtaining stronger hardness for these problems would require substantially new ideas, as our runtime lower bounds are essentially the same as the best time lower bounds for learning in the (much stronger) agnostic noise model or in restricted models of computation (like SQ). This seems related to the requirement that our bounds require subexponential hardness of LWE in our assumption. As the strongest possible assumptions only allow us to prove quasi-polynomial lower bounds, any substantially weaker assumption will likely fail to prove super-polynomial ones. Acknowledgments Ilias Diakonikolas was supported by NSF Medium Award CCF-2107079, NSF Award CCF-1652862 (CAREER), a Sloan Research Fellowship, and a DARPA Learning with Less Labels (LwLL) grant. Daniel M. Kane was supported by NSF Medium Award CCF-2107547, NSF Award CCF-1553288 (CAREER), a Sloan Research Fellowship, and a grant from CasperLabs. Lisheng Ren was supported by NSF Award CCF-1652862 (CAREER) and a DARPA Learning with Less Labels (LwLL) grant.
1. What is the focus of the paper regarding learning halfspaces with Massart noise?
2. What are the strengths of the proposed approach, particularly in its hardness reduction?
3. Do you have any questions about the paper's assumptions or limitations?
4. How does the reviewer assess the significance and optimality of the presented result?
5. Can the reviewer provide further insights into the relevance and impact of the paper's findings in the context of learning theory?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This paper establishes the cryptographic hardness of learning halfspaces with Massart noise by reducing the Learning with Errors (LWE) problem to it; LWE is widely believed to be subexponential-time hard. This improves on the previous statistical-query hardness result of [DK21], which only holds for the restricted class of SQ algorithms. The key technique is to construct hard instances for Massart halfspaces and then use rejection sampling to produce them from LWE samples. As a result, the hardness of the LWE problem implies the hardness of learning LTFs with Massart noise.

Strengths And Weaknesses
The result shows that no polynomial-time algorithm can achieve o(η) error even when the optimal rate OPT is very small, implying the optimality of the existing Massart halfspace algorithms ([DGT19, CKMY20, DKT21]). The established lower bound helps us understand the hardness of an interesting and important problem, learning halfspaces with Massart noise, a noise type considered to lie between the random classification noise model and the agnostic model; this makes the result important progress in learning theory. No concerns about clarity.

Questions
Can the authors address how essential the subexponential hardness of LWE is, and to what extent it is believed to hold?

Limitations
There are no negative societal impact concerns for this paper.
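To make the quantities η and OPT in the review above concrete, the following hypothetical Python sketch generates labeled examples from a halfspace under Massart noise (Definition 1.1 of the paper): each label is flipped independently with an x-dependent probability η(x) ≤ η, so the error of the true halfspace (OPT) equals E[η(x)] and can be far below η. The particular choice of η(x), flipping only near the decision boundary, is an illustrative assumption, not the paper's construction.

```python
# Hypothetical illustration of the Massart noise model: labels of a true halfspace
# sign(<w, x>) are flipped with an x-dependent probability eta(x) bounded by eta.

import numpy as np

def massart_sample(m, w, eta, rng):
    n = len(w)
    x = rng.standard_normal((m, n))
    clean = np.sign(x @ w)
    # an arbitrary x-dependent flip rate bounded by eta: noisy only very close to
    # the boundary, so the average flip rate (which equals OPT) is far below eta
    margin = np.abs(x @ w) / np.linalg.norm(w)
    eta_x = np.where(margin < 0.01, eta, 0.0)
    flips = rng.random(m) < eta_x
    y = np.where(flips, -clean, clean)
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w = np.array([1.0, -2.0, 0.5])
    eta = 1 / 3
    x, y = massart_sample(100_000, w, eta, rng)
    opt = np.mean(np.sign(x @ w) != y)   # empirical error of the true halfspace
    print(f"noise bound eta = {eta:.3f}, empirical OPT = {opt:.4f}")
```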
NIPS
1. What is the focus of the paper regarding cryptographic hardness results for learning halfspaces?
2. What are the strengths of the proposed approach, particularly in its technical work and mastery of ideas?
3. What are the weaknesses of the paper, especially regarding its assumptions and novelty compared to prior works?
4. Do you have any questions regarding the overall proof, such as the need for subexponential hardness of LWE?
5. Are there any limitations or potential negative societal impacts of this work that the authors should consider?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This paper proves a new cryptographic hardness result for the fundamental problem of learning halfspaces with Massart noise in the distribution-free setting. Specifically, the authors assume subexponential hardness of the Learning with Errors (LWE) problem and show the following: given a distribution consistent with a halfspace up to η-bounded Massart noise (think of η = 1/3), and for which the true OPT is almost inverse-polynomially small, no polynomial-time algorithm can achieve 0-1 loss better than Ω(η). Technically, they show that even a natural decision version of this learning problem is hard. The approach taken is a careful reduction from continuous LWE (itself as hard as LWE) to the problem of learning polynomial threshold functions (PTFs) with Massart noise; by the natural monomial feature map, PTFs are just LTFs over a higher-dimensional space. The reduction builds closely on prior work that showed a superpolynomial SQ lower bound for the same problem. At a high level, the final hard Massart distribution one wishes to generate is roughly similar in both works: one wants a distribution of (x′, y′) that is consistent with a Massart PTF, and such that the two conditional distributions x′ | y′ = +1 and x′ | y′ = −1 are "pancake distributions", i.e., Gaussian in all directions except one, in which the distribution is instead a mixture of discrete Gaussians that matches many moments with a standard Gaussian. One does this using a very careful kind of rejection sampling; the technical work required is delicate and nontrivial. Overall, this approach falls into a major line of work on lower bounds for non-Gaussian component analysis and other problems using reductions arising from it, with one of the key foundational works being DKS17.

Strengths And Weaknesses
Strengths: This is an important hardness result for a fundamental problem in learning theory, perhaps one of the most basic ones in robust PAC learning. The SQ lower bound was already quite convincing, but a cryptographic lower bound has the benefit of not being restricted to a particular model of computation. And while the approach in this paper closely follows the SQ proof, the additional technical work required is still considerable and showcases great mastery of the ideas and techniques involved. The paper is very well written and clear; the figures in the supplement are particularly helpful. Considering the proofs are quite technical, I do not claim to have verified them in complete detail, but they seem correct to me.
Weaknesses: One small weakness of this work is that it assumes subexponential hardness of LWE. This is still widely believed, but it is natural to wonder whether a weaker assumption would suffice. The authors do point out how this assumption is consistent with the known SQ lower bounds, though. In terms of novelty, it definitely borrows heavily from the SQ lower bound of DK21, but again it does require significant extra work and ideas. Overall, I think this is a very good paper that makes a significant contribution to the area.

Questions
It would be helpful to understand why, in the overall proof, we end up needing subexponential hardness of LWE. That is, why must the parameters in Appendix D be set that way? What would fail if we tried to make do with, say, quasipolynomial hardness? My guess is that if we tried to set the PTF degree d = O(t/ϵ) to be, say, logarithmic in n, then OPT would be too large (something like 1/polylog(N), maybe?), but I wonder if it would still at least be nontrivial.
The reduction itself seems to only take poly(m, N) time, so that can't be the bottleneck. Some remarks on this would be useful right in the introduction. (Note that the comparison with known SQ bounds is a slightly different matter; the question is why the proof in this paper requires the strong assumption.) A small typo in the statement of Thm 1.2: I think it should be $\mathrm{OPT} \le 2^{-\log^{1-\zeta} N}$.

Limitations
The authors generally do a good job contextualizing their result and its limitations. A few further remarks on how the proof crucially uses subexponential hardness of LWE would not be out of place. I am not aware of any potential negative societal impact of this work.
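The "pancake" structure the review describes (Gaussian in every direction except a hidden one, where the mass sits on a shifted grid) can be sketched as follows. This is a hypothetical editorial illustration, not the authors' construction or code: the rounding-based sampler only crudely approximates a true discrete Gaussian, which is adequate here only for small spacing T.

```python
# Hypothetical sketch of a "pancake" distribution: an (approximate) discrete Gaussian
# with spacing T and offset a along a hidden unit direction s, and an independent
# standard Gaussian on the orthogonal complement.

import numpy as np

def sample_pancake(m, s, T, a, rng):
    s = s / np.linalg.norm(s)
    n = len(s)
    # crude discrete Gaussian on a + T*Z: sample a Gaussian and snap to the grid
    coord = T * np.round((rng.standard_normal(m) - a) / T) + a
    orth = rng.standard_normal((m, n))
    orth -= np.outer(orth @ s, s)            # remove the component along s
    return np.outer(coord, s) + orth

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, T, a = 4, 0.2, 0.05
    s = rng.standard_normal(n)
    x = sample_pancake(10_000, s, T, a, rng)
    proj = x @ (s / np.linalg.norm(s))
    # the projection onto s is (up to float error) supported on the grid a + T*Z,
    # while every orthogonal direction looks like a standard Gaussian
    residues = (proj - a) / T
    print("max distance from the grid (should be ~0):",
          np.abs(residues - np.round(residues)).max())
```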
NIPS
Title Cryptographic Hardness of Learning Halfspaces with Massart Noise Abstract We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples (x, y) ∈ R × {±1}, where the distribution of x is arbitrary and the label y is a Massart corruption of f(x), for an unknown halfspace f : R → {±1}, with flipping probability η(x) ≤ η < 1/2. The goal of the learner is to compute a hypothesis with small 0-1 error. Our main result is the first computational hardness result for this learning problem. Specifically, assuming the (widely believed) subexponential-time hardness of the Learning with Errors (LWE) problem, we show that no polynomialtime Massart halfspace learner can achieve error better than Ω(η), even if the optimal 0-1 error is small, namely OPT = 2− log (N) for any universal constant c ∈ (0, 1). Prior work had provided qualitatively similar evidence of hardness in the Statistical Query model. Our computational hardness result essentially resolves the polynomial PAC learnability of Massart halfspaces, by showing that known efficient learning algorithms for the problem are nearly best possible. N/A c(N) for any universal constant c ∈ (0, 1). Prior work had provided qualitatively similar evidence of hardness in the Statistical Query model. Our computational hardness result essentially resolves the polynomial PAC learnability of Massart halfspaces, by showing that known efficient learning algorithms for the problem are nearly best possible. 1 Introduction A halfspace or linear threshold function (LTF) is any function hw,t : RN → {±1} of the form hw,t(x) := sign(⟨w,x⟩ − t), where the vector w ∈ RN is called the weight vector, t ∈ R is called the threshold, and sign : R → {±1} is defined by sign(t) = 1 if t ≥ 0 and sign(t) = −1 otherwise. Halfspaces are a central concept class in machine learning, extensively investigated since the 1950s [Ros58, Nov62, MP68]. Here we study the computational complexity of learning halfspaces in Valiant’s (distribution independent) PAC model [Val84], when the labels have been corrupted by Massart noise [MN06]. We define the Massart noise model below. Definition 1.1 (Massart Noise). We say that a joint distribution D of labeled examples (x, y), supported on RN × {±1}, satisfies the Massart noise condition with noise parameter η ∈ [0, 1/2) with respect to a concept class C of Boolean-valued functions on RN if there is a concept c ∈ C such that for all x0 ∈ RN we have that η(x0) def = Pr(x,y)∼D[c(x) ̸= y | x = x0] ≤ η. The Massart PAC learning problem for the concept class C is the following: Given i.i.d. samples from a Massart distribution D, as in Definition 1.1, the goal is to output a hypothesis with small 0-1 error. In this work, we study the computational complexity of the Massart PAC learning problem, when the underlying concept class C is the class of halfspaces on RN . In its above form, the Massart noise model was defined in [MN06]. An essentially equivalent noise model had been defined in the 80s by Sloan and Rivest [Slo88, RS94, Slo96], and a very similar definition had been considered even earlier by Vapnik [Vap82]. The Massart model is a classical semi-random noise model that is more realistic than Random Classification Noise (RCN) In contrast to RCN, Massart noise allows for variations in misclassification 36th Conference on Neural Information Processing Systems (NeurIPS 2022). rates (without a priori knowledge of which inputs are more likely to be misclassified). 
Asymmetric misclassification rates arise in a number of applications, including in human annotation noise [BK09]. Consequently, learning algorithms that can tolerate Massart noise are less brittle than those that depend on the uniformity of RCN. The agnostic model [Hau92, KSS94], where the noise can be fully adversarial, is of course even more robust; unfortunately, it is computationally hard to obtain agnostic learners with any non-trivial guarantees, even for basic settings. We now return to the class of halfspaces, which is the focus of this work. We recall that PAC learning halfspaces with RCN is known to be solvable in polynomial time (to any desired accuracy) [BFKV96]. On the other hand, agnostic PAC learning of halfspaces is known to computationally hard (even for weak learning) [GR06, FGKP06, Dan16]. The computational task of PAC learning halfspaces corrupted by Massart noise is a classical problem in machine learning theory that has been posed by several authors since the 1980s [Slo88, Coh97, Blu03]. Until recently, no progress had been made on the efficient PAC learnability of Massart halfspaces. [DGT19] made the first algorithmic progress on this problem: they gave a poly(N, 1/ϵ)-time learning algorithm with error guarantee of η+ ϵ. Subsequent work made a number of refinements to this algorithmic result, including giving an efficient proper learner [CKMY20] and developing an efficient learner with strongly polynomial sample complexity [DKT21]. In a related direction, [DIK+21] gave an efficient boosting algorithm achieving error η+ ϵ for any concept class, assuming the existence of a weak learner for the class. The error bound of η can be very far from the information-theoretically optimum error of OPT, where OPT = RLTF(D) ≤ η. Indeed, known polynomial-time algorithms only guarantee error ≈ η even if OPT is very small, i.e., OPT ≪ η. This prompts the following question: Question 1.1. Is there an efficient learning algorithm for Massart halfspaces with a relative error guarantee? Specifically, if OPT ≪ η is it possible to achieve error significantly better than η? Our main result (Theorem 1.2) answers this question in the negative, assuming the subexponentialtime hardness of the classical Learning with Errors (LWE) problem (Assumption 2.4). In other words, we essentially resolve the efficient PAC learnability of Massart halfspaces, under a widely-believed cryptographic assumption. 1.1 Our Results Before we state our main result, we recall the setup of the Learning with Errors (LWE) problem. In the LWE problem, we are given samples (x1, y1), . . . , (xm, ym) and the goal is to distinguish between the following two cases: (i) Each xi is drawn uniformly at random (u.a.r.) from Znq , and there is a hidden secret vector s ∈ Znq such that yi = ⟨xi, s⟩+ zi, where zi ∈ Zq is discrete Gaussian noise (independent of xi); (ii) Each xi and each yi are independent and are sampled u.a.r. from Znq and Zq respectively. Formal definitions of LWE (Definition 2.3) and related distributions together with the precise computational hardness assumption (Assumption 2.4) we rely on are given in Section 2. Our main result can now be stated as follows: Theorem 1.2 (Informal Main Theorem). Assume that LWE cannot be solved in 2n 1−Ω(1) time. Then, for any constant ζ > 0, there is no polynomial-time learning algorithm for Massart halfspaces on RN that can output a hypothesis with 0-1 error smaller than Ω(η), even when OPT ≤ 2− log1−ζ N and the Massart noise parameter η is a small positive constant. 
The reader is also referred to Theorem D.1 in the Appendix for a more detailed formal statement. Theorem 1.2 is the first computational hardness result for PAC learning halfspaces (and, in fact, any non-trivial concept class) in the presence of Massart noise. Our result rules out even improper PAC learning, where the learner is allowed to output any polynomially evaluatable hypothesis. As a corollary, it follows that the algorithm given in [DGT19] is essentially the best possible, even when assuming that OPT is almost inverse polynomially small (in the dimension N ). We also remark that this latter assumption is also nearly the best possible: if OPT is o(ϵ/N), then we can just draw Ω(N/ϵ) samples and output any halfspace that agrees with these samples. We note that a line of work has established qualitatively similar hardness in the Statistical Query (SQ) model [Kea98] — a natural, yet restricted, model of computation. Specifically, [CKMY20] established a super-polynomial SQ lower bound for learning within error of OPT + o(1). Subse- quently, [DK22] gave a near-optimal super-polynomial SQ lower bound: their result rules out the existence of efficient SQ algorithms that achieve error better than Ω(η), even if OPT = 2log 1−ζ N . Building on the techniques of [DK22], more recent work [NT22] established an SQ lower bound for learning to error better than η, even if OPT = 2log 1−ζ N — matching the guarantees of known algorithms exactly. While the SQ model is quite broad, it is also restricted. That is, the aforementioned prior results do not have any implications for the class of all polynomial-time algorithms. Interestingly, as we will explain in the proceeding discussion, our computational hardness reduction is inspired by the SQ-hard instances constructed in [DK22]. 1.2 Brief Technical Overview Here we give a high-level overview of our approach. Our reduction proceeds in two steps. The first is to reduce the standard LWE problem (as described above) to a different “continuous” LWE problem more suitable for our purposes. In particular, we consider the problem where the x samples are taken uniformly from Rn/Zn, y is either taken to be an independent random element of R/Z or is taken to be ⟨x, s⟩ mod 1 plus a small amount of (continuous) Gaussian noise, where s is some unknown vector in {±1}n. This reduction follows from existing techniques [Mic18a, GVV22]. The second step — which is the main technical contribution of our work — is reducing this continuous LWE problem to that of learning halfspaces with Massart noise. The basic idea is to perform a rejection sampling procedure that allows us to take LWE samples (x, y) and produce some new samples (x̃, ỹ). We will do this so that if y is independent of x, then ỹ is (nearly) independent of x̃; but if y = ⟨x, s⟩ + noise, then ỹ is a halfspace of x̃ with a small amount of Massart noise. An algorithm capable of learning halfspaces with Massart noise (with appropriate parameters) would be able to distinguish these cases by learning a hypothesis h and then looking at the probability that h(x̃) ̸= ỹ. In the case where ỹ was a halfspace with noise, this would necessarily be small; but in the case where x̃ and ỹ were independent, it could not be. In order to manage this reduction, we will attempt to produce a distribution (x̃, ỹ) similar to the SQ-hard instances of Massart halfspaces constructed in [DK22]. 
These instances can best be thought of as instances of a random variable (x′, y′) in R^n × {±1}, where y′ is given by a low-degree polynomial threshold function (PTF) of x′ with a small amount of Massart noise. Then, letting x̃ be the Veronese map applied to x′, we see that any low-degree polynomial in x′ is a linear function of x̃, and so ỹ = y′ is an LTF of x̃ plus a small amount of Massart noise. As for how the distribution over (x′, y′) is constructed in [DK22], essentially the conditional distributions of x′ on y′ = 1 and on y′ = −1 are carefully chosen mixtures of discrete Gaussians in the v-direction (for some randomly chosen unit vector v), and independent standard Gaussians in the orthogonal directions. Our goal will be to find a way to perform rejection sampling on the distribution (x, y) to produce a distribution of this form. In pursuit of this, for some small real number b and some a ∈ [0, b), we let x′ be a random Gaussian subject to x′ ≡ bx (mod b) (in the coordinate-wise sense) conditioned on by ≡ a (mod b). We note that if we ignore the noise in the definition of y, this implies that ⟨x′, s⟩ ≡ ⟨bx, s⟩ ≡ b ⟨x, s⟩ ≡ by ≡ a (mod b) (recalling that s ∈ {±1}^n). In fact, it is not hard to see that the resulting distribution on x′ is close to a standard Gaussian conditioned on ⟨x′, s⟩ ≡ a (mod b). In other words, x′ is close to a discrete Gaussian with spacing b/∥s∥_2 and offset a/∥s∥_2 in the s-direction, and an independent standard Gaussian in orthogonal directions. Furthermore, this x′ can be obtained from (x, y) samples by rejection sampling: taking many samples until one is found with by ≡ a (mod b), and then returning a random x′ with x′ ≡ bx (mod b). By taking an appropriate mixture of these distributions, we can manufacture a distribution close to the hard instances in [DK22]. This intuition is explained in detail in Section 3.1; see Lemma 3.3. (We note that Lemma 3.3 is included only for the purposes of intuition; it is a simpler version of Lemma 3.5, which is one of the main lemmas used to prove our main theorem.) Unfortunately, as will be discussed in Section 3.2, applying this construction directly does not quite work. This is because the small noise in the definition of y leads to a small amount of noise in the final values of ⟨x′, s⟩. This gives us distributions that are fairly similar to the hard instances of [DK22], but leads to small regions of values for u where the following condition holds: Pr(y′ = +1 | x′ = u) = Pr(y′ = −1 | x′ = u). However, the latter condition cannot hold if y′ is a function of x′ with Massart noise. In order to fix this issue, we need to modify the construction by carving intervals out of the support of x′ conditioned on y′ = −1, in order to eliminate these mixed regions. This procedure is discussed in detail in Section 3.3.2.
1.3 Additional Related Work
There have also been several recent works showing reductions from LWE or lattice problems to other learning problems. Concurrent and independent work to ours [Tie22] showed hardness of weakly agnostically learning halfspaces, based on a worst-case lattice problem (via a reduction from “continuous” LWE). Two recent works obtained hardness for the unsupervised problem of learning mixtures of Gaussians (GMMs), assuming hardness of (variants of) the LWE problem. Specifically, [BRST21] defined a continuous version of LWE (whose hardness they established) and reduced it to the problem of learning GMMs.
More recently, [GVV22] obtained a direct reduction from LWE to a (different) continuous version of LWE, and leveraged this connection to obtain quantitatively stronger hardness for learning GMMs. It is worth noting that, for the purposes of our reduction, we require as a starting point a continuous version of LWE that differs from the one defined in [BRST21]. Specifically, we require that the distribution on x is uniform on [0, 1]^n (instead of a Gaussian, as in [BRST21]) and the secret vector is binary. The hardness of this continuous version essentially follows from [Mic18b, GVV22].
2 Preliminaries
For x, s ∈ R^n with s ̸= 0, let x^s := ⟨x, s⟩/∥s∥_2 be the length of the projection of x in the s direction, and let x^⊥s ∈ R^{n−1} be the projection of x on the orthogonal complement of s. (More precisely, letting B^⊥s ∈ R^{n×(n−1)} be a matrix whose columns form an (arbitrary) orthonormal basis for the orthogonal complement of s, we set x^⊥s := (B^⊥s)^T x.) For f, g : U → R, we write f(u) ∝ g(u) if there is c ∈ R such that f(u) = c g(u) for all u ∈ U. We use X ∼ D to denote a random variable X with distribution D. We use P_D or P_X for the corresponding probability mass function (pmf) or density function (pdf), and Pr_D or Pr_X for the measure function of the distribution. We use D_X to denote the distribution of the random variable X. For S ⊆ R^n, we will use λ(S) to denote the n-dimensional volume of S. Let U(S) denote the uniform distribution on S. For a distribution D on R^n and S ⊆ R^n, we denote by D | S the conditional distribution of X ∼ D given X ∈ S. Let D^s (resp. D^⊥s) be the distribution of x^s (resp. x^⊥s), where x ∼ D. For distributions D1, D2, we use D1 + D2 to denote the pseudo-distribution with measure function Pr_{D1+D2}(A) = Pr_{D1}(A) + Pr_{D2}(A). For a ∈ R, let aD denote the pseudo-distribution with measure function a Pr_D. On the other hand, let a ◦ D denote the distribution of aX, where X ∼ D. We use D1 ⋆ D2 to denote the convolution of distributions D1, D2. We will use LTF_N for the class of halfspaces on R^N; when N is clear from the context, we may discard it and simply write LTF. For q ∈ N, we use Z_q := {0, 1, · · · , q − 1} and R_q := [0, q). We use mod_q : R^n → R_q^n to denote the function that applies mod_q(x) on each coordinate of x. We use D^N_{R^n,σ} to denote the n-dimensional Gaussian distribution with mean 0 and covariance matrix σ^2/(2π) · I_n, and use D^N_σ as a shorthand for D^N_{R,σ}. In some cases, we will use N(0, I_n) for the standard (i.e., zero mean and identity covariance) multivariate Gaussian.
Definition 2.1 (Partially Supported Gaussian Distribution). For σ ∈ R_+ and x ∈ R^n, let ρ_σ(x) := σ^{−n} exp(−π(∥x∥_2/σ)^2). For any countable set S ⊆ R^n, we let ρ_σ(S) := Σ_{x∈S} ρ_σ(x), and let D^N_{S,σ} be the distribution supported on S with pmf P_{D^N_{S,σ}}(x) = ρ_σ(x)/ρ_σ(S).
Definition 2.2 (Discrete Gaussian). For T ∈ R_+, y ∈ R and σ ∈ R_+, we define the “T-spaced, y-offset discrete Gaussian distribution with σ scale” to be the distribution D^N_{TZ+y,σ}.
Learning with Errors (LWE) We use the following definition of LWE, which allows for flexible distributions of samples, secrets, and noises. Here m is the number of samples, n is the dimension, and q is the ring size.
Definition 2.3 (Generic LWE). Let m, n, q ∈ N, and let D_sample, D_secret, D_noise be distributions on R^n, R^n, R respectively. In the LWE(m, D_sample, D_secret, D_noise, mod_q) problem, we are given m independent samples (x, y) and want to distinguish between the following two cases: (i) Alternative hypothesis: s is drawn from D_secret.
Then, each sample is generated by taking x ∼ D_sample, z ∼ D_noise, and letting y = mod_q(⟨x, s⟩ + z); and (ii) Null hypothesis: x, y are independent and each has the same marginal distribution as above. When a distribution in LWE is uniform over some set S, we may abbreviate U(S) merely as S. Note that LWE(m, Z_q^n, Z_q^n, D^N_{Z,σ}, mod_q) corresponds to the classical LWE problem.
Computational Hardness Assumption for LWE As alluded to earlier, the assumption for our hardness result is the hardness of the (classic) LWE problem, with the parameters stated below.
Assumption 2.4 (Standard LWE Assumption (see, e.g., [LP11])). Let c > 0 be a sufficiently large constant. For any constant β ∈ (0, 1) and κ ∈ N, LWE(2^{O(n^β)}, Z_q^n, Z_q^n, D^N_{Z,σ}, mod_q) with q ≤ n^κ and σ = c√n cannot be solved in 2^{O(n^β)} time with 2^{−O(n^β)} advantage.
We recall that [Reg09, Pei09] gave a polynomial-time quantum reduction from approximating (the decision version of) the Shortest Vector Problem (GapSVP) to LWE (with similar n, q, σ parameters). Our hardness assumption is the widely believed sub-exponential hardness of LWE. We note that the fastest known algorithm for GapSVP takes 2^{O(n)} time [ALNS20]. Thus, refuting the conjecture would be a major breakthrough. A similar assumption was also used in [GVV22] to establish computational hardness of learning Gaussian mixtures. Our use of sub-exponential hardness of LWE is not a coincidence; see Section 4. As mentioned earlier, we will use a different variant of LWE, where the sample is from R_1^n, the secret is from {±1}^n, and the noise is drawn from a continuous Gaussian distribution. The hardness of this variant is stated below. The proof, which follows from [Mic18a, GVV22], is deferred to Appendix B.
Lemma 2.5. Under Assumption 2.4, for any β ∈ (0, 1) and γ ∈ R_+, there is no 2^{O(n^β)}-time algorithm to solve LWE(2^{O(n^β)}, R_1^n, {±1}^n, D^N_{O(n^{−γ})}, mod_1) with 2^{−O(n^β)} advantage.
Decisional Massart Halfspace Problem For a distribution D on labeled examples and a concept class C, we let R_C(D) := min_{h∈C} Pr_{(x,y)∼D}[h(x) ̸= y] be the error of the best classifier in C with respect to D. We will prove hardness for the following decision version of learning Massart halfspaces. This will directly imply hardness for the corresponding learning (search) problem.
Definition 2.6 (Testing Halfspaces with Massart Noise). For n, N ∈ N and η, OPT ∈ (0, 1/2), let Massart(m, N, η, OPT) denote the problem of distinguishing, given m i.i.d. samples from D on R^N × {±1}, between the following two cases: (i) Alternative hypothesis: D satisfies the Massart halfspace condition with noise parameter η and R_LTF(D) ≤ OPT; and (ii) Null hypothesis: the Bayes optimal classifier has cη error, where c > 0 is a sufficiently small universal constant.
3 Reduction from LWE to Learning Massart Halfspaces
In this section, we establish Theorem 1.2. Some intermediate technical lemmas have been deferred to Appendix C. Our starting point is the problem LWE(m, R_1^n, {±1}^n, D^N_σ, mod_1). Note that, by Lemma 2.5, Assumption 2.4 implies the hardness of LWE(m, R_1^n, {±1}^n, D^N_σ, mod_1). We will reduce this variant of LWE to the decision/testing version of Massart halfspaces (Definition 2.6). Our reduction will employ multiple underlying parameters, which are required to satisfy a set of conditions. For convenience, we list these conditions below.
Condition 3.1.
Let n, m, m′ ∈ N, t, ϵ, σ ∈ R_+, δ ∈ (0, 1) satisfy: (i) t/ϵ is a sufficiently large even integer, (ii) σ ≤ √n, (iii) 1/(t√n) ≥ √(c log(n/δ)), where c is a sufficiently large universal constant, (iv) (c′ϵ/(c′′tσ))^2 ≥ log(m′/δ), where c′ > 0 is a sufficiently small universal constant and c′′ > 0 is a sufficiently large universal constant.
The main theorem of this work is stated below.
Theorem 3.2. Let n, m, m′ ∈ N, t, ϵ, σ ∈ R_+, ϵ′, δ ∈ (0, 1) satisfy Condition 3.1, and let η < 1/2. Moreover, assume that m′ = c(ϵ/t)m, where c > 0 is a sufficiently small universal constant and m(ϵ/t)^2 is sufficiently large, and N = (n + 1)^d, where d/(t/ϵ) is sufficiently large. Suppose that there is no (T + poly(m, N, log(1/δ)))-time algorithm for solving LWE(m, R_1^n, {±1}^n, D^N_σ, mod_1) with ϵ′ − O(δ) advantage. Then there is no T-time algorithm for solving Massart(m′, N, η, OPT) with 2ϵ′ advantage, where OPT = exp(−Ω(t^4/ϵ^2)).
Note that Theorem 3.2, combined with Lemma 2.5, can be easily used to prove Theorem 1.2 (e.g., by plugging in t = n^{−0.5−Θ(ζ)} and ϵ = Θ(n^{−1.5}) in the above statement); see Appendix D. As such, we devote the remainder of the body of this paper to giving an overview of the proof of Theorem 3.2.
High-level Overview The starting point of our computational hardness reduction is the family of SQ-hard instances obtained in [DK22]. At a high level, these instances are constructed using mixtures of “hidden direction” discrete Gaussian distributions, i.e., distributions that are discrete Gaussians in a hidden direction and continuous Gaussians in the orthogonal directions. In Section 3.1, we note that by using an appropriate rejection sampling procedure on the LWE samples (drawn from the alternative hypothesis), we obtain a distribution very similar to the “hidden direction discrete Gaussian”. A crucial difference in our setting is the existence of a small amount of additional “noise”. A natural attempt is to replace the discrete Gaussians in [DK22] with the noisy ones obtained from our rejection sampling procedure. This produces testing problems similar to the hard instances from [DK22]. Unfortunately, the extra noise in our construction means that the naive version of this construction will not work; even with small amounts of noise, the resulting distributions will not satisfy the assumptions of a PTF with Massart noise. In Section 3.2, we elaborate on this issue and the modifications we need to make to our construction in order to overcome it. In Section 3.3, we provide the complete construction of our Massart PTF hard instance.
Overview of the [DK22] SQ-hard Construction [DK22] showed SQ-hardness for the following hypothesis testing version of the problem (which implies hardness for the learning problem): For an input distribution D on R^n × {±1}, distinguish between the cases where D is a specific distribution D_null in which x and y are independent, or where D belongs to a class of alternative hypothesis distributions D_alternative. In particular, for D ∈ D_alternative, y will be given by a low-degree PTF in x with a small amount of Massart noise. As we will be trying to reproduce it, it is important for us to understand this alternative hypothesis distribution. Each distribution in D_alternative is parameterized by a hidden direction s ∈ S^{n−1}. We will denote the corresponding distribution by D_s. D_s is constructed so that x^⊥s ∼ D^N_{R^{n−1},1} is independent of x^s and y. This means that we can specify D_s by describing the simpler distribution of (x^s, y) ∈ R × {±1}. For (x^s, y), we have that y = +1 with probability 1 − η.
The distributions of x^s conditioned on y = ±1 are defined to be mixtures of discrete Gaussians as follows:
D_{x^s|(y=+1)} = (1/ϵ) ∫_0^ϵ D^N_{u+(t+u)Z,1} du  and  D_{x^s|(y=−1)} = (1/ϵ) ∫_{t/2}^{t/2+ϵ} D^N_{u+(t+u−t/2)Z,1} du .  (1)
As we noted, both x^s | (y = +1) and x^s | (y = −1) are mixtures of discrete Gaussians. Combining this with the fact that x^⊥s ∼ D^N_{R^{n−1},1}, this indicates that x | (y = +1) and x | (y = −1) are mixtures of “hidden direction discrete Gaussians” — with different spacing and offset for their support on the hidden direction. These conditional distributions were carefully selected to ensure that y is a Massart PTF of x with small error. To see why this is, notice that the support of x^s | (y = +1) is ∪_{i∈Z} [it, it+(i+1)ϵ], while the support of x^s | (y = −1) is ∪_{i∈Z} [it+t/2, it+t/2+(i+1)ϵ]; both supports are unions of intervals. Consider the implications of this for three different ranges of x^s:
1. For x^s ∈ [−t^2/(2ϵ), t^2/(2ϵ)], the intervals have lengths in [0, t/2]; thus, the +1 intervals and the −1 intervals do not overlap at all.
2. For x^s ∈ [−t^2/ϵ, −t^2/(2ϵ)) ∪ (t^2/(2ϵ), t^2/ϵ], the intervals have lengths in [t/2, t]; thus, the +1 intervals and the −1 intervals overlap, so that their union covers the space. We note that in this case there are gaps between the +1 intervals; specifically, there are at most O(t/ϵ) such gaps.
3. For x^s ∈ (−∞, −t^2/ϵ) ∪ (t^2/ϵ, ∞), the intervals have lengths in [t, ∞), so the +1 intervals cover the space by themselves.
Consider the degree-O(t/ϵ) PTF sign(p(x)) such that sign(p(x)) = +1 iff x^s ∈ ∪_{i∈Z} [it, it+(i+1)ϵ]. In particular, sign(p(x)) = 1 for x in the support of the conditional distribution on y = 1. Note that the PTF sign(p(x)) has zero error in the first case; thus, its total 0-1 error is at most exp(−Ω((t^2/ϵ)^2)). Moreover, since the probability of y = 1 is substantially larger than the probability of y = −1, it is not hard to see that for any x with sign(p(x)) = 1 we have Pr[y = 1 | x = x] > 1 − O(η). This implies that y is given by sign(p(x)) with Massart noise O(η).
3.1 Basic Rejection Sampling Procedure
In this subsection, we show that by performing rejection sampling on LWE samples, one can obtain a distribution similar to the “hidden direction discrete Gaussian”. For the sake of intuition, we start with the following simple lemma. The lemma essentially states that rejection sampling on LWE samples gives a distribution with the following properties: On the hidden direction s, the distribution is pointwise close to the convolution of a discrete Gaussian and a continuous Gaussian noise. Moreover, on all the other directions ⊥ s, the distribution is nearly independent of its value on s, in the sense that, conditioning on any value on s, the distribution on ⊥ s stays pointwise close to a Gaussian. Note that this distribution closely resembles the “hidden direction discrete Gaussian” in [DK22].
Lemma 3.3. Let (x, y) be a sample of LWE(m, R_1^n, {±1}^n, D^N_σ, mod_1) from the alternative hypothesis case, let y′ be any constant in [0, 1), and let x′ ∼ (1/σ_scale) ◦ D^N_{x+Z^n, σ_scale} | (y = y′). Then we have the following: (i) For x′^s, we have that for any u ∈ R it holds that P_{x′^s}(u) = (1 ± O(δ)) P_{D′ ⋆ D^N_{σ_noise}}(u), where D′ = D^N_{T(y′+Z), σ_signal}, T = SR/(n^{1/2} σ_scale), σ_signal = √SR, σ_noise = √(1 − SR), and SR = σ_scale^2/(σ_scale^2 + σ^2/n); (ii) x′^⊥s is “nearly independent” of x′^s, namely for any l ∈ R and u ∈ R^{n−1} we have that P_{x′^⊥s|x′^s=l}(u) = (1 ± O(δ)) P_{D^N_{R^{n−1},1}}(u).
Lemma 3.3 is a special case of Lemma 3.5, which is one of the main lemmas required for our proof.
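To make the rejection sampling of Lemma 3.3 more concrete, here is a rough numerical caricature (ours, and deliberately simplified): the zero-probability conditioning on y = y′ is replaced by a small tolerance, and the Gaussian restricted to the coset x + Z^n is sampled coordinate-wise over a truncated window of integer shifts. The actual reduction (Algorithm 1 below) handles these issues properly.

```python
import numpy as np

def coset_gaussian(xi, sigma, rng, window=8):
    """Approximately sample from the Gaussian of scale sigma restricted to
    the coset xi + Z, truncating the integer shifts to |k| <= window."""
    pts = xi + np.arange(-window, window + 1)
    w = np.exp(-np.pi * (pts / sigma) ** 2)   # density rho_sigma of Definition 2.1
    return rng.choice(pts, p=w / w.sum())

def reject_sample(x, y, y_target, tol, sigma_scale, rng):
    """Caricature of the Section 3.1 step: keep (x, y) only when y is
    (approximately) the desired offset y_target, and then return
    x' ~ (1/sigma_scale) o N(x + Z^n, sigma_scale)."""
    if min(abs(y - y_target), 1.0 - abs(y - y_target)) > tol:
        return None                           # reject this LWE sample
    xp = np.array([coset_gaussian(xi, sigma_scale, rng) for xi in x])
    return xp / sigma_scale

rng = np.random.default_rng(0)
n, sigma_scale = 16, 0.05
x = rng.random(n)                             # uniform over [0, 1)^n
s = rng.choice([-1.0, 1.0], size=n)           # binary secret
y = (x @ s + rng.normal(0.0, 1e-4)) % 1.0     # alternative hypothesis sample
out = reject_sample(x, y, y_target=0.3, tol=1e-3, sigma_scale=sigma_scale, rng=rng)
```

On the hidden direction s, an accepted x′ concentrates (approximately) on a discrete Gaussian plus a small amount of Gaussian noise, which is exactly the structure described in Lemma 3.3.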
We note that the distribution of x′ obtained from the above rejection sampling is very similar to the “hidden direction discrete Gaussian” used in [DK22]. The key differences are as follows: (i) on the hidden direction, x′^s is close to a discrete Gaussian plus extra Gaussian noise (instead of simply being a discrete Gaussian), (ii) x′^⊥s and x′^s are not perfectly independent. More importantly, by taking different values for y′ and σ_scale, we can obtain distributions with the same hidden direction, but whose discrete Gaussian on the hidden direction has different spacing (T) and offset (y′). To obtain a computational hardness reduction, our goal will be to simulate the instances from [DK22] by replacing the hidden direction discrete Gaussians with the noisy versions that we obtain from this rejection sampling. We next discuss this procedure and see why a naive implementation of it does not produce a PTF with Massart noise.
3.2 Intuition for the Hard Instance
The natural thing to try is to simulate the conditional distributions from [DK22] by replacing the hidden direction discrete Gaussian terms in (1) with similar distributions obtained from rejection sampling. In particular, Lemma 3.3 says that we can obtain a distribution which is close to this hidden direction discrete Gaussian plus a small amount of Gaussian noise. Unfortunately, this extra noise will cause problems for our construction. Recall that the support of x^s | (y = +1) was ∪_{i∈Z} [it, it+(i+1)ϵ], and the support of x^s | (y = −1) was ∪_{i∈Z} [it+t/2, it+t/2+(i+1)ϵ] in [DK22]. With the extra noise, there is a decaying density tail on both sides of each interval [it, it+(i+1)ϵ] in the support of x^s | (y = +1). The same holds for each interval in the support of x^s | (y = −1). Recalling the three cases of these intervals discussed earlier, this leads to the following issue. In the second case, the intervals have length within [t/2, t]; thus, the intervals [it, it+(i+1)ϵ] and [it+t/2, it+t/2+(i+1)ϵ] overlap, i.e., it+(i+1)ϵ ≥ it+t/2. On the right side of [it, it+(i+1)ϵ], in the support of x^s | (y = −1), there is a small region of values for u where Pr[y′ = +1 | x^s = u] = Pr[y′ = −1 | x^s = u]. This causes the labels y = +1 and y = −1 to be equally likely over that small region, violating the Massart condition. (We note that in the first case there is also such a small region with Pr[y′ = +1 | x^s = u] = Pr[y′ = −1 | x^s = u], caused by the noise tail. However, the probability density of this region is negligibly small, as we will later see in Lemma 3.9.) We can address this by carving empty slots out of the intervals [it+t/2, it+t/2+(i+1)ϵ] for x^s | (y = −1), into which these decaying parts can fit. Since this only needs to be done for intervals of Case 2, at most O(t/ϵ) such slots are needed. It should be noted that no finite slot will totally prevent this from occurring. However, we only need the slot to be wide enough so that the decay of the error implies that there is negligible mass in the overlap (which can be treated as an error). We also need to discuss another technical detail. In the previous subsection, we defined the rejection sampling process as taking (1/σ_scale) ◦ D^N_{x+Z^n, σ_scale} | (y = y′), where we can control the offset via y′ and the spacing via σ_scale (Lemma 3.3). This distribution is effectively a noisy version of a discrete Gaussian. Therefore, we can produce a noisy version of the hard instances of [DK22] by taking a mixture of these noisy discrete Gaussians.
Unfortunately, the noise rate of one of these instances will be σ_noise. This quantity depends on the spacing T of the discrete Gaussian, which varies across the mixture we would like to take. This inconsistent noise rate is inconvenient for our analysis. However, we can fix the issue by adding extra noise artificially to each of the discrete Gaussians in our mixture, so that they will all have a uniform noise rate σ_noise; see Algorithm 1 and Lemma 3.5. The last technical detail is that, instead of doing the rejection for y = y′, which has 0 acceptance probability, we will only reject if y does not correspond to any discrete Gaussian we need. Then we do another rejection to make sure that the magnitudes of the discrete Gaussians in the mixture are correct. In the next subsection, we introduce the complete rejection sampling method.
3.3 The Full Hard Instance Construction
We first introduce the complete rejection algorithm, and then explain how the hard instance is produced using it. Below we provide proof overviews; omitted proofs can be found in Appendix C.
3.3.1 The Complete Rejection Algorithm
The rejection sampling algorithm is the following. The sampling process produces the noisy variant of the distribution which, for some carefully selected set B ⊆ [0, 1], has density (1/λ(B)) ∫_B D^N_{k+(t+k−ψ)Z,1} dk in the hidden direction, as we will see in Lemma 3.5.
Algorithm 1 Rejection Sampling Algorithm
Inputs: A sample (x, y) ∈ R_1^n × R_1; the input parameters are t, ϵ, ψ ∈ R_{>0}, where ψ + ϵ ≤ t, B ⊆ [ψ, ψ + ϵ], and δ ∈ (0, 1). In addition, the parameters satisfy items (i)-(iii) of Condition 3.1.
Output: REJECT or a sample x′ ∈ R^n.
1. Reject unless there is a k ∈ B such that y = k/(t + k − ψ).
2. Furthermore, reject with probability 1 − t^2/(t + k − ψ)^2.
3. Let SR = 1 − 4(t + ϵ)^2 σ^2, σ_scale = SR/((t + k − ψ)√n), and σ_add = √(((1 − SR)σ_scale^2 − SR(σ/√n)^2)/SR). Then, sample independent noise x_add ∼ D^N_{R^n, σ_add} and output x′ ∼ (1/σ_scale) ◦ D^N_{x + x_add + Z^n, σ_scale}.
Notice that the parameter SR does not depend on y, whereas σ_scale and σ_add do depend on y. For convenience, let us use the following notation for the output distributions.
Definition 3.4 (Output Distribution of Rejection Sampling). Let D^alternative_{t,ϵ,ψ,B,δ} be the distribution of x′ produced by Algorithm 1 (conditioned on the algorithm accepting) given that (x, y) are sampled as follows: let x ∼ U(R_1^n), z ∼ D^N_σ, and then let y = mod_1(⟨x, s⟩ + z), where s ∈ {±1}^n is the secret. Furthermore, let D^null_{t,ϵ,ψ,B,δ} be the analogous distribution when x ∼ U(R_1^n) and y ∼ U(R_1) are independent. Note that D^alternative_{t,ϵ,ψ,B,δ} depends on s, but we do not explicitly denote this in our notation.
Alternative Hypothesis Analysis The main properties of D^alternative_{t,ϵ,ψ,B,δ} are summarized in the following lemma. Essentially, the lemma states that for this distribution D^alternative_{t,ϵ,ψ,B,δ}, the marginal distribution on the hidden direction s is pointwise close to the convolution of D′ with Gaussian noise, where D′ is a linear combination of discrete Gaussians. Moreover, on all the other directions ⊥ s, the distribution is nearly independent of its value on s, in the sense that, conditioning on any value on s, the distribution on ⊥ s always stays pointwise close to a Gaussian.
Lemma 3.5. Let x′ ∼ D^alternative_{t,ϵ,ψ,B,δ}. Then we have the following: (i) For x′^s, we have that for any u ∈ R, P_{x′^s}(u) = (1 ± O(δ)) P_{D′ ⋆ D^N_{σ_noise}}(u), where D′ = (1/λ(B)) ∫_B D^N_{k+(t+k−ψ)Z, σ_signal} dk, σ_signal = √SR, and σ_noise = √(1 − SR) = 2(t + ϵ)σ
(SR is defined in Algorithm 1); (ii) x′^⊥s is “nearly independent” of x′^s; namely, for any l ∈ R and u ∈ R^{n−1}, we have that P_{x′^⊥s|x′^s=l}(u) = (1 ± O(δ)) P_{D^N_{R^{n−1},1}}(u).
Null Hypothesis Analysis For D^null_{t,ϵ,ψ,B,δ}, we can show that it is pointwise close to D^N_{R^n,1}:
Lemma 3.6. For any u ∈ R^n, we have that P_{D^null_{t,ϵ,ψ,B,δ}}(u) = (1 ± O(δ)) P_{D^N_{R^n,1}}(u).
3.3.2 The Reduction Algorithm
With the rejection sampling algorithm (Algorithm 1) at our disposal, we can now give the full construction of the hard instance. We use D_{t,ϵ,ψ+,B+,δ} for x | y = +1 and D_{t,ϵ,ψ−,B−,δ} for x | y = −1 (with a carefully chosen pair of (B+, ψ+) and (B−, ψ−), as discussed in Section 3.2), and take an appropriate marginal distribution of y to build a joint distribution of (x, y). We introduce a reduction algorithm that, given samples from our LWE problem (either from the null or the alternative hypothesis), produces i.i.d. samples (x, y) from a joint distribution with the following properties:
1. If the input LWE problem is the null hypothesis, then x | y = +1 and x | y = −1 are close in total variation distance. Therefore, no hypothesis for predicting y in terms of x can do much better than the best constant hypothesis.
2. If the input LWE problem is the alternative hypothesis, then the joint distribution of (x, y) we build is close to a distribution D that satisfies the O(η) Massart condition with respect to a degree-O(t/ϵ) PTF, and there is a degree-O(t/ϵ) PTF with small error on D.
We formalize the idea from Section 3.2 here. For x | y = +1, we will use ψ+ := 0 and B+ := [0, ϵ]. For x | y = −1, we take ψ− := t/2, which is the same as in [DK22]; but instead of taking B− := [t/2, t/2 + ϵ], we will need to carve slots out of B−. First, we define the mapping g : R \ [−1.5t, 0.5t] → [0.5t, t] as follows: for i ∈ Z and b ∈ R_t,
g(it + t/2 + b) := b/(i+1) + t/2 if i ≥ 0, and g(it + t/2 + b) := (b − t)/(i+2) + t/2 if i < 0.
This function maps a location it + t/2 + b to the corresponding place we need to carve out on B−, which is defined in Algorithm 2. These intervals are chosen so that the decaying density tails of the +1 intervals can fit in, as we discussed in Section 3.2. Now we introduce the algorithm that reduces LWE to learning Massart PTFs. We similarly define the output distributions of the algorithm in the two cases as follows:
Definition 3.7. Let D^alternative_PTF be the mixture of D^alternative_{t,ϵ,ψ+,B+,δ} and D^alternative_{t,ϵ,ψ−,B−,δ} with +1 and −1 labels and weights 1 − η and η respectively. Similarly, let D^null_PTF be the mixture of D^null_{t,ϵ,ψ+,B+,δ} and D^null_{t,ϵ,ψ−,B−,δ} with +1 and −1 labels and weights 1 − η and η respectively.
The following observation is immediate from the algorithm.
Observation 3.8. In the alternative (resp. null) hypothesis case, the output distribution of Algorithm 2, conditioned on not failing, is the same as m′ i.i.d. samples drawn from D^alternative_PTF (resp. D^null_PTF).
Alternative Hypothesis Analysis We prove that there exists a degree-O(t/ϵ) PTF such that D^alternative_PTF is close in total variation distance to a distribution satisfying the O(η) Massart noise condition with respect to this PTF, and this PTF has small error with respect to D^alternative_PTF.
Lemma 3.9. D^alternative_PTF is O(δ/m′)-close in total variation distance to a distribution D_truncated for which there is a degree-O(t/ϵ) PTF sign(p(x)) such that: (i) Pr_{(x,y)∼D_truncated}[sign(p(x)) ̸= y] ≤ exp(−Ω(t^4/ϵ^2)), and (ii) D_truncated satisfies the O(η) Massart noise condition with respect to sign(p(x)).
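For concreteness, the following is a direct Python transcription (ours, for illustration only) of the carving map g defined above; the example at the end checks that a point it + t/2 + b, with i in the relevant range, is mapped back into [t/2, t].

```python
def g(u, t):
    """The map g from Section 3.3.2: write u = i*t + t/2 + b with i an integer
    and b in [0, t); return b/(i+1) + t/2 for i >= 0 and (b-t)/(i+2) + t/2
    for i < 0.  g is undefined on [-1.5t, 0.5t] (i.e., for i in {-2, -1})."""
    i = int((u - t / 2) // t)
    b = (u - t / 2) - i * t                  # b in [0, t)
    if i in (-2, -1):
        raise ValueError("g is undefined on [-1.5t, 0.5t]")
    if i >= 0:
        return b / (i + 1) + t / 2
    return (b - t) / (i + 2) + t / 2

t = 0.1
print(g(5 * t + t / 2 + 0.03, t))            # i = 5, b = 0.03 -> 0.03/6 + 0.05 = 0.055
```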
Null Hypothesis Analysis The reader is referred to Lemma C.8 in Appendix C for the null hypothesis analysis.
Algorithm 2 Reducing LWE to Learning PTFs with Massart Noise
Inputs: m samples from an instance of LWE(m, R_1^n, {±1}^n, D^N_σ, mod_1). The input parameters are m′ ∈ N, t, ϵ ∈ R_{>0}, δ ∈ (0, 1), and η > 0 a sufficiently small value. In addition, the parameters satisfy Condition 3.1.
Output: m′ samples in R^n × {±1} or FAIL.
1. We take ψ+ = 0, B+ = [0, ϵ], ψ− = t/2, and
B− := [t/2, t/2 + ϵ] − ⋃_{i = t/(2ϵ)−1}^{t/ϵ−1} g([it − 2c′ϵ, it]) − ⋃_{i = t/(2ϵ)−1}^{t/ϵ−1} g([it + (i+1)ϵ, it + (i+1)ϵ + 2c′ϵ]) − ⋃_{i = −t/ϵ−1}^{−t/(2ϵ)−1} g([it + (i+1)ϵ − 2c′ϵ, it + (i+1)ϵ]) − ⋃_{i = −t/ϵ−1}^{−t/(2ϵ)−1} g([it, it + 2c′ϵ]).
2. Repeat the following m′ times. If at any point the algorithm attempts to use more than m LWE samples from the input, then output FAIL.
(a) With probability 1 − η, repeat the following until Algorithm 1 accepts and outputs x′: run Algorithm 1 with the next unused LWE sample from the input and parameters t, ϵ, ψ = ψ+, B = B+, δ. Add (x′, +1) to the output samples.
(b) With probability η, repeat the following until Algorithm 1 accepts and outputs x′: run Algorithm 1 with the next unused LWE sample from the input and parameters t, ϵ, ψ = ψ−, B = B−, δ. Add (x′, −1) to the output samples.
Putting Everything Together Having reduced LWE to learning Massart PTFs, we can apply a Veronese mapping on the samples; the PTF becomes an LTF over the Veronese features. Since we use a degree-O(t/ϵ) Veronese mapping, the dimension of the Massart LTF problem is N = (n+1)^{O(t/ϵ)}.
4 Discussion
Our result rules out the existence of polynomial-time algorithms achieving error smaller than Ω(η), where η is the upper bound on the noise rate, even if the optimal error is very small, assuming the subexponential-time hardness of LWE. A technical open question is whether the constant factor in the Ω(η)-term of our lower bound can be improved to the value C = 1; this would match known algorithms exactly. (As mentioned in the introduction, such a sharp lower bound has been recently established in the SQ model [NT22], improving on [DK22].) It is also worth noting that our reduction rules out polynomial-time algorithms, but does not rule out, e.g., subexponential- or even quasipolynomial-time algorithms with improved error guarantees. We believe that obtaining stronger hardness for these problems would require substantially new ideas, as our runtime lower bounds are essentially the same as the best time lower bounds for learning in the (much stronger) agnostic noise model or in restricted models of computation (like SQ). This seems related to the fact that our result requires subexponential hardness of LWE as its assumption. As the strongest possible assumptions only allow us to prove quasi-polynomial lower bounds, any substantially weaker assumption will likely fail to prove super-polynomial ones.
Acknowledgments
Ilias Diakonikolas was supported by NSF Medium Award CCF-2107079, NSF Award CCF-1652862 (CAREER), a Sloan Research Fellowship, and a DARPA Learning with Less Labels (LwLL) grant. Daniel M. Kane was supported by NSF Medium Award CCF-2107547, NSF Award CCF-1553288 (CAREER), a Sloan Research Fellowship, and a grant from CasperLabs. Lisheng Ren was supported by NSF Award CCF-1652862 (CAREER) and a DARPA Learning with Less Labels (LwLL) grant.
1. What is the focus of the paper regarding learning halfspaces from samples with noisy labels?
2. What are the strengths of the proposed approach, particularly in its originality and interest to the learning theory community?
3. Do you have any questions or concerns regarding the paper's construction and parameters?
4. How does the reviewer assess the paper's technicality and proof techniques?
5. Are there any errors or typos in the paper that need correction?
Summary Of The Paper
Learning halfspaces from samples with noisy labels is a classical problem in learning theory. In the Massart noise model with parameter eta, for some unknown halfspace f_w(x) = sign(<x,w>), the samples (x,y) are drawn as x ~ D (for some arbitrary unknown D) and y in {-1,1} such that Pr_{x ~ D}[f_w(x) = y] >= 1-eta. A recent sequence of works has shown how to learn (with polynomial time and sample complexity) a halfspace with error at most eta + eps. But in polynomial samples (with no time complexity restriction) it's possible to learn a halfspace with error at most OPT + eps, where OPT is the error of the best halfspace (it could be that OPT << eta). It remains open whether it's possible to do in polynomial time. There is evidence via statistical query lower bounds suggesting that this goal is impossible. The current paper presents additional evidence, in the form of a conditional (but unrestricted) lower bound under the Learning With Errors assumption. Specifically, the main result of this paper is that if LWE cannot be solved in strongly subexponential time 2^{n^{1-Omega(1)}}, then no polynomial time learning algorithm can learn halfspaces with eta-Massart noise in polynomial time with error better than Omega(eta), even under the promise that OPT << eta (concretely, eta is constant and OPT < 2^{-log^{1-zeta}(n)} for constant zeta>0).
Strengths And Weaknesses
The contribution of this paper is original and of interest to the learning theory community. The paper is for the most part cleanly written, although there are a large number of parameters in the construction and perhaps more intuition could be given about them (e.g. Condition 3.1 and the main theorem). The overview of the prior SQ lower bound construction was particularly helpful, although a few parts were tricky to follow (specifically, the claims in lines 223-227). Lemma 3.3 also could use a bit more explanation (e.g. explaining in plaintext the sampling procedure for x'). The proofs are rather technical and I wasn't able to check them all, but the overview of the various obstacles and how they are overcome makes sense. The only potential weakness is that this paper is strictly of interest to the learning theory segment of the NeurIPS community; also, while technical and certainly non-trivial I am not sure whether the proof techniques are of broader interest. Nonetheless, within learning theory the problem solved by this paper is of sufficient interest that I would recommend acceptance.
Questions
- line 54: should be 2^{-log^{1-zeta}(n)}?
- line 224: why is this exp(-Omega(t^2/epsilon)^2)? Should it be exp(-Omega(t/epsilon)^2)?
- line 312: D^{expand} is only defined in the appendix
- line 313: there is no Fact A.5, I guess this should be Fact A.4?
Limitations
Yes
NIPS
Title Cryptographic Hardness of Learning Halfspaces with Massart Noise
Abstract We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples (x, y) ∈ R^N × {±1}, where the distribution of x is arbitrary and the label y is a Massart corruption of f(x), for an unknown halfspace f : R^N → {±1}, with flipping probability η(x) ≤ η < 1/2. The goal of the learner is to compute a hypothesis with small 0-1 error. Our main result is the first computational hardness result for this learning problem. Specifically, assuming the (widely believed) subexponential-time hardness of the Learning with Errors (LWE) problem, we show that no polynomial-time Massart halfspace learner can achieve error better than Ω(η), even if the optimal 0-1 error is small, namely OPT = 2^{−log^c(N)} for any universal constant c ∈ (0, 1). Prior work had provided qualitatively similar evidence of hardness in the Statistical Query model. Our computational hardness result essentially resolves the polynomial PAC learnability of Massart halfspaces, by showing that known efficient learning algorithms for the problem are nearly best possible.
1 Introduction
A halfspace or linear threshold function (LTF) is any function h_{w,t} : R^N → {±1} of the form h_{w,t}(x) := sign(⟨w, x⟩ − t), where the vector w ∈ R^N is called the weight vector, t ∈ R is called the threshold, and sign : R → {±1} is defined by sign(t) = 1 if t ≥ 0 and sign(t) = −1 otherwise. Halfspaces are a central concept class in machine learning, extensively investigated since the 1950s [Ros58, Nov62, MP68]. Here we study the computational complexity of learning halfspaces in Valiant’s (distribution independent) PAC model [Val84], when the labels have been corrupted by Massart noise [MN06]. We define the Massart noise model below.
Definition 1.1 (Massart Noise). We say that a joint distribution D of labeled examples (x, y), supported on R^N × {±1}, satisfies the Massart noise condition with noise parameter η ∈ [0, 1/2) with respect to a concept class C of Boolean-valued functions on R^N if there is a concept c ∈ C such that for all x0 ∈ R^N we have that η(x0) := Pr_{(x,y)∼D}[c(x) ̸= y | x = x0] ≤ η.
The Massart PAC learning problem for the concept class C is the following: Given i.i.d. samples from a Massart distribution D, as in Definition 1.1, the goal is to output a hypothesis with small 0-1 error. In this work, we study the computational complexity of the Massart PAC learning problem, when the underlying concept class C is the class of halfspaces on R^N. In its above form, the Massart noise model was defined in [MN06]. An essentially equivalent noise model had been defined in the 80s by Sloan and Rivest [Slo88, RS94, Slo96], and a very similar definition had been considered even earlier by Vapnik [Vap82]. The Massart model is a classical semi-random noise model that is more realistic than Random Classification Noise (RCN). In contrast to RCN, Massart noise allows for variations in misclassification rates (without a priori knowledge of which inputs are more likely to be misclassified).
Asymmetric misclassification rates arise in a number of applications, including in human annotation noise [BK09]. Consequently, learning algorithms that can tolerate Massart noise are less brittle than those that depend on the uniformity of RCN. The agnostic model [Hau92, KSS94], where the noise can be fully adversarial, is of course even more robust; unfortunately, it is computationally hard to obtain agnostic learners with any non-trivial guarantees, even for basic settings. We now return to the class of halfspaces, which is the focus of this work. We recall that PAC learning halfspaces with RCN is known to be solvable in polynomial time (to any desired accuracy) [BFKV96]. On the other hand, agnostic PAC learning of halfspaces is known to computationally hard (even for weak learning) [GR06, FGKP06, Dan16]. The computational task of PAC learning halfspaces corrupted by Massart noise is a classical problem in machine learning theory that has been posed by several authors since the 1980s [Slo88, Coh97, Blu03]. Until recently, no progress had been made on the efficient PAC learnability of Massart halfspaces. [DGT19] made the first algorithmic progress on this problem: they gave a poly(N, 1/ϵ)-time learning algorithm with error guarantee of η+ ϵ. Subsequent work made a number of refinements to this algorithmic result, including giving an efficient proper learner [CKMY20] and developing an efficient learner with strongly polynomial sample complexity [DKT21]. In a related direction, [DIK+21] gave an efficient boosting algorithm achieving error η+ ϵ for any concept class, assuming the existence of a weak learner for the class. The error bound of η can be very far from the information-theoretically optimum error of OPT, where OPT = RLTF(D) ≤ η. Indeed, known polynomial-time algorithms only guarantee error ≈ η even if OPT is very small, i.e., OPT ≪ η. This prompts the following question: Question 1.1. Is there an efficient learning algorithm for Massart halfspaces with a relative error guarantee? Specifically, if OPT ≪ η is it possible to achieve error significantly better than η? Our main result (Theorem 1.2) answers this question in the negative, assuming the subexponentialtime hardness of the classical Learning with Errors (LWE) problem (Assumption 2.4). In other words, we essentially resolve the efficient PAC learnability of Massart halfspaces, under a widely-believed cryptographic assumption. 1.1 Our Results Before we state our main result, we recall the setup of the Learning with Errors (LWE) problem. In the LWE problem, we are given samples (x1, y1), . . . , (xm, ym) and the goal is to distinguish between the following two cases: (i) Each xi is drawn uniformly at random (u.a.r.) from Znq , and there is a hidden secret vector s ∈ Znq such that yi = ⟨xi, s⟩+ zi, where zi ∈ Zq is discrete Gaussian noise (independent of xi); (ii) Each xi and each yi are independent and are sampled u.a.r. from Znq and Zq respectively. Formal definitions of LWE (Definition 2.3) and related distributions together with the precise computational hardness assumption (Assumption 2.4) we rely on are given in Section 2. Our main result can now be stated as follows: Theorem 1.2 (Informal Main Theorem). Assume that LWE cannot be solved in 2n 1−Ω(1) time. Then, for any constant ζ > 0, there is no polynomial-time learning algorithm for Massart halfspaces on RN that can output a hypothesis with 0-1 error smaller than Ω(η), even when OPT ≤ 2− log1−ζ N and the Massart noise parameter η is a small positive constant. 
The reader is also referred to Theorem D.1 in the Appendix for a more detailed formal statement. Theorem 1.2 is the first computational hardness result for PAC learning halfspaces (and, in fact, any non-trivial concept class) in the presence of Massart noise. Our result rules out even improper PAC learning, where the learner is allowed to output any polynomially evaluatable hypothesis. As a corollary, it follows that the algorithm given in [DGT19] is essentially the best possible, even when assuming that OPT is almost inverse polynomially small (in the dimension N ). We also remark that this latter assumption is also nearly the best possible: if OPT is o(ϵ/N), then we can just draw Ω(N/ϵ) samples and output any halfspace that agrees with these samples. We note that a line of work has established qualitatively similar hardness in the Statistical Query (SQ) model [Kea98] — a natural, yet restricted, model of computation. Specifically, [CKMY20] established a super-polynomial SQ lower bound for learning within error of OPT + o(1). Subse- quently, [DK22] gave a near-optimal super-polynomial SQ lower bound: their result rules out the existence of efficient SQ algorithms that achieve error better than Ω(η), even if OPT = 2log 1−ζ N . Building on the techniques of [DK22], more recent work [NT22] established an SQ lower bound for learning to error better than η, even if OPT = 2log 1−ζ N — matching the guarantees of known algorithms exactly. While the SQ model is quite broad, it is also restricted. That is, the aforementioned prior results do not have any implications for the class of all polynomial-time algorithms. Interestingly, as we will explain in the proceeding discussion, our computational hardness reduction is inspired by the SQ-hard instances constructed in [DK22]. 1.2 Brief Technical Overview Here we give a high-level overview of our approach. Our reduction proceeds in two steps. The first is to reduce the standard LWE problem (as described above) to a different “continuous” LWE problem more suitable for our purposes. In particular, we consider the problem where the x samples are taken uniformly from Rn/Zn, y is either taken to be an independent random element of R/Z or is taken to be ⟨x, s⟩ mod 1 plus a small amount of (continuous) Gaussian noise, where s is some unknown vector in {±1}n. This reduction follows from existing techniques [Mic18a, GVV22]. The second step — which is the main technical contribution of our work — is reducing this continuous LWE problem to that of learning halfspaces with Massart noise. The basic idea is to perform a rejection sampling procedure that allows us to take LWE samples (x, y) and produce some new samples (x̃, ỹ). We will do this so that if y is independent of x, then ỹ is (nearly) independent of x̃; but if y = ⟨x, s⟩ + noise, then ỹ is a halfspace of x̃ with a small amount of Massart noise. An algorithm capable of learning halfspaces with Massart noise (with appropriate parameters) would be able to distinguish these cases by learning a hypothesis h and then looking at the probability that h(x̃) ̸= ỹ. In the case where ỹ was a halfspace with noise, this would necessarily be small; but in the case where x̃ and ỹ were independent, it could not be. In order to manage this reduction, we will attempt to produce a distribution (x̃, ỹ) similar to the SQ-hard instances of Massart halfspaces constructed in [DK22]. 
These instances can best be thought of as instances of a random variable (x′, y′) in Rn × {±1}, where y′ is given by a low-degree polynomial threshold function (PTF) of x′ with a small amount of Massart noise. Then, letting x̃ be the Veronese map applied to x′, we see that any low-degree polynomial in x′ is a linear function of x̃, and so ỹ = y′ is an LTF of x̃ plus a small amount of Massart noise. As for how the distribution over (x′, y′) is constructed in [DK22], essentially the conditional distribution of x′ on y′ = 1 and on y′ = −1 are carefully chosen mixtures of discrete Gaussians in the v-direction (for some randomly chosen unit vector v), and independent standard Gaussians in the orthogonal directions. () Our goal will be to find a way to perform rejection sampling on the distribution (x, y) to produce a distribution of this form. In pursuit of this, for some small real number b and some a ∈ [0, b), we let x′ be a random Gaussian subject to x′ ≡ bx (mod b) (in the coordinate-wise sense) conditioned on by ≡ a (mod b). We note that if we ignore the noise in the definition of y, this implies that ⟨x′, s⟩ ≡ ⟨bx, s⟩ ≡ b ⟨x, s⟩ ≡ by ≡ a (mod b) (recalling that s ∈ {±1}n). In fact, it is not hard to see that the resulting distribution on x′ is close to a standard Gaussian conditioned on ⟨x′, s⟩ ≡ a (mod b). In other words, x′ is close to a discrete Gaussian with spacing b/∥s∥2 and offset a/∥s∥2 in the s-direction, and an independent standard Gaussian in orthogonal directions. Furthermore, this x′ can be obtained from (x, y) samples by rejection sampling: taking many samples until one is found with by ≡ a (mod b), and then returning a random x′ with x′ ≡ bx (mod b). By taking an appropriate mixture of these distributions, we can manufacture a distribution close to the hard instances in [DK22]. This intuition is explained in detail in Section 3.1; see Lemma 3.3. (We note that Lemma 3.3 is included only for the purposes of intuition; it is a simpler version of Lemma 3.5, which is one of the main lemmas used to prove our main theorem.) Unfortunately, as will be discussed in Section 3.2, applying this construction directly does not quite work. This is because the small noise in the definition of y leads to a small amount of noise in the final values of ⟨x′, s⟩. This gives us distributions that are fairly similar to the hard instances of [DK22], but leads to small regions of values for u, where the following condition holds: Pr(y′ = +1 | x′ = u) = Pr(y′ = −1 | x′ = u). Unfortunately, the latter condition cannot hold if y′ is a function of x′ with Massart noise. In order to fix this issue, we need to modify the construction by carving intervals out of the support of x′ conditioned on y′ = −1, in order to eliminate these mixed regions. This procedure is discussed in detail in Section 3.3.2. 1.3 Additional Related Work There have also been several recent works showing reductions from LWE or lattice problems to other learning problems. Concurrent and independent work to ours [Tie22] showed hardness of weakly agnostically learning halfspaces, based on a worst-case lattice problem (via a reduction from “continuous” LWE). Two recent works obtained hardness for the unsupervised problem of learning mixtures of Gaussians (GMMs), assuming hardness of (variants of) the LWE problem. Specifically, [BRST21] defined a continuous version of LWE (whose hardness they established) and reduced it to the problem of learning GMMs. 
More recently, [GVV22] obtained a direct reduction from LWE to a (different) continuous version of LWE; and leveraged this connection to obtain quantitatively stronger hardness for learning GMMs. It is worth noting that for the purposes of our reduction, we require as a starting point a continuous version of LWE that differs from the one defined in [BRST21]. Specifically, we require that the distribution on x is uniform on [0, 1]n (instead of a Gaussian, as in [BRST21]) and the secret vector is binary. The hardness of this continuous version essentially follows from [Mic18b, GVV22]. 2 Preliminaries For x, s ∈ Rn with s ̸= 0, let xs def= ⟨x, s⟩/∥s∥2 be the length of the projection of x in the s direction, and x⊥s ∈ Rn−1 be the projection1 of x on the orthogonal complement of s. For f, g : U → R, we write f(u) ∝ g(u) if there is c ∈ R such that f(u) = cg(u) for all u ∈ U . We use X ∼ D to denote a random variable X with distribution D. We use PD or PX for the corresponding probability mass function (pmf) or density function (pdf), and PrD or PrX for the measure function of the distribution. We use DX to denote the distribution of the random variable X . For S ⊆ Rn, we will use λ(S) to denote the n-dimensional volume of S. Let U(S) denote the uniform distribution on S. For a distribution D on Rn and S ⊆ Rn, we denote by D | S the conditional distribution of X ∼ D given X ∈ S. Let Ds (resp. D⊥s) be the distribution of xs (resp. x⊥s), where x ∼ D. For distributions D1, D2, we use D1 +D2 to denote the pseudo-distribution with measure function PrD1+D2(A) = PrD1(A) + PrD2(A). For a ∈ R, let aD denote the pseudo-distribution with measure function aPrD. On the other hand, let a ◦D denote the distribution of aX , where X ∼ D. We use D1 ⋆ D2 to denote the convolution of distributions D1, D2. We will use LTFN for the class of halfspaces on RN ; when N is clear from the context, we may discard it and simply write LTF. For q ∈ N, we use Zq def = {0, 1, · · · , q − 1} and Rq def = [0, q). We use modq : Rn 7→ Rnq to denote the function that applies modq(x) on each coordinate of x. We use DNRn,σ to denote the n-dimensional Gaussian distribution with mean 0 and covariance matrix σ2/(2π) · In and use DNσ as a short hand for DNR,σ. In some cases, we will use N (0, In) for the standard (i.e., zero mean and identity covariance) multivariate Gaussian, Definition 2.1 (Partially Supported Gaussian Distribution). For σ ∈ R+ and x ∈ Rn, let ρσ(x) def = σ−n exp ( −π(∥x∥2/σ)2 ) . For any countable set S ⊆ Rn, we let ρσ(S) def = ∑ x∈S ρσ(x), and let DNS,σ be the distribution supported on S with pmf PDNS,σ (x) = ρσ(x)/ρσ(S). Definition 2.2 (Discrete Gaussian). For T ∈ R+, y ∈ R and σ ∈ R+, we define the “T -spaced, y-offset discrete Gaussian distribution with σ scale” to be the distribution of DNTZ+y,σ . Learning with Errors (LWE) We use the following definition of LWE, which allows for flexible distributions of samples, secrets, and noises. Here m is the number of samples, n is the dimension, and q is the ring size. Definition 2.3 (Generic LWE). Let m,n, q ∈ N, and let Dsample, Dsecret, Dnoise be distributions on Rn,Rn,R respectively. In the LWE(m,Dsample, Dsecret, Dnoise,modq) problem, we are given m independent samples (x, y) and want to distinguish between the following two cases: (i) Alternative 1More precisely, let B⊥s ∈ Rn×(n−1) for the matrix whose columns form an (arbitrary) orthonormal basis for the orthogonal complement of s, and let x⊥s def= (B⊥s)T x. hypothesis: s is drawn from Dsecret. 
Then, each sample is generated by taking x ∼ Dsample, z ∼ Dnoise, and letting y = modq(⟨x, s⟩+ z); and (ii) Null hypothesis: x, y are independent and each has the same marginal distribution as above. When a distribution in LWE is uniform over some set S, we may abbreviate U(S) merely as S. Note that LWE(m,Znq ,Znq , DNZ,σ,modq) to the classical LWE problem. Computational Hardness Assumption for LWE As alluded to earlier, the assumption for our hardness result is the hardness of the (classic) LWE problem, with the parameters stated below. Assumption 2.4 (Standard LWE Assumption (see, e.g., [LP11])). Let c > 0 be a sufficiently large constant. For any constant β ∈ (0, 1), κ ∈ N, LWE(2O(nβ),Znq ,Znq , DNZ,σ,modq) with q ≤ nκ and σ = c √ n cannot be solved in 2O(n β) time with 2−O(n β) advantage. We recall that [Reg09, Pei09] gave a polynomial-time quantum reduction from approximating (the decision version of) the Shortest Vector Problem (GapSVP) to LWE (with similar n, q, σ parameters). Our hardness assumption is the widely believed sub-exponential hardness of LWE. We note that the fastest known algorithm for GapSVP takes 2O(n) time [ALNS20]. Thus, refuting the conjecture would be a major breakthrough. A similar assumption was also used in [GVV22] to establish computational hardness of learning Gaussian mixtures. Our use of a sub-exponential hardness of LWE is not a coincidence; see Section 4. As mentioned earlier, we will use a different variant of LWE, where the sample is from Rn1 , the secret is from {±1}n, and the noise is drawn from a continuous Gaussian distribution. The hardness of this variant is stated below. The proof, which follows from [Mic18a, GVV22], is deferred to Appendix B. Lemma 2.5. Under Assumption 2.4, for any β ∈ (0, 1) and γ ∈ R+, there is no 2O(n β) time algorithm to solve LWE ( 2O(n β),Rn1 , {±1}n, DNO(n−γ),mod1 ) with 2−O(n β) advantage. Decisional Massart Halfspace Problem For a distribution D on labeled examples and a concept class C, we let RC(D) def = minh∈C Pr(x,y)∼D[h(x) ̸= y] be the error of the best classifier in C with respect to D. We will prove hardness for the following decision version of learning Massart halfspaces. This will directly imply hardness for the corresponding learning (search) problem. Definition 2.6 (Testing Halfspaces with Massart Noise). For n,N ∈ N, η,OPT ∈ (0, 1/2), let Massart(m,N, η,OPT) denote the problem of distinguishing, given m i.i.d. samples from D on RN × {±1}, between the following two cases: (i) Alternative hypothesis: D satisfies the Massart halfspace condition with noise parameter η and RLTF(D) ≤ OPT; and (ii) Null hypothesis: the Bayes optimal classifier has cη error, where c > 0 is a sufficiently small universal constant. 3 Reduction from LWE to Learning Massart Halfspaces In this section, we establish Theorem 1.2. Some intermediate technical lemmas have been deferred to the Appendix C. Our starting point is the problem LWE(m,Rn1 , {±1}n, DNσ ,mod1). Note that, by Lemma 2.5, Assumption 2.4 implies the hardness of LWE(m,Rn1 , {±1}n, DNσ ,mod1). We will reduce this variant of LWE to the decision/testing version of Massart halfspaces (Definition 2.6). Our reduction will employ multiple underlying parameters, which are required to satisfy a set of conditions. For convenience, we list these conditions below. Condition 3.1. 
Let n,m,m′ ∈ N, t, ϵ, σ ∈ R+, δ ∈ (0, 1), satisfy: (i) t/ϵ is a sufficiently large even integer, (ii) σ ≤ √ n, (iii) 1 t √ n ≥ √ c log(n/δ), where c is a sufficiently large universal constant, (iv) ( c ′ϵ c′′tσ ) 2 ≥ log(m′/δ), where c′ > 0 is a sufficiently small universal constant and c′′ > 0 is a sufficiently large universal constant. The main theorem of this work is stated below. Theorem 3.2. Let n,m,m′ ∈ N, t, ϵ, σ ∈ R+, ϵ′, δ ∈ (0, 1) satisfy Condition 3.1 and η < 1/2. Moreover, assume that m′ = c(ϵ/t)m, where c > 0 is a sufficiently small universal constant and m(ϵ/t)2 is sufficiently large, and N = (n + 1)d, where d/(t/ϵ) is sufficiently large. Suppose that there is no T + poly(m,N, log(1/δ))-time algorithm for solving LWE(m,Rn1 , {±1}n, DNσ ,mod1) with ϵ′ −O(δ) advantage. Then there is no T time algorithm for solving Massart(m′, N, η,OPT) with 2ϵ′ advantage, where OPT = exp(−Ω(t4/ϵ2)). Note that Theorem 3.2, combined with Lemma 2.5, can be easily used to prove Theorem 1.2 (e.g., by plugging in t = n−0.5−Θ(ζ), ϵ = Θ(n−1.5) in the above statement); see Appendix D. As such, we devote the remainder of the body of this paper to give an overview to the proof of Theorem 3.2. High-level Overview The starting point of our computational hardness reduction is the family of SQ-hard instances obtained in [DK22]. At a high-level, these instances are constructed using mixtures of “hidden direction” discrete Gaussian distributions, i.e., distributions that are discrete Gaussians in a hidden direction and continuous Gaussians on the orthogonal directions. In Section 3.1, we note that by using an appropriate rejection sampling procedure on the LWE samples (drawn from the alternative hypothesis), we obtain a distribution very similar to the “hidden direction discrete Gaussian”. A crucial difference in our setting is the existence of a small amount of additional “noise”. A natural attempt is to replace the discrete Gaussians in [DK22] with the noisy ones obtained from our rejection sampling procedure. This produces problems similar to the hard instances from [DK22]. Unfortunately, the extra noise in our construction means that the naive version of this construction will not work; even with small amounts of noise, the resulting distributions will not satisfy the assumptions of a PTF with Massart noise. In Section 3.2, we elaborate on this issue and the modifications we need to make to our construction in order to overcome it. In Section 3.3, we provide the complete construction of our Massart PTF hard instance. Overview of the [DK22] SQ-hard Construction [DK22] showed SQ-hardness for the following hypothesis testing version of the problem (which implies hardness for the learning problem): For an input distribution D on Rn × {±1}, distinguish between the cases where D is a specific distribution Dnull in which x and y are independent or where D belongs to a class of alternative hypothesis distributions Dalternative. In particular, for D ∈ Dalternative, y will be given by a low-degree PTF in x with a small amount of Massart noise. As we will be trying to reproduce it, it is important for us to understand this alternative hypothesis distribution. Each distribution in Dalternative is parameterized by a hidden direction s ∈ Sn−1. We will denote the corresponding distribution by Ds. Ds is constructed so that x⊥s ∼ DNRn−1,1 is independent of x s and y. This means that we can specify Ds by describing the simpler distribution of (xs, y) ∈ R × {±1}. For (xs, y), we have that y = +1 with probability 1− η. 
The distributions of xs conditioned on y = ±1 are defined to be mixtures of discrete Gaussians as follows: Dxs|(y=+1) = 1 ϵ ∫ ϵ 0 DNu+(t+u)Z,1du and Dxs|(y=−1) = 1 ϵ ∫ t/2+ϵ t/2 DNu+(t+u−t/2)Z,1du . (1) As we noted, both xs | (y = +1) and xs | (y = −1) are mixtures of discrete Gaussians. Combining this with the fact that x⊥s ∼ N (n, In−1), this indicates that x | (y = +1) and x | (y = −1) are mixtures of “hidden direction discrete Gaussians” — with different spacing and offset for their support on the hidden direction. These conditional distributions were carefully selected to ensure that y is a Massart PTF of x with small error. To see why this is, notice that the support of xs | (y = +1) is ⋃ i∈Z [it, it+ (i+1)ϵ], while the support of xs | (y = −1) is ⋃ i∈Z [it+ t/2, it+ t/2+ (i+1)ϵ]; both supports are unions of intervals. Consider the implications of this for three different ranges of xs: 1. For xs ∈ [−t2/(2ϵ), t2/(2ϵ)], the intervals have lengths in [0, t/2]; thus, the +1 intervals and the −1 intervals do not overlap at all. 2. For xs ∈ [−t2/ϵ,−t2/(2ϵ)) ∪ (t2/(2ϵ), t2/ϵ], the intervals have lengths in [t/2, t]; thus, the +1 intervals and the −1 intervals overlap, so that their union covers the space. We note that in this case there are gaps between the +1 intervals; specifically, there are at most O(t/ϵ) such gaps. 3. For xs ∈ (−∞,−t2/ϵ)∪ (t2/ϵ,∞), the intervals have lengths in [t,∞), so the +1 intervals cover the space by themselves. Consider the degree-O(t/ϵ) PTF sign(p(x)) such that sign(p(x)) = +1 iff xs ∈ ⋃ i∈Z [it, it+(i+1)ϵ]. In particular, sign(p(x)) = 1 for x in the support of the conditional distribution on y = 1. Note that the PTF sign(p(x)) has zero error in the first case; thus, its total 0-1 error is at most exp(−Ω(t2/ϵ)2). Moreover, since the probability of y = 1 is substantially larger than the probability of y = −1, it is not hard to see that for any x with sign(p(x)) = 1 that Pr[y = 1 | x = x] > 1−O(η). This implies that y is given by sign(p(x)) with Massart noise O(η). 3.1 Basic Rejection Sampling Procedure In this subsection, we show that by performing rejection sampling on LWE samples, one can obtain a distribution similar to the “hidden direction discrete Gaussian”. For the sake of intuition, we start with the following simple lemma. The lemma essentially states that, doing rejection sampling on LWE samples, gives a distribution with the following properties: On the hidden direction s, the distribution is pointwise close to the convolutional sum of a discrete Gaussian and a continuous Gaussian noise. Moreover, on all the other directions ⊥ s, the distribution is nearly independent of its value on s, in the sense that conditioning on any value on s, the distribution on ⊥ s stays pointwise close to a Gaussian. Note that this distribution closely resembles the “hidden direction discrete Gaussian” in [DK22]. Lemma 3.3. Let (x, y) be a sample of the LWE(m,Rn1 , {±1}n, DNσ ,mod1) from the alternative hypothesis case, let y′ be any constant in [0, 1), and let x′ ∼ (1/σscale) ◦ DNx+Zn,σscale | (y = y ′) . Then we have the following: (i) For x′s, we have that for any u ∈ R it holds that Px′s(u) = (1±O(δ))PD′⋆DNσnoise (u) , whereD ′ = DNT (y′+Z),σsignal , and T = SR/(n 1/2σscale), σsignal = √ SR, σnoise = √ 1− SR, and SR = σ 2 scale σ2scale+σ 2/n , (ii) x′⊥s is “nearly independent” of x′s, namely for any l ∈ R and u ∈ Rn−1 we have that Px⊥s|xs=l(u) = (1±O(δ))PDN Rn−1,1 (u) . Lemma 3.3 is a special case of Lemma 3.5, which is one of the main lemmas required for our proof. 
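To make the three regimes above concrete, the following small numerical sketch (ours, not part of the paper or its proofs) checks the first and third regimes of the noiseless [DK22] supports on the non-negative half-line; t and eps are toy values, far from the parameter regime the reduction actually requires, and the function names are ours.

```python
import numpy as np

# A small numerical sketch (ours, not from the paper) of the noiseless [DK22]
# supports described above, restricted to the non-negative half-line:
#   x^s | (y = +1) is supported on  U_i [i*t, i*t + (i+1)*eps]
#   x^s | (y = -1) is supported on  U_i [i*t + t/2, i*t + t/2 + (i+1)*eps].
t, eps = 1.0, 0.05   # toy values; the reduction needs t/eps to be a large even integer

def in_support(x, offset):
    """True iff x = i*t + offset + (i+1)*u for some integer i >= 0 and u in [0, eps]."""
    i0 = int(np.floor((x - offset) / t))
    for i in (i0 - 1, i0, i0 + 1):
        if i >= 0 and 0.0 <= (x - i * t - offset) / (i + 1) <= eps:
            return True
    return False

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.5 * t ** 2 / eps, size=100_000)
plus = np.array([in_support(x, 0.0) for x in xs])
minus = np.array([in_support(x, t / 2) for x in xs])

core = xs <= t ** 2 / (2 * eps)   # first regime: the two supports are disjoint
tail = xs >= t ** 2 / eps         # third regime: the +1 intervals cover everything
print("overlap in the first regime:", np.any(plus & minus & core))   # expect False
print("+1 covers the third regime: ", np.all(plus[tail]))            # expect True
```

The check reports no overlap between the two supports in the first regime and full coverage by the +1 intervals in the third. The decaying tails introduced by the extra noise, which are the reason for the carved-out slots discussed in Section 3.2, are deliberately absent from this noiseless sketch.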
We note that the distribution of x′ obtained from the above rejection sampling is very similar to the “hidden direction discrete Gaussian” used in [DK22]. The key differences are as follows: (i) on the hidden direction, x′s is close to a discrete Gaussian plus extra Gaussian noise (instead of simply being a discrete Gaussian), (ii) x′⊥s and x′s are not perfectly independent. More importantly, by taking different values for y′ and σscale, we can obtain distributions with the same hidden direction, but their discrete Gaussian on the hidden direction has different spacing (T ) and offset (y′). To obtain a computational hardness reduction, our goal will be to simulate the instances from [DK22] by replacing the hidden direction discrete Gaussians with the noisy versions that we obtain from this rejection sampling. We next discuss this procedure and see why a naive implementation of it does not produce a PTF with Massart noise. 3.2 Intuition for the Hard Instance The natural thing to try is to simulate the conditional distributions from [DK22] by replacing the hidden direction discrete Gaussian terms in (1) with similar distributions obtained from rejection sampling. In particular, Lemma 3.3 says that we can obtain a distribution which is close to this hidden direction Gaussian plus a small amount of Gaussian noise. Unfortunately, this extra noise will cause problems for our construction. Recall that the support of xs | (y = +1) was ⋃ i∈Z [it, it+ (i+ 1)ϵ], and the support of xs | (y = −1) was ⋃ i∈Z [it+ t/2, it+ t/2 + (i+ 1)ϵ] for [DK22]. With the extra noise, there is a decaying density tail in both sides of each [it, it + (i + 1)ϵ] interval in the support of xs | (y = +1). The same holds for each interval in the support of xs | (y = −1). Recalling the three cases of these intervals discussed earlier, this leads to the following issue. In the second case, the intervals have length within [t/2, t]; thus, the intervals [it, it+ (i+ 1)ϵ] and [it+ t/2, it+ t/2 + (i+ 1)ϵ] overlap, i.e., it + (i + 1)ϵ ≥ it + t/2. On the right side of [it, it + (i + 1)ϵ], in the support of xs | (y = −1), there is a small region of values for u, where Pr[y′ = +1 | xs = u] = Pr[y′ = −1 | xs = u]. This causes the labels y = +1 and y = −1 to be equally likely over that small region, violating the Massart condition. (We note that for the first case, there is also this kind of small region that Pr[y′ = +1 | xs = u] = Pr[y′ = −1 | xs = u] caused by the noise tail. However, the probability density of this region is negligibly small, as we will later see in Lemma 3.9.) We can address this by carving out empty spaces in the [it+ t/2, it+ t/2 + (i+ 1)ϵ] intervals for xs | (y = −1), so that these decaying parts can fit into. Since this only needs to be done for intervals of Case 2, at most O(t/ϵ) many such slots are needed. It should be noted that no finite slot will totally prevent this from occurring. However, we only need the slot to be wide enough so that the decay of the error implies that there is negligible mass in the overlap (which can be treated as an error). We also need to discuss another technical detail. In the last section, we defined the rejection sampling process as taking (1/σscale) ◦ DNx+Zn,σscale | (y = y ′), where we can control the offset by y′ and spacing by σscale (Lemma 3.3). This distribution is effectively a noisy version of a discrete Gaussian. Therefore, we can produce a noisy version of the hard instances of [DK22] by taking a mixture of these noisy discrete Gaussians. 
Unfortunately the noise rate of one of these instances will be σnoise. This quantity depends on the spacing T of the discrete Gaussian, which varies across the mixture we would like to take. This inconsistent noise rate is inconvenient for our analysis. However, we can fix the issue by adding extra noise artificially to each of the discrete Gaussians in our mixture, so that they will all have a uniform noise rate σnoise; see Algorithm 1 and Lemma 3.5. The last bit of technical detail is that instead of doing the rejection for y = y′, which has 0 acceptance probability, we will only reject if y is not corresponding to any discrete Gaussian we need. Then we do another rejection to make sure that the magnitude of discrete Gaussians in the mixture is correct. In the next subsection, we introduce the complete rejection sampling method. 3.3 The Full Hard Instance Construction We first introduce the complete rejection algorithm, and then explain how the hard instance is produced using it. Below we provide proof overviews; omitted proofs can be found in Appendix C. 3.3.1 The Complete Rejection Algorithm The rejection sampling algorithm is the following. The sampling process produces the noisy variant of the distribution which, for some carefully selected set B ⊆ [0, 1], has PDF function 1 λ(B) ∫ B DNk+(t+k−ψ)Z,1dk in the hidden direction, as we will see in Lemma 3.5. Algorithm 1 Rejection Sampling Algorithm Inputs: A sample (x, y) ∈ Rn1 × R1 and the input parameters are t, ϵ, ψ ∈ R>0, where ψ + ϵ ≤ t, B ⊆ [ψ,ψ + ϵ], δ ∈ (0, 1). In addition, the parameters satisfy items (i)-(iii) of Condition 3.1. Output: REJECT or a sample x′ ∈ Rn. 1. Reject unless there is a k ∈ B such that y = kt+k−ψ . 2. Furthermore, reject with probability 1− t 2 (t+k−ψ)2 . 3. Let SR = 1 − 4(t + ϵ)2σ2, σscale = SR(t+k−ψ)√n and σadd = √ (1−SR)σ2scale−SR(σ/ √ n)2 SR . Then, sample independent noise xadd ∼ DNRn,σadd and output x ′ ∼ (1/σscale) ◦DNx+xadd+Zn,σscale . Notice that the parameter SR does not depend on y, whereas σscale, σadd do depend on y. For convenience, let us use the following notation for the output distributions. Definition 3.4 (Output Distribution of Rejection Sampling). Let Dalternativet,ϵ,ψ,B,δ be the distributions of x′ produced by Algorithm 1 (conditioned that the algorithm accepts) given that (x, y) are sampled as follows: let x ∼ U(Rn1 ), z ∼ DNσ , and then let y = mod1(⟨x, s⟩+ z), where s ∈ {±1}n is the secret. Furthermore, let Dnullt,ϵ,ψ,B,δ be a similar distribution, but when x ∼ U(Rn1 ), y ∼ U(R1) are independent. Note that Dalternativet,ϵ,ψ,B,δ depends on s, but we do not explicitly denote this in our notation. Alternative Hypothesis Analysis The main properties of Dalternativet,ϵ,ψ,B,δ are summarized in the following lemma. Essentially, the lemma states that for this distribution Dalternativet,ϵ,ψ,B,δ , the marginal distribution on the hidden direction s is pointwise close to the convolution sum of D′ and a Gaussian noise, where D′ is a linear combination of discrete Gaussians. Moreover, on all the other directions ⊥ s, the distribution is nearly independent of its value on s, in the sense that conditioning on any value on s, the distribution on ⊥ s always stays pointwise close to a Gaussian. Lemma 3.5. Let x′ ∼ Dalternativet,ϵ,ψ,B,δ . Then we have the following: (i) For x′s, we have that for any u ∈ R, Px′s(u) = (1 ± O(δ))PD′⋆DNσnoise (u) , where D ′ = 1λ(B) ∫ B DNk+(t+k−ψ)Z,σsignaldk , σsignal = √ SR, and σnoise = √ 1− SR = 2(t + ϵ)σ. 
(SR is defined in Algorithm 1), (ii) x′⊥s is “nearly independent” of x′s; namely, for any l ∈ R and u ∈ Rn−1, we have that Px′⊥s|x′s=l(u) = (1±O(δ))PDN Rn−1,1 (u) . Null Hypothesis Analysis For Dnullt,ϵ,ψ,B,δ , we can show that it is pointwise close to DNRn,1: Lemma 3.6. For any u ∈ Rn, we have that PDnullt,ϵ,ψ,B,δ(u) = (1±O(δ))PDNRn,1(u) . 3.3.2 The Reduction Algorithm With the rejection sampling algorithm (Algorithm 1) at our disposal, we can now give the full construction of the hard instance. We use Dt,ϵ,ψ+,B+,δ for x | y = +1, Dt,ϵ,ψ−,B−,δ for x | y = −1 (with a carefully chosen pair of (B+, ψ+) and (B−, ψ−), as we discussed in Section 3.2), and take a proper marginal distribution of y to build a joint distribution of (x, y). We introduce a reduction algorithm that, given samples from our LWE problem (either from the null or the alternative hypothesis), produces i.i.d. samples (x, y) from a joint distribution with the following properties: 1. If the input LWE problem is the null hypothesis, then x | y = +1 and x | y = −1 are close in total variation distance. Therefore, no hypothesis for predicting y in terms of x can do much better than the best constant hypothesis. 2. If the input LWE problem is the alternative hypothesis, then the joint distribution of (x, y) we build is close to a distribution D that satisfies O(η) Massart condition with respect to a degree-O(t/ϵ) PTF, and there is a degree-O(t/ϵ) PTF with small error on D. We formalize the idea from Section 3.2 here. For x | y = +1, we will use ψ+ def = 0 and B+ def = [0, ϵ]. For x | y = −1, we take ψ− def = t/2, which is also the same as [DK22]; but instead of taking B− def = [t/2, t/2 + ϵ], we will need to carve out the slots on B−. First, we define the mapping g : R− [−1.5t, 0.5t] 7→ [0.5t, t], as follows: for i ∈ Z and b ∈ Rt, we have that g(it+ t/2 + b) def = { b i+1 + t/2 if i ≥ 0; b−t i+2 + t/2 if i < 0. This function maps a location it+ t/2 + b to the corresponding place we need to carve out on B−, which is defined in Algorithm 2. These intervals are chosen so that the decaying density part of +1 can fit in, as we discussed in Section 3.2. Now we introduce the algorithm that reduces LWE to learning Massart PTFs. We similarly define the output distributions of the algorithms in the two cases as follows: Definition 3.7. Let DalternativePTF be mixture of Dalternativet,ϵ,ψ+,B+,δ and D alternative t,ϵ,ψ−,B−,δ with +1 and −1 labels and weights 1−η and η respectively. Similarly, letDnullPTF be mixture ofDnullt,ϵ,ψ+,B+,δ andD null t,ϵ,ψ−,B−,δ with +1 and −1 labels and weights 1− η and η respectively. The following observation is immediate from the algorithm. Observation 3.8. In the alternative (resp. null) hypothesis case, the output distribution of Algorithm 2, conditioned on not failing, is the same as m′ i.i.d. samples drawn from DalternativePTF (resp. D null PTF). Alternative Hypothesis Analysis We prove that there exists a degree-O(t/ϵ) PTF such that DalternativePTF is close to (in total variation distance) satisfying the O(η) Massart noise condition with respect to this PTF, and this PTF has small error with respect to DalternativePTF . Lemma 3.9. DalternativePTF is O(δ/m′) close in total variation distance to a distribution Dtruncated such that there is a degree-O(t/ϵ) PTF sign(p(x)) that: (i) E(x,y)∼Dtruncated [sign(p(x)) ̸= y] ≤ exp(−Ω(t4/ϵ2)), (ii) Dtruncated satisfies the O(η) Massart noise condition with respect to sign(p(x)). 
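As a small illustration of the map g above (ours, not from the paper), the sketch below shows that g affinely rescales the i-th interval of the −1 support onto the base interval [t/2, t/2 + ϵ]; the slots removed from B− in Algorithm 2 are then g-images of small neighbourhoods of the +1 interval endpoints, so that the decaying tails fall into empty space.

```python
import numpy as np

t, eps = 1.0, 0.05

def g(x):
    """The carving map defined above (for x outside [-1.5*t, 0.5*t]):
    g(i*t + t/2 + b) = b/(i+1) + t/2 for i >= 0, and (b - t)/(i+2) + t/2 for i < 0."""
    i = int(np.floor((x - t / 2) / t))
    b = x - i * t - t / 2
    return b / (i + 1) + t / 2 if i >= 0 else (b - t) / (i + 2) + t / 2

# The i-th "-1" interval [i*t + t/2, i*t + t/2 + (i+1)*eps] is rescaled onto the
# base interval [t/2, t/2 + eps]:
i = 7
lo, hi = i * t + t / 2, i * t + t / 2 + (i + 1) * eps
print(g(lo), g(hi))   # expect 0.5 and 0.55, i.e. t/2 and t/2 + eps
```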
Null Hypothesis Analysis The reader is referred to Lemma C.8 in Appendix C for the null hypothesis analysis. Algorithm 2 Reducing LWE to Learning PTFs with Massart Noise Inputs: m samples from an instance of LWE(m,Rn1 , {±1}n,Nσ,mod1). The input parameters are m′ ∈ N, t, ϵ ∈ R>0, δ ∈ (0, 1), and η > 0 a sufficiently small value. In addition, the parameters satisfy Condition 3.1. Output: m′ many samples in Rn × {±1} or FAIL. 1. We take ψ+ = 0, B+ = [0, ϵ], ψ− = t/2 and B− def = [t/2, t/2 + ϵ]− t ϵ−1⋃ i= t2ϵ−1 g([it− 2c′ϵ, it])− t ϵ−1⋃ i= t2ϵ−1 g([it+ (i+ 1)ϵ, it+ (i+ 1)ϵ+ 2c′ϵ]) − − t2ϵ−1⋃ i=− tϵ−1 g([it+ (i+ 1)ϵ− 2c′ϵ, it+ (i+ 1)ϵ])− − t2ϵ−1⋃ i=− tϵ−1 g([it, it+ 2c′ϵ]) . 2. Repeat the following m′ times. If at any point the algorithm attempts to use more than m LWE samples from the input, then output FAIL. (a) With probability 1 − η, repeat the following until Algorithm 1 accepts and output x′: run Algorithm 1 with the next unused LWE sample from the input and parameters t, ϵ, ψ = ψ+, B = B+, δ. Add (x′,+1) to the output samples. (b) With probability η, repeat the following until Algorithm 1 accepts and output x′: run Algorithm 1 with the next unused LWE sample from the input and parameters t, ϵ, ψ = ψ−, B = B−, δ. Add (x′,−1) to the output samples. Putting Everything Together Having reduced LWE to learning Massart PTFs, we can apply a Veronese mapping on the samples; this PTF becomes an LTF on the Veronese mapping. Since we use degree-O(t/ϵ) Veronese mapping, the dimension for the Massart LTF problem is N = (n+ 1)O(t/ϵ). 4 Discussion Our result rules out the existence of polynomial time algorithms achieving error smaller than Ω(η), where η is the upper bound on the noise rate, even of the optimal error is very small, assuming the subexponential time hardness of LWE. A technical open question is whether the constant factor in the Ω(η)-term of our lower bound can be improved to the value C = 1; this would match known algorithms exactly. (As mentioned in the introduction, such a sharp lower bound has been recently established in the SQ model [NT22], improving on [DK22].) It is also worth noting that our reduction rules out polynomial-time algorithms, but does not rule out, e.g., subexponential or even quasipolynomial time algorithms with improved error guarantees. We believe that obtaining stronger hardness for these problems would require substantially new ideas, as our runtime lower bounds are essentially the same as the best time lower bounds for learning in the (much stronger) agnostic noise model or in restricted models of computation (like SQ). This seems related to the requirement that our bounds require subexponential hardness of LWE in our assumption. As the strongest possible assumptions only allow us to prove quasi-polynomial lower bounds, any substantially weaker assumption will likely fail to prove super-polynomial ones. Acknowledgments Ilias Diakonikolas was supported by NSF Medium Award CCF-2107079, NSF Award CCF-1652862 (CAREER), a Sloan Research Fellowship, and a DARPA Learning with Less Labels (LwLL) grant. Daniel M. Kane was supported by NSF Medium Award CCF-2107547, NSF Award CCF-1553288 (CAREER), a Sloan Research Fellowship, and a grant from CasperLabs. Lisheng Ren was supported by NSF Award CCF-1652862 (CAREER) and a DARPA Learning with Less Labels (LwLL) grant.
1. What is the focus and contribution of the paper on learning halfspaces with Massart noise?
2. What are the strengths of the proposed approach, particularly in terms of cryptographic hardness results?
3. What are the weaknesses of the paper, especially regarding its relevance and significance compared to prior works?
4. Do you have any concerns about the reduction from continuous LWE to Massart halfspaces?
5. What are the limitations of the proposed method, and what are some potential algorithmic approaches that could bypass the SQ lower bounds for this task?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper provides cryptographic hardness results for the problem of learning halfspaces with Massart noise. The authors give a reduction from LWE to Massart halfspace learning and show that no polynomial-time algorithm for Massart halfspaces can achieve error better than the one achieved by existing algorithms. Prior to this work, there were only SQ lower bounds for this task.

Strengths And Weaknesses
The paper provides a cryptographic hardness result for the problem of learning Massart halfspaces. Such cryptographic hardness results have recently been obtained for various other learning tasks. Also, recent work has established hardness results for Massart halfspaces in the SQ model. The main result of this work is that, assuming the sub-exponential hardness of the LWE problem, there is no efficient learning algorithm for Massart halfspaces with error better than η even if the optimal classifier can achieve much smaller error. The paper is well written and the result is quite clear. The technical contribution of the paper is a reduction from continuous LWE to Massart halfspaces. In general, this result is an interesting addition to the literature on robust supervised statistics. I am in general positive towards acceptance; however, I would like to understand the importance of this hardness result, given the existing SQ lower bounds.

Questions
While I find the contribution nice, it is not clear to me how significant such a hardness result is, given the existing SQ hardness results for the Massart halfspaces problem. Which algorithmic approaches that bypass the SQ lower bounds could potentially be used to efficiently tackle this task? Is it only Gaussian elimination and some LLL-based algorithms?

Limitations
The authors address the limitations and propose future directions.
NIPS
Title Constrained episodic reinforcement learning in concave-convex and knapsack settings Abstract We propose an algorithm for tabular episodic reinforcement learning (RL) with constraints. We provide a modular analysis with strong theoretical guarantees for two general settings. First is the convex-concave setting: maximization of a concave reward function subject to constraints that expected values of some vector quantities (such as the use of unsafe actions) lie in a convex set. Second is the knapsack setting: maximization of reward subject to the constraint that the total consumption of any of the specified resources does not exceed specified levels during the whole learning process. Previous work in constrained RL is limited to linear expectation constraints (a special case of convex-concave setting), or focuses on feasibility question, or on single-episode settings. Our experiments demonstrate that the proposed algorithm significantly outperforms these approaches in constrained episodic benchmarks. 1 Introduction Standard reinforcement learning (RL) approaches seek to maximize a scalar reward (Sutton and Barto, 1998, 2018; Schulman et al., 2015; Mnih et al., 2015), but in many settings this is insufficient, because the desired properties of the agent behavior are better described using constraints. For example, an autonomous vehicle should not only get to the destination, but should also respect safety, fuel efficiency, and human comfort constraints along the way (Le et al., 2019); a robot should not only fulfill its task, but should also control its wear and tear, for example, by limiting the torque exerted on its motors (Tessler et al., 2019). Moreover, in many settings, we wish to satisfy such constraints already during training and not only during the deployment. For example, a power grid, an autonomous vehicle, or a real robotic hardware should avoid costly failures, where the hardware is damaged or humans are harmed, already during training (Leike et al., 2017; Ray et al., 2020). Constraints are also key in additional sequential decision making applications, such as dynamic pricing with limited supply (e.g., Besbes and Zeevi, 2009; Babaioff et al., 2015), scheduling of resources on a computer cluster (Mao et al., 2016), and imitation learning, where the goal is to stay close to an expert behavior (Syed and Schapire, 2007; Ziebart et al., 2008; Sun et al., 2019). In this paper we study constrained episodic reinforcement learning, which encompasses all of these applications. An important characteristic of our approach, distinguishing it from previous work (e.g., Altman, 1999; Achiam et al., 2017; Tessler et al., 2019; Miryoosefi et al., 2019; Ray et al., 2020), is our focus on efficient exploration, leading to reduced sample complexity. Notably, the modularity of 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. our approach enables extensions to more complex settings such as (i) maximizing concave objectives under convex constraints, and (ii) reinforcement learning under hard constraints, where the learner has to stop when some constraint is violated (e.g., a car runs out of gas). For these extensions, which we refer to as concave-convex setting and knapsack setting, we provide the first regret guarantees in the episodic setting (see related work below for a detailed comparison). 
Moreover, our guarantees are anytime, meaning that the constraint violations are bounded at any point during learning, even if the learning process is interrupted. This is important for those applications where the system continues to learn after it is deployed. Our approach relies on the principle of optimism under uncertainty to efficiently explore. Our learning algorithms optimize their actions with respect to a model based on the empirical statistics, while optimistically overestimating rewards and underestimating the resource consumption (i.e., overestimating the distance from the constraint). This idea was previously introduced in multiarmed bandits (Agrawal and Devanur, 2014); extending it to episodic reinforcement learning poses additional challenges since the policy space is exponential in the episode horizon. Circumventing these challenges, we provide a modular way to analyze this approach in the basic setting where both rewards and constraints are linear (Section 3) and then transfer this result to the more complicated concave-convex and knapsack settings (Sections 4 and 5). We empirically compare our approach with the only previous works that can handle convex constraints and show that our algorithmic innovations lead to significant empirical improvements (Section 6). Related work. Sample-efficient exploration in constrained episodic reinforcement learning has only recently started to receive attention. Most previous works on episodic reinforcement learning focus on unconstrained settings (Jaksch et al., 2010; Azar et al., 2017; Dann et al., 2017). A notable exception is the work of Cheung (2019) and Tarbouriech and Lazaric (2019). Both of these works consider vectorial feedback and aggregate reward functions, and provide theoretical guarantees for the reinforcement learning setting with a single episode, but require a strong reachability or communication assumption, which is not needed in the episodic setting studied here. Also, compared to Cheung (2019), our results for the knapsack setting allow for a significantly smaller budget, as we illustrate in Section 5. Moreover, our approach is based on a tighter bonus, which leads to a superior empirical performance (see Section 6). Recently, there have also been several concurrent and independent works on sample-efficient exploration for reinforcement learning with constraints (Singh et al., 2020; Efroni et al., 2020; Qiu et al., 2020; Ding et al., 2020; Zheng and Ratliff, 2020). Unlike our work, all of these approaches focus on linear reward objective and linear constraints and do not handle the concave-convex and knapsack settings that we consider. Constrained reinforcement learning has also been studied in settings that do not focus on sampleefficient exploration (Achiam et al., 2017; Tessler et al., 2019; Miryoosefi et al., 2019). Among these, only Miryoosefi et al. (2019) handle convex constraints, albeit without a reward objective (they solve the feasibility problem). Since these works do not focus on sample-efficient exploration, their performance drastically deteriorates when the task requires exploration (as we show in Section 6). Sample-efficient exploration under constraints has been studied in multi-armed bandits, starting with a line of work on dynamic pricing with limited supply (Besbes and Zeevi, 2009, 2011; Babaioff et al., 2015; Wang et al., 2014). A general setting for bandits with global knapsack constraints (bandits with knapsacks) was defined and solved by Badanidiyuru et al. (2018) (see also Ch. 
10 of Slivkins, 2019). Within this literature, the closest to ours is the work of Agrawal and Devanur (2014), who study bandits with concave objectives and convex constraints. Our work is directly inspired by theirs and lifts their techniques to the more general episodic reinforcement learning setting. 2 Model and preliminaries In episodic reinforcement learning, a learner repeatedly interacts with an environment across K episodes. The environment includes the state space S , the action spaceA, the episode horizon H , and the initial state s0.1 To capture constrained settings, the environment includes a set D of d resources where each i ∈ D has a capacity constraint ξ(i) ∈ R+. The above are fixed and known to the learner. 1A fixed and known initial state is without loss of generality. In general, there is a fixed but unknown distribution ρ from which the initial state is drawn before each episode. We modify the MDP by adding a new state s0 as initial state, such that the next state is sampled from ρ for any action. Then ρ is “included” within the transition probabilities. The extra state s0 does not contribute any reward and does not consume any resources. Constrained Markov decision process. We work with MDPs that have resource consumption in addition to rewards. Formally, a constrained MDP (CMDP) is a tripleM = (p, r, c) that describes transition probabilities p : S ×A → ∆(S), rewards r : S ×A → [0, 1], and resource consumption c : S ×A → [0, 1]d. For convenience, we denote c(s, a, i) = ci(s, a). We allow stochastic rewards and consumptions, in which case r and c refer to the conditional expectations, conditioned on s and a (our definitions and algorithms are based on this conditional expectation rather than the full conditional distribution). We use the above definition to describe two kinds of CMDPs. The true CMDPM? = (p?, r?, c?) is fixed but unknown to the learner. Selecting action a at state s results in rewards and consumptions drawn from (possibly correlated) distributions with means r?(s, a) and c?(s, a) and supports in [0, 1] and [0, 1]d respectively. Next states are generated from transition probabilities p?(s, a). The second kind of CMDP arises in our algorithm, which is model-based and at episode k uses a CMDPM(k). Episodic reinforcement learning protocol. At episode k ∈ [K], the learner commits to a policy πk = (πk,h) H h=1 where πk,h : S → ∆(A) specifies how to select actions at step h for every state. The learner starts from state sk,1 = s0. At step h = 1, . . . ,H , she selects an action ak,h ∼ πk,h(sk,h). The learner earns reward rk,h and suffers consumption ck,h, both drawn from the true CMDPM? on state-action pair (sk,h, ak,h) as described above, and transitions to state sk,h+1 ∼ p?(sk,h, ak,h). Objectives. In the basic setting (Section 3), the learner wishes to maximize reward while respecting the consumption constraints in expectation by competing favorably against the following benchmark: max π Eπ,p ? [ H∑ h=1 r? ( sh, ah )] s.t. ∀i ∈ D : Eπ,p ? [ H∑ h=1 c? ( sh, ah, i )] ≤ ξ(i), (1) where Eπ,p denotes the expectation over the run of policy π according to transitions p, and sh, ah are the induced random state-action pairs. We denote by π? the policy that maximizes this objective. For the basic setting, we track two performance measures: reward regret compares the learner’s total reward to the benchmark and consumption regret bounds excess in resource consumption: REWREG(k) := Eπ ?,p? [ H∑ h=1 r? ( sh, ah )] − 1 k k∑ t=1 Eπt,p ? [ H∑ h=1 r? 
( sh, ah )] , (2) CONSREG(k) := max i∈D (1 k k∑ t=1 Eπt,p ? [ H∑ h=1 c? ( sh, ah, i )] − ξ(i) ) . (3) Our guarantees are anytime, i.e., they hold at any episode k and not only after the last episode. We also consider two extensions. In Section 4, we consider a concave reward objective and convex consumption constraints. In Section 5, we require consumption constraints to be satisfied with high probability under a cumulative budget across all K episodes, rather than in expectation in a single episode. Tabular MDPs. We assume that the state space S and the action space A are finite (tabular setting). We construct standard empirical estimates separately for each state-action pair (s, a), using the learner’s observations up to and not including a given episode k. Eqs. (4–7) define sample counts, empirical transition probabilities, empirical rewards, and empirical resource consumption.2 Nk(s, a) = max { 1, ∑ t∈[k−1], h∈[H] 1{st,h = s, at,h = a} } , (4) p̂k(s ′|s, a) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] 1{st,h = s, at,h = a, st,h+1 = s′}, (5) r̂k(s, a) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] rt,h · 1{st,h = s, at,h = a}, (6) ĉk(s, a, i) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] ct,h,i · 1{st,h = s, at,h = a} ∀i ∈ D. (7) 2The max operator in Eq. (4) is to avoid dividing by 0. Preliminaries for theoretical analysis. The Q-function is a standard object in RL that tracks the learner’s expected performance if she starts from state s ∈ S at step h, selects action a ∈ A, and then follows a policy π under a model with transitions p for the remainder of the episode. We parameterize it by the objective function m : S ×A → [0, 1], which can be either a reward, i.e., m(s, a) = r(s, a), or consumption of some resource i ∈ D, i.e., m(s, a) = c(s, a, i). (For the unconstrained setting, the objective is the reward.) The performance of the policy in a particular step h is evaluated by the value function V which corresponds to the expected Q-function of the selected action (where the expectation is taken over the possibly randomized action selection of π). The Q and value functions can be both recursively defined by dynamic programming: Qπ,pm (s, a, h) = m(s, a) + ∑ s′∈S p(s′|s, a)V π,pm (s′, h+ 1), V π,pm (s, h) = Ea∼π(·|s) [ Qπ,pm (s, a, h) ] and V π,pm (s,H + 1) = 0. By slight abuse of notation, for m ∈ {r} ∪ {ci}i∈D, we denote by m? ∈ {r?} ∪ {c?i }i∈D the corresponding objectives with respect to the rewards and consumptions of the true CMDPM?. For objectives m? and transitions p?, the above are the Bellman equations of the system (Bellman, 1957). Estimating the Q-function based on the model parameters p and m rather than the ground truth parameters p? and m? introduces errors. These errors are localized across stages by the notion of Bellman error which contrasts the performance of policy π starting from stage h under the model parameters to a benchmark that behaves according to the model parameters starting from the next stage h+ 1 but uses the true parameters of the system in stage h. More formally, for objective m: BELLπ,pm (s, a, h) = Q π,p m (s, a, h)− ( m?(s, a) + ∑ s′∈S p?(s′|s, a)V π,pm (s′, h+ 1) ) . (8) Note that when the CMDP isM? (m = m?, p = p?), there is no mismatch and BELLπ,p ? m? = 0. 3 Warm-up algorithm and analysis in the basic setting In this section, we introduce a simple algorithm that allows to simultaneously bound reward and consumption regrets for the basic setting introduced in the previous section. 
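To fix notation before the algorithm, the following minimal sketch (ours; the paper's released implementation may differ) spells out the tabular bookkeeping behind Eqs. (4)–(7) and finite-horizon policy evaluation via the dynamic-programming recursion above, for a generic objective m that can be either the reward or one resource's consumption.

```python
import numpy as np

# Minimal sketch (ours; variable names are illustrative) of the tabular bookkeeping in
# Eqs. (4)-(7) and of finite-horizon policy evaluation via the recursion above.

class EmpiricalCMDP:
    def __init__(self, S, A, d):
        self.count = np.zeros((S, A))             # visit counts N_k(s, a), Eq. (4)
        self.trans_count = np.zeros((S, A, S))    # counts of observed (s, a, s') triples
        self.reward_sum = np.zeros((S, A))        # summed observed rewards
        self.cons_sum = np.zeros((S, A, d))       # summed observed consumptions

    def update(self, trajectory):
        """trajectory: list of (s, a, r, c, s_next) tuples from one episode."""
        for s, a, r, c, s_next in trajectory:
            self.count[s, a] += 1
            self.trans_count[s, a, s_next] += 1
            self.reward_sum[s, a] += r
            self.cons_sum[s, a] += c

    def estimates(self):
        n = np.maximum(self.count, 1.0)           # the max{1, .} of Eq. (4)
        p_hat = self.trans_count / n[:, :, None]  # Eq. (5)
        r_hat = self.reward_sum / n               # Eq. (6)
        c_hat = self.cons_sum / n[:, :, None]     # Eq. (7)
        return p_hat, r_hat, c_hat

def evaluate_policy(pi, p, m, H):
    """Q and V for objective m(s, a) (a reward or one resource's consumption) under
    transitions p, for a step-dependent policy pi[h, s, a]; V[H] = 0 as above."""
    S, A = m.shape
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))
    for h in reversed(range(H)):
        Q[h] = m + p @ V[h + 1]                   # m(s,a) + sum_s' p(s'|s,a) V(s', h+1)
        V[h] = np.sum(pi[h] * Q[h], axis=1)       # E_{a ~ pi(.|s)}[ Q(s, a, h) ]
    return Q, V
```

The same evaluate_policy routine applies with m set to the reward or to any resource's consumption, which is exactly how the Q- and value functions are parameterized above.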
Even in this basic setting, we provide the first sample-efficient guarantees in constrained episodic reinforcement learning.3 The modular analysis of the guarantees also allows us to subsequently extend (in Sections 4 and 5) the algorithm and guarantees to the more general concave-convex and knapsack settings. Our algorithm. At episode k, we construct an estimated CMDPM(k) = ( p(k), r(k), c(k) ) based on the observations collected so far. The estimates are bonus-enhanced (formalized below) to encourage more targeted exploration. Our algorithm CONRL selects a policy πk by solving the following constrained optimization problem which we refer to as BASICCONPLANNER(p(k), r(k), c(k)): max π Eπ,p (k) [ H∑ h=1 r(k) ( sh, ah )] s.t. ∀i ∈ D : Eπ,p (k) [ H∑ h=1 c(k) ( sh, ah, i )] ≤ ξ(i). The above optimization problem is similar to the objective (1) but uses the estimated model instead of the (unknown to the learner) true model. We also note that this optimization problem can be optimally solved as it is a linear program on the occupation measures (Puterman, 2014), i.e., setting as variables the probability of each state-action pair and imposing flow conservation constraints with respect to the transitions. This program is described in Appendix A.1. Bonus-enhanced model. A standard approach to implement the principle of optimism under uncertainty is to introduce, at each episode k, a bonus term b̂k(s, a) that favors under-explored actions. Specifically, we add this bonus to the empirical rewards (6), and subtract it from the consumptions (7): r(k)(s, a) = r̂k(s, a) + b̂k(s, a) and c(k)(s, a, i) = ĉk(s, a, i)− b̂k(s, a) for each resource i. 3We refer the reader to the related work (in Section 1) for discussion on concurrent and independent papers. Unlike our results, these papers do not extend to either concave-convex or knapsack settings. Following the unconstrained analogues (Azar et al., 2017; Dann et al., 2017), we define the bonus as: b̂k(s, a) = H √ 2 ln ( 8SAH(d+ 1)k2/δ) Nk(s, a) , (9) where δ > 0 is the desired failure probability of the algorithm and Nk(s, a) is the number of times (s, a) pair is visited, c.f. (4), S = |S|, and A = |A|. Thus, under-explored actions have a larger bonus, and therefore appear more appealing to the planner. For estimated transition probabilities, we just use the empirical averages (5): p(k)(s′|s, a) = p̂(s′|s, a). Valid bonus and Bellman-error decomposition. For a bonus-enhanced model to achieve effective exploration, the resulting bonuses need to be valid, i.e., they should ensure that the estimated rewards overestimate the true rewards and the estimated consumptions underestimate the true consumptions. Definition 3.1. A bonus bk : S ×A → R is valid if, ∀s ∈ S, a ∈ A, h ∈ [H],m ∈ {r} ∪ {ci}i∈D:∣∣∣(m̂k(s, a)−m?(s, a))+ ∑ s′∈S ( p̂k(s ′|s, a)− p?(s′|s, a) ) V π ?,p? m? (s ′, h+ 1) ∣∣∣ ≤ bk(s, a). By classical concentration bounds (Appendix B.1), the bonus b̂k of Eq. (9) satisfies this condition: Lemma 3.2. With probability 1− δ, the bonus b̂k(s, a) is valid for all episodes k simultaneously. Our algorithm solves the BASICCONPLANNER optimization problem based on a bonus-enhanced model. When the bonuses are valid, we can upper bound the per-episode regret by the expected sum of Bellman errors across steps. This is the first part in classical unconstrained analyses and the following proposition extends this decomposition to constrained episodic reinforcement learning. 
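Before stating that proposition, here is a sketch (ours, not the authors' code) of one round of the approach just described: the bonus of Eq. (9), and BasicConPlanner written as a linear program over occupancy measures ρ(s, a, h). Unlike the paper, which folds the initial distribution into the transitions (footnote 1), the sketch pins the first step to the known initial state s0; basic_con_planner and the other names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def bonus(N, H, S, A, d, k, delta):
    """Exploration bonus of Eq. (9); N is the (S, A) array of visit counts."""
    return H * np.sqrt(2.0 * np.log(8 * S * A * H * (d + 1) * k ** 2 / delta) / N)

def basic_con_planner(p, r, c, xi, s0, H):
    """BasicConPlanner as an occupancy-measure LP (cf. Appendix A.1; this sketch pins
    step 1 to the known initial state s0 instead of folding it into the transitions).
    p: (S, A, S) transitions, r: (S, A) rewards, c: (S, A, d) consumptions,
    xi: (d,) per-episode capacities.  Returns rho with shape (H, S, A)."""
    S, A = r.shape
    d = c.shape[2]
    nvar = H * S * A
    idx = lambda h, s, a: (h * S + s) * A + a

    obj = -np.tile(r.ravel(), H)                       # maximize expected reward
    A_ub = np.stack([np.tile(c[:, :, i].ravel(), H) for i in range(d)])
    rows, rhs = [], []
    for s in range(S):                                 # sum_a rho(s, a, 1) = 1{s = s0}
        row = np.zeros(nvar)
        row[[idx(0, s, a) for a in range(A)]] = 1.0
        rows.append(row)
        rhs.append(float(s == s0))
    for h in range(H - 1):                             # flow conservation at every step
        for s2 in range(S):
            row = np.zeros(nvar)
            row[[idx(h + 1, s2, a) for a in range(A)]] = 1.0
            for s in range(S):
                for a in range(A):
                    row[idx(h, s, a)] -= p[s, a, s2]
            rows.append(row)
            rhs.append(0.0)
    res = linprog(obj, A_ub=A_ub, b_ub=np.asarray(xi, dtype=float),
                  A_eq=np.array(rows), b_eq=np.array(rhs),
                  bounds=[(0, 1)] * nvar, method="highs")
    return res.x.reshape(H, S, A)

# Bonus-enhanced model (rewards up, consumptions down), then plan; p_hat, r_hat,
# c_hat, N_k, xi, s0 are the hypothetical quantities from the previous sketch:
# b = bonus(N_k, H, S, A, d, k, delta)
# rho = basic_con_planner(p_hat, r_hat + b, c_hat - b[:, :, None], xi, s0, H)
```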
The proof uses the so-called simulation lemma (Kearns and Singh, 2002) and is provided in Appendix B.3. Proposition 3.3. If b̂k(s, a) is valid for all episodes k simultaneously then the per-episode reward and consumption regrets can be upper bounded by the expected sum of Bellman errors (8): Eπ ?,p? [ H∑ h=1 r? ( sh, ah )] − Eπk,p ? [ H∑ h=1 r? ( sh, ah )] ≤ Eπk [ H∑ h=1 ∣∣∣BELLπk,p(k)r(k) (sh, ah, h)∣∣∣] (10) ∀i ∈ D : Eπk,p ? [ H∑ h=1 c? ( sh, ah, i )] − ξ(i) ≤ Eπk [ H∑ h=1 ∣∣∣BELLπk,p(k) c (k) i ( sh, ah, h )∣∣∣]. (11) Final guarantee. One difficulty with directly bounding the Bellman error is that the value function is not independent of the draws forming r(k)(s, a), c(k)(s, a), and p(k)(s′|s, a). Hence we cannot apply Hoeffding inequality directly. While Azar et al. (2017) propose a trick to get an O( √ S) bound on Bellman error in unconstrained settings, the trick relies on the crucial property of Bellman optimality: for an unconstrained MDP, its optimal policy π? satisfies the condition, V π ? r? (s, h) ≥ V πr?(s, h) for all s, h, π (i.e., π? is optimal at any state). However, when constraints exist, the optimal policy does not satisfy the Bellman optimality property. Indeed, we can only guarantee optimality with respect to the initial state distribution, i.e., V π ? r? (s0, 1) ≥ V πr?(s0, 1) for any π, but not everywhere else. This illustrates a fundamental difference between constrained MDPs and unconstrained MDPs. Thus we cannot directly apply the trick from Azar et al. (2017). Instead we follow an alternative approach of bounding the value function via an -net over the possible values. This analysis leads to a guarantee that is weaker by a factor of √ S than the unconstrained results. The proof is provided in Appendix B.6. Theorem 3.4. There exists an absolute constant c ∈ R+ such that, with probability at least 1− 3δ, reward and consumption regrets are both upper bounded by: c√ k · S √ AH3 · √ ln(k) ln ( SAH(d+ 1)k/δ ) + ck · S 3/2AH2 √ ln ( 2SAH(d+ 1)k/δ ) . Comparison to single-episode results. In single-episode setting, Cheung (2019) achieves √ S dependency under the further assumption that the transitions are sparse, i.e., ‖p?(s, a)‖0 S for all (s, a). We do not make such assumptions on the sparsity of the MDP and we note that the regret bound of Cheung (2019) scales linearly in S when ‖p?(s, a)‖0 = Θ(S). Also, the single-episode setting requires a strong reachability assumption, not present in the episodic setting. Remark 3.5. The aforementioned regret bound can be turned into a PAC bound of Õ ( S2AH3 2 ) by taking the uniform mixture of policies π1, π2, . . . , πk. 4 Concave-convex setting We now extend the algorithm and guarantees derived for the basic setting to when the objective is concave function of the accumulated reward and the constraints are expressed as a convex function of the cumulative consumptions. Our approach is modular, seamlessly building on the basic setting. Setting and objective. Formally, there is a concave reward-objective function f : R → R and a convex consumption-objective function g : Rd → R; the only assumption is that these functions are L-Lipschitz for some constant L, i.e., |f(x)−f(y)| ≤ L|x−y| for any x, y ∈ R, and |g(x)−g(y)| ≤ L‖x− y‖1 for any x, y ∈ Rd. Analogous to (1), the learner wishes to compete against the following benchmark which can be viewed as a reinforcement learning variant of the benchmark used by Agrawal and Devanur (2014) in multi-armed bandits: max π f ( Eπ,p ? [ H∑ h=1 r? ( sh, ah )]) s.t. g ( Eπ,p ? [ H∑ h=1 c? 
( sh, ah )]) ≤ 0. (12) The reward and consumption regrets are therefore adapted to: CONVEXREWREG(k) := f ( Eπ ?,p? [ H∑ h=1 r? ( sh, ah )]) − f (1 k k∑ t=1 Eπt,p ? [ H∑ h=1 r? ( sh, ah )]) , CONVEXCONSREG(k) := g (1 k k∑ t=1 Eπt,p ? [ H∑ h=1 c? ( sh, ah )]) . Our algorithm. As in the basic setting, we wish to create a bonus-enhanced model and optimize over it. To model the transition probabilites, we use empirical estimates p(k) = p̂k of Eq. (5) as before. However, since reward and consumption objectives are no longer monotone in the accumulated rewards and consumption respectively, it does not make sense to simply add or subtract b̂k (defined in Eq. 9) as we did before. Instead we compute the policy πk of episode k together with the model by solving the following optimization problem which we call CONVEXCONPLANNER: max π max r(k)∈[r̂k±b̂k] f ( Eπ,p (k) [ H∑ h=1 r(k) ( sh, ah )]) s.t. min c(k)∈[ĉk±b̂k·1] g ( Eπ,p (k) [ H∑ h=1 c(k) ( sh, ah )]) ≤ 0. The above problem is convex in the occupation measures,4 i.e., the probability ρ(s, a, h) that the learner is at state-action-step (s, a, h) — c.f. Appendix A.2 for further discussion. max ρ max r∈[r̂k±b̂k] f ( ∑ s,a,h ρ(s, a, h)r(s, a) ) s.t. min c∈[ĉk±b̂k·1] g ( ∑ s,a,h ρ(s, a, h)c(s, a) ) ≤ 0 ∀s′, h : ∑ a ρ(s′, a, h+ 1) = ∑ s,a ρ(s, a, h)p̂k(s ′|s, a) ∀s, a, h : 0 ≤ ρ(s, a, h) ≤ 1 and ∑ s,a ρ(s, a, h) = 1. Guarantee for concave-convex setting. To extend the guarantee of the basic setting to the concaveconvex setting, we face an additional challenge: it is not immediately clear that the optimal policy π? is feasible for the CONVEXCONPLANNER program because CONVEXCONPLANNER is defined with respect to the empirical transition probabilities p(k).5 Moreover, whenH > 1, it is not straightforward to show that objective in the used model is always greater than the one in the true model as the used 4Under mild assumptions, this program can be solved in polynomial time similar to its bandit analogue of Lemma 4.3 in (Agrawal and Devanur, 2014). We note that in the basic setting, it reduces to just a linear program. 5Note that in multi-armed bandit concave-convex setting (Agrawal and Devanur, 2014), proving feasibility of the best arm is straightforward as there are no transitions. model transitions p(k)(s, a) can lead to different states than the ones encountered in the true model.6 We deal with both of these issues by introducing a novel application of the mean-value theorem to show that π? is indeed a feasible solution of that program and create a similar regret decomposition to Proposition 3.3 (see Proposition C.1 and more discussion in Appendix C.1); this allows us to plug in the results developed for the basic setting. The full proof is provided in Appendix C. Theorem 4.1. Let L be the Lipschitz constant for f and g and let REWREG and CONSREG be the reward and consumption regrets for the basic setting (Theorem 3.4) with the failure probability δ. With probability 1 − δ, our algorithm in the concave-convex setting has reward and consumption regret upper bounded by L · REWREG and Ld · CONSREG respectively. The linear dependence on d in the consumption regret above comes from the fact that we assume g is Lipschitz under `1 norm. 5 Knapsack setting Our last technical section extends the algorithm and guarantee of the basic setting to scenarios where the constraints are hard which is in accordance with most of the literature on bandits with knapsacks. 
The goal here is to achieve aggregate reward regret that is sublinear in the time horizon (in our case, the number of episodes K), while also respecting budget constraints for as small budgets as possible. We derive guarantees in terms of reward regret, as defined previously, and then argue that our guarantee extends to the seemingly stronger benchmark of the best dynamic policy. Setting and objective. Each resource i ∈ D has an aggregate budget Bi that the learner should not exceed over K episodes. Unlike the basic setting, where we track the consumption regret, here we view this as a hard constraint. As in most works on bandits with knapsacks, the algorithm is allowed to use a “null action” for an episode, i.e., an action that yields a zero reward and consumption when selected at the beginning of an episode. The learner wishes to maximize her aggregate reward while respecting these hard constraints. We reduce this problem to a specific variant of the basic problem (1) with ξ(i) = BiK . We modify the solution to (1) to take the null action if any constraint is violated and call the resulting benchmark π?. Note that π? satisfies constraints in expectation. At the end of this section, we explain how our algorithm also competes against a benchmark that is required to respect constraints deterministically (i.e., with probability one across all episodes). Our algorithm. In the basic setting of Section 3, we showed a reward regret guarantee and a consumption regret guarantee, proving that the average constraint violation is O(1/ √ K). Now we seek a stronger guarantee: the learned policy needs to satisfy budget constraints with high probability. Our algorithm optimizes a mathematical program KNAPSACKCONPLANNER (13) that strengthens the consumption constraints: max π Eπ,p (k) [ H∑ h=1 r(k) ( sh, ah )] s.t. ∀i ∈ D : Eπ,p (k) [ H∑ h=1 c(k) ( sh, ah, i )] ≤ (1− )Bi K . (13) In the above, p(k), r(k), c(k) are exactly as in the basic setting and > 0 is instantiated in the theorem below. Note that the program (13) is feasible thanks to the existence of the null action. The following mixture policy induces a feasible solution: with probability 1 − , we play the optimal policy π? for the entire episode; with probability , we play the null action for the entire episode. Note that the above program can again be cast as a linear program in the occupancy measure space — c.f. Appendix A.3 for further discussion. Guarantee for knapsack setting. The guarantee of the basic setting on this tighter mathematical program seamlessly transfers to a reward guarantee that does not violate the hard constraints. Theorem 5.1. Assume that miniBi ≤ KH , i.e., constraints are non-vacuous. Let AGGREG(δ) be a bound on the aggregate (across episodes) reward or consumption regret for the soft-constraint setting (Theorem 3.4) with the failure probability δ. Let = AGGREG(δ)mini Bi . If miniBi > AGGREG(δ) then, with probability 1− δ, the reward regret in the hard-constraint setting is at most 2HAGGREG(δ)mini Bi and constraints are not violated. 6Again, this is not an issue in multi-armed bandits. The above theorem implies that the aggregate reward regret is sublinear in K as long as miniBi HAGGREG(δ). The analysis in the above main theorem (provided in Appendix D) is modular in the sense that it leverages the CONRL’s performance to solve (13) in a black-box manner. Smaller AGGREG(δ) from the basic soft-constraint setting immediately translates to smaller reward regret and smaller budget regime (i.e., miniBi can be smaller). 
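For concreteness, the tightening in program (13) together with the choice of ε in Theorem 5.1 amounts to a one-line wrapper around the planner sketched earlier (basic_con_planner and aggreg are the hypothetical names from that sketch, not the authors' code):

```python
def knapsack_con_planner(p, r, c, B, K, s0, H, aggreg):
    """Sketch (ours) of planning with the tightened capacities of program (13).
    B: total budgets B_i over all K episodes; aggreg: a bound AggReg(delta) on the
    aggregate soft-constraint regret.  Theorem 5.1 requires min(B) > aggreg."""
    eps = aggreg / min(B)                       # the epsilon chosen in Theorem 5.1
    xi = [(1.0 - eps) * b / K for b in B]       # per-episode capacities (1 - eps) B_i / K
    return basic_con_planner(p, r, c, xi, s0, H)
```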
In particular, using the AGGREG(δ) bound of Theorem 3.4, the reward regret is sublinear as long as miniBi = Ω( √ K). In contrast, previous work of Cheung (2019) can only deal with larger budget regime, i.e., miniBi = Ω(K2/3). Although the guarantees are not directly comparable as the latter is for the single-episode setting, which requires further reachability assumptions, the budget we can handle is significantly smaller and in the next section we show that our algorithm has superior empirical performance in episodic settings even when such assumptions are granted. Dynamic policy benchmark. The common benchmark used in bandits with knapsacks is not the best stationary policy π? that respects constraints in expectation but rather the best dynamic policy (i.e., a policy that makes decisions based on the history) that never violates hard constraints deterministically. In Appendix D, we show that the optimal dynamic policy (formally defined there) has reward less than policy π? (informally, this is because π? respects constraints in expectation while the dynamic policy has to satisfy constraints deterministically) and therefore the guarantee of Theorem 5.1 also applies against the optimal dynamic policy. 6 Empirical comparison to other concave-convex approaches In this section, we evaluate the performance of CONRL against previous approaches.7 Although our CONPLANNER (see Appendix A) can be solved exactly using linear programming (Altman, 1999), in our experiments, it suffices to use Lagrangian heuristic, denoted as LAGRCONPLANNER (see Appendix E.1). This Lagrangian heuristic only needs a planner for the unconstrained RL task. We consider two unconstrained RL algorithms as planners: value iteration and a model-based Advantage Actor-Critic (A2C) (Mnih et al., 2016) (based on fictitious samples drawn from the model provided as an input). The resulting variants of LAGRCONPLANNER are denoted CONRL-VALUE ITERATION 7Code is available at https://github.com/miryoosefi/ConRL and CONRL-A2C. We run our experiments on two grid-world environments Mars rover (Tessler et al., 2019) and Box (Leike et al., 2017).8 Mars rover. The agent must move from the initial position to the goal without crashing into rocks. If the agent reaches the goal or crashes into a rock it will stay in that cell for the remainder of the episode. Reward is 1 when the agent reaches the goal and 1/H afterwards. Consumption is 1 when the agent crashes into a rock and 1/H afterwards. The episode horizon H is 30 and the agent’s action is perturbed with probability 0.1 to a random action. Box. The agent must move a box from the initial position to the goal while avoiding corners (cells adjacent to at least two walls). If the agent reaches the goal it stays in that cell for the remainder of the episode. Reward is 1 when agent reaches the goal for the first time and 1/H afterwards; consumption is 1/H whenever the box is in a corner. Horizon H is 30 and the agent’s action is perturbed with probability 0.1 to a random action. We compare CONRL to previous constrained approaches (derived for either episodic or single-episode settings) in Figure 1. We keep track of three metrics: episode-level reward and consumption (the first two rows) and cumulative consumption (the third row). Episode-level metrics are based on the most recent episode in the first two columns, i.e., we plot Eπk [ ∑H h=1 r ? h] and Eπk [ ∑H h=1 c ? h]. In the third column, we plot the average across episodes so far, i.e., 1k ∑k t=1 Eπt [ ∑H h=1 r ? 
h] and 1 k ∑k t=1 Eπt [ ∑H h=1 c ? h], and we use the log scale for the x-axis. The cumulative consumption is∑k t=1 ∑H h=1 ct,h in all columns. See Appendix E for further details about experiments. Episodic setting. We first compare our algorithms to two episodic RL approaches: APPROPO (Miryoosefi et al., 2019) and RCPO (Tessler et al., 2019). We note that none of the previous approaches in this setting address sample-efficient exploration. In addition, most of them are limited to linear constraints, with the exception of APPROPO (Miryoosefi et al., 2019), which can handle general convex constraints.9 Both APPROPO and RCPO (used as a baseline by Miryoosefi et al., 2019) maintain and update a weight vectorλ, used to derive reward for an unconstrained RL algorithm, which we instantiate as A2C. APPROPO focuses on the feasibility problem, so it requires to specify a lower bound on the reward, which we set to 0.3 for Mars rover and 0.1 for Box. In the first two columns of Figure 1 we see that both versions of CONRL are able to solve the constrained RL task with a much smaller number of trajectories (see top two rows), and their overall consumption levels are substantially lower (the final row) than those of the previous approaches. Single-episode setting. Closest to our work is TFW-UCRL2 (Cheung, 2019), which is based on UCRL (Jaksch et al., 2010). However, that approach focuses on the single-episode setting and requires a strong reachability assumption. By connecting terminal states of our MDP to the intial state, we reduce our episodic setting to single-episode setting in which we can compare CONRL against TFW-UCRL2. Results for Mars rover are depicted in last column of Figure 1.10 Again, both versions of CONRL find the solution with a much smaller number of trajectories (note the log scale on the x-axis) and their overall consumption levels are much lower than those of TFW-UCRL2. This suggests that TFW-UCRL2 might be impractical in (at least some) episodic settings. 7 Conclusions In this paper we study two types of constraints in the framework of constrained tabular episodic reinforcement learning: concave rewards and convex constraints, and knapsacks constraints. Our algorithms achieve near-optimal regret in both settings, and experimentally we show that our approach outperforms prior works on constrained reinforcement learning. Regarding future work, it would be interesting to extend our framework to continuous state and action spaces. Potential directions include extensions to Lipschitz MDPs (Song and Sun, 2019) and MDPs with linear parameterization (Jin et al., 2019) where optimism-based exploration algorithms exist under the classic reinforcement learning setting without constraints. 8We are not aware of any benchmarks for convex/knapsack constraints. For transparency, we compare against prior works handling concave-convex or knapsack settings on established benchmarks for the linear case. 9In addition to that, trust region methods like CPO (Achiam et al., 2017) address a more restrictive setting and require constraint satisfaction at each iteration; for this reason, they are not included in the experiments. 10Due to a larger state space, it was computationally infeasible to run TFW-UCRL2 in the Box environment. Broader Impact Our work focuses on the theoretical foundations of reinforcement learning by addressing the important challenge of constrained optimization in reinforcement learning. 
We strongly believe that understanding the theoretical underpinnings of the main machine learning paradigms is essential and can guide principled and effective deployment of such methods. Beyond its theoretical contribution, our work may help the design of reinforcement learning algorithms that go beyond classical digital applications of RL (board games and video games) and extend to settings with complex and often competing objectives. We believe that constraints constitute a fundamental limitation in extending RL beyond the digital world, as they exist in a wide variety of sequential decision-making applications (robotics, medical treatment, education, advertising). Our work provides a paradigm to design algorithms with efficient exploration despite the presence of constraints. That said, one needs to ensure that an algorithm offers acceptable quality in applications. Any exploration method that does not rely on off-policy samples will inevitably violate constraints sometimes in order to learn. In some applications, this is totally acceptable: a car staying out of fuel in rare circumstances is not detrimental, an advertiser exhausting their budget some month is even less significant, a student dissatisfaction in an online test is unpleasant but probably acceptable. On the other hand, if the constraint violation involves critical issues like drug recommendation for severe diseases or decisions by self-driving cars that can cause physical harm to passengers then the algorithm needs to be carefully reviewed. It may be necessary to “prime” the algorithm with some data collected in advance (however costly it may be). One may need to make a judgement call on whether the ethical or societal standards are consistent with deploying an algorithm in a particular setting. To summarize, our work is theoretical in nature and makes significant progress on a problem at the heart of RL. It has the potential to guide deployment of constrained RL methods in many important applications and tackle a fundamental bottleneck in deploying RL beyond the digital world. However, an application needs to be carefully reviewed before deployment. Acknowledgments and Disclosure of Funding The authors would like to thank Rob Schapire for useful discussions that helped in the initial stages of this work. Part of the work was done when WS was at Microsoft Research NYC.
1. What is the main contribution of the paper regarding constrained MDPs? 2. What are the strengths of the proposed framework, particularly in exploration and mathematical reasoning? 3. What are the weaknesses of the paper, especially regarding experimentation and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a theoretical RL framework to solve constrained MDP problems in the tabular, episodic setting. The distinguishing characteristic is its focus on sample-efficient exploration, which has rarely been connected with constrained MDPs before. Strengths Overall the paper has a strong motivation and provides thorough mathematical reasoning for the proposed model. This could be the start of a novel research direction for solving constrained MDPs. Weaknesses My major concerns: 1. Line 248 suggests that linear programming could be used in ConPlanner, but the experiments instead test different unconstrained RL planners under a Lagrangian heuristic. I think the paper should have compared results across different constrained-problem solvers. 2. The paper gives a formulation for the knapsack setting but does not test it on any real-world problem. While the theoretical proofs are plentiful, the paper provides no empirical support for this setting, which makes the method less intuitive. 3. Although the paper claims to compare the proposed framework with other concave-convex approaches, the problems used in the experiments do not appear to be concave-convex. Grid-world problems such as the Mars rover used in the paper have linear constraints rather than convex ones. On the other hand, line 270 notes that most previous methods are limited to linear constraints, with one exception. Understandably, it is hard to find previous approaches or benchmarks for comparison under concave-convex settings, but the paper should state this plainly. 4. The vanilla reward objective is simply the expected sum of rewards, i.e., f is nothing but the identity function. Is there any scenario where a more complex f is needed? The paper does not seem to offer such examples in the experimental part. 5. In the related work section, the paper mentions that the closest works are (Singh et al., 2020; Efroni et al., 2020; Qiu et al., 2020; Ding et al., 2020). As pointed out, one key difference is that these concurrent works are restricted to the linear-objective, linear-constraint case. A more detailed comparison would be welcome. For example, what does the result look like when this paper's setting reduces to linear objectives and linear constraints? Is there any possibility of extending the concurrent papers' analysis methodology to the concave-convex case? Some minor problems: lines 126 & 178: it would be better if the paper added a brief statement of where modularity comes into play for both the basic and the concave-convex settings. Line 202: in Theorem 4.1 it would be better to explain what Ld is. Line 206: it would be clearer if the paper briefly explained how sublinear aggregate reward regret relates to the policy being optimal. Line 257: there should not be any reward when the rover crashes into a rock; I believe this is a typo. Line 269: a typo, “previous”.
NIPS
Title Constrained episodic reinforcement learning in concave-convex and knapsack settings Abstract We propose an algorithm for tabular episodic reinforcement learning (RL) with constraints. We provide a modular analysis with strong theoretical guarantees for two general settings. First is the convex-concave setting: maximization of a concave reward function subject to constraints that expected values of some vector quantities (such as the use of unsafe actions) lie in a convex set. Second is the knapsack setting: maximization of reward subject to the constraint that the total consumption of any of the specified resources does not exceed specified levels during the whole learning process. Previous work in constrained RL is limited to linear expectation constraints (a special case of the convex-concave setting), focuses on the feasibility question, or addresses only single-episode settings. Our experiments demonstrate that the proposed algorithm significantly outperforms these approaches in constrained episodic benchmarks. 1 Introduction Standard reinforcement learning (RL) approaches seek to maximize a scalar reward (Sutton and Barto, 1998, 2018; Schulman et al., 2015; Mnih et al., 2015), but in many settings this is insufficient, because the desired properties of the agent behavior are better described using constraints. For example, an autonomous vehicle should not only get to the destination, but should also respect safety, fuel efficiency, and human comfort constraints along the way (Le et al., 2019); a robot should not only fulfill its task, but should also control its wear and tear, for example, by limiting the torque exerted on its motors (Tessler et al., 2019). Moreover, in many settings, we wish to satisfy such constraints already during training and not only during deployment. For example, a power grid, an autonomous vehicle, or real robotic hardware should avoid costly failures, where the hardware is damaged or humans are harmed, already during training (Leike et al., 2017; Ray et al., 2020). Constraints are also key in additional sequential decision making applications, such as dynamic pricing with limited supply (e.g., Besbes and Zeevi, 2009; Babaioff et al., 2015), scheduling of resources on a computer cluster (Mao et al., 2016), and imitation learning, where the goal is to stay close to an expert behavior (Syed and Schapire, 2007; Ziebart et al., 2008; Sun et al., 2019). In this paper we study constrained episodic reinforcement learning, which encompasses all of these applications. An important characteristic of our approach, distinguishing it from previous work (e.g., Altman, 1999; Achiam et al., 2017; Tessler et al., 2019; Miryoosefi et al., 2019; Ray et al., 2020), is our focus on efficient exploration, leading to reduced sample complexity. Notably, the modularity of our approach enables extensions to more complex settings such as (i) maximizing concave objectives under convex constraints, and (ii) reinforcement learning under hard constraints, where the learner has to stop when some constraint is violated (e.g., a car runs out of gas). For these extensions, which we refer to as the concave-convex setting and the knapsack setting, we provide the first regret guarantees in the episodic setting (see related work below for a detailed comparison). 
Moreover, our guarantees are anytime, meaning that the constraint violations are bounded at any point during learning, even if the learning process is interrupted. This is important for those applications where the system continues to learn after it is deployed. Our approach relies on the principle of optimism under uncertainty to efficiently explore. Our learning algorithms optimize their actions with respect to a model based on the empirical statistics, while optimistically overestimating rewards and underestimating the resource consumption (i.e., overestimating the distance from the constraint). This idea was previously introduced in multiarmed bandits (Agrawal and Devanur, 2014); extending it to episodic reinforcement learning poses additional challenges since the policy space is exponential in the episode horizon. Circumventing these challenges, we provide a modular way to analyze this approach in the basic setting where both rewards and constraints are linear (Section 3) and then transfer this result to the more complicated concave-convex and knapsack settings (Sections 4 and 5). We empirically compare our approach with the only previous works that can handle convex constraints and show that our algorithmic innovations lead to significant empirical improvements (Section 6). Related work. Sample-efficient exploration in constrained episodic reinforcement learning has only recently started to receive attention. Most previous works on episodic reinforcement learning focus on unconstrained settings (Jaksch et al., 2010; Azar et al., 2017; Dann et al., 2017). A notable exception is the work of Cheung (2019) and Tarbouriech and Lazaric (2019). Both of these works consider vectorial feedback and aggregate reward functions, and provide theoretical guarantees for the reinforcement learning setting with a single episode, but require a strong reachability or communication assumption, which is not needed in the episodic setting studied here. Also, compared to Cheung (2019), our results for the knapsack setting allow for a significantly smaller budget, as we illustrate in Section 5. Moreover, our approach is based on a tighter bonus, which leads to a superior empirical performance (see Section 6). Recently, there have also been several concurrent and independent works on sample-efficient exploration for reinforcement learning with constraints (Singh et al., 2020; Efroni et al., 2020; Qiu et al., 2020; Ding et al., 2020; Zheng and Ratliff, 2020). Unlike our work, all of these approaches focus on linear reward objective and linear constraints and do not handle the concave-convex and knapsack settings that we consider. Constrained reinforcement learning has also been studied in settings that do not focus on sampleefficient exploration (Achiam et al., 2017; Tessler et al., 2019; Miryoosefi et al., 2019). Among these, only Miryoosefi et al. (2019) handle convex constraints, albeit without a reward objective (they solve the feasibility problem). Since these works do not focus on sample-efficient exploration, their performance drastically deteriorates when the task requires exploration (as we show in Section 6). Sample-efficient exploration under constraints has been studied in multi-armed bandits, starting with a line of work on dynamic pricing with limited supply (Besbes and Zeevi, 2009, 2011; Babaioff et al., 2015; Wang et al., 2014). A general setting for bandits with global knapsack constraints (bandits with knapsacks) was defined and solved by Badanidiyuru et al. (2018) (see also Ch. 
10 of Slivkins, 2019). Within this literature, the closest to ours is the work of Agrawal and Devanur (2014), who study bandits with concave objectives and convex constraints. Our work is directly inspired by theirs and lifts their techniques to the more general episodic reinforcement learning setting. 2 Model and preliminaries In episodic reinforcement learning, a learner repeatedly interacts with an environment across K episodes. The environment includes the state space S , the action spaceA, the episode horizon H , and the initial state s0.1 To capture constrained settings, the environment includes a set D of d resources where each i ∈ D has a capacity constraint ξ(i) ∈ R+. The above are fixed and known to the learner. 1A fixed and known initial state is without loss of generality. In general, there is a fixed but unknown distribution ρ from which the initial state is drawn before each episode. We modify the MDP by adding a new state s0 as initial state, such that the next state is sampled from ρ for any action. Then ρ is “included” within the transition probabilities. The extra state s0 does not contribute any reward and does not consume any resources. Constrained Markov decision process. We work with MDPs that have resource consumption in addition to rewards. Formally, a constrained MDP (CMDP) is a tripleM = (p, r, c) that describes transition probabilities p : S ×A → ∆(S), rewards r : S ×A → [0, 1], and resource consumption c : S ×A → [0, 1]d. For convenience, we denote c(s, a, i) = ci(s, a). We allow stochastic rewards and consumptions, in which case r and c refer to the conditional expectations, conditioned on s and a (our definitions and algorithms are based on this conditional expectation rather than the full conditional distribution). We use the above definition to describe two kinds of CMDPs. The true CMDPM? = (p?, r?, c?) is fixed but unknown to the learner. Selecting action a at state s results in rewards and consumptions drawn from (possibly correlated) distributions with means r?(s, a) and c?(s, a) and supports in [0, 1] and [0, 1]d respectively. Next states are generated from transition probabilities p?(s, a). The second kind of CMDP arises in our algorithm, which is model-based and at episode k uses a CMDPM(k). Episodic reinforcement learning protocol. At episode k ∈ [K], the learner commits to a policy πk = (πk,h) H h=1 where πk,h : S → ∆(A) specifies how to select actions at step h for every state. The learner starts from state sk,1 = s0. At step h = 1, . . . ,H , she selects an action ak,h ∼ πk,h(sk,h). The learner earns reward rk,h and suffers consumption ck,h, both drawn from the true CMDPM? on state-action pair (sk,h, ak,h) as described above, and transitions to state sk,h+1 ∼ p?(sk,h, ak,h). Objectives. In the basic setting (Section 3), the learner wishes to maximize reward while respecting the consumption constraints in expectation by competing favorably against the following benchmark: max π Eπ,p ? [ H∑ h=1 r? ( sh, ah )] s.t. ∀i ∈ D : Eπ,p ? [ H∑ h=1 c? ( sh, ah, i )] ≤ ξ(i), (1) where Eπ,p denotes the expectation over the run of policy π according to transitions p, and sh, ah are the induced random state-action pairs. We denote by π? the policy that maximizes this objective. For the basic setting, we track two performance measures: reward regret compares the learner’s total reward to the benchmark and consumption regret bounds excess in resource consumption: REWREG(k) := Eπ ?,p? [ H∑ h=1 r? ( sh, ah )] − 1 k k∑ t=1 Eπt,p ? [ H∑ h=1 r? 
( sh, ah )] , (2) CONSREG(k) := max i∈D (1 k k∑ t=1 Eπt,p ? [ H∑ h=1 c? ( sh, ah, i )] − ξ(i) ) . (3) Our guarantees are anytime, i.e., they hold at any episode k and not only after the last episode. We also consider two extensions. In Section 4, we consider a concave reward objective and convex consumption constraints. In Section 5, we require consumption constraints to be satisfied with high probability under a cumulative budget across all K episodes, rather than in expectation in a single episode. Tabular MDPs. We assume that the state space S and the action space A are finite (tabular setting). We construct standard empirical estimates separately for each state-action pair (s, a), using the learner’s observations up to and not including a given episode k. Eqs. (4–7) define sample counts, empirical transition probabilities, empirical rewards, and empirical resource consumption.2 Nk(s, a) = max { 1, ∑ t∈[k−1], h∈[H] 1{st,h = s, at,h = a} } , (4) p̂k(s ′|s, a) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] 1{st,h = s, at,h = a, st,h+1 = s′}, (5) r̂k(s, a) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] rt,h · 1{st,h = s, at,h = a}, (6) ĉk(s, a, i) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] ct,h,i · 1{st,h = s, at,h = a} ∀i ∈ D. (7) 2The max operator in Eq. (4) is to avoid dividing by 0. Preliminaries for theoretical analysis. The Q-function is a standard object in RL that tracks the learner’s expected performance if she starts from state s ∈ S at step h, selects action a ∈ A, and then follows a policy π under a model with transitions p for the remainder of the episode. We parameterize it by the objective function m : S ×A → [0, 1], which can be either a reward, i.e., m(s, a) = r(s, a), or consumption of some resource i ∈ D, i.e., m(s, a) = c(s, a, i). (For the unconstrained setting, the objective is the reward.) The performance of the policy in a particular step h is evaluated by the value function V which corresponds to the expected Q-function of the selected action (where the expectation is taken over the possibly randomized action selection of π). The Q and value functions can be both recursively defined by dynamic programming: Qπ,pm (s, a, h) = m(s, a) + ∑ s′∈S p(s′|s, a)V π,pm (s′, h+ 1), V π,pm (s, h) = Ea∼π(·|s) [ Qπ,pm (s, a, h) ] and V π,pm (s,H + 1) = 0. By slight abuse of notation, for m ∈ {r} ∪ {ci}i∈D, we denote by m? ∈ {r?} ∪ {c?i }i∈D the corresponding objectives with respect to the rewards and consumptions of the true CMDPM?. For objectives m? and transitions p?, the above are the Bellman equations of the system (Bellman, 1957). Estimating the Q-function based on the model parameters p and m rather than the ground truth parameters p? and m? introduces errors. These errors are localized across stages by the notion of Bellman error which contrasts the performance of policy π starting from stage h under the model parameters to a benchmark that behaves according to the model parameters starting from the next stage h+ 1 but uses the true parameters of the system in stage h. More formally, for objective m: BELLπ,pm (s, a, h) = Q π,p m (s, a, h)− ( m?(s, a) + ∑ s′∈S p?(s′|s, a)V π,pm (s′, h+ 1) ) . (8) Note that when the CMDP isM? (m = m?, p = p?), there is no mismatch and BELLπ,p ? m? = 0. 3 Warm-up algorithm and analysis in the basic setting In this section, we introduce a simple algorithm that allows to simultaneously bound reward and consumption regrets for the basic setting introduced in the previous section. 
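To make the tabular estimates concrete, the following is a minimal sketch of how the counts and empirical model of Eqs. (4)–(7) can be maintained, together with the exploration bonus that Eq. (9) below adds to rewards and subtracts from consumptions. The trajectory container and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def empirical_model(episodes, S, A, d):
    """Empirical estimates of Eqs. (4)-(7) from logged trajectories.

    `episodes` is assumed to be a list of trajectories, each a list of
    (s, a, r, c, s_next) tuples with integer-encoded states/actions and a
    length-d consumption vector c.
    """
    counts = np.zeros((S, A))        # raw visit counts
    trans = np.zeros((S, A, S))      # transition counts
    rew_sum = np.zeros((S, A))       # summed rewards
    cons_sum = np.zeros((S, A, d))   # summed consumptions
    for traj in episodes:
        for (s, a, r, c, s_next) in traj:
            counts[s, a] += 1
            trans[s, a, s_next] += 1
            rew_sum[s, a] += r
            cons_sum[s, a] += np.asarray(c)
    N = np.maximum(counts, 1.0)                 # N_k(s, a), Eq. (4)
    p_hat = trans / N[:, :, None]               # empirical transitions, Eq. (5)
    r_hat = rew_sum / N                         # empirical rewards, Eq. (6)
    c_hat = cons_sum / N[:, :, None]            # empirical consumptions, Eq. (7)
    return N, p_hat, r_hat, c_hat

def exploration_bonus(N, H, S, A, d, k, delta):
    """Bonus of Eq. (9) below; larger for under-visited (s, a) pairs."""
    return H * np.sqrt(2.0 * np.log(8 * S * A * H * (d + 1) * k**2 / delta) / N)
```

The bonus-enhanced model described next is then a one-line operation on these arrays: add the bonus to the empirical rewards and subtract it from the empirical consumptions.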
Even in this basic setting, we provide the first sample-efficient guarantees in constrained episodic reinforcement learning.3 The modular analysis of the guarantees also allows us to subsequently extend (in Sections 4 and 5) the algorithm and guarantees to the more general concave-convex and knapsack settings. Our algorithm. At episode k, we construct an estimated CMDPM(k) = ( p(k), r(k), c(k) ) based on the observations collected so far. The estimates are bonus-enhanced (formalized below) to encourage more targeted exploration. Our algorithm CONRL selects a policy πk by solving the following constrained optimization problem which we refer to as BASICCONPLANNER(p(k), r(k), c(k)): max π Eπ,p (k) [ H∑ h=1 r(k) ( sh, ah )] s.t. ∀i ∈ D : Eπ,p (k) [ H∑ h=1 c(k) ( sh, ah, i )] ≤ ξ(i). The above optimization problem is similar to the objective (1) but uses the estimated model instead of the (unknown to the learner) true model. We also note that this optimization problem can be optimally solved as it is a linear program on the occupation measures (Puterman, 2014), i.e., setting as variables the probability of each state-action pair and imposing flow conservation constraints with respect to the transitions. This program is described in Appendix A.1. Bonus-enhanced model. A standard approach to implement the principle of optimism under uncertainty is to introduce, at each episode k, a bonus term b̂k(s, a) that favors under-explored actions. Specifically, we add this bonus to the empirical rewards (6), and subtract it from the consumptions (7): r(k)(s, a) = r̂k(s, a) + b̂k(s, a) and c(k)(s, a, i) = ĉk(s, a, i)− b̂k(s, a) for each resource i. 3We refer the reader to the related work (in Section 1) for discussion on concurrent and independent papers. Unlike our results, these papers do not extend to either concave-convex or knapsack settings. Following the unconstrained analogues (Azar et al., 2017; Dann et al., 2017), we define the bonus as: b̂k(s, a) = H √ 2 ln ( 8SAH(d+ 1)k2/δ) Nk(s, a) , (9) where δ > 0 is the desired failure probability of the algorithm and Nk(s, a) is the number of times (s, a) pair is visited, c.f. (4), S = |S|, and A = |A|. Thus, under-explored actions have a larger bonus, and therefore appear more appealing to the planner. For estimated transition probabilities, we just use the empirical averages (5): p(k)(s′|s, a) = p̂(s′|s, a). Valid bonus and Bellman-error decomposition. For a bonus-enhanced model to achieve effective exploration, the resulting bonuses need to be valid, i.e., they should ensure that the estimated rewards overestimate the true rewards and the estimated consumptions underestimate the true consumptions. Definition 3.1. A bonus bk : S ×A → R is valid if, ∀s ∈ S, a ∈ A, h ∈ [H],m ∈ {r} ∪ {ci}i∈D:∣∣∣(m̂k(s, a)−m?(s, a))+ ∑ s′∈S ( p̂k(s ′|s, a)− p?(s′|s, a) ) V π ?,p? m? (s ′, h+ 1) ∣∣∣ ≤ bk(s, a). By classical concentration bounds (Appendix B.1), the bonus b̂k of Eq. (9) satisfies this condition: Lemma 3.2. With probability 1− δ, the bonus b̂k(s, a) is valid for all episodes k simultaneously. Our algorithm solves the BASICCONPLANNER optimization problem based on a bonus-enhanced model. When the bonuses are valid, we can upper bound the per-episode regret by the expected sum of Bellman errors across steps. This is the first part in classical unconstrained analyses and the following proposition extends this decomposition to constrained episodic reinforcement learning. 
The proof uses the so-called simulation lemma (Kearns and Singh, 2002) and is provided in Appendix B.3. Proposition 3.3. If b̂k(s, a) is valid for all episodes k simultaneously then the per-episode reward and consumption regrets can be upper bounded by the expected sum of Bellman errors (8): Eπ ?,p? [ H∑ h=1 r? ( sh, ah )] − Eπk,p ? [ H∑ h=1 r? ( sh, ah )] ≤ Eπk [ H∑ h=1 ∣∣∣BELLπk,p(k)r(k) (sh, ah, h)∣∣∣] (10) ∀i ∈ D : Eπk,p ? [ H∑ h=1 c? ( sh, ah, i )] − ξ(i) ≤ Eπk [ H∑ h=1 ∣∣∣BELLπk,p(k) c (k) i ( sh, ah, h )∣∣∣]. (11) Final guarantee. One difficulty with directly bounding the Bellman error is that the value function is not independent of the draws forming r(k)(s, a), c(k)(s, a), and p(k)(s′|s, a). Hence we cannot apply Hoeffding inequality directly. While Azar et al. (2017) propose a trick to get an O( √ S) bound on Bellman error in unconstrained settings, the trick relies on the crucial property of Bellman optimality: for an unconstrained MDP, its optimal policy π? satisfies the condition, V π ? r? (s, h) ≥ V πr?(s, h) for all s, h, π (i.e., π? is optimal at any state). However, when constraints exist, the optimal policy does not satisfy the Bellman optimality property. Indeed, we can only guarantee optimality with respect to the initial state distribution, i.e., V π ? r? (s0, 1) ≥ V πr?(s0, 1) for any π, but not everywhere else. This illustrates a fundamental difference between constrained MDPs and unconstrained MDPs. Thus we cannot directly apply the trick from Azar et al. (2017). Instead we follow an alternative approach of bounding the value function via an -net over the possible values. This analysis leads to a guarantee that is weaker by a factor of √ S than the unconstrained results. The proof is provided in Appendix B.6. Theorem 3.4. There exists an absolute constant c ∈ R+ such that, with probability at least 1− 3δ, reward and consumption regrets are both upper bounded by: c√ k · S √ AH3 · √ ln(k) ln ( SAH(d+ 1)k/δ ) + ck · S 3/2AH2 √ ln ( 2SAH(d+ 1)k/δ ) . Comparison to single-episode results. In single-episode setting, Cheung (2019) achieves √ S dependency under the further assumption that the transitions are sparse, i.e., ‖p?(s, a)‖0 S for all (s, a). We do not make such assumptions on the sparsity of the MDP and we note that the regret bound of Cheung (2019) scales linearly in S when ‖p?(s, a)‖0 = Θ(S). Also, the single-episode setting requires a strong reachability assumption, not present in the episodic setting. Remark 3.5. The aforementioned regret bound can be turned into a PAC bound of Õ ( S2AH3 2 ) by taking the uniform mixture of policies π1, π2, . . . , πk. 4 Concave-convex setting We now extend the algorithm and guarantees derived for the basic setting to when the objective is concave function of the accumulated reward and the constraints are expressed as a convex function of the cumulative consumptions. Our approach is modular, seamlessly building on the basic setting. Setting and objective. Formally, there is a concave reward-objective function f : R → R and a convex consumption-objective function g : Rd → R; the only assumption is that these functions are L-Lipschitz for some constant L, i.e., |f(x)−f(y)| ≤ L|x−y| for any x, y ∈ R, and |g(x)−g(y)| ≤ L‖x− y‖1 for any x, y ∈ Rd. Analogous to (1), the learner wishes to compete against the following benchmark which can be viewed as a reinforcement learning variant of the benchmark used by Agrawal and Devanur (2014) in multi-armed bandits: max π f ( Eπ,p ? [ H∑ h=1 r? ( sh, ah )]) s.t. g ( Eπ,p ? [ H∑ h=1 c? 
( sh, ah )]) ≤ 0. (12) The reward and consumption regrets are therefore adapted to: CONVEXREWREG(k) := f ( Eπ ?,p? [ H∑ h=1 r? ( sh, ah )]) − f (1 k k∑ t=1 Eπt,p ? [ H∑ h=1 r? ( sh, ah )]) , CONVEXCONSREG(k) := g (1 k k∑ t=1 Eπt,p ? [ H∑ h=1 c? ( sh, ah )]) . Our algorithm. As in the basic setting, we wish to create a bonus-enhanced model and optimize over it. To model the transition probabilites, we use empirical estimates p(k) = p̂k of Eq. (5) as before. However, since reward and consumption objectives are no longer monotone in the accumulated rewards and consumption respectively, it does not make sense to simply add or subtract b̂k (defined in Eq. 9) as we did before. Instead we compute the policy πk of episode k together with the model by solving the following optimization problem which we call CONVEXCONPLANNER: max π max r(k)∈[r̂k±b̂k] f ( Eπ,p (k) [ H∑ h=1 r(k) ( sh, ah )]) s.t. min c(k)∈[ĉk±b̂k·1] g ( Eπ,p (k) [ H∑ h=1 c(k) ( sh, ah )]) ≤ 0. The above problem is convex in the occupation measures,4 i.e., the probability ρ(s, a, h) that the learner is at state-action-step (s, a, h) — c.f. Appendix A.2 for further discussion. max ρ max r∈[r̂k±b̂k] f ( ∑ s,a,h ρ(s, a, h)r(s, a) ) s.t. min c∈[ĉk±b̂k·1] g ( ∑ s,a,h ρ(s, a, h)c(s, a) ) ≤ 0 ∀s′, h : ∑ a ρ(s′, a, h+ 1) = ∑ s,a ρ(s, a, h)p̂k(s ′|s, a) ∀s, a, h : 0 ≤ ρ(s, a, h) ≤ 1 and ∑ s,a ρ(s, a, h) = 1. Guarantee for concave-convex setting. To extend the guarantee of the basic setting to the concaveconvex setting, we face an additional challenge: it is not immediately clear that the optimal policy π? is feasible for the CONVEXCONPLANNER program because CONVEXCONPLANNER is defined with respect to the empirical transition probabilities p(k).5 Moreover, whenH > 1, it is not straightforward to show that objective in the used model is always greater than the one in the true model as the used 4Under mild assumptions, this program can be solved in polynomial time similar to its bandit analogue of Lemma 4.3 in (Agrawal and Devanur, 2014). We note that in the basic setting, it reduces to just a linear program. 5Note that in multi-armed bandit concave-convex setting (Agrawal and Devanur, 2014), proving feasibility of the best arm is straightforward as there are no transitions. model transitions p(k)(s, a) can lead to different states than the ones encountered in the true model.6 We deal with both of these issues by introducing a novel application of the mean-value theorem to show that π? is indeed a feasible solution of that program and create a similar regret decomposition to Proposition 3.3 (see Proposition C.1 and more discussion in Appendix C.1); this allows us to plug in the results developed for the basic setting. The full proof is provided in Appendix C. Theorem 4.1. Let L be the Lipschitz constant for f and g and let REWREG and CONSREG be the reward and consumption regrets for the basic setting (Theorem 3.4) with the failure probability δ. With probability 1 − δ, our algorithm in the concave-convex setting has reward and consumption regret upper bounded by L · REWREG and Ld · CONSREG respectively. The linear dependence on d in the consumption regret above comes from the fact that we assume g is Lipschitz under `1 norm. 5 Knapsack setting Our last technical section extends the algorithm and guarantee of the basic setting to scenarios where the constraints are hard which is in accordance with most of the literature on bandits with knapsacks. 
The goal here is to achieve aggregate reward regret that is sublinear in the time horizon (in our case, the number of episodes K), while also respecting budget constraints for as small budgets as possible. We derive guarantees in terms of reward regret, as defined previously, and then argue that our guarantee extends to the seemingly stronger benchmark of the best dynamic policy. Setting and objective. Each resource i ∈ D has an aggregate budget $B_i$ that the learner should not exceed over K episodes. Unlike the basic setting, where we track the consumption regret, here we view this as a hard constraint. As in most works on bandits with knapsacks, the algorithm is allowed to use a “null action” for an episode, i.e., an action that yields zero reward and consumption when selected at the beginning of an episode. The learner wishes to maximize her aggregate reward while respecting these hard constraints. We reduce this problem to a specific variant of the basic problem (1) with $\xi(i) = B_i/K$. We modify the solution to (1) to take the null action if any constraint is violated and call the resulting benchmark $\pi^\star$. Note that $\pi^\star$ satisfies constraints in expectation. At the end of this section, we explain how our algorithm also competes against a benchmark that is required to respect constraints deterministically (i.e., with probability one across all episodes). Our algorithm. In the basic setting of Section 3, we showed a reward regret guarantee and a consumption regret guarantee, proving that the average constraint violation is $O(1/\sqrt{K})$. Now we seek a stronger guarantee: the learned policy needs to satisfy budget constraints with high probability. Our algorithm optimizes a mathematical program KNAPSACKCONPLANNER (13) that strengthens the consumption constraints: $\max_{\pi}\; \mathbb{E}^{\pi,p^{(k)}}\big[\sum_{h=1}^{H} r^{(k)}(s_h,a_h)\big]$ s.t. $\forall i \in \mathcal{D}:\; \mathbb{E}^{\pi,p^{(k)}}\big[\sum_{h=1}^{H} c^{(k)}(s_h,a_h,i)\big] \le (1-\epsilon)B_i/K$. (13) In the above, $p^{(k)}, r^{(k)}, c^{(k)}$ are exactly as in the basic setting and $\epsilon > 0$ is instantiated in the theorem below. Note that the program (13) is feasible thanks to the existence of the null action. The following mixture policy induces a feasible solution: with probability $1-\epsilon$, we play the optimal policy $\pi^\star$ for the entire episode; with probability $\epsilon$, we play the null action for the entire episode. Note that the above program can again be cast as a linear program in the occupancy-measure space — cf. Appendix A.3 for further discussion and the sketch given after Theorem 5.1 below. Guarantee for knapsack setting. The guarantee of the basic setting on this tighter mathematical program seamlessly transfers to a reward guarantee that does not violate the hard constraints. Theorem 5.1. Assume that $\min_i B_i \le KH$, i.e., constraints are non-vacuous. Let AGGREG(δ) be a bound on the aggregate (across episodes) reward or consumption regret for the soft-constraint setting (Theorem 3.4) with the failure probability δ. Let $\epsilon = \mathrm{AGGREG}(\delta)/\min_i B_i$. If $\min_i B_i > \mathrm{AGGREG}(\delta)$ then, with probability $1-\delta$, the reward regret in the hard-constraint setting is at most $2H\,\mathrm{AGGREG}(\delta)/\min_i B_i$ and constraints are not violated. 6Again, this is not an issue in multi-armed bandits. The above theorem implies that the aggregate reward regret is sublinear in K as long as $\min_i B_i \gg H\,\mathrm{AGGREG}(\delta)$. The analysis in the above main theorem (provided in Appendix D) is modular in the sense that it leverages CONRL’s performance to solve (13) in a black-box manner. Smaller AGGREG(δ) from the basic soft-constraint setting immediately translates to smaller reward regret and a smaller budget regime (i.e., $\min_i B_i$ can be smaller).
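As noted above, program (13) can be cast as a linear program over occupancy measures (cf. Appendix A.3). The following is a minimal sketch of that reduction assuming the cvxpy library; the variable names and the explicit initial-state constraint are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def knapsack_con_planner(p_hat, r_hat, c_hat, bonus, B, K, eps, s0, H):
    """Sketch of KNAPSACKCONPLANNER (13) as an LP over occupancy measures rho(s, a, h)."""
    S, A = r_hat.shape
    d = c_hat.shape[-1]
    r_b = r_hat + bonus                    # bonus-enhanced rewards r^(k)
    c_b = c_hat - bonus[:, :, None]        # bonus-reduced consumptions c^(k)

    rho = [cp.Variable((S, A), nonneg=True) for _ in range(H)]
    cons = [cp.sum(rho[0]) == 1]
    for s in range(S):                     # all initial mass sits on the start state s0
        if s != s0:
            cons.append(cp.sum(rho[0][s, :]) == 0)
    # Flow conservation: mass reaching s' at step h+1 equals mass flowing out at step h.
    for h in range(H - 1):
        for s_next in range(S):
            cons.append(cp.sum(rho[h + 1][s_next, :])
                        == cp.sum(cp.multiply(rho[h], p_hat[:, :, s_next])))
    # Tightened knapsack constraints: expected per-episode consumption <= (1 - eps) * B_i / K.
    # Feasibility relies on the null action, which would appear here simply as one extra
    # action with zero reward and zero consumption.
    for i in range(d):
        cons.append(sum(cp.sum(cp.multiply(rho[h], c_b[:, :, i])) for h in range(H))
                    <= (1 - eps) * B[i] / K)
    objective = cp.Maximize(sum(cp.sum(cp.multiply(rho[h], r_b)) for h in range(H)))
    cp.Problem(objective, cons).solve()
    # A (step-dependent) policy follows by normalizing rho over actions at each state.
    return [r.value for r in rho]
```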
In particular, using the AGGREG(δ) bound of Theorem 3.4, the reward regret is sublinear as long as miniBi = Ω( √ K). In contrast, previous work of Cheung (2019) can only deal with larger budget regime, i.e., miniBi = Ω(K2/3). Although the guarantees are not directly comparable as the latter is for the single-episode setting, which requires further reachability assumptions, the budget we can handle is significantly smaller and in the next section we show that our algorithm has superior empirical performance in episodic settings even when such assumptions are granted. Dynamic policy benchmark. The common benchmark used in bandits with knapsacks is not the best stationary policy π? that respects constraints in expectation but rather the best dynamic policy (i.e., a policy that makes decisions based on the history) that never violates hard constraints deterministically. In Appendix D, we show that the optimal dynamic policy (formally defined there) has reward less than policy π? (informally, this is because π? respects constraints in expectation while the dynamic policy has to satisfy constraints deterministically) and therefore the guarantee of Theorem 5.1 also applies against the optimal dynamic policy. 6 Empirical comparison to other concave-convex approaches In this section, we evaluate the performance of CONRL against previous approaches.7 Although our CONPLANNER (see Appendix A) can be solved exactly using linear programming (Altman, 1999), in our experiments, it suffices to use Lagrangian heuristic, denoted as LAGRCONPLANNER (see Appendix E.1). This Lagrangian heuristic only needs a planner for the unconstrained RL task. We consider two unconstrained RL algorithms as planners: value iteration and a model-based Advantage Actor-Critic (A2C) (Mnih et al., 2016) (based on fictitious samples drawn from the model provided as an input). The resulting variants of LAGRCONPLANNER are denoted CONRL-VALUE ITERATION 7Code is available at https://github.com/miryoosefi/ConRL and CONRL-A2C. We run our experiments on two grid-world environments Mars rover (Tessler et al., 2019) and Box (Leike et al., 2017).8 Mars rover. The agent must move from the initial position to the goal without crashing into rocks. If the agent reaches the goal or crashes into a rock it will stay in that cell for the remainder of the episode. Reward is 1 when the agent reaches the goal and 1/H afterwards. Consumption is 1 when the agent crashes into a rock and 1/H afterwards. The episode horizon H is 30 and the agent’s action is perturbed with probability 0.1 to a random action. Box. The agent must move a box from the initial position to the goal while avoiding corners (cells adjacent to at least two walls). If the agent reaches the goal it stays in that cell for the remainder of the episode. Reward is 1 when agent reaches the goal for the first time and 1/H afterwards; consumption is 1/H whenever the box is in a corner. Horizon H is 30 and the agent’s action is perturbed with probability 0.1 to a random action. We compare CONRL to previous constrained approaches (derived for either episodic or single-episode settings) in Figure 1. We keep track of three metrics: episode-level reward and consumption (the first two rows) and cumulative consumption (the third row). Episode-level metrics are based on the most recent episode in the first two columns, i.e., we plot Eπk [ ∑H h=1 r ? h] and Eπk [ ∑H h=1 c ? h]. In the third column, we plot the average across episodes so far, i.e., 1k ∑k t=1 Eπt [ ∑H h=1 r ? 
h] and 1 k ∑k t=1 Eπt [ ∑H h=1 c ? h], and we use the log scale for the x-axis. The cumulative consumption is∑k t=1 ∑H h=1 ct,h in all columns. See Appendix E for further details about experiments. Episodic setting. We first compare our algorithms to two episodic RL approaches: APPROPO (Miryoosefi et al., 2019) and RCPO (Tessler et al., 2019). We note that none of the previous approaches in this setting address sample-efficient exploration. In addition, most of them are limited to linear constraints, with the exception of APPROPO (Miryoosefi et al., 2019), which can handle general convex constraints.9 Both APPROPO and RCPO (used as a baseline by Miryoosefi et al., 2019) maintain and update a weight vectorλ, used to derive reward for an unconstrained RL algorithm, which we instantiate as A2C. APPROPO focuses on the feasibility problem, so it requires to specify a lower bound on the reward, which we set to 0.3 for Mars rover and 0.1 for Box. In the first two columns of Figure 1 we see that both versions of CONRL are able to solve the constrained RL task with a much smaller number of trajectories (see top two rows), and their overall consumption levels are substantially lower (the final row) than those of the previous approaches. Single-episode setting. Closest to our work is TFW-UCRL2 (Cheung, 2019), which is based on UCRL (Jaksch et al., 2010). However, that approach focuses on the single-episode setting and requires a strong reachability assumption. By connecting terminal states of our MDP to the intial state, we reduce our episodic setting to single-episode setting in which we can compare CONRL against TFW-UCRL2. Results for Mars rover are depicted in last column of Figure 1.10 Again, both versions of CONRL find the solution with a much smaller number of trajectories (note the log scale on the x-axis) and their overall consumption levels are much lower than those of TFW-UCRL2. This suggests that TFW-UCRL2 might be impractical in (at least some) episodic settings. 7 Conclusions In this paper we study two types of constraints in the framework of constrained tabular episodic reinforcement learning: concave rewards and convex constraints, and knapsacks constraints. Our algorithms achieve near-optimal regret in both settings, and experimentally we show that our approach outperforms prior works on constrained reinforcement learning. Regarding future work, it would be interesting to extend our framework to continuous state and action spaces. Potential directions include extensions to Lipschitz MDPs (Song and Sun, 2019) and MDPs with linear parameterization (Jin et al., 2019) where optimism-based exploration algorithms exist under the classic reinforcement learning setting without constraints. 8We are not aware of any benchmarks for convex/knapsack constraints. For transparency, we compare against prior works handling concave-convex or knapsack settings on established benchmarks for the linear case. 9In addition to that, trust region methods like CPO (Achiam et al., 2017) address a more restrictive setting and require constraint satisfaction at each iteration; for this reason, they are not included in the experiments. 10Due to a larger state space, it was computationally infeasible to run TFW-UCRL2 in the Box environment. Broader Impact Our work focuses on the theoretical foundations of reinforcement learning by addressing the important challenge of constrained optimization in reinforcement learning. 
We strongly believe that understanding the theoretical underpinnings of the main machine learning paradigms is essential and can guide principled and effective deployment of such methods. Beyond its theoretical contribution, our work may help the design of reinforcement learning algorithms that go beyond classical digital applications of RL (board games and video games) and extend to settings with complex and often competing objectives. We believe that constraints constitute a fundamental limitation in extending RL beyond the digital world, as they exist in a wide variety of sequential decision-making applications (robotics, medical treatment, education, advertising). Our work provides a paradigm to design algorithms with efficient exploration despite the presence of constraints. That said, one needs to ensure that an algorithm offers acceptable quality in applications. Any exploration method that does not rely on off-policy samples will inevitably violate constraints sometimes in order to learn. In some applications, this is totally acceptable: a car staying out of fuel in rare circumstances is not detrimental, an advertiser exhausting their budget some month is even less significant, a student dissatisfaction in an online test is unpleasant but probably acceptable. On the other hand, if the constraint violation involves critical issues like drug recommendation for severe diseases or decisions by self-driving cars that can cause physical harm to passengers then the algorithm needs to be carefully reviewed. It may be necessary to “prime” the algorithm with some data collected in advance (however costly it may be). One may need to make a judgement call on whether the ethical or societal standards are consistent with deploying an algorithm in a particular setting. To summarize, our work is theoretical in nature and makes significant progress on a problem at the heart of RL. It has the potential to guide deployment of constrained RL methods in many important applications and tackle a fundamental bottleneck in deploying RL beyond the digital world. However, an application needs to be carefully reviewed before deployment. Acknowledgments and Disclosure of Funding The authors would like to thank Rob Schapire for useful discussions that helped in the initial stages of this work. Part of the work was done when WS was at Microsoft Research NYC.
1. What is the focus and contribution of the paper on constrained MDPs? 2. What are the strengths of the proposed approach, particularly in terms of theory and motivation? 3. What are the weaknesses of the paper regarding its writing style and experiments section? 4. Do you have any concerns or questions about the proof and analysis provided in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors consider a constrained MDP problem in the episodic setting. They propose a model-based method that learns the transitions, rewards, and constraints with MLE estimates and forms an optimization problem with them. For strategic exploration they use optimism in the face of uncertainty (OFU), and given the concentration inequalities they prove optimism. They further extend their algorithm to the convex-concave and knapsack settings. It is important to note that the method is for the tabular setting and that the optimization problem is convex in occupancy measures, so it is efficiently solvable. Additionally, they perform some experiments. Strengths Theory: the paper is very strong theoretically (although I have some questions later); the proofs are interesting and non-trivial for the convex-concave and knapsack settings. However, I found the proofs of the basic algorithm fairly trivial. Motivation: I strongly believe this is a very important problem, of value to the community, and an under-explored part of the field. Weaknesses Writing: I find the paper hard to follow. In order to understand it I had to go back and forth to the appendix. I believe the authors could move the experiments to the appendix (which, for me, are of lower importance) and spend more space describing the proofs in the main text and making the intuition clear. (But I understand that the field at this point requires a lot of experiments, so I will wait to hear other reviewers' perspectives before making a suggestion here; for me this paper is primarily a theoretical contribution.)
NIPS
Title Constrained episodic reinforcement learning in concave-convex and knapsack settings Abstract We propose an algorithm for tabular episodic reinforcement learning (RL) with constraints. We provide a modular analysis with strong theoretical guarantees for two general settings. First is the convex-concave setting: maximization of a concave reward function subject to constraints that expected values of some vector quantities (such as the use of unsafe actions) lie in a convex set. Second is the knapsack setting: maximization of reward subject to the constraint that the total consumption of any of the specified resources does not exceed specified levels during the whole learning process. Previous work in constrained RL is limited to linear expectation constraints (a special case of the convex-concave setting), focuses on the feasibility question, or addresses only single-episode settings. Our experiments demonstrate that the proposed algorithm significantly outperforms these approaches in constrained episodic benchmarks. 1 Introduction Standard reinforcement learning (RL) approaches seek to maximize a scalar reward (Sutton and Barto, 1998, 2018; Schulman et al., 2015; Mnih et al., 2015), but in many settings this is insufficient, because the desired properties of the agent behavior are better described using constraints. For example, an autonomous vehicle should not only get to the destination, but should also respect safety, fuel efficiency, and human comfort constraints along the way (Le et al., 2019); a robot should not only fulfill its task, but should also control its wear and tear, for example, by limiting the torque exerted on its motors (Tessler et al., 2019). Moreover, in many settings, we wish to satisfy such constraints already during training and not only during deployment. For example, a power grid, an autonomous vehicle, or real robotic hardware should avoid costly failures, where the hardware is damaged or humans are harmed, already during training (Leike et al., 2017; Ray et al., 2020). Constraints are also key in additional sequential decision making applications, such as dynamic pricing with limited supply (e.g., Besbes and Zeevi, 2009; Babaioff et al., 2015), scheduling of resources on a computer cluster (Mao et al., 2016), and imitation learning, where the goal is to stay close to an expert behavior (Syed and Schapire, 2007; Ziebart et al., 2008; Sun et al., 2019). In this paper we study constrained episodic reinforcement learning, which encompasses all of these applications. An important characteristic of our approach, distinguishing it from previous work (e.g., Altman, 1999; Achiam et al., 2017; Tessler et al., 2019; Miryoosefi et al., 2019; Ray et al., 2020), is our focus on efficient exploration, leading to reduced sample complexity. Notably, the modularity of our approach enables extensions to more complex settings such as (i) maximizing concave objectives under convex constraints, and (ii) reinforcement learning under hard constraints, where the learner has to stop when some constraint is violated (e.g., a car runs out of gas). For these extensions, which we refer to as the concave-convex setting and the knapsack setting, we provide the first regret guarantees in the episodic setting (see related work below for a detailed comparison). 
Moreover, our guarantees are anytime, meaning that the constraint violations are bounded at any point during learning, even if the learning process is interrupted. This is important for those applications where the system continues to learn after it is deployed. Our approach relies on the principle of optimism under uncertainty to efficiently explore. Our learning algorithms optimize their actions with respect to a model based on the empirical statistics, while optimistically overestimating rewards and underestimating the resource consumption (i.e., overestimating the distance from the constraint). This idea was previously introduced in multiarmed bandits (Agrawal and Devanur, 2014); extending it to episodic reinforcement learning poses additional challenges since the policy space is exponential in the episode horizon. Circumventing these challenges, we provide a modular way to analyze this approach in the basic setting where both rewards and constraints are linear (Section 3) and then transfer this result to the more complicated concave-convex and knapsack settings (Sections 4 and 5). We empirically compare our approach with the only previous works that can handle convex constraints and show that our algorithmic innovations lead to significant empirical improvements (Section 6). Related work. Sample-efficient exploration in constrained episodic reinforcement learning has only recently started to receive attention. Most previous works on episodic reinforcement learning focus on unconstrained settings (Jaksch et al., 2010; Azar et al., 2017; Dann et al., 2017). A notable exception is the work of Cheung (2019) and Tarbouriech and Lazaric (2019). Both of these works consider vectorial feedback and aggregate reward functions, and provide theoretical guarantees for the reinforcement learning setting with a single episode, but require a strong reachability or communication assumption, which is not needed in the episodic setting studied here. Also, compared to Cheung (2019), our results for the knapsack setting allow for a significantly smaller budget, as we illustrate in Section 5. Moreover, our approach is based on a tighter bonus, which leads to a superior empirical performance (see Section 6). Recently, there have also been several concurrent and independent works on sample-efficient exploration for reinforcement learning with constraints (Singh et al., 2020; Efroni et al., 2020; Qiu et al., 2020; Ding et al., 2020; Zheng and Ratliff, 2020). Unlike our work, all of these approaches focus on linear reward objective and linear constraints and do not handle the concave-convex and knapsack settings that we consider. Constrained reinforcement learning has also been studied in settings that do not focus on sampleefficient exploration (Achiam et al., 2017; Tessler et al., 2019; Miryoosefi et al., 2019). Among these, only Miryoosefi et al. (2019) handle convex constraints, albeit without a reward objective (they solve the feasibility problem). Since these works do not focus on sample-efficient exploration, their performance drastically deteriorates when the task requires exploration (as we show in Section 6). Sample-efficient exploration under constraints has been studied in multi-armed bandits, starting with a line of work on dynamic pricing with limited supply (Besbes and Zeevi, 2009, 2011; Babaioff et al., 2015; Wang et al., 2014). A general setting for bandits with global knapsack constraints (bandits with knapsacks) was defined and solved by Badanidiyuru et al. (2018) (see also Ch. 
10 of Slivkins, 2019). Within this literature, the closest to ours is the work of Agrawal and Devanur (2014), who study bandits with concave objectives and convex constraints. Our work is directly inspired by theirs and lifts their techniques to the more general episodic reinforcement learning setting. 2 Model and preliminaries In episodic reinforcement learning, a learner repeatedly interacts with an environment across K episodes. The environment includes the state space S , the action spaceA, the episode horizon H , and the initial state s0.1 To capture constrained settings, the environment includes a set D of d resources where each i ∈ D has a capacity constraint ξ(i) ∈ R+. The above are fixed and known to the learner. 1A fixed and known initial state is without loss of generality. In general, there is a fixed but unknown distribution ρ from which the initial state is drawn before each episode. We modify the MDP by adding a new state s0 as initial state, such that the next state is sampled from ρ for any action. Then ρ is “included” within the transition probabilities. The extra state s0 does not contribute any reward and does not consume any resources. Constrained Markov decision process. We work with MDPs that have resource consumption in addition to rewards. Formally, a constrained MDP (CMDP) is a tripleM = (p, r, c) that describes transition probabilities p : S ×A → ∆(S), rewards r : S ×A → [0, 1], and resource consumption c : S ×A → [0, 1]d. For convenience, we denote c(s, a, i) = ci(s, a). We allow stochastic rewards and consumptions, in which case r and c refer to the conditional expectations, conditioned on s and a (our definitions and algorithms are based on this conditional expectation rather than the full conditional distribution). We use the above definition to describe two kinds of CMDPs. The true CMDPM? = (p?, r?, c?) is fixed but unknown to the learner. Selecting action a at state s results in rewards and consumptions drawn from (possibly correlated) distributions with means r?(s, a) and c?(s, a) and supports in [0, 1] and [0, 1]d respectively. Next states are generated from transition probabilities p?(s, a). The second kind of CMDP arises in our algorithm, which is model-based and at episode k uses a CMDPM(k). Episodic reinforcement learning protocol. At episode k ∈ [K], the learner commits to a policy πk = (πk,h) H h=1 where πk,h : S → ∆(A) specifies how to select actions at step h for every state. The learner starts from state sk,1 = s0. At step h = 1, . . . ,H , she selects an action ak,h ∼ πk,h(sk,h). The learner earns reward rk,h and suffers consumption ck,h, both drawn from the true CMDPM? on state-action pair (sk,h, ak,h) as described above, and transitions to state sk,h+1 ∼ p?(sk,h, ak,h). Objectives. In the basic setting (Section 3), the learner wishes to maximize reward while respecting the consumption constraints in expectation by competing favorably against the following benchmark: max π Eπ,p ? [ H∑ h=1 r? ( sh, ah )] s.t. ∀i ∈ D : Eπ,p ? [ H∑ h=1 c? ( sh, ah, i )] ≤ ξ(i), (1) where Eπ,p denotes the expectation over the run of policy π according to transitions p, and sh, ah are the induced random state-action pairs. We denote by π? the policy that maximizes this objective. For the basic setting, we track two performance measures: reward regret compares the learner’s total reward to the benchmark and consumption regret bounds excess in resource consumption: REWREG(k) := Eπ ?,p? [ H∑ h=1 r? ( sh, ah )] − 1 k k∑ t=1 Eπt,p ? [ H∑ h=1 r? 
( sh, ah )] , (2) CONSREG(k) := max i∈D (1 k k∑ t=1 Eπt,p ? [ H∑ h=1 c? ( sh, ah, i )] − ξ(i) ) . (3) Our guarantees are anytime, i.e., they hold at any episode k and not only after the last episode. We also consider two extensions. In Section 4, we consider a concave reward objective and convex consumption constraints. In Section 5, we require consumption constraints to be satisfied with high probability under a cumulative budget across all K episodes, rather than in expectation in a single episode. Tabular MDPs. We assume that the state space S and the action space A are finite (tabular setting). We construct standard empirical estimates separately for each state-action pair (s, a), using the learner’s observations up to and not including a given episode k. Eqs. (4–7) define sample counts, empirical transition probabilities, empirical rewards, and empirical resource consumption.2 Nk(s, a) = max { 1, ∑ t∈[k−1], h∈[H] 1{st,h = s, at,h = a} } , (4) p̂k(s ′|s, a) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] 1{st,h = s, at,h = a, st,h+1 = s′}, (5) r̂k(s, a) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] rt,h · 1{st,h = s, at,h = a}, (6) ĉk(s, a, i) = 1 Nk(s, a) ∑ t∈[k−1], h∈[H] ct,h,i · 1{st,h = s, at,h = a} ∀i ∈ D. (7) 2The max operator in Eq. (4) is to avoid dividing by 0. Preliminaries for theoretical analysis. The Q-function is a standard object in RL that tracks the learner’s expected performance if she starts from state s ∈ S at step h, selects action a ∈ A, and then follows a policy π under a model with transitions p for the remainder of the episode. We parameterize it by the objective function m : S ×A → [0, 1], which can be either a reward, i.e., m(s, a) = r(s, a), or consumption of some resource i ∈ D, i.e., m(s, a) = c(s, a, i). (For the unconstrained setting, the objective is the reward.) The performance of the policy in a particular step h is evaluated by the value function V which corresponds to the expected Q-function of the selected action (where the expectation is taken over the possibly randomized action selection of π). The Q and value functions can be both recursively defined by dynamic programming: Qπ,pm (s, a, h) = m(s, a) + ∑ s′∈S p(s′|s, a)V π,pm (s′, h+ 1), V π,pm (s, h) = Ea∼π(·|s) [ Qπ,pm (s, a, h) ] and V π,pm (s,H + 1) = 0. By slight abuse of notation, for m ∈ {r} ∪ {ci}i∈D, we denote by m? ∈ {r?} ∪ {c?i }i∈D the corresponding objectives with respect to the rewards and consumptions of the true CMDPM?. For objectives m? and transitions p?, the above are the Bellman equations of the system (Bellman, 1957). Estimating the Q-function based on the model parameters p and m rather than the ground truth parameters p? and m? introduces errors. These errors are localized across stages by the notion of Bellman error which contrasts the performance of policy π starting from stage h under the model parameters to a benchmark that behaves according to the model parameters starting from the next stage h+ 1 but uses the true parameters of the system in stage h. More formally, for objective m: BELLπ,pm (s, a, h) = Q π,p m (s, a, h)− ( m?(s, a) + ∑ s′∈S p?(s′|s, a)V π,pm (s′, h+ 1) ) . (8) Note that when the CMDP isM? (m = m?, p = p?), there is no mismatch and BELLπ,p ? m? = 0. 3 Warm-up algorithm and analysis in the basic setting In this section, we introduce a simple algorithm that allows to simultaneously bound reward and consumption regrets for the basic setting introduced in the previous section. 
Even in this basic setting, we provide the first sample-efficient guarantees in constrained episodic reinforcement learning.3 The modular analysis of the guarantees also allows us to subsequently extend (in Sections 4 and 5) the algorithm and guarantees to the more general concave-convex and knapsack settings. Our algorithm. At episode k, we construct an estimated CMDPM(k) = ( p(k), r(k), c(k) ) based on the observations collected so far. The estimates are bonus-enhanced (formalized below) to encourage more targeted exploration. Our algorithm CONRL selects a policy πk by solving the following constrained optimization problem which we refer to as BASICCONPLANNER(p(k), r(k), c(k)): max π Eπ,p (k) [ H∑ h=1 r(k) ( sh, ah )] s.t. ∀i ∈ D : Eπ,p (k) [ H∑ h=1 c(k) ( sh, ah, i )] ≤ ξ(i). The above optimization problem is similar to the objective (1) but uses the estimated model instead of the (unknown to the learner) true model. We also note that this optimization problem can be optimally solved as it is a linear program on the occupation measures (Puterman, 2014), i.e., setting as variables the probability of each state-action pair and imposing flow conservation constraints with respect to the transitions. This program is described in Appendix A.1. Bonus-enhanced model. A standard approach to implement the principle of optimism under uncertainty is to introduce, at each episode k, a bonus term b̂k(s, a) that favors under-explored actions. Specifically, we add this bonus to the empirical rewards (6), and subtract it from the consumptions (7): r(k)(s, a) = r̂k(s, a) + b̂k(s, a) and c(k)(s, a, i) = ĉk(s, a, i)− b̂k(s, a) for each resource i. 3We refer the reader to the related work (in Section 1) for discussion on concurrent and independent papers. Unlike our results, these papers do not extend to either concave-convex or knapsack settings. Following the unconstrained analogues (Azar et al., 2017; Dann et al., 2017), we define the bonus as: b̂k(s, a) = H √ 2 ln ( 8SAH(d+ 1)k2/δ) Nk(s, a) , (9) where δ > 0 is the desired failure probability of the algorithm and Nk(s, a) is the number of times (s, a) pair is visited, c.f. (4), S = |S|, and A = |A|. Thus, under-explored actions have a larger bonus, and therefore appear more appealing to the planner. For estimated transition probabilities, we just use the empirical averages (5): p(k)(s′|s, a) = p̂(s′|s, a). Valid bonus and Bellman-error decomposition. For a bonus-enhanced model to achieve effective exploration, the resulting bonuses need to be valid, i.e., they should ensure that the estimated rewards overestimate the true rewards and the estimated consumptions underestimate the true consumptions. Definition 3.1. A bonus bk : S ×A → R is valid if, ∀s ∈ S, a ∈ A, h ∈ [H],m ∈ {r} ∪ {ci}i∈D:∣∣∣(m̂k(s, a)−m?(s, a))+ ∑ s′∈S ( p̂k(s ′|s, a)− p?(s′|s, a) ) V π ?,p? m? (s ′, h+ 1) ∣∣∣ ≤ bk(s, a). By classical concentration bounds (Appendix B.1), the bonus b̂k of Eq. (9) satisfies this condition: Lemma 3.2. With probability 1− δ, the bonus b̂k(s, a) is valid for all episodes k simultaneously. Our algorithm solves the BASICCONPLANNER optimization problem based on a bonus-enhanced model. When the bonuses are valid, we can upper bound the per-episode regret by the expected sum of Bellman errors across steps. This is the first part in classical unconstrained analyses and the following proposition extends this decomposition to constrained episodic reinforcement learning. 
Valid bonus and Bellman-error decomposition. For a bonus-enhanced model to achieve effective exploration, the resulting bonuses need to be valid, i.e., they should ensure that the estimated rewards overestimate the true rewards and the estimated consumptions underestimate the true consumptions.

Definition 3.1. A bonus b_k : S × A → R is valid if, for all s ∈ S, a ∈ A, h ∈ [H], and m ∈ {r} ∪ {c_i}_{i∈D}:

$$\Big| \big( \hat{m}_k(s,a) - m^\star(s,a) \big) + \sum_{s' \in S} \big( \hat{p}_k(s'|s,a) - p^\star(s'|s,a) \big) V^{\pi^\star, p^\star}_{m^\star}(s', h+1) \Big| \leq b_k(s,a).$$

By classical concentration bounds (Appendix B.1), the bonus b̂_k of Eq. (9) satisfies this condition:

Lemma 3.2. With probability 1 − δ, the bonus b̂_k(s, a) is valid for all episodes k simultaneously.

Our algorithm solves the BASICCONPLANNER optimization problem based on a bonus-enhanced model. When the bonuses are valid, we can upper bound the per-episode regret by the expected sum of Bellman errors across steps. This is the first step in classical unconstrained analyses, and the following proposition extends this decomposition to constrained episodic reinforcement learning. The proof uses the so-called simulation lemma (Kearns and Singh, 2002) and is provided in Appendix B.3.

Proposition 3.3. If b̂_k(s, a) is valid for all episodes k simultaneously, then the per-episode reward and consumption regrets can be upper bounded by the expected sum of Bellman errors (8):

$$\mathbb{E}^{\pi^\star, p^\star}\Big[ \sum_{h=1}^{H} r^\star(s_h, a_h) \Big] - \mathbb{E}^{\pi_k, p^\star}\Big[ \sum_{h=1}^{H} r^\star(s_h, a_h) \Big] \leq \mathbb{E}^{\pi_k}\Big[ \sum_{h=1}^{H} \big| \mathrm{BELL}^{\pi_k, p^{(k)}}_{r^{(k)}}(s_h, a_h, h) \big| \Big] \tag{10}$$

$$\forall i \in \mathcal{D}: \quad \mathbb{E}^{\pi_k, p^\star}\Big[ \sum_{h=1}^{H} c^\star(s_h, a_h, i) \Big] - \xi(i) \leq \mathbb{E}^{\pi_k}\Big[ \sum_{h=1}^{H} \big| \mathrm{BELL}^{\pi_k, p^{(k)}}_{c^{(k)}_i}(s_h, a_h, h) \big| \Big]. \tag{11}$$

Final guarantee. One difficulty with directly bounding the Bellman error is that the value function is not independent of the draws forming r^{(k)}(s, a), c^{(k)}(s, a), and p^{(k)}(s'|s, a). Hence we cannot apply Hoeffding's inequality directly. While Azar et al. (2017) propose a trick to obtain an O(√S) bound on the Bellman error in unconstrained settings, the trick relies on a crucial property of Bellman optimality: for an unconstrained MDP, the optimal policy π⋆ satisfies $V^{\pi^\star}_{r^\star}(s, h) \geq V^{\pi}_{r^\star}(s, h)$ for all s, h, π (i.e., π⋆ is optimal at every state). However, when constraints exist, the optimal policy does not satisfy the Bellman optimality property. Indeed, we can only guarantee optimality with respect to the initial state distribution, i.e., $V^{\pi^\star}_{r^\star}(s_0, 1) \geq V^{\pi}_{r^\star}(s_0, 1)$ for any π, but not everywhere else. This illustrates a fundamental difference between constrained and unconstrained MDPs. Thus we cannot directly apply the trick of Azar et al. (2017). Instead, we follow an alternative approach of bounding the value function via an ε-net over the possible values. This analysis leads to a guarantee that is weaker by a factor of √S than the unconstrained results. The proof is provided in Appendix B.6.

Theorem 3.4. There exists an absolute constant c ∈ R₊ such that, with probability at least 1 − 3δ, the reward and consumption regrets are both upper bounded by:

$$\frac{c}{\sqrt{k}} \cdot S\sqrt{AH^3} \cdot \sqrt{\ln(k)\, \ln\big( SAH(d+1)k/\delta \big)} \ + \ \frac{c}{k} \cdot S^{3/2} A H^2 \sqrt{\ln\big( 2SAH(d+1)k/\delta \big)}.$$

Comparison to single-episode results. In the single-episode setting, Cheung (2019) achieves a √S dependency under the further assumption that the transitions are sparse, i.e., ‖p⋆(s, a)‖₀ ≪ S for all (s, a). We do not make such sparsity assumptions on the MDP, and we note that the regret bound of Cheung (2019) scales linearly in S when ‖p⋆(s, a)‖₀ = Θ(S). Also, the single-episode setting requires a strong reachability assumption, which is not needed in the episodic setting.

Remark 3.5. The aforementioned regret bound can be turned into a PAC bound of Õ(S²AH³/ε²) by taking the uniform mixture of the policies π₁, π₂, . . . , π_k.

4 Concave-convex setting

We now extend the algorithm and guarantees derived for the basic setting to the case where the objective is a concave function of the accumulated reward and the constraints are expressed as a convex function of the cumulative consumptions. Our approach is modular, seamlessly building on the basic setting.

Setting and objective. Formally, there is a concave reward-objective function f : R → R and a convex consumption-objective function g : R^d → R; the only assumption is that these functions are L-Lipschitz for some constant L, i.e., |f(x) − f(y)| ≤ L|x − y| for any x, y ∈ R, and |g(x) − g(y)| ≤ L‖x − y‖₁ for any x, y ∈ R^d. Analogous to (1), the learner wishes to compete against the following benchmark, which can be viewed as a reinforcement learning variant of the benchmark used by Agrawal and Devanur (2014) in multi-armed bandits:

$$\max_{\pi} \ f\Big( \mathbb{E}^{\pi, p^\star}\Big[ \sum_{h=1}^{H} r^\star(s_h, a_h) \Big] \Big) \quad \text{s.t.} \quad g\Big( \mathbb{E}^{\pi, p^\star}\Big[ \sum_{h=1}^{H} c^\star(s_h, a_h) \Big] \Big) \leq 0. \tag{12}$$
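For intuition only (these specific choices are illustrative and not taken from the paper), the basic setting of Section 3 is recovered as a special case of (12) by taking f to be the identity and g(z) = max_{i∈D}(z_i − ξ(i)); both are 1-Lipschitz in the sense required above, and g(·) ≤ 0 is exactly the d constraints of the basic setting. The short snippet below just spells this out with hypothetical budget values.

```python
import numpy as np

# Assumed example choices of f and g that recover the basic setting:
#   f(x) = x                      concave (linear), 1-Lipschitz
#   g(z) = max_i (z_i - xi_i)     convex, 1-Lipschitz w.r.t. the l1 norm
def f(x):
    return x

def g(z, xi):
    return np.max(z - xi)

xi = np.array([0.4, 0.7])                      # per-episode budgets (hypothetical)
expected_consumption = np.array([0.35, 0.7])   # E[sum_h c(s_h, a_h)] under some policy
print(g(expected_consumption, xi) <= 0)        # True: all expected constraints are met
```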
The reward and consumption regrets are therefore adapted to:

$$\mathrm{CONVEXREWREG}(k) := f\Big( \mathbb{E}^{\pi^\star, p^\star}\Big[ \sum_{h=1}^{H} r^\star(s_h, a_h) \Big] \Big) - f\Big( \frac{1}{k} \sum_{t=1}^{k} \mathbb{E}^{\pi_t, p^\star}\Big[ \sum_{h=1}^{H} r^\star(s_h, a_h) \Big] \Big),$$

$$\mathrm{CONVEXCONSREG}(k) := g\Big( \frac{1}{k} \sum_{t=1}^{k} \mathbb{E}^{\pi_t, p^\star}\Big[ \sum_{h=1}^{H} c^\star(s_h, a_h) \Big] \Big).$$

Our algorithm. As in the basic setting, we wish to create a bonus-enhanced model and optimize over it. To model the transition probabilities, we use the empirical estimates p^{(k)} = p̂_k of Eq. (5) as before. However, since the reward and consumption objectives are no longer monotone in the accumulated rewards and consumptions, it does not make sense to simply add or subtract b̂_k (defined in Eq. 9) as we did before. Instead, we compute the policy π_k of episode k together with the model by solving the following optimization problem, which we call CONVEXCONPLANNER:

$$\max_{\pi} \ \max_{r^{(k)} \in [\hat{r}_k \pm \hat{b}_k]} f\Big( \mathbb{E}^{\pi, p^{(k)}}\Big[ \sum_{h=1}^{H} r^{(k)}(s_h, a_h) \Big] \Big) \quad \text{s.t.} \quad \min_{c^{(k)} \in [\hat{c}_k \pm \hat{b}_k \cdot \mathbf{1}]} g\Big( \mathbb{E}^{\pi, p^{(k)}}\Big[ \sum_{h=1}^{H} c^{(k)}(s_h, a_h) \Big] \Big) \leq 0.$$

The above problem is convex in the occupation measures,⁴ i.e., the probabilities ρ(s, a, h) that the learner is at state-action-step (s, a, h) (c.f. Appendix A.2 for further discussion):

$$\max_{\rho} \ \max_{r \in [\hat{r}_k \pm \hat{b}_k]} f\Big( \sum_{s,a,h} \rho(s, a, h)\, r(s, a) \Big) \quad \text{s.t.} \quad \min_{c \in [\hat{c}_k \pm \hat{b}_k \cdot \mathbf{1}]} g\Big( \sum_{s,a,h} \rho(s, a, h)\, c(s, a) \Big) \leq 0$$
$$\forall s', h: \ \sum_{a} \rho(s', a, h+1) = \sum_{s,a} \rho(s, a, h)\, \hat{p}_k(s'|s, a)$$
$$\forall s, a, h: \ 0 \leq \rho(s, a, h) \leq 1 \quad \text{and} \quad \sum_{s,a} \rho(s, a, h) = 1.$$

Guarantee for concave-convex setting. To extend the guarantee of the basic setting to the concave-convex setting, we face an additional challenge: it is not immediately clear that the optimal policy π⋆ is feasible for the CONVEXCONPLANNER program, because CONVEXCONPLANNER is defined with respect to the empirical transition probabilities p^{(k)}.⁵ Moreover, when H > 1, it is not straightforward to show that the objective under the estimated model is always at least as large as the one under the true model, since the estimated transitions p^{(k)}(s, a) can lead to different states than the ones encountered in the true model.⁶ We deal with both of these issues by introducing a novel application of the mean-value theorem to show that π⋆ is indeed a feasible solution of that program and to obtain a regret decomposition similar to Proposition 3.3 (see Proposition C.1 and further discussion in Appendix C.1); this allows us to plug in the results developed for the basic setting. The full proof is provided in Appendix C.

⁴Under mild assumptions, this program can be solved in polynomial time, similar to its bandit analogue of Lemma 4.3 in (Agrawal and Devanur, 2014). We note that in the basic setting it reduces to just a linear program.
⁵Note that in the multi-armed bandit concave-convex setting (Agrawal and Devanur, 2014), proving feasibility of the best arm is straightforward as there are no transitions.
⁶Again, this is not an issue in multi-armed bandits.

Theorem 4.1. Let L be the Lipschitz constant of f and g, and let REWREG and CONSREG be the reward and consumption regrets for the basic setting (Theorem 3.4) with failure probability δ. With probability 1 − δ, our algorithm in the concave-convex setting has reward and consumption regrets upper bounded by L · REWREG and Ld · CONSREG, respectively.

The linear dependence on d in the consumption regret above comes from the fact that we assume g is Lipschitz under the ℓ₁ norm.
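Returning to the CONVEXCONPLANNER program above, the following is a minimal cvxpy sketch for the special case where f is nondecreasing and g is componentwise nondecreasing (the concrete f and g below, and all variable names, are assumptions for illustration, not the paper's implementation). In this special case the inner optimizations over the reward and consumption intervals are attained at the endpoints r̂_k + b̂_k and ĉ_k − b̂_k, so the program becomes a standard convex problem in ρ; the general case optimizes over r and c jointly, as discussed in Appendix A.2.

```python
import numpy as np
import cvxpy as cp

def convex_con_planner(p_hat, r_hat, c_hat, b, xi, mu0, H):
    """CONVEXCONPLANNER sketch for monotone f, g.

    Illustrative choices: f(x) = sqrt(x) (concave, nondecreasing) and
    g(z) = max_i (z_i - xi_i) (convex, componentwise nondecreasing).
    """
    S, A, d = c_hat.shape
    r_k = r_hat + b                      # inner max over r attained at the upper endpoint
    c_k = c_hat - b[:, :, None]          # inner min over c attained at the lower endpoint
    rho = [cp.Variable((S, A), nonneg=True) for _ in range(H)]

    constraints = [cp.sum(rho[0], axis=1) == mu0]
    for h in range(H - 1):               # flow conservation w.r.t. p_hat
        inflow = sum(rho[h][s, :] @ p_hat[s] for s in range(S))
        constraints.append(cp.sum(rho[h + 1], axis=1) == inflow)

    total_r = sum(cp.sum(cp.multiply(rho[h], r_k)) for h in range(H))
    total_c = cp.hstack([
        sum(cp.sum(cp.multiply(rho[h], c_k[:, :, i])) for h in range(H))
        for i in range(d)
    ])
    constraints.append(cp.max(total_c - xi) <= 0)        # g(total consumption) <= 0

    cp.Problem(cp.Maximize(cp.sqrt(total_r)), constraints).solve()
    return [rho[h].value for h in range(H)]
```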
5 Knapsack setting

Our last technical section extends the algorithm and guarantee of the basic setting to scenarios where the constraints are hard, in accordance with most of the literature on bandits with knapsacks. The goal here is to achieve an aggregate reward regret that is sublinear in the time horizon (in our case, the number of episodes K), while also respecting budget constraints for as small budgets as possible. We derive guarantees in terms of the reward regret, as defined previously, and then argue that our guarantee extends to the seemingly stronger benchmark of the best dynamic policy.

Setting and objective. Each resource i ∈ D has an aggregate budget B_i that the learner must not exceed over the K episodes. Unlike the basic setting, where we track the consumption regret, here we view this as a hard constraint. As in most works on bandits with knapsacks, the algorithm is allowed to use a "null action" for an episode, i.e., an action that yields zero reward and zero consumption when selected at the beginning of an episode. The learner wishes to maximize her aggregate reward while respecting these hard constraints. We reduce this problem to a specific variant of the basic problem (1) with ξ(i) = B_i/K. We modify the solution to (1) to take the null action if any constraint is violated and call the resulting benchmark π⋆. Note that π⋆ satisfies the constraints in expectation. At the end of this section, we explain how our algorithm also competes against a benchmark that is required to respect the constraints deterministically (i.e., with probability one across all episodes).

Our algorithm. In the basic setting of Section 3, we showed a reward regret guarantee and a consumption regret guarantee, proving that the average constraint violation is O(1/√K). Now we seek a stronger guarantee: the learned policy needs to satisfy the budget constraints with high probability. Our algorithm optimizes a mathematical program KNAPSACKCONPLANNER (13) that strengthens the consumption constraints:

$$\max_{\pi} \ \mathbb{E}^{\pi, p^{(k)}}\Big[ \sum_{h=1}^{H} r^{(k)}(s_h, a_h) \Big] \quad \text{s.t.} \quad \forall i \in \mathcal{D}: \ \mathbb{E}^{\pi, p^{(k)}}\Big[ \sum_{h=1}^{H} c^{(k)}(s_h, a_h, i) \Big] \leq (1 - \epsilon)\frac{B_i}{K}. \tag{13}$$

In the above, p^{(k)}, r^{(k)}, c^{(k)} are exactly as in the basic setting and ε > 0 is instantiated in the theorem below. Note that the program (13) is feasible thanks to the existence of the null action. The following mixture policy induces a feasible solution: with probability 1 − ε, we play the optimal policy π⋆ for the entire episode; with probability ε, we play the null action for the entire episode. Note that the above program can again be cast as a linear program in the occupancy-measure space (c.f. Appendix A.3 for further discussion).

Guarantee for knapsack setting. The guarantee of the basic setting applied to this tighter mathematical program seamlessly transfers to a reward guarantee that does not violate the hard constraints.

Theorem 5.1. Assume that min_i B_i ≤ KH, i.e., the constraints are non-vacuous. Let AGGREG(δ) be a bound on the aggregate (across episodes) reward or consumption regret for the soft-constraint setting (Theorem 3.4) with failure probability δ. Let ε = AGGREG(δ)/min_i B_i. If min_i B_i > AGGREG(δ), then, with probability 1 − δ, the reward regret in the hard-constraint setting is at most 2H · AGGREG(δ)/min_i B_i and the constraints are not violated.

The above theorem implies that the aggregate reward regret is sublinear in K as long as min_i B_i ≫ H · AGGREG(δ). The analysis of the above theorem (provided in Appendix D) is modular in the sense that it leverages CONRL's performance on (13) in a black-box manner. A smaller AGGREG(δ) from the basic soft-constraint setting immediately translates to a smaller reward regret and a smaller budget regime (i.e., min_i B_i can be smaller). In particular, using the AGGREG(δ) bound of Theorem 3.4, the reward regret is sublinear as long as min_i B_i = Ω(√K). In contrast, the previous work of Cheung (2019) can only handle a larger budget regime, i.e., min_i B_i = Ω(K^{2/3}). Although the guarantees are not directly comparable, as the latter is for the single-episode setting, which requires further reachability assumptions, the budgets we can handle are significantly smaller, and in the next section we show that our algorithm has superior empirical performance in episodic settings even when such assumptions are granted.
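As a small illustration of how the hard-constraint setting reuses the basic planner, the sketch below computes ε of Theorem 5.1 and the tightened per-episode budgets (1 − ε)B_i/K that are fed into program (13); the numerical values and the assumed `aggregate_regret_bound` are hypothetical, and the resulting budgets would be passed as ξ to a basic planner like the one sketched in Section 3.

```python
import numpy as np

def knapsack_budgets(B, K, aggregate_regret_bound):
    """Tightened per-episode budgets for KNAPSACKCONPLANNER, Eq. (13)."""
    if np.min(B) <= aggregate_regret_bound:
        raise ValueError("Theorem 5.1 requires min_i B_i > AGGREG(delta).")
    eps = aggregate_regret_bound / np.min(B)     # epsilon of Theorem 5.1
    return (1.0 - eps) * B / K                   # pass these as xi to the basic planner

# Hypothetical numbers: 2 resources, K = 10_000 episodes, AGGREG(delta) of order sqrt(K).
B = np.array([2000.0, 3000.0])
xi_tight = knapsack_budgets(B, K=10_000, aggregate_regret_bound=400.0)
print(xi_tight)   # tightened per-episode budgets: [0.16, 0.24]
```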
Dynamic policy benchmark. The common benchmark used in bandits with knapsacks is not the best stationary policy π⋆ that respects the constraints in expectation but rather the best dynamic policy (i.e., a policy that makes decisions based on the history) that never violates the hard constraints. In Appendix D, we show that the optimal dynamic policy (formally defined there) has reward at most that of policy π⋆ (informally, this is because π⋆ respects the constraints in expectation while the dynamic policy has to satisfy them deterministically), and therefore the guarantee of Theorem 5.1 also applies against the optimal dynamic policy.

6 Empirical comparison to other concave-convex approaches

In this section, we evaluate the performance of CONRL against previous approaches.⁷ Although our CONPLANNER (see Appendix A) can be solved exactly using linear programming (Altman, 1999), in our experiments it suffices to use a Lagrangian heuristic, denoted LAGRCONPLANNER (see Appendix E.1). This Lagrangian heuristic only needs a planner for the unconstrained RL task. We consider two unconstrained RL algorithms as planners: value iteration and a model-based Advantage Actor-Critic (A2C) (Mnih et al., 2016), the latter based on fictitious samples drawn from the model provided as input. The resulting variants of LAGRCONPLANNER are denoted CONRL-VALUE ITERATION and CONRL-A2C.

⁷Code is available at https://github.com/miryoosefi/ConRL
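The exact heuristic is specified in Appendix E.1; purely to convey the general shape of such a Lagrangian scheme (the update rule, step size, and the `unconstrained_planner` and `evaluate` interfaces below are assumptions, not the paper's implementation), one common form alternates between planning on a Lagrangian-relaxed reward and a projected-subgradient update of the multipliers:

```python
import numpy as np

def lagrangian_planner(unconstrained_planner, evaluate, r_k, c_k, xi,
                       n_iters=50, step_size=0.5):
    """Schematic Lagrangian heuristic for the constrained planning problem.

    unconstrained_planner(reward) -> policy maximizing that reward in the model.
    evaluate(policy, m) -> expected episodic sum of objective m under the model.
    Both callables are assumed to exist (e.g., value iteration on the estimated model).
    """
    d = len(xi)
    lam = np.zeros(d)                         # Lagrange multipliers, one per resource
    for _ in range(n_iters):
        # Plan on the relaxed reward r - sum_i lam_i * c_i.
        relaxed_reward = r_k - np.tensordot(c_k, lam, axes=([2], [0]))
        policy = unconstrained_planner(relaxed_reward)
        # Projected (sub)gradient ascent on the multipliers.
        consumption = np.array([evaluate(policy, c_k[:, :, i]) for i in range(d)])
        lam = np.maximum(0.0, lam + step_size * (consumption - xi))
    return policy
```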
We run our experiments on two grid-world environments, Mars rover (Tessler et al., 2019) and Box (Leike et al., 2017).⁸

Mars rover. The agent must move from the initial position to the goal without crashing into rocks. If the agent reaches the goal or crashes into a rock, it stays in that cell for the remainder of the episode. The reward is 1 when the agent reaches the goal and 1/H afterwards. The consumption is 1 when the agent crashes into a rock and 1/H afterwards. The episode horizon H is 30, and the agent's action is perturbed with probability 0.1 to a random action.

Box. The agent must move a box from the initial position to the goal while avoiding corners (cells adjacent to at least two walls). If the agent reaches the goal, it stays in that cell for the remainder of the episode. The reward is 1 when the agent reaches the goal for the first time and 1/H afterwards; the consumption is 1/H whenever the box is in a corner. The horizon H is 30, and the agent's action is perturbed with probability 0.1 to a random action.

We compare CONRL to previous constrained approaches (derived for either episodic or single-episode settings) in Figure 1. We keep track of three metrics: episode-level reward and consumption (the first two rows) and cumulative consumption (the third row). In the first two columns, the episode-level metrics are based on the most recent episode, i.e., we plot $\mathbb{E}^{\pi_k}\big[\sum_{h=1}^{H} r^\star_h\big]$ and $\mathbb{E}^{\pi_k}\big[\sum_{h=1}^{H} c^\star_h\big]$. In the third column, we plot the average across the episodes so far, i.e., $\frac{1}{k}\sum_{t=1}^{k} \mathbb{E}^{\pi_t}\big[\sum_{h=1}^{H} r^\star_h\big]$ and $\frac{1}{k}\sum_{t=1}^{k} \mathbb{E}^{\pi_t}\big[\sum_{h=1}^{H} c^\star_h\big]$, and we use a log scale for the x-axis. The cumulative consumption is $\sum_{t=1}^{k}\sum_{h=1}^{H} c_{t,h}$ in all columns. See Appendix E for further details about the experiments.

Episodic setting. We first compare our algorithms to two episodic RL approaches: APPROPO (Miryoosefi et al., 2019) and RCPO (Tessler et al., 2019). We note that none of the previous approaches in this setting address sample-efficient exploration. In addition, most of them are limited to linear constraints, with the exception of APPROPO (Miryoosefi et al., 2019), which can handle general convex constraints.⁹ Both APPROPO and RCPO (used as a baseline by Miryoosefi et al., 2019) maintain and update a weight vector λ, used to derive a reward for an unconstrained RL algorithm, which we instantiate as A2C. APPROPO focuses on the feasibility problem, so it requires specifying a lower bound on the reward, which we set to 0.3 for Mars rover and 0.1 for Box. In the first two columns of Figure 1, we see that both versions of CONRL are able to solve the constrained RL task with a much smaller number of trajectories (see the top two rows), and their overall consumption levels are substantially lower (the final row) than those of the previous approaches.

Single-episode setting. Closest to our work is TFW-UCRL2 (Cheung, 2019), which is based on UCRL (Jaksch et al., 2010). However, that approach focuses on the single-episode setting and requires a strong reachability assumption. By connecting the terminal states of our MDP to the initial state, we reduce our episodic setting to the single-episode setting, in which we can compare CONRL against TFW-UCRL2. Results for Mars rover are depicted in the last column of Figure 1.¹⁰ Again, both versions of CONRL find the solution with a much smaller number of trajectories (note the log scale on the x-axis), and their overall consumption levels are much lower than those of TFW-UCRL2. This suggests that TFW-UCRL2 might be impractical in (at least some) episodic settings.

7 Conclusions

In this paper, we study two types of constraints in the framework of constrained tabular episodic reinforcement learning: concave rewards with convex constraints, and knapsack constraints. Our algorithms achieve near-optimal regret in both settings, and experimentally we show that our approach outperforms prior work on constrained reinforcement learning. Regarding future work, it would be interesting to extend our framework to continuous state and action spaces. Potential directions include extensions to Lipschitz MDPs (Song and Sun, 2019) and MDPs with linear parameterization (Jin et al., 2019), where optimism-based exploration algorithms exist in the classic reinforcement learning setting without constraints.

⁸We are not aware of any benchmarks for convex/knapsack constraints. For transparency, we compare against prior works handling the concave-convex or knapsack settings on established benchmarks for the linear case.
⁹In addition, trust-region methods like CPO (Achiam et al., 2017) address a more restrictive setting and require constraint satisfaction at each iteration; for this reason, they are not included in the experiments.
¹⁰Due to the larger state space, it was computationally infeasible to run TFW-UCRL2 in the Box environment.
We strongly believe that understanding the theoretical underpinnings of the main machine learning paradigms is essential and can guide principled and effective deployment of such methods. Beyond its theoretical contribution, our work may help the design of reinforcement learning algorithms that go beyond classical digital applications of RL (board games and video games) and extend to settings with complex and often competing objectives. We believe that constraints constitute a fundamental limitation in extending RL beyond the digital world, as they exist in a wide variety of sequential decision-making applications (robotics, medical treatment, education, advertising). Our work provides a paradigm to design algorithms with efficient exploration despite the presence of constraints. That said, one needs to ensure that an algorithm offers acceptable quality in applications. Any exploration method that does not rely on off-policy samples will inevitably violate constraints sometimes in order to learn. In some applications, this is totally acceptable: a car running out of fuel in rare circumstances is not detrimental, an advertiser exhausting their budget in some month is even less significant, and a student's dissatisfaction with an online test is unpleasant but probably acceptable. On the other hand, if the constraint violation involves critical issues like drug recommendations for severe diseases or decisions by self-driving cars that can cause physical harm to passengers, then the algorithm needs to be carefully reviewed. It may be necessary to "prime" the algorithm with some data collected in advance (however costly that may be). One may need to make a judgement call on whether the ethical or societal standards are consistent with deploying an algorithm in a particular setting. To summarize, our work is theoretical in nature and makes significant progress on a problem at the heart of RL. It has the potential to guide deployment of constrained RL methods in many important applications and to tackle a fundamental bottleneck in deploying RL beyond the digital world. However, any application needs to be carefully reviewed before deployment.

Acknowledgments and Disclosure of Funding

The authors would like to thank Rob Schapire for useful discussions that helped in the initial stages of this work. Part of the work was done when WS was at Microsoft Research NYC.
Summary and Contributions

The paper studies an online MDP problem with concave rewards and convex constraints in an episodic setting, where the model parameters governing the rewards and resource consumption are not known. The authors propose an optimism-based algorithm and quantify its performance in terms of the reward regret and the constraint regret. The main result is √K expected regret bounds for those two regrets. The authors also study a hard-constraint version of the problem, where the resource constraints must always be satisfied.

Strengths

The results incorporate general concave rewards and convex constraints, which provide a general model capturing a variety of applications. Personally, I feel that the notion of aggregate rewards for RL has long been neglected, in view of its numerous applications in RL exploration. This work fills the gap in the case of episodic MDPs, which makes me slightly in favor of acceptance rather than rejection. The regret bounds essentially match those for the scalar reward setting, and the authors also point out the non-trivial steps in generalizing from (Agrawal and Devanur 2011) to the current episodic MDP setting.

Weaknesses

1. While under "Strengths" I remarked that the regret bounds essentially match those for the scalar reward setting, I feel that there is still some gap in the bound. In Theorem 4.1, the consumption regret upper bound of "Ld · CONSREG" has a rather loose dependence on d, in that it does not match the dependence of ‖1_d‖ in (Agrawal and Devanur 2011). Here, I denote by 1_d the d-dimensional all-ones vector, and g is L-Lipschitz continuous with respect to the norm ‖·‖.

2. In the main results, the authors show that the constraint regrets are bounded in expectation. Do these bounds translate to high-probability bounds, in the same way as in (Badanidiyuru 2013, Agrawal and Devanur 2014, Cheung 2019)?

3. Another place that needs substantial improvement is the numerical experiments. While the authors have provided additional details for the numerical experiments in Appendix E, there are quite a few places that require clarification:
- How do the plots in Figure 1 corroborate the regret bounds? More precisely, it is not clear what a trajectory means in the online model. For example, the top-left plot for RCPO reports that at No. of trajectories = 500, the reward is approximately 1.65. What does this mean? Does it mean that if I run RCPO with 500 episodes then the empirical average reward realizes as 1.65, or does it mean something else?
- Can the authors provide plots of the cumulative regret of the algorithms (at least for the proposed algorithms, in case such plots don't make sense for existing algorithms like RCPO for some reason)? This would provide a more direct way to empirically evaluate the proposed algorithms.
- In relation to the previous point, footnote 7 remarks that the bottom row corresponds to "the aggregate actual constraint incurred during training". However, in an online model, how is the notion of "during training" defined?
- It is not clear why the authors include A2C, which requires access to the latent model of p as shown in the Appendix (so it seems to violate the model assumption of not knowing p, for example?).
- I am in fact quite confused by column C. Is there any reason why TFW-UCRL2 is run with significantly more trajectories than the other two algorithms, ConRL-A2C and ConRL-Value Iteration?
- When I tried to dig deeper into Appendix E, it is stated that "TFW-UCRL2 gives fixed weights to reward and constraint violation and maximizes a scalar function. Therefore we tuned TFW-UCRL2 for different weights and the reported result is for the best weight." Nevertheless, to my knowledge, TFW-UCRL2 in fact assigns dynamic weights to the penalty function for the constraints, and those weights do not need tuning, in the sense that their update is specified dynamically by the algorithm. Can the authors provide a high-level sketch of how they implemented TFW-UCRL2 in the online episodic setting?