NIPS
Title: Learning shape correspondence with anisotropic convolutional neural networks

Abstract: Convolutional neural networks have achieved extraordinary results in many computer vision and pattern recognition applications; however, their adoption in the computer graphics and geometry processing communities is limited due to the non-Euclidean structure of their data. In this paper, we propose the Anisotropic Convolutional Neural Network (ACNN), a generalization of classical CNNs to non-Euclidean domains, where classical convolutions are replaced by projections over a set of oriented anisotropic diffusion kernels. We use ACNNs to effectively learn intrinsic dense correspondences between deformable shapes, a fundamental problem in geometry processing arising in a wide variety of applications. We tested ACNN's performance in challenging settings, achieving state-of-the-art results on recent correspondence benchmarks.

1 Introduction

In geometry processing, computer graphics, and vision, finding intrinsic correspondence between 3D shapes affected by different transformations is one of the fundamental problems, with a wide spectrum of applications ranging from texture mapping to animation [25]. Of particular interest is the setting in which the shapes are allowed to deform non-rigidly. Traditional hand-crafted correspondence approaches are divided into two main categories: point-wise correspondence methods [17], which establish a matching between (a subset of) the points on two or more shapes by minimizing metric distortion, and soft correspondence methods [23], which establish a correspondence among functions defined over the shapes rather than the vertices themselves. Recently, the emergence of 3D sensing technology has brought the need to deal with acquisition artifacts, such as missing parts and geometric and topological noise, as well as matching 3D shapes in different representations, such as meshes and point clouds. With new and broader classes of artifacts comes the need to learn from data invariance that is otherwise impossible to model axiomatically.

In the past years, we have witnessed the emergence of learning-based approaches for 3D shape analysis. The first attempts were aimed at learning local shape descriptors [15, 5, 27] and shape correspondence [20]. The dramatic success of deep learning (in particular, convolutional neural networks [8, 14]) in computer vision [13] has led to a recent keen interest in the geometry processing and graphics communities to apply such methodologies to geometric problems [16, 24, 28, 4, 26].

Extrinsic deep learning. Many machine learning techniques that work successfully on images have been tried "as is" on 3D geometric data, represented for this purpose in some way "digestible" by standard frameworks. Su et al. [24] applied CNNs to range images obtained from multiple views of 3D objects for retrieval and classification tasks. Wei et al. [26] used a view-based representation to find correspondence between non-rigid shapes. Wu et al. [28] applied volumetric CNNs to rasterized volumetric representations of 3D shapes. The main drawback of such approaches is their treatment of geometric data as Euclidean structures. Such representations are not intrinsic, and vary as a result of the pose or deformation of the object. For instance, in Figure 1, a filter that responds to features on a straight cylinder would not respond to a bent one.
Achieving invariance to shape deformations, a common requirement in many applications, is extremely hard with the aforementioned methods and requires complex models and huge training sets due to the large number of degrees of freedom involved in describing non-rigid deformations.

Intrinsic deep learning approaches try to apply learning techniques to geometric data by generalizing the main ingredients, such as convolutions, to non-Euclidean domains. In an intrinsic representation, the filter is applied to some data on the surface itself, and is thus invariant to deformations by construction (see Figure 1). The first intrinsic convolutional neural network architecture (Geodesic CNN) was presented in [16]. While producing impressive results on several shape correspondence and retrieval benchmarks, GCNN has a number of significant drawbacks. First, the charting procedure is limited to meshes, and second, there is no guarantee that the chart is always topologically meaningful. Another intrinsic CNN construction (Localized Spectral CNN), using an alternative charting technique based on the windowed Fourier transform [22], was proposed in [4]. This method is a generalization of previous work [6] on spectral deep learning on graphs. One of the key advantages of LSCNN is that the same framework can be applied to different shape representations, in particular meshes and point clouds. A drawback of this approach is its memory and computation requirements, as each window needs to be explicitly produced.

Contributions. We present Anisotropic Convolutional Neural Networks (ACNN), a method for intrinsic deep learning on non-Euclidean domains. Though it is a generic framework that can be used to handle different tasks, we focus here on learning correspondence between shapes. Our approach is related to two previous methods for deep learning on manifolds, GCNN [16] and ADD [5]. Compared to [5], where a learned spectral filter is applied to the eigenvalues of the anisotropic Laplace-Beltrami operator, we use anisotropic heat kernels as spatial weighting functions, allowing us to extract a local intrinsic representation of a function defined on the manifold. Unlike ADD, our ACNN is a convolutional neural network architecture. Compared to GCNN, our construction of the "patch operator" is much simpler, does not depend on the injectivity radius of the manifold, and is not limited to triangular meshes. Overall, ACNN combines all the best properties of the previous approaches without inheriting their drawbacks. We show that the proposed framework outperforms GCNN, ADD, and other state-of-the-art approaches on challenging correspondence benchmarks.

2 Background

We model a 3D shape as a two-dimensional compact Riemannian manifold (surface) $X$. Let $T_xX$ denote the tangent plane at $x$, modeling the surface locally as a Euclidean space. A Riemannian metric is an inner product $\langle \cdot, \cdot \rangle_{T_xX} : T_xX \times T_xX \to \mathbb{R}$ on the tangent plane, depending smoothly on $x$. Quantities which are expressible entirely in terms of the Riemannian metric, and are therefore independent of the way the surface is embedded, are called intrinsic. Such quantities are invariant to isometric (metric-preserving) deformations.

Heat diffusion on manifolds is governed by the heat equation, which has the most general form
$$f_t(x,t) = \mathrm{div}_X\big(D(x)\,\nabla_X f(x,t)\big), \qquad (1)$$
with appropriate boundary conditions if necessary. Here $\nabla_X$ and $\mathrm{div}_X$ denote the intrinsic gradient and divergence operators, and $f(x,t)$ is the temperature at point $x$ at time $t$.
$D(x)$ is the thermal conductivity tensor (a $2 \times 2$ matrix) applied to the intrinsic gradient in the tangent plane. This formulation allows modeling heat flow that is position- and direction-dependent (anisotropic). Andreux et al. [1] considered anisotropic diffusion driven by the surface curvature. Boscaini et al. [5], assuming that at each point $x$ the tangent vectors are expressed w.r.t. the orthogonal basis $v_m, v_M$ of principal curvature directions, used a thermal conductivity tensor of the form
$$D_{\alpha\theta}(x) = R_\theta(x) \begin{pmatrix} \alpha & 0 \\ 0 & 1 \end{pmatrix} R_\theta^\top(x), \qquad (2)$$
where the $2 \times 2$ matrix $R_\theta(x)$ performs a rotation by $\theta$ w.r.t. the maximum curvature direction $v_M(x)$, and $\alpha > 0$ is a parameter controlling the degree of anisotropy ($\alpha = 1$ corresponds to the classical isotropic case). We refer to the operator
$$\Delta_{\alpha\theta} f(x) = -\mathrm{div}_X\big(D_{\alpha\theta}(x)\,\nabla_X f(x)\big)$$
as the anisotropic Laplacian, and denote by $\{\phi_{\alpha\theta i}, \lambda_{\alpha\theta i}\}_{i \ge 0}$ its eigenfunctions and eigenvalues (computed, if applicable, with the appropriate boundary conditions) satisfying $\Delta_{\alpha\theta} \phi_{\alpha\theta i}(x) = \lambda_{\alpha\theta i} \phi_{\alpha\theta i}(x)$.

Given some initial heat distribution $f_0(x) = f(x,0)$, the solution of the heat equation (1) at time $t$ is obtained by applying the anisotropic heat operator $H^t_{\alpha\theta} = e^{-t\Delta_{\alpha\theta}}$ to $f_0$,
$$f(x,t) = H^t_{\alpha\theta} f_0(x) = \int_X f_0(\xi)\, h_{\alpha\theta t}(x,\xi)\, d\xi, \qquad (3)$$
where $h_{\alpha\theta t}(x,\xi)$ is the anisotropic heat kernel; the above equation can be interpreted as a non-shift-invariant version of convolution. In the spectral domain, the heat kernel is expressed as
$$h_{\alpha\theta t}(x,\xi) = \sum_{k \ge 0} e^{-t\lambda_{\alpha\theta k}}\, \phi_{\alpha\theta k}(x)\, \phi_{\alpha\theta k}(\xi). \qquad (4)$$
Appealing to signal processing intuition, the eigenvalues play the role of 'frequencies', and $e^{-t\lambda}$ acts as a low-pass filter (larger $t$, corresponding to longer diffusion, results in a filter with a narrower pass band). This construction was used in ADD [5] to generalize the OSD approach [15] using anisotropic heat kernels (considering the diagonal $h_{\alpha\theta t}(x,x)$ and learning a set of optimal task-specific spectral filters replacing the low-pass filters $e^{-t\lambda_{\alpha\theta k}}$).

[Inset figure: notation for the discretization — vertices $i, j, k, h$; angles $\alpha_{ij}, \beta_{ij}$; edge vectors $\hat e_{kj}, \hat e_{ki}, \hat e_{hi}, \hat e_{hj}$; frame vectors $\hat u_m, \hat u_M, \hat n$ and their rotations $R_\theta \hat u_m, R_\theta \hat u_M$.]

Discretization. In the discrete setting, the surface $X$ is sampled at $n$ points $V = \{x_1, \ldots, x_n\}$. The points are connected by edges $E$ and faces $F$, forming a manifold triangular mesh $(V, E, F)$. To each triangle $ijk \in F$, we attach an orthonormal reference frame $U_{ijk} = (\hat u_M, \hat u_m, \hat n)$, where $\hat n$ is the unit normal vector to the triangle and $\hat u_M, \hat u_m \in \mathbb{R}^3$ are the directions of principal curvature. The thermal conductivity tensor for the triangle $ijk$ operating on tangent vectors is expressed w.r.t. $U_{ijk}$ as the $3 \times 3$ matrix $\mathrm{diag}(\alpha, 1, 0)$. The discretization of the anisotropic Laplacian takes the form of an $n \times n$ sparse matrix $L = S^{-1} W$. The mass matrix $S$ is a diagonal matrix of area elements $s_i = \frac{1}{3} \sum_{jk\,:\,ijk \in F} A_{ijk}$, where $A_{ijk}$ denotes the area of triangle $ijk$. The stiffness matrix $W$ is composed of weights
$$w_{ij} = \begin{cases} \dfrac{1}{2}\left( \dfrac{\langle \hat e_{kj}, \hat e_{ki} \rangle_{H_\theta}}{\sin \alpha_{ij}} + \dfrac{\langle \hat e_{hj}, \hat e_{hi} \rangle_{H_\theta}}{\sin \beta_{ij}} \right) & (i,j) \in E; \\[2ex] -\sum_{k \ne i} w_{ik} & i = j; \\[1ex] 0 & \text{else}, \end{cases} \qquad (5)$$
where the notation is according to the inset figure, and the shear matrix $H_\theta = R_\theta U_{ijk}\, \mathrm{diag}(\alpha, 1, 0)\, U_{ijk}^\top R_\theta^\top$ encodes the anisotropic scaling up to an orthogonal basis change. Here $R_\theta$ denotes the $3 \times 3$ rotation matrix rotating the basis vectors $U_{ijk}$ on each triangle around the normal $\hat n$ by angle $\theta$.
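To make the discretization concrete, the following is a minimal sketch of two standard ways to use the matrices $S$ and $W$ defined above: integrating the heat equation (1) by implicit Euler steps, and computing the anisotropic heat kernel of Eq. (4) from a truncated generalized eigendecomposition. This is an illustration under stated assumptions, not the authors' released implementation; all function names are ours, and the truncation to $k$ eigenpairs is for tractability (the paper reports using all eigenpairs).

```python
import numpy as np
import scipy.sparse.linalg as spla

def heat_step(W, S, f, dt):
    """One implicit Euler step for Eq. (1): solve (S + dt*W) f_new = S f."""
    return spla.spsolve((S + dt * W).tocsc(), S @ f)

def anisotropic_heat_kernel(W, S, t, k=100):
    """Truncated spectral heat kernel of Eq. (4), as an n x n matrix.

    W, S: stiffness and mass matrices for one orientation theta.
    The spectral expansion is truncated to k eigenpairs.
    """
    # Generalized eigenproblem W phi = lambda S phi; a tiny negative shift
    # keeps the shift-invert factorization non-singular (W has a constant
    # vector in its null space).
    evals, evecs = spla.eigsh(W, k=k, M=S, sigma=-1e-8)
    # h_t(x, xi) = sum_k exp(-t * lambda_k) phi_k(x) phi_k(xi), Eq. (4).
    return (evecs * np.exp(-t * evals)) @ evecs.T
```

One such kernel matrix would be computed per orientation $\theta$ and scale $t$; the patch operator of Section 4 then reuses these matrices as local weighting functions.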
3 Intrinsic deep learning

This paper deals with the extension of the popular convolutional neural networks (CNNs) [14] to non-Euclidean domains. The key feature of CNNs is the convolutional layer, implementing the idea of "weight sharing", wherein a small set of templates (filters) is applied to different parts of the data. In image analysis applications, the input into the CNN is a function representing pixel values given on a Euclidean domain (the plane); due to shift-invariance, the convolution can be thought of as passing a template across the plane and recording the correlation of the template with the function at each location. One of the major problems in applying the same paradigm to non-Euclidean domains is the lack of shift-invariance: the template now has to be location-dependent.

Among the recent attempts to develop intrinsic CNNs on non-Euclidean domains [6, 4, 16], the most related to our work is GCNN [16]. The latter approach was introduced as a generalization of CNNs to triangular meshes based on geodesic local patches. The core of this method is the construction of local geodesic polar coordinates using a procedure previously employed for intrinsic shape context descriptors [12]. The patch operator $(D(x)f)(\theta, \rho)$ in GCNN maps the values of the function $f$ around vertex $x$ into the local polar coordinates $\theta, \rho$, leading to the definition of the geodesic convolution
$$(f * a)(x) = \max_{\Delta\theta \in [0, 2\pi)} \int a(\theta + \Delta\theta, \rho)\, (D(x)f)(\theta, \rho)\, d\rho\, d\theta, \qquad (6)$$
which follows the idea of multiplication by a template, but is defined up to an arbitrary rotation $\Delta\theta \in [0, 2\pi)$ due to the ambiguity in the selection of the origin of the angular coordinate. The authors propose to take the maximum over all possible rotations of the template $a(\rho, \theta)$ to remove this ambiguity. Here, and in the following, $f$ is some feature vector defined on the surface (e.g. texture, geometric descriptors, etc.).

There are several drawbacks to this construction. First, the charting method relies on a fast marching-like procedure requiring a triangular mesh. While relatively insensitive to triangulation [12], it may fail if the mesh is very irregular. Second, the radius of the geodesic patches must be sufficiently small compared to the injectivity radius of the shape, otherwise the resulting patch is not guaranteed to be a topological disk. In practice, this limits the size of the patches one can safely use, or requires an adaptive radius selection mechanism.

4 Anisotropic convolutional neural networks

The key idea of the Anisotropic CNN presented in this paper is the construction of a patch operator using anisotropic heat kernels. We interpret heat kernels as local weighting functions and construct
$$(D_\alpha(x)f)(\theta, t) = \frac{\int_X h_{\alpha\theta t}(x, \xi)\, f(\xi)\, d\xi}{\int_X h_{\alpha\theta t}(x, \xi)\, d\xi}, \qquad (7)$$
for some anisotropy level $\alpha > 1$. This way, the values of $f$ around point $x$ are mapped to a local system of coordinates $(\theta, t)$ that behaves like a polar system (here $t$ denotes the scale of the heat kernel and $\theta$ is its orientation). We define the intrinsic convolution as
$$(f * a)(x) = \int a(\theta, t)\, (D_\alpha(x)f)(\theta, t)\, dt\, d\theta. \qquad (8)$$
Note that unlike the arbitrarily oriented geodesic patches in GCNN, which necessitate taking a maximum over all template rotations (6), in our construction it is natural to use the principal curvature direction as the reference $\theta = 0$.
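In the discrete setting, Eqs. (7)–(8) reduce to matrix operations on precomputed heat kernels. The sketch below is a hedged illustration, not the paper's code: `kernels[i][j]` is assumed to hold the $n \times n$ kernel matrix for the $i$-th orientation and $j$-th scale, computed as in the previous sketch, and the array shapes are our own convention.

```python
import numpy as np

def patch_operator(kernels, f):
    """Discrete patch operator of Eq. (7).

    kernels[i][j]: n x n heat-kernel matrix for orientation i, scale j.
    f: (n, d) input feature field.
    Returns patches of shape (n, n_theta, n_t, d).
    """
    n_theta, n_t = len(kernels), len(kernels[0])
    n, d = f.shape
    patches = np.empty((n, n_theta, n_t, d))
    for i in range(n_theta):
        for j in range(n_t):
            H = kernels[i][j]
            # Heat-kernel-weighted average of f, normalized by kernel mass.
            patches[:, i, j, :] = (H @ f) / H.sum(axis=1, keepdims=True)
    return patches

def intrinsic_conv(patches, a):
    """Intrinsic convolution of Eq. (8), template a of shape (n_theta, n_t, d)."""
    return np.einsum('nijd,ijd->n', patches, a)
```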
Such an approach has a few major advantages compared to previous intrinsic CNN models. First, being a spectral construction, our patch operator can be applied to any shape representation (like LSCNN and unlike GCNN). Second, being defined in the spatial domain, the patches and the resulting filters have a clear geometric interpretation (unlike LSCNN). Third, our construction accounts for local directional patterns (like GCNN and unlike LSCNN). Fourth, the heat kernels are always well defined, independently of the injectivity radius of the manifold (unlike GCNN). We summarize the comparative advantages in Table 1.

ACNN architecture. Similarly to Euclidean CNNs, our ACNN consists of several layers that are applied subsequently, i.e. the output of the previous layer is used as the input into the subsequent one. ACNN, as any convolutional network, is applied in a point-wise manner on a function defined on the manifold, producing a point-wise output that is interpreted as soft correspondence, as described below. Our intrinsic convolutional layer $\mathrm{IC}_Q$, with $Q$ output maps, replaces the convolutional layer used in classical Euclidean CNNs with the construction (8). The $\mathrm{IC}_Q$ layer contains $PQ$ filters arranged in banks ($P$ filters in $Q$ banks); each bank corresponds to an output dimension. The filters are applied to the input as follows,
$$f_q^{\mathrm{out}}(x) = \sum_{p=1}^{P} (f_p^{\mathrm{in}} * a_{qp})(x), \qquad q = 1, \ldots, Q, \qquad (9)$$
where $a_{qp}(\theta, t)$ are the learnable coefficients of the $p$-th filter in the $q$-th filter bank. A visualization of such filters is available in the supplementary material. Overall, the ACNN architecture, combining several layers of different types, acts as a non-linear parametric mapping of the form $f_\Theta(x)$ at each point $x$ of the shape, where $\Theta$ denotes the set of all learnable parameters of the network. The choice of the parameters is done by an optimization process minimizing a task-specific cost, and can thus be rather general. Here, we focus on learning shape correspondence.

Learning correspondence. Finding correspondence in a collection of shapes can be cast as a labelling problem, where one tries to label each vertex of a given query shape $X$ with the index of a corresponding point on some reference shape $Y$ [20]. Let $n$ and $m$ denote the number of vertices in $X$ and $Y$, respectively. For a point $x$ on a query shape, the output of ACNN $f_\Theta(x)$ is $m$-dimensional and is interpreted as a probability distribution ('soft correspondence') on $Y$: the value $f_\Theta(x, y)$ represents the probability of $x$ being mapped to $y$. Let us denote by $y^*(x)$ the ground-truth correspondence of $x$ on the reference shape. We assume to be provided with examples of points from shapes across the collection and their ground-truth correspondence, $T = \{(x, y^*(x))\}$. The optimal parameters of the network are found by minimizing the multinomial regression loss
$$\ell_{\mathrm{reg}}(\Theta) = -\sum_{(x,\, y^*(x)) \in T} \log f_\Theta(x, y^*(x)). \qquad (10)$$
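Read concretely, Eq. (9) is a sum of intrinsic convolutions over the $P$ input maps, and Eq. (10) is a negative log-likelihood over the training pairs. A hedged sketch follows; the log-softmax placement and the names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ic_layer(patches, A):
    """IC_Q layer of Eq. (9).

    patches: (n, n_theta, n_t, P) patch-operator outputs for P input maps.
    A: (Q, n_theta, n_t, P) learnable filter coefficients a_qp.
    Returns (n, Q): each output map is a sum of P intrinsic convolutions.
    """
    return np.einsum('nijp,qijp->nq', patches, A)

def correspondence_loss(log_probs, gt):
    """Multinomial regression loss of Eq. (10).

    log_probs: (n, m) log-softmax output over the m reference vertices.
    gt: length-n array with gt[i] = y*(x_i), the ground-truth match index.
    """
    return -log_probs[np.arange(len(gt)), gt].sum()
```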
5 Results

In this section, we evaluate the proposed ACNN method and compare it to state-of-the-art approaches. Anisotropic Laplacians were computed according to (5). Heat kernels were computed in the frequency domain using all the eigenpairs. In all experiments, we used $L = 16$ orientations and the anisotropy parameter $\alpha = 100$. Neural networks were implemented in Theano [2]. The ADAM [11] stochastic optimization algorithm was used with an initial learning rate of $10^{-3}$, $\beta_1 = 0.9$, and $\beta_2 = 0.999$. As the input to the networks, we used the local SHOT descriptor [21] with 544 dimensions and default parameters. For all experiments, training was done by minimizing the loss (10). For shapes with 6.9K vertices, Laplacian computation and eigendecomposition took 1 second and 4 seconds per angle, respectively, on a desktop workstation with 64GB of RAM and an i7-4820K CPU. Forward propagation of the trained model takes approximately 0.5 seconds to produce the dense soft correspondence for all the vertices.

Full mesh correspondence. We used the FAUST humans dataset [3], containing 100 meshes of 10 scanned subjects, each in 10 different poses. The shapes in the collection manifest strong non-isometric deformations. Vertex-wise groundtruth correspondence is known between all the shapes. The zeroth FAUST shape, containing 6890 vertices, was used as the reference; for each point on the query shape, the output of the network is the soft correspondence as a 6890-dimensional vector, which was then converted to a point correspondence with the technique explained in Section 4. The first 80 shapes were used for training and the remaining 20 for testing, following verbatim the settings of [16]. Batch normalization [9] allowed us to effectively train larger and deeper networks. For this experiment, we adopted the following architecture, inspired by GCNN [16]: FC64+IC64+IC128+IC256+FC1024+FC512+Softmax. The soft correspondences produced by the net were refined using functional maps [18]. We refer to the supplementary material for the details.

We compare to Random Forests (RF) [20], Blended Intrinsic Maps (BIM) [10], Localized Spectral CNN (LSCNN) [4], and Anisotropic Diffusion Descriptors (ADD) [5]. Figure 2 (left) shows the performance of the different methods. The performance was evaluated using the Princeton protocol [10], plotting the percentage of matches that are at most r-geodesically distant from the groundtruth correspondence on the reference shape. Two versions of the protocol consider intrinsically symmetric matches as correct (symmetric setting, solid curves) or wrong (asymmetric, more challenging setting, dashed curves). Some methods based on intrinsic structures (e.g. LSCNN or RF applied on WKS descriptors) are invariant under intrinsic symmetries and thus cannot distinguish between symmetric points. The proposed ACNN method clearly outperforms all the compared approaches and also perfectly distinguishes symmetric points. Figure 3 shows the pointwise geodesic error of the different correspondence methods (distance of the correspondence at a point from the groundtruth). ACNN shows dramatically smaller distortions compared to other methods. Over 60% of matches are exact (zero geodesic error), while only a few points have geodesic error larger than 10% of the geodesic diameter of the shape.¹ Please refer to the supplementary material for an additional visualization of the quality of the correspondences obtained with ACNN in terms of texture transfer.

¹ Per-subject leave-one-out produces comparable results, with mean accuracy of 59.6 ± 3.7%.

Partial correspondence. We used the recent, very challenging SHREC'16 Partial Correspondence benchmark [7], consisting of nearly-isometrically deformed shapes from eight classes with different parts removed. The two types of partiality in the benchmark are cuts (removal of a few large parts) and holes (removal of many small parts). In each class, the vertex-wise groundtruth correspondence between the full shape and its partial versions is given. The dataset was split into disjoint training and testing sets. For cuts, training was done on 15 shapes per class; for holes, training was done on 10 shapes per class. We used the following ACNN architecture: IC32+FC1024+DO(0.5)+FC2048+DO(0.5)+Softmax. The soft correspondences produced by the net were refined using partial functional correspondence [19]. We refer to the supplementary material for the details.
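Both the full and partial experiments report Princeton-protocol curves. For reference, a hedged sketch of this evaluation, assuming a precomputed geodesic distance matrix on the reference shape normalized by its geodesic diameter; the names are illustrative, not the benchmark code.

```python
import numpy as np

def princeton_curve(pred, gt, geo_dist, radii):
    """Fraction of matches within geodesic radius r of the ground truth.

    pred, gt: length-n arrays of predicted / ground-truth vertex indices
              on the reference shape.
    geo_dist: (m, m) geodesic distances on the reference shape, normalized
              by its geodesic diameter.
    radii: iterable of thresholds r at which to evaluate the curve.
    """
    errors = geo_dist[pred, gt]          # per-point geodesic error
    return np.array([(errors <= r).mean() for r in radii])
```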
The dropout regularization, with $\pi_{\mathrm{drop}} = 0.5$, was crucial to avoid overfitting on such a small training set. We compared ACNN to RF [20] and Partial Functional Maps (PFM) [19]. For the evaluation, we used the protocol of [7], which closely follows the Princeton benchmark. Figure 2 (middle) compares the performance of different partial matching methods on the SHREC'16 Partial (cuts) dataset. ACNN outperforms the other approaches by a significant margin. Figure 4 (top) shows examples of partial correspondence on the horse shape as well as the pointwise geodesic error. We observe that the proposed approach produces high-quality correspondences even in such a challenging setting. Figure 2 (right) compares the performance of different partial matching methods on the SHREC'16 Partial (holes) dataset. In this setting as well, ACNN outperforms the other approaches by a significant margin. Figure 4 (bottom) shows examples of partial correspondence on the dog shape as well as the pointwise geodesic error.

6 Conclusions

We presented Anisotropic CNN, a new framework generalizing convolutional neural networks to non-Euclidean domains and allowing deep learning to be performed on geometric data. Our work follows the very recent trend of bringing machine learning methods to computer graphics and geometry processing applications, and is currently the most generic intrinsic CNN model. Our experiments show that ACNN outperforms previously proposed intrinsic CNN models, as well as additional state-of-the-art methods, in the shape correspondence application in challenging settings. Being a generic model, ACNN can be used for many other applications. The most promising future work direction is applying ACNN to learning on graphs.

Acknowledgments. The authors wish to thank Matteo Sala for the textured models. This research was supported by ERC Starting Grant No. 307047 (COMET), a Google Faculty Research Award, and an Nvidia equipment grant.
1. What is the focus of the paper regarding neural networks? 2. How does the proposed method differ from traditional approaches? 3. What are the strengths of the paper, especially in terms of clarity and presentation? 4. Do you have any concerns or questions regarding the theoretical analysis or comparisons with other works? 5. What are your overall impressions of the paper and its contributions to the field?
Review
Review The authors propose the "anisotropic neural network", a novel method for intrinsic deep learning, i.e. directly computing functions defined on a non-Euclidean domain. They are particularly interested in computing correspondences between different non-Euclidean objects, e.g. computing corresponding points on a body in different poses. I'm not familiar with this area, but I think that the anisotropic network works as follows: You learn (?) several heat kernels, which each implement a mapping from the intrinsic space onto a polar system of coordinates. For each of the heat kernels, several filters are applied. The i'th filter from each kernel is summed to form the i'th output plane. The authors show qualitative results on several benchmarks, which strongly outperform the prior GCNN approach. The paper is presented very clearly, giving ample references, excellent diagrams, and clear explanation of the problem, theoretical work, and evaluation. I wonder if some of the explanation in Sec. 2 and 3 can be made a bit more accessible to a wider audience. I don't have suitable background to evaluate the theoretical approach or comparison with other methods for intrinsic learning on non-Euclidean domains. The qualitative results, both in isolation and in comparison to prior work, look quite remarkable to me. Even though I am not familiar with this area, I would love to learn more about this work.
NIPS
1. What is the focus and contribution of the paper on Anisotropic CNN? 2. What are the strengths of the proposed approach, particularly in terms of its application in shape correspondence? 3. What are the weaknesses of the paper, especially regarding its experimental design and comparison with other works? 4. Do you have any concerns about the appropriateness of the paper for NIPS? 5. What are some potential improvements that could be made to enhance the performance of ACNN?
Review
Review This paper introduces Anisotropic CNN, a new framework generalizing convolutional neural networks to non-Euclidean domains, allowing to perform deep learning on geometric data. This is basically a machine learning application in computer graphics and geometry processing. The experiments (on benchmarks such as SHREC'16 Partial) show that ACNN outperforms previously proposed intrinsic CNN models, as well as additional state-of-the-art methods in the shape correspondence application in challenging settings. This paper is generally well written. The application of finding shape correspondence is quite important for many fields, especially for shape analysis related fields. The proposed approach seems quite effective as well. The experiments are convincing in general, although some additional improvements can be made. For example, more datasets and also comparisons against more methods. It is also interesting to try more advanced network architectures and see if this can improve the performance. It is mentioned that "First 80 shapes for training and the remaining 20 for testing". I assume these 80 shapes are from 8 subjects, and the 20 testing ones are from the remaining 2 subjects? In this case, it makes more sense to use leave-one-out considering that there are only 10 subjects. Overall the quality of this paper is good and I didn't notice any major flaw. My main concern is that whether this fits the scope of NIPS. This is more like a graphics paper (most compared methods are published in that community as well). Other than this, I have no concerns.
NIPS
Title Learning shape correspondence with anisotropic convolutional neural networks Abstract Convolutional neural networks have achieved extraordinary results in many computer vision and pattern recognition applications; however, their adoption in the computer graphics and geometry processing communities is limited due to the non-Euclidean structure of their data. In this paper, we propose Anisotropic Convolutional Neural Network (ACNN), a generalization of classical CNNs to nonEuclidean domains, where classical convolutions are replaced by projections over a set of oriented anisotropic diffusion kernels. We use ACNNs to effectively learn intrinsic dense correspondences between deformable shapes, a fundamental problem in geometry processing, arising in a wide variety of applications. We tested ACNNs performance in challenging settings, achieving state-of-the-art results on recent correspondence benchmarks. 1 Introduction In geometry processing, computer graphics, and vision, finding intrinsic correspondence between 3D shapes affected by different transformations is one of the fundamental problems with a wide spectrum of applications ranging from texture mapping to animation [25]. Of particular interest is the setting in which the shapes are allowed to deform non-rigidly. Traditional hand-crafted correspondence approaches are divided into two main categories: point-wise correspondence methods [17], which establish the matching between (a subset of) the points on two or more shapes by minimizing metric distortion, and soft correspondence methods [23], which establish a correspondence among functions defined over the shapes, rather than the vertices themselves. Recently, the emergence of 3D sensing technology has brought the need to deal with acquisition artifacts, such as missing parts, geometric, and topological noise, as well as matching 3D shapes in different representations, such as meshes and point clouds. With new and broader classes of artifacts, comes the need of learning from data invariance that is otherwise impossible to model axiomatically. In the past years, we have witnessed the emergence of learning-based approaches for 3D shape analysis. The first attempts were aimed at learning local shape descriptors [15, 5, 27], and shape correspondence [20]. The dramatic success of deep learning (in particular, convolutional neural networks [8, 14]) in computer vision [13] has led to a recent keen interest in the geometry processing and graphics communities to apply such methodologies to geometric problems [16, 24, 28, 4, 26]. Extrinsic deep learning. Many machine learning techniques successfully working on images were tried “as is” on 3D geometric data, represented for this purpose in some way “digestible” by standard frameworks. Su et al. [24] used CNNs applied to range images obtained from multiple views of 3D objects for retrieval and classification tasks. Wei et al. [26] used view-based representation to find correspondence between non-rigid shapes. Wu et al. [28] used volumetric CNNs applied to rasterized volumetric representation of 3D shapes. The main drawback of such approaches is their treatment of geometric data as Euclidean structures. Such representations are not intrinsic, and vary 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. as the result of pose or deformation of the object. For instance, in Figure 1, the filter that responds to features on a straight cylinder would not respond to a bent one. 
Achieving invariance to shape deformations, a common requirement in many applications, is extremely hard with the aforementioned methods and requires complex models and huge training sets due to the large number of degrees of freedom involved in describing non-rigid deformations. Intrinsic deep learning approaches try to apply learning techniques to geometric data by generalizing the main ingredients such as convolutions to non-Euclidean domains. In an intrinsic representation, the filter is applied to some data on the surface itself, thus being invariant to deformations by construction (see Figure 1). The first intrinsic convolutional neural network architecture (Geodesic CNN) was presented in [16]. While producing impressive results on several shape correspondence and retrieval benchmarks, GCNN has a number of significant drawbacks. First, the charting procedure is limited to meshes, and second, there is no guarantee that the chart is always topologically meaningful. Another intrinsic CNN construction (Localized Spectral CNN) using an alternative charting technique based on the windowed Fourier transform [22] was proposed in [4]. This method is a generalization of a previous work [6] on spectral deep learning on graphs. One of the key advantages of LSCNN is that the same framework can be applied to different shape representations, in particular, meshes and point clouds. A drawback of this approach is its memory and computation requirements, as each window needs to be explicitly produced. Contributions. We present Anisotropic Convolutional Neural Networks (ACNN), a method for intrinsic deep learning on non-Euclidean domains. Though it is a generic framework that can be used to handle different tasks, we focus here on learning correspondence between shapes. Our approach is related to two previous methods for deep learning on manifolds, GCNN [16] and ADD [5]. Compared to [5], where a learned spectral filter applied to the eigenvalues of anisotropic LaplaceBeltrami operator, we use anisotropic heat kernels as spatial weighting functions allowing to extract a local intrinsic representation of a function defined on the manifold. Unlike ADD, our ACNN is a convolutional neural network architecture. Compared to GCNN, our construction of the “patch operator” is much simpler, does not depend on the injectivity radius of the manifold, and is not limited to triangular meshes. Overall, ACNN combines all the best properties of the previous approaches without inheriting their drawbacks. We show that the proposed framework outperforms GCNN, ADD, and other state-of-the-art approaches on challenging correspondence benchmarks. 2 Background We model a 3D shape as a two-dimensional compact Riemannian manifold (surface) X . Let T x X denote the tangent plane at x, modeling the surface locally as a Euclidean space. A Riemannian metric is an inner product h·, ·i T x X : T x X ⇥ T x X ! R on the tangent plane, depending smoothly on x. Quantities which are expressible entirely in terms of Riemannian metric, and therefore independent on the way the surface is embedded, are called intrinsic. Such quantities are invariant to isometric (metric-preserving) deformations. Heat diffusion on manifolds is governed by the heat equation, which has the most general form f t (x, t) = div X (D(x)r X f(x, t)), (1) with appropriate boundary conditions if necessary. Here r X and div X denote the intrinsic gradient and divergence operators, and f(x, t) is the temperature at point x at time t. 
$D(x)$ is the thermal conductivity tensor (a $2 \times 2$ matrix) applied to the intrinsic gradient in the tangent plane. This formulation allows modeling heat flow that is position- and direction-dependent (anisotropic). Andreux et al. [1] considered anisotropic diffusion driven by the surface curvature. Boscaini et al. [5], assuming that at each point $x$ the tangent vectors are expressed w.r.t. the orthogonal basis $v_m, v_M$ of principal curvature directions, used a thermal conductivity tensor of the form
$$D_{\alpha\theta}(x) = R_\theta(x) \operatorname{diag}(\alpha, 1) R_\theta^\top(x), \qquad (2)$$
where the $2 \times 2$ matrix $R_\theta(x)$ performs a rotation by $\theta$ w.r.t. the maximum curvature direction $v_M(x)$, and $\alpha > 0$ is a parameter controlling the degree of anisotropy ($\alpha = 1$ corresponds to the classical isotropic case). We refer to the operator
$$\Delta_{\alpha\theta} f(x) = \mathrm{div}_X \big( D_{\alpha\theta}(x) \nabla_X f(x) \big)$$
as the anisotropic Laplacian, and denote by $\{\phi_{\alpha\theta i}, \lambda_{\alpha\theta i}\}_{i \geq 0}$ its eigenfunctions and eigenvalues (computed, if applicable, with the appropriate boundary conditions) satisfying $\Delta_{\alpha\theta} \phi_{\alpha\theta i}(x) = \lambda_{\alpha\theta i} \phi_{\alpha\theta i}(x)$.

Given some initial heat distribution $f_0(x) = f(x, 0)$, the solution of the heat equation (1) at time $t$ is obtained by applying the anisotropic heat operator $H^t_{\alpha\theta} = e^{-t \Delta_{\alpha\theta}}$ to $f_0$,
$$f(x, t) = H^t_{\alpha\theta} f_0(x) = \int_X f_0(\xi) \, h_{\alpha\theta t}(x, \xi) \, d\xi, \qquad (3)$$
where $h_{\alpha\theta t}(x, \xi)$ is the anisotropic heat kernel, and the above equation can be interpreted as a non-shift-invariant version of convolution. In the spectral domain, the heat kernel is expressed as
$$h_{\alpha\theta t}(x, \xi) = \sum_{k \geq 0} e^{-t \lambda_{\alpha\theta k}} \phi_{\alpha\theta k}(x) \, \phi_{\alpha\theta k}(\xi). \qquad (4)$$
Appealing to the signal processing intuition, the eigenvalues play the role of 'frequencies', and $e^{-t\lambda}$ acts as a low-pass filter (larger $t$, corresponding to longer diffusion, results in a filter with a narrower pass band). This construction was used in ADD [5] to generalize the OSD approach [15] using anisotropic heat kernels (considering the diagonal $h_{\alpha\theta t}(x, x)$ and learning a set of optimal task-specific spectral filters replacing the low-pass filters $e^{-t \lambda_{\alpha\theta k}}$).

[Inset figure: discretization notation, showing the two triangles $ijk$ and $ijh$ sharing edge $(i,j)$, the angles $\alpha_{ij}, \beta_{ij}$, the edge vectors $\hat{e}_{kj}, \hat{e}_{ki}, \hat{e}_{hi}, \hat{e}_{hj}$, and the rotated frame $R_\theta \hat{u}_M, R_\theta \hat{u}_m, \hat{n}$.]

Discretization. In the discrete setting, the surface $X$ is sampled at $n$ points $V = \{x_1, \ldots, x_n\}$. The points are connected by edges $E$ and faces $F$, forming a manifold triangular mesh $(V, E, F)$. To each triangle $ijk \in F$, we attach an orthonormal reference frame $U_{ijk} = (\hat{u}_M, \hat{u}_m, \hat{n})$, where $\hat{n}$ is the unit normal vector to the triangle and $\hat{u}_M, \hat{u}_m \in \mathbb{R}^3$ are the directions of principal curvature. The thermal conductivity tensor for the triangle $ijk$ operating on tangent vectors is expressed w.r.t. $U_{ijk}$ as the $3 \times 3$ matrix $\operatorname{diag}(\alpha, 1, 0)$. The discretization of the anisotropic Laplacian takes the form of an $n \times n$ sparse matrix $L = S^{-1} W$. The mass matrix $S$ is a diagonal matrix of area elements $s_i = \frac{1}{3} \sum_{jk : ijk \in F} A_{ijk}$, where $A_{ijk}$ denotes the area of triangle $ijk$. The stiffness matrix $W$ is composed of weights
$$w_{ij} = \begin{cases} \frac{1}{2} \left( \frac{\langle \hat{e}_{kj}, \hat{e}_{ki} \rangle_{H_\theta}}{\sin \alpha_{ij}} + \frac{\langle \hat{e}_{hj}, \hat{e}_{hi} \rangle_{H_\theta}}{\sin \beta_{ij}} \right) & (i, j) \in E; \\ -\sum_{k \neq i} w_{ik} & i = j; \\ 0 & \text{else}, \end{cases} \qquad (5)$$
where the notation is according to the inset figure, and the shear matrix $H_\theta = R_\theta U_{ijk} \operatorname{diag}(\alpha, 1, 0) U_{ijk}^\top R_\theta^\top$ encodes the anisotropic scaling up to an orthogonal basis change. Here $R_\theta$ denotes the $3 \times 3$ rotation matrix rotating the basis vectors $U_{ijk}$ on each triangle around the normal $\hat{n}$ by angle $\theta$.
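To make the spectral construction concrete, the following minimal NumPy sketch (our illustration, not the authors' released code) builds the conductivity tensor of (2) and evaluates the heat kernel of (4) from precomputed eigenpairs of the discrete anisotropic Laplacian $L = S^{-1}W$; the eigensolver and the mesh-specific assembly of (5) are assumed to be given.

```python
import numpy as np

def conductivity_tensor(alpha, theta):
    """Thermal conductivity D_{alpha,theta} of eq. (2), expressed in the
    principal-curvature basis (v_M, v_m); alpha = 1 recovers isotropy."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])          # rotation by theta w.r.t. v_M
    return R @ np.diag([alpha, 1.0]) @ R.T

def heat_kernel(phi, lam, t):
    """Spectral anisotropic heat kernel of eq. (4) for one orientation.

    phi : (n, k) eigenvectors of L = S^{-1} W (columns phi_0 .. phi_{k-1})
    lam : (k,) corresponding eigenvalues
    t   : diffusion time; larger t gives a narrower low-pass band
    Returns the dense (n, n) kernel h[x, xi].
    """
    # exp(-t * lambda) low-pass filters the 'frequencies' lambda
    return (phi * np.exp(-t * lam)) @ phi.T
```

Keeping only the first $k$ columns of phi truncates the sum in (4), a common approximation; the experiments in Section 5 use all eigenpairs.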
3 Intrinsic deep learning

This paper deals with the extension of the popular convolutional neural networks (CNN) [14] to non-Euclidean domains. The key feature of CNNs is the convolutional layer, implementing the idea of "weight sharing", wherein a small set of templates (filters) is applied to different parts of the data. In image analysis applications, the input into the CNN is a function representing pixel values given on a Euclidean domain (the plane); due to shift-invariance, the convolution can be thought of as passing a template across the plane and recording the correlation of the template with the function at that location. One of the major problems in applying the same paradigm to non-Euclidean domains is the lack of shift-invariance: the template now has to be location-dependent.

Among the recent attempts to develop intrinsic CNNs on non-Euclidean domains [6, 4, 16], the most related to our work is GCNN [16]. The latter approach was introduced as a generalization of CNNs to triangular meshes based on geodesic local patches. The core of this method is the construction of local geodesic polar coordinates using a procedure previously employed for intrinsic shape context descriptors [12]. The patch operator $(D(x)f)(\theta, \rho)$ in GCNN maps the values of the function $f$ around vertex $x$ into the local polar coordinates $(\theta, \rho)$, leading to the definition of the geodesic convolution
$$(f \star a)(x) = \max_{\Delta\theta \in [0, 2\pi)} \int a(\theta + \Delta\theta, \rho) \, (D(x)f)(\theta, \rho) \, d\rho \, d\theta, \qquad (6)$$
which follows the idea of multiplication by a template, but is defined up to an arbitrary rotation $\Delta\theta \in [0, 2\pi)$ due to the ambiguity in the selection of the origin of the angular coordinate. The authors propose to take the maximum over all possible rotations of the template $a(\rho, \theta)$ to remove this ambiguity. Here, and in the following, $f$ is some feature vector defined on the surface (e.g. texture, geometric descriptors, etc.).

There are several drawbacks to this construction. First, the charting method relies on a fast marching-like procedure requiring a triangular mesh. While relatively insensitive to triangulation [12], it may fail if the mesh is very irregular. Second, the radius of the geodesic patches must be sufficiently small compared to the injectivity radius of the shape, otherwise the resulting patch is not guaranteed to be a topological disk. In practice, this limits the size of the patches one can safely use, or requires an adaptive radius selection mechanism.

4 Anisotropic convolutional neural networks

The key idea of the Anisotropic CNN presented in this paper is the construction of a patch operator using anisotropic heat kernels. We interpret heat kernels as local weighting functions and construct
$$(D_\alpha(x) f)(\theta, t) = \frac{\int_X h_{\alpha\theta t}(x, \xi) f(\xi) \, d\xi}{\int_X h_{\alpha\theta t}(x, \xi) \, d\xi}, \qquad (7)$$
for some anisotropy level $\alpha > 1$. This way, the values of $f$ around point $x$ are mapped to a local system of coordinates $(\theta, t)$ that behaves like a polar system (here $t$ denotes the scale of the heat kernel and $\theta$ its orientation). We define the intrinsic convolution as
$$(f \star a)(x) = \int a(\theta, t) \, (D_\alpha(x) f)(\theta, t) \, dt \, d\theta. \qquad (8)$$
Note that unlike the arbitrarily oriented geodesic patches in GCNN, necessitating a maximum over all template rotations (6), in our construction it is natural to use the principal curvature direction as the reference $\theta = 0$.

Such an approach has a few major advantages compared to previous intrinsic CNN models. First, being a spectral construction, our patch operator can be applied to any shape representation (like LSCNN and unlike GCNN). Second, being defined in the spatial domain, the patches and the resulting filters have a clear geometric interpretation (unlike LSCNN). Third, our construction accounts for local directional patterns (like GCNN and unlike LSCNN). Fourth, the heat kernels are always well defined independently of the injectivity radius of the manifold (unlike GCNN). We summarize the comparative advantages in Table 1. A discrete sketch of the patch operator (7) and the intrinsic convolution (8) is given below.
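As a rough illustration of (7)-(8) in the discrete setting (a NumPy sketch under assumed array shapes, not the authors' implementation), suppose H stacks the heat kernels from the previous sketch over L orientations and T scales:

```python
import numpy as np

def patch_operator(H, f):
    """Discrete patch operator of eq. (7): maps per-vertex features into
    local polar-like coordinates (theta, t) around every vertex.

    H : (L, T, n, n) anisotropic heat kernels, one per orientation/scale
    f : (n, P) input feature maps
    Returns an (n, L, T, P) array holding (D_alpha(x) f)(theta, t).
    """
    num = np.einsum('ltxy,yp->xltp', H, f)    # integral of h * f over xi
    den = H.sum(axis=-1).transpose(2, 0, 1)   # normalization, shape (n, L, T)
    return num / den[..., None]

def intrinsic_conv(H, f, a):
    """Intrinsic convolution of eq. (8) with one filter a(theta, t).

    a : (L, T) filter coefficients
    Returns the (n, P) per-channel filter responses.
    """
    D = patch_operator(H, f)                  # (n, L, T, P)
    return np.einsum('nltp,lt->np', D, a)
```

Storing dense $n \times n$ kernels is wasteful; for small $t$ the kernels are localized around $x$ and could be truncated, but the sketch is kept dense for clarity.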
ACNN architecture. Similarly to Euclidean CNNs, our ACNN consists of several layers that are applied subsequently, i.e. the output of the previous layer is used as the input into the subsequent one. ACNN, as any convolutional network, is applied in a point-wise manner to a function defined on the manifold, producing a point-wise output that is interpreted as soft correspondence, as described below. Our intrinsic convolutional layer IC$Q$, with $Q$ output maps, replaces the convolutional layer used in classical Euclidean CNNs with the construction (8). The IC$Q$ layer contains $PQ$ filters arranged in banks ($P$ filters in $Q$ banks); each bank corresponds to an output dimension. The filters are applied to the input as follows:
$$f_q^{\mathrm{out}}(x) = \sum_{p=1}^{P} (f_p^{\mathrm{in}} \star a_{qp})(x), \qquad q = 1, \ldots, Q, \qquad (9)$$
where $a_{qp}(\theta, t)$ are the learnable coefficients of the $p$-th filter in the $q$-th filter bank. A visualization of such filters is available in the supplementary material. Overall, the ACNN architecture, combining several layers of different types, acts as a non-linear parametric mapping of the form $f_\Theta(x)$ at each point $x$ of the shape, where $\Theta$ denotes the set of all learnable parameters of the network. The choice of the parameters is done by an optimization process minimizing a task-specific cost, and can thus be rather general. Here, we focus on learning shape correspondence.

Learning correspondence. Finding correspondence in a collection of shapes can be cast as a labelling problem, where one tries to label each vertex of a given query shape $X$ with the index of a corresponding point on some reference shape $Y$ [20]. Let $n$ and $m$ denote the number of vertices in $X$ and $Y$, respectively. For a point $x$ on a query shape, the output of ACNN $f_\Theta(x)$ is $m$-dimensional and is interpreted as a probability distribution ('soft correspondence') on $Y$: the output of the network at a point $x$ of the query shape represents the probability of $x$ being mapped to each point $y$. Let us denote by $y^*(x)$ the ground-truth correspondence of $x$ on the reference shape. We assume to be provided with examples of points from shapes across the collection and their ground-truth correspondence, $T = \{(x, y^*(x))\}$. The optimal parameters of the network are found by minimizing the multinomial regression loss
$$\ell_{\mathrm{reg}}(\Theta) = -\sum_{(x, y^*(x)) \in T} \log f_\Theta(x, y^*(x)). \qquad (10)$$
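Continuing the sketch above (again illustrative; the array shapes are our own convention, and patch_operator is the function from the previous sketch), the IC$Q$ layer (9) and the loss (10) can be written as:

```python
import numpy as np

def ic_layer(H, f_in, A):
    """Intrinsic convolutional layer ICQ of eq. (9).

    H    : (L, T, n, n) heat kernels, as in the patch-operator sketch
    f_in : (n, P) input feature maps
    A    : (Q, P, L, T) learnable filter banks a_qp(theta, t)
    Returns (n, Q) output feature maps.
    """
    D = patch_operator(H, f_in)               # (n, L, T, P), see eq. (7)
    # sum filter responses over input channels p, per output bank q
    return np.einsum('nltp,qplt->nq', D, A)

def correspondence_loss(prob, y_star):
    """Multinomial regression loss of eq. (10).

    prob   : (n, m) softmax outputs (soft correspondence on the reference)
    y_star : (n,) ground-truth reference vertex indices y*(x)
    """
    return -np.log(prob[np.arange(len(y_star)), y_star] + 1e-12).sum()
```

Here prob would be the softmax output $f_\Theta(x)$ over the $m$ reference vertices, and y_star the ground-truth indices from the training set $T$.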
5 Results

In this section, we evaluate the proposed ACNN method and compare it to state-of-the-art approaches. Anisotropic Laplacians were computed according to (5). Heat kernels were computed in the frequency domain using all the eigenpairs. In all experiments, we used $L = 16$ orientations and the anisotropy parameter $\alpha = 100$. Neural networks were implemented in Theano [2]. The ADAM [11] stochastic optimization algorithm was used with an initial learning rate of $10^{-3}$, $\beta_1 = 0.9$, and $\beta_2 = 0.999$. As the input to the networks, we used the local SHOT descriptor [21] with 544 dimensions and default parameters. For all experiments, training was done by minimizing the loss (10). For shapes with 6.9K vertices, Laplacian computation and eigendecomposition took 1 second and 4 seconds per angle, respectively, on a desktop workstation with 64GB of RAM and an i7-4820K CPU. Forward propagation of the trained model takes approximately 0.5 seconds to produce the dense soft correspondence for all the vertices.

Full mesh correspondence. We used the FAUST humans dataset [3], containing 100 meshes of 10 scanned subjects, each in 10 different poses. The shapes in the collection manifest strong non-isometric deformations. Vertex-wise groundtruth correspondence is known between all the shapes. The zeroth FAUST shape, containing 6890 vertices, was used as reference; for each point on the query shape, the output of the network represents the soft correspondence as a 6890-dimensional vector, which was then converted to point correspondence with the technique explained in Section 4. The first 80 shapes were used for training and the remaining 20 for testing, following verbatim the settings of [16]. Batch normalization [9] allowed us to effectively train larger and deeper networks. For this experiment, we adopted the following architecture inspired by GCNN [16]: FC64+IC64+IC128+IC256+FC1024+FC512+Softmax. The soft correspondences produced by the net were refined using functional maps [18]. We refer to the supplementary material for the details. We compare to Random Forests (RF) [20], Blended Intrinsic Maps (BIM) [10], Localized Spectral CNN (LSCNN) [4], and Anisotropic Diffusion Descriptors (ADD) [5].

Figure 2 (left) shows the performance of the different methods. The performance was evaluated using the Princeton protocol [10], plotting the percentage of matches that are at most $r$-geodesically distant from the groundtruth correspondence on the reference shape. Two versions of the protocol consider intrinsically symmetric matches as correct (symmetric setting, solid curves) or wrong (asymmetric, more challenging setting, dashed curves). Some methods based on intrinsic structures (e.g. LSCNN, or RF applied to WKS descriptors) are invariant under intrinsic symmetries and thus cannot distinguish between symmetric points. The proposed ACNN method clearly outperforms all the compared approaches and also perfectly distinguishes symmetric points. Figure 3 shows the pointwise geodesic error of the different correspondence methods (distance of the correspondence at a point from the groundtruth). ACNN shows dramatically smaller distortions compared to the other methods. Over 60% of matches are exact (zero geodesic error), while only a few points have geodesic error larger than 10% of the geodesic diameter of the shape (per-subject leave-one-out produces comparable results, with mean accuracy of 59.6 ± 3.7%). Please refer to the supplementary material for an additional visualization of the quality of the correspondences obtained with ACNN in terms of texture transfer.

Partial correspondence. We used the recent, very challenging SHREC'16 Partial Correspondence benchmark [7], consisting of nearly-isometrically deformed shapes from eight classes, with different parts removed. The two types of partiality in the benchmark are cuts (removal of a few large parts) and holes (removal of many small parts). In each class, the vertex-wise groundtruth correspondence between the full shape and its partial versions is given. The dataset was split into disjoint training and testing sets. For cuts, training was done on 15 shapes per class; for holes, on 10 shapes per class. We used the following ACNN architecture: IC32+FC1024+DO(0.5)+FC2048+DO(0.5)+Softmax. The soft correspondences produced by the net were refined using partial functional correspondence [19]. We refer to the supplementary material for the details. The dropout regularization, with $\pi_{\mathrm{drop}} = 0.5$, was crucial to avoid overfitting on such a small training set.
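For completeness, the evaluation protocol used throughout (both for FAUST and SHREC'16) can be sketched as follows; these are hypothetical helpers assuming a precomputed geodesic distance matrix on the reference shape:

```python
import numpy as np

def soft_to_point(prob):
    """Convert soft correspondence (n, m) to point correspondence (n,)
    by taking the most likely reference vertex per point."""
    return prob.argmax(axis=1)

def princeton_curve(matches, gt, geo_dist, radii):
    """Princeton-protocol curve: fraction of matches within geodesic
    distance r of the groundtruth, for each threshold r.

    matches  : (n,) predicted reference vertex indices
    gt       : (n,) groundtruth reference vertex indices
    geo_dist : (m, m) pairwise geodesic distances on the reference shape,
               normalized by the geodesic diameter
    radii    : iterable of thresholds r
    """
    err = geo_dist[matches, gt]          # pointwise geodesic error
    return [(err <= r).mean() for r in radii]
```

The symmetric variant of the protocol would additionally accept the intrinsically symmetric counterpart of each groundtruth point as correct.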
We compared ACNN to RF [20] and Partial Functional Maps (PFM) [19]. For the evaluation, we used the protocol of [7], which closely follows the Princeton benchmark. Figure 2 (middle) compares the performance of the different partial matching methods on the SHREC'16 Partial (cuts) dataset. ACNN outperforms the other approaches by a significant margin. Figure 4 (top) shows examples of partial correspondence on the horse shape, as well as the pointwise geodesic error. We observe that the proposed approach produces high-quality correspondences even in such a challenging setting. Figure 2 (right) compares the performance of the different partial matching methods on the SHREC'16 Partial (holes) dataset. In this setting as well, ACNN outperforms the other approaches by a significant margin. Figure 4 (bottom) shows examples of partial correspondence on the dog shape, as well as the pointwise geodesic error.

6 Conclusions

We presented Anisotropic CNN, a new framework generalizing convolutional neural networks to non-Euclidean domains and allowing deep learning on geometric data. Our work follows the very recent trend of bringing machine learning methods to computer graphics and geometry processing applications, and is currently the most generic intrinsic CNN model. Our experiments show that ACNN outperforms previously proposed intrinsic CNN models, as well as additional state-of-the-art methods, in the shape correspondence application in challenging settings. Being a generic model, ACNN can be used for many other applications. The most promising future work direction is applying ACNN to learning on graphs.

Acknowledgments

The authors wish to thank Matteo Sala for the textured models. This research was supported by ERC Starting Grant No. 307047 (COMET), a Google Faculty Research Award, and an NVIDIA equipment grant.
1. What is the focus and contribution of the paper on intrinsic correspondence?
2. What are the strengths and weaknesses of the proposed Anisotropic Convolutional Neural Network (ACNN)?
3. How does the reviewer assess the novelty and significance of the paper's content?
4. What are the limitations regarding the experimental comparisons and relevance to other works?
Review
The paper aims at learning intrinsic correspondence using a convolutional neural network. The proposed method, Anisotropic Convolutional Neural Network (ACNN), is a variant of CNN that can deal with non-Euclidean domains. Overall, the paper is well written.

(1) Novelty. The paper is not especially novel. Basically, the proposed method extends ADD [5] to the deep learning framework proposed in GCNN [16]. Meanwhile, this paper is not the first to learn shape correspondence using deep learning. In this sense, the importance of this paper might be limited.

(2) Experiments. The experimental comparison is sufficient. However, only one compared method [5] is based on deep learning, as the proposed method is. Such a comparison is somewhat unfair. As the most relevant paper, why is [16] not compared?
1. What is the focus of the paper in terms of CNN operations?
2. How does the proposed method, ACNN, differ from other manifold CNNs?
3. What is the open problem in designing convolutional neural networks on manifolds?
4. What is the concern regarding the invariance assumption in the proposed method?
5. What additional analysis would provide further satisfaction regarding the output distribution from the patch operator?
Review
This paper proposes a novel type of CNN that operates on Riemannian manifolds. Unlike existing manifold CNNs, the proposed method uses anisotropic diffusion kernels to reparameterize functions defined in a local neighborhood of a point on the surface. This new architecture, called ACNN, is tested on the point correspondence problem between shapes. Experiments show that it outperforms previous approaches on standard benchmark datasets of both complete and partial 3D shapes.

It is still an open problem what the best practice is for designing convolutional neural networks on manifolds. Unlike CNNs on Euclidean space, parallel translation and the statistical invariance coupled with this operation are not well defined. This core problem has stimulated several recent attempts, and this paper also falls into this category. In my opinion, it is reasonable to use a heat diffusion process to define the proximity of a point. However, it is still unclear why the invariance assumption should hold using this parameterization. I would be more satisfied if the authors could do a more in-depth statistical analysis of the distribution of the output from the patch operator, as defined in Eq. (7).
Title Learning shape correspondence with anisotropic convolutional neural networks Abstract Convolutional neural networks have achieved extraordinary results in many computer vision and pattern recognition applications; however, their adoption in the computer graphics and geometry processing communities is limited due to the non-Euclidean structure of their data. In this paper, we propose Anisotropic Convolutional Neural Network (ACNN), a generalization of classical CNNs to nonEuclidean domains, where classical convolutions are replaced by projections over a set of oriented anisotropic diffusion kernels. We use ACNNs to effectively learn intrinsic dense correspondences between deformable shapes, a fundamental problem in geometry processing, arising in a wide variety of applications. We tested ACNNs performance in challenging settings, achieving state-of-the-art results on recent correspondence benchmarks. 1 Introduction In geometry processing, computer graphics, and vision, finding intrinsic correspondence between 3D shapes affected by different transformations is one of the fundamental problems with a wide spectrum of applications ranging from texture mapping to animation [25]. Of particular interest is the setting in which the shapes are allowed to deform non-rigidly. Traditional hand-crafted correspondence approaches are divided into two main categories: point-wise correspondence methods [17], which establish the matching between (a subset of) the points on two or more shapes by minimizing metric distortion, and soft correspondence methods [23], which establish a correspondence among functions defined over the shapes, rather than the vertices themselves. Recently, the emergence of 3D sensing technology has brought the need to deal with acquisition artifacts, such as missing parts, geometric, and topological noise, as well as matching 3D shapes in different representations, such as meshes and point clouds. With new and broader classes of artifacts, comes the need of learning from data invariance that is otherwise impossible to model axiomatically. In the past years, we have witnessed the emergence of learning-based approaches for 3D shape analysis. The first attempts were aimed at learning local shape descriptors [15, 5, 27], and shape correspondence [20]. The dramatic success of deep learning (in particular, convolutional neural networks [8, 14]) in computer vision [13] has led to a recent keen interest in the geometry processing and graphics communities to apply such methodologies to geometric problems [16, 24, 28, 4, 26]. Extrinsic deep learning. Many machine learning techniques successfully working on images were tried “as is” on 3D geometric data, represented for this purpose in some way “digestible” by standard frameworks. Su et al. [24] used CNNs applied to range images obtained from multiple views of 3D objects for retrieval and classification tasks. Wei et al. [26] used view-based representation to find correspondence between non-rigid shapes. Wu et al. [28] used volumetric CNNs applied to rasterized volumetric representation of 3D shapes. The main drawback of such approaches is their treatment of geometric data as Euclidean structures. Such representations are not intrinsic, and vary 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. as the result of pose or deformation of the object. For instance, in Figure 1, the filter that responds to features on a straight cylinder would not respond to a bent one. 
Achieving invariance to shape deformations, a common requirement in many applications, is extremely hard with the aforementioned methods and requires complex models and huge training sets due to the large number of degrees of freedom involved in describing non-rigid deformations. Intrinsic deep learning approaches try to apply learning techniques to geometric data by generalizing the main ingredients such as convolutions to non-Euclidean domains. In an intrinsic representation, the filter is applied to some data on the surface itself, thus being invariant to deformations by construction (see Figure 1). The first intrinsic convolutional neural network architecture (Geodesic CNN) was presented in [16]. While producing impressive results on several shape correspondence and retrieval benchmarks, GCNN has a number of significant drawbacks. First, the charting procedure is limited to meshes, and second, there is no guarantee that the chart is always topologically meaningful. Another intrinsic CNN construction (Localized Spectral CNN) using an alternative charting technique based on the windowed Fourier transform [22] was proposed in [4]. This method is a generalization of a previous work [6] on spectral deep learning on graphs. One of the key advantages of LSCNN is that the same framework can be applied to different shape representations, in particular, meshes and point clouds. A drawback of this approach is its memory and computation requirements, as each window needs to be explicitly produced. Contributions. We present Anisotropic Convolutional Neural Networks (ACNN), a method for intrinsic deep learning on non-Euclidean domains. Though it is a generic framework that can be used to handle different tasks, we focus here on learning correspondence between shapes. Our approach is related to two previous methods for deep learning on manifolds, GCNN [16] and ADD [5]. Compared to [5], where a learned spectral filter applied to the eigenvalues of anisotropic LaplaceBeltrami operator, we use anisotropic heat kernels as spatial weighting functions allowing to extract a local intrinsic representation of a function defined on the manifold. Unlike ADD, our ACNN is a convolutional neural network architecture. Compared to GCNN, our construction of the “patch operator” is much simpler, does not depend on the injectivity radius of the manifold, and is not limited to triangular meshes. Overall, ACNN combines all the best properties of the previous approaches without inheriting their drawbacks. We show that the proposed framework outperforms GCNN, ADD, and other state-of-the-art approaches on challenging correspondence benchmarks. 2 Background We model a 3D shape as a two-dimensional compact Riemannian manifold (surface) X . Let T x X denote the tangent plane at x, modeling the surface locally as a Euclidean space. A Riemannian metric is an inner product h·, ·i T x X : T x X ⇥ T x X ! R on the tangent plane, depending smoothly on x. Quantities which are expressible entirely in terms of Riemannian metric, and therefore independent on the way the surface is embedded, are called intrinsic. Such quantities are invariant to isometric (metric-preserving) deformations. Heat diffusion on manifolds is governed by the heat equation, which has the most general form f t (x, t) = div X (D(x)r X f(x, t)), (1) with appropriate boundary conditions if necessary. Here r X and div X denote the intrinsic gradient and divergence operators, and f(x, t) is the temperature at point x at time t. 
D(x) is the thermal conductivity tensor (2⇥ 2 matrix) applied to the intrinsic gradient in the tangent plane. This formulation allows modeling heat flow that is position- and direction-dependent (anisotropic). Andreux et al. [1] considered anisotropic diffusion driven by the surface curvature. Boscaini et al. [5], assuming that at each point x the tangent vectors are expressed w.r.t. the orthogonal basis v m ,v M of principal curvature directions, used a thermal conductivity tensor of the form D ↵✓ (x) = R ✓ (x) ↵ 1 R> ✓ (x), (2) where the 2 ⇥ 2 matrix R ✓ (x) performs rotation of ✓ w.r.t. to the maximum curvature direction v M (x), and ↵ > 0 is a parameter controlling the degree of anisotropy (↵ = 1 corresponds to the classical isotropic case). We refer to the operator ↵✓ f(x) = div X (D ↵✓ (x)r X f(x)) as the anisotropic Laplacian, and denote by { ↵✓i , ↵✓i } i 0 its eigenfunctions and eigenvalues (computed, if applicable, with the appropriate boundary conditions) satisfying ↵✓ ↵✓i (x) = ↵✓i ↵✓i (x). Given some initial heat distribution f 0 (x) = f(x, 0), the solution of heat equation (1) at time t is obtained by applying the anisotropic heat operator Ht ↵✓ = e t ↵✓ to f 0 , f(x, t) = Ht ↵✓ f 0 (x) = Z X f 0 (⇠)h ↵✓t (x, ⇠) d⇠ , (3) where h ↵✓t (x, ⇠) is the anisotropic heat kernel, and the above equation can be interpreted as a nonshift-invariant version of convolution. In the spectral domain, the heat kernel is expressed as h ↵✓t (x, ⇠) = X k 0 e t ↵✓k ↵✓k (x) ↵✓k (⇠). (4) Appealing to the signal processing intuition, the eigenvalues play the role of ‘frequencies’, e t acts as a low-pass filter (larger t corresponding to longer diffusion results in a filter with a narrower pass band). This construction was used in ADD [5] to generalize the OSD approach [15] using anisotropic heat kernels (considering the diagonal h ↵✓t (x, x) and learning a set of optimal taskspecific spectral filters replacing the low-pass filters e t ↵✓k ). ↵ij ij ✓ i j k h R✓ûm R✓ûM ûm ûM n̂ êkj êki êhi êhj Discretization. In the discrete setting, the surface X is sampled at n points V = {x 1 , . . . ,x n }. The points are connected by edges E and faces F , forming a manifold triangular mesh (V,E, F ). To each triangle ijk 2 F , we attach an orthonormal reference frame U ijk = ( ˆu M , ˆu m , ˆn), where ˆn is the unit normal vector to the triangle and ˆu M , ˆu m 2 R3 are the directions of principal curvature. The thermal conductivity tensor for the triangle ijk operating on tangent vectors is expressed w.r.t. U ijk as a 3⇥ 3 matrix ⇣ ↵ 1 0 ⌘ . The discretization of the anisotropic Laplacian takes the form of an n ⇥ n sparse matrix L = S 1W. The mass matrix S is a diagonal matrix of area elements s i = 1 3 P jk:ijk2F Aijk, where A ijk denotes the area of triangle ijk. The stiffness matrix W is composed of weights w ij = 8 >< >: 1 2 ⇣ hˆe kj ,ˆe ki iH ✓ sin↵ ij + hˆe hj ,ˆe hi iH ✓ sin ij ⌘ (i, j) 2 E; P k 6=i wik i = j; 0 else , (5) where the notation is according to the inset figure, and the shear matrix H ✓ = R ✓ U ijk ⇣ ↵ 1 0 ⌘ U> ijk R> ✓ encodes the anisotropic scaling up to an orthogonal basis change. Here R ✓ denotes the 3 ⇥ 3 rotation matrix, rotating the basis vectors U ijk on each triangle around the normal ˆn by angle ✓. 3 Intrinsic deep learning This paper deals with the extension of the popular convolutional neural networks (CNN) [14] to non-Euclidean domains. 
The key feature of CNNs is the convolutional layer, implementing the idea of “weight sharing”, wherein a small set of templates (filters) is applied to different parts of the data. In image analysis applications, the input into the CNN is a function representing pixel values given on a Euclidean domain (plane); due to shift-invariance the convolution can be thought of as passing a template across the plane and recording the correlation of the template with the function at that location. One of the major problems in applying the same paradigm to non-Euclidean domains is the lack of shift-invariance, the template now has to be location-dependent. Among the recent attempts to develop intrinsic CNNs on non-Euclidean domain [6, 4, 16], the most related to our work is GCNN [16]. The latter approach was introduced as a generalization of CNN to triangular meshes based on geodesic local patches. The core of this method is the construction of local geodesic polar coordinates using a procedure previously employed for intrinsic shape context descriptors [12]. The patch operator (D(x)f)(✓, ⇢) in GCNN maps the values of the function f around vertex x into the local polar coordinates ✓, ⇢, leading to the definition of the geodesic convolution (f ⇤ a)(x) = max ✓2[0,2⇡) Z a(✓ + ✓, ⇢)(D(x)f)(✓, ⇢)d⇢d✓, (6) which follows the idea of multiplication by template, but is defined up to arbitrary rotation ✓ 2 [0, 2⇡) due to the ambiguity in the selection of the origin of the angular coordinate. The authors propose to take the maximum over all possible rotations of the template a(⇢, ✓) to remove this ambiguity. Here, and in the following, f is some feature vector that is defined on the surface (e.g. texture, geometric descriptors, etc.) There are several drawbacks to this construction. First, the charting method relies on a fast marchinglike procedure requiring a triangular mesh. While relatively insensitive to triangulation [12], it may fail if the mesh is very irregular. Second, the radius of the geodesic patches must be sufficiently small compared to the injectivity radius of the shape, otherwise the resulting patch is not guaranteed to be a topological disk. In practice, this limits the size of the patches one can safely use, or requires an adaptive radius selection mechanism. 4 Anisotropic convolutional neural networks The key idea of the Anisotropic CNN presented in this paper is the construction of a patch operator using anisotropic heat kernels. We interpret heat kernels as local weighting functions and construct (D ↵ (x)f)(✓, t) = R X h ↵✓t (x, ⇠)f(⇠)d⇠R X h ↵✓t (x, ⇠)d⇠ , (7) for some anisotropy level ↵ > 1. This way, the values of f around point x are mapped to a local system of coordinates (✓, t) that behaves like a polar system (here t denotes the scale of the heat kernel and ✓ is its orientation). We define intrinsic convolution as (f ⇤ a)(x) = Z a(✓, t)(D ↵ (x)f)(✓, t)dtd✓, (8) Note that unlike the arbitrarily oriented geodesic patches in GCNN, necessitating to take a maximum over all the template rotations (6), in our construction it is natural to use the principal curvature direction as the reference ✓ = 0. Such an approach has a few major advantages compared to previous intrinsic CNN models. First, being a spectral construction, our patch operator can be applied to any shape representation (like LSCNN and unlike GCNN). Second, being defined in the spatial domain, the patches and the resulting filters have a clear geometric interpretation (unlike LSCNN). 
Third, our construction accounts for local directional patterns (like GCNN and unlike LSCNN). Fourth, the heat kernels are always well defined independently of the injectivity radius of the manifold (unlike GCNN). We summarize the comparative advantages in Table 1. ACNN architecture. Similarly to Euclidean CNNs, our ACNN consists of several layers that are applied subsequently, i.e. the output of the previous layer is used as the input into the subsequent one. ACNN, as any convolutional network, is applied in a point-wise manner on a function defined on the manifolds, producing a point-wise output that is interpreted as soft correspondence, as described below. Our intrinsic convolutional layer ICQ, with Q output maps, is defined as follows and replaces the convolutional layer used in classical Euclidean CNNs with the construction (8). The ICQ layer contains PQ filters arranged in banks (P filters in Q banks); each bank corresponds to an output dimension. The filters are applied to the input as follows, f out q (x) = PX p=1 (f in p ⇤ a qp )(x), q = 1, . . . , Q, (9) where a qp (✓, t) are the learnable coefficients of the pth filter in the qth filter bank. A visualization of such filters is available in the supplementary material. Overall, the ACNN architecture combining several layers of different type, acts as a non-linear parametric mapping of the form f ⇥ (x) at each point x of the shape, where ⇥ denotes the set of all learnable parameters of the network. The choice of the parameters is done by an optimization process, minimizing a task-specific cost, and can thus be rather general. Here, we focus on learning shape correspondence. Learning correspondence Finding correspondence in a collection of shapes can be cast as a labelling problem, where one tries to label each vertex of a given query shape X with the index of a corresponding point on some reference shape Y [20]. Let n and m denote the number of vertices in X and Y , respectively. For a point x on a query shape, the output of ACNN f ⇥ (x) is m-dimensional and is interpreted as a probability distribution (‘soft correspondence’) on Y . The output of the network at all the points of the query shape represents the probability of x mapped to y. Let us denote by y⇤(x) the ground-truth correspondence of x on the reference shape. We assume to be provided with examples of points from shapes across the collection and their ground-truth correspondence, T = {(x, y⇤(x))}. The optimal parameters of the network are found by minimizing the multinomial regression loss `reg(⇥) = X (x,y ⇤ (x))2T log f ⇥ (x, y⇤(x)). (10) 5 Results In this section, we evaluate the proposed ACNN method and compare it to state-of-the-art approaches. Anisotropic Laplacians were computed according to (5). Heat kernels were computed in the frequency domain using all the eigenpairs. In all experiments, we used L = 16 orientations and the anisotropy parameter ↵ = 100. Neural networks were implemented in Theano [2]. The ADAM [11] stochastic optimization algorithm was used with initial learning rate of 10 3, 1 = 0.9, and 2 = 0.999. As the input to the networks, we used the local SHOT descriptor [21] with 544 dimensions and using default parameters. For all experiments, training was done by minimizing the loss (10). For shapes with 6.9K vertices, Laplacian computation and eigendecomposition took 1 sec and 4 seconds per angle, respectively on a desktop workstation with 64Gb of RAM and i7-4820K CPU. 
5 Results

In this section, we evaluate the proposed ACNN method and compare it to state-of-the-art approaches. Anisotropic Laplacians were computed according to (5). Heat kernels were computed in the frequency domain using all the eigenpairs. In all experiments, we used L = 16 orientations and the anisotropy parameter α = 100. Neural networks were implemented in Theano [2]. The ADAM [11] stochastic optimization algorithm was used with an initial learning rate of 10⁻³, β₁ = 0.9, and β₂ = 0.999. As the input to the networks, we used the local SHOT descriptor [21] with 544 dimensions and default parameters. For all experiments, training was done by minimizing the loss (10). For shapes with 6.9K vertices, Laplacian computation and eigendecomposition took 1 second and 4 seconds per angle, respectively, on a desktop workstation with 64GB of RAM and an i7-4820K CPU. Forward propagation of the trained model takes approximately 0.5 seconds to produce the dense soft correspondence for all the vertices.

Full mesh correspondence. We used the FAUST humans dataset [3], containing 100 meshes of 10 scanned subjects, each in 10 different poses. The shapes in the collection manifest strong non-isometric deformations. Vertex-wise groundtruth correspondence is known between all the shapes. The zeroth FAUST shape, containing 6890 vertices, was used as the reference; for each point on the query shape, the output of the network represents the soft correspondence as a 6890-dimensional vector, which was then converted to point correspondence with the technique explained in Section 4. The first 80 shapes were used for training and the remaining 20 for testing, following verbatim the settings of [16]. Batch normalization [9] allowed us to effectively train larger and deeper networks. For this experiment, we adopted the following architecture inspired by GCNN [16]: FC64+IC64+IC128+IC256+FC1024+FC512+Softmax. The soft correspondences produced by the net were refined using functional maps [18]. We refer to the supplementary material for the details. We compare to Random Forests (RF) [20], Blended Intrinsic Maps (BIM) [10], Localized Spectral CNN (LSCNN) [4], and Anisotropic Diffusion Descriptors (ADD) [5].

Figure 2 (left) shows the performance of different methods. The performance was evaluated using the Princeton protocol [10], plotting the percentage of matches that are at most r-geodesically distant from the groundtruth correspondence on the reference shape. Two versions of the protocol consider intrinsically symmetric matches as correct (symmetric setting, solid curves) or wrong (asymmetric, more challenging setting, dashed curves). Some methods based on intrinsic structures (e.g. LSCNN or RF applied on WKS descriptors) are invariant under intrinsic symmetries and thus cannot distinguish between symmetric points. The proposed ACNN method clearly outperforms all the compared approaches and also perfectly distinguishes symmetric points. Figure 3 shows the pointwise geodesic error of different correspondence methods (distance of the correspondence at a point from the groundtruth). ACNN shows dramatically smaller distortions compared to other methods. Over 60% of matches are exact (zero geodesic error), while only a few points have geodesic error larger than 10% of the geodesic diameter of the shape (per-subject leave-one-out produces comparable results, with a mean accuracy of 59.6 ± 3.7%). Please refer to the supplementary material for an additional visualization of the quality of the correspondences obtained with ACNN in terms of texture transfer.
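For reference, the Princeton-protocol curve described above reduces to a few lines of code; a minimal NumPy sketch, assuming the per-vertex geodesic errors have already been measured (function and argument names are ours):

```python
import numpy as np

def princeton_curve(geo_err, radii):
    """Princeton protocol: fraction of matches whose geodesic error is at
    most r, for each threshold r.

    geo_err : (n,) geodesic distances between predicted and groundtruth
              matches, normalized by the geodesic diameter of the shape
    radii   : (m,) thresholds r at which the curve is evaluated
    """
    return np.array([(geo_err <= r).mean() for r in radii])

# example: curve = princeton_curve(err, np.linspace(0.0, 0.25, 100))
```

The value of the curve at r = 0 is the fraction of exact matches (over 60% for ACNN above).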
Partial correspondence. We used the recent and very challenging SHREC'16 Partial Correspondence benchmark [7], consisting of nearly-isometrically deformed shapes from eight classes, with different parts removed. The two types of partiality in the benchmark are cuts (removal of a few large parts) and holes (removal of many small parts). In each class, the vertex-wise groundtruth correspondence between the full shape and its partial versions is given. The dataset was split into disjoint training and testing sets. For cuts, training was done on 15 shapes per class; for holes, training was done on 10 shapes per class. We used the following ACNN architecture: IC32+FC1024+DO(0.5)+FC2048+DO(0.5)+Softmax. The soft correspondences produced by the net were refined using partial functional correspondence [19]. We refer to the supplementary material for the details. The dropout regularization, with π_drop = 0.5, was crucial to avoid overfitting on such a small training set. We compared ACNN to RF [20] and Partial Functional Maps (PFM) [19]. For the evaluation, we used the protocol of [7], which closely follows the Princeton benchmark. Figure 2 (middle) compares the performance of different partial matching methods on the SHREC'16 Partial (cuts) dataset. ACNN outperforms the other approaches by a significant margin. Figure 4 (top) shows examples of partial correspondence on the horse shape, as well as the pointwise geodesic error. We observe that the proposed approach produces high-quality correspondences even in such a challenging setting. Figure 2 (right) compares the performance of different partial matching methods on the SHREC'16 Partial (holes) dataset. In this setting as well, ACNN outperforms the other approaches by a significant margin. Figure 4 (bottom) shows examples of partial correspondence on the dog shape, as well as the pointwise geodesic error.

6 Conclusions

We presented Anisotropic CNN, a new framework generalizing convolutional neural networks to non-Euclidean domains and allowing deep learning to be performed on geometric data. Our work follows the very recent trend of bringing machine learning methods to computer graphics and geometry processing applications, and is currently the most generic intrinsic CNN model. Our experiments show that ACNN outperforms previously proposed intrinsic CNN models, as well as additional state-of-the-art methods, on the shape correspondence application in challenging settings. Being a generic model, ACNN can be used for many other applications. The most promising future work direction is applying ACNN to learning on graphs.

Acknowledgments

The authors wish to thank Matteo Sala for the textured models. This research was supported by the ERC Starting Grant No. 307047 (COMET), a Google Faculty Research Award, and an NVIDIA equipment grant.
1. What is the main contribution of the paper, and how does it build upon previous research?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other methods?
3. How does the paper demonstrate the effectiveness of the proposed method, and what are the implications of its potential impact?
4. Are there any areas where the presentation could be improved, such as clarifying assumptions or providing more context for readers unfamiliar with certain concepts?
5. Are there any minor errors or inaccuracies in the paper that could be addressed for greater precision?
Review
This paper proposes a novel model called Anisotropic Convolutional Neural Network (ACNN), which generalizes classical convolutional neural networks to non-Euclidean domains. This work builds on two methods, namely, the Geodesic Convolutional Neural Network ("Geodesic Convolutional Neural Networks on Riemannian Manifolds") and the concept of Anisotropic Diffusion Descriptors. While sharing many foundational ideas, it describes a new "fused" approach resembling the idea of the Geodesic Convolutional Neural Network, but using kernels of the anisotropic Laplacian operator and avoiding a construction of geodesic local patches (which, if I understand everything correctly, appears to be an artificial and inconvenient construct). The suggested approach is demonstrated to outperform other approaches when applied to the problems of full and partial mesh correspondence.

> Technical quality. I believe the paper to be of a high technical quality. The calculations are sound and appear to not have any major flaws or mistakes (other than small inaccuracies outlined below). The experimental methods used to evaluate the proposed method seem to be appropriate.

> Novelty/originality. This work builds on the publication "Geodesic Convolutional Neural Networks on Riemannian Manifolds" and the idea of Anisotropic Diffusion Descriptors. It shares the foundational concepts and notation with these articles, but proposes a novel "fused" approach combining the strengths of these two prior publications. While not entirely novel in its details, the combination of two approaches seems to be a successful idea and as such a promising "step in the right direction".

> Potential impact or usefulness. This work proposes a model which shows a significantly improved performance compared to prior approaches and seems to show a potential of having a large impact in its specialized sub-field.

> Clarity and presentation. The paper is well written. However, the clarity of the presentation is lacking, seemingly due to the fact that the authors expect the reader to be familiar with prior publications on Geodesic Convolutional Neural Networks and Anisotropic Diffusion Descriptors. Some of the minor inaccuracies/problems include:
(a) the conductivity tensor defined in Eq. (2) should have an additional x-dependent multiplier;
(b) in Eq. (7), D should have subscripts;
(c) the subsection about the discretization of the anisotropic Laplacian operator should include at least some references (Anisotropic Diffusion Descriptors, ...?);
(d) instead of "the solution of heat equation (1) at time t is obtained by applying the anisotropic heat operator ...", the sentence should read "the solution of heat equation (1) with D=D_{\alpha \theta} at time t is obtained by applying the anisotropic heat operator ...";
(e) although the meaning of the notation < a,b >_H should be clear for the majority of the readers, including a definition or a reference would improve the clarity of the article.
NIPS
1. What is the novel approach introduced by the paper in constructing convolutional neural networks?
2. What are the advantages of the proposed method compared to previous works, specifically GCNN?
3. Can the reviewer clarify the confusion regarding the necessity of mesh input in the paper's method?
4. How does the paper's approach differ from traditional methods in handling partial shape correspondence?
5. What are the limitations or areas for improvement in the paper's method, particularly in light of recent advancements in optical flow dataset generation?
Review
This paper uses anisotropic heat kernels as local intrinsic filters to construct a convolutional neural network. The network is used to construct correspondence between deformed shapes. Experiments on FAUST and SHREC'16 show good performance for both full-shape and partial-shape correspondence.

Compared to GCNN, this paper simplifies the construction of the discrete patch operator. It also avoids angular max pooling by using the local curvature direction as a reference frame. However, the paper did not make it clear why ACNN does not need a mesh as input: in the construction of the discrete anisotropic Laplacian, a mesh is used, with its faces and edges.
NIPS
Title A Unified View of cGANs with and without Classifiers

Abstract Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow sampling from class-conditional distributions. Existing cGANs are based on a wide range of different discriminator designs and training objectives. One popular design in earlier works is to include a classifier during training, with the assumption that good classifiers can help eliminate samples generated with wrong classes. Nevertheless, including classifiers in cGANs often comes with a side effect of only generating easy-to-classify samples. Recently, some representative cGANs avoid the shortcoming and reach state-of-the-art performance without having classifiers. It remains unanswered, however, whether the classifiers can be resurrected to design better cGANs. In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs. We start by using the decomposition of the joint probability distribution to connect the goals of cGANs and classification as a unified framework. The framework, along with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximations, which provides a unified view and brings new insights to understanding cGANs. Experimental results demonstrate that the design inspired by the proposed framework outperforms state-of-the-art cGANs on multiple benchmark datasets, especially on the most challenging ImageNet. The code is available at https://github.com/sian-chen/PyTorch-ECGAN.

1 Introduction

Generative Adversarial Networks [GANs; 10] are a family of generative models trained through the duel between a generator and a discriminator. The generator aims to generate data from a target distribution, where the fidelity of the generated data is "screened" by the discriminator. Recent studies on the objectives [2, 37, 29, 25, 36, 26, 38], backbone architectures [41, 50], and regularization techniques [13, 35, 51] for GANs have achieved impressive progress on image generation, making GANs the state-of-the-art approach to generate high-fidelity and diverse images [3]. Conditional GANs (cGANs) extend GANs to generate data from class-conditional distributions [33, 39, 34, 16]. The capability of conditional generation extends the application horizon of GANs to conditional image generation based on labels [39] or texts [43], speech enhancement [32], and image style transformation [18, 53].

One representative cGAN is Auxiliary Classifier GAN [ACGAN; 39], which decomposes the conditional discriminator into a classifier and an unconditional discriminator. The generator of ACGAN is expected to generate images that convince the unconditional discriminator while being classified to the right class. The classifier plays a pivotal role in laying down the law of conditional generation for ACGAN, making it the very first cGAN that can learn to generate 1000 classes of ImageNet images [6]. That is, ACGAN used to be a leading cGAN design. While the classifier in ACGAN indeed improves the quality of conditional generation, deeper studies revealed that the classifier biases the generator to generate easier-to-classify images [45], which in turn decreases the capability to match the target distribution.
Unlike ACGAN, most state-of-the-art cGANs are designed without a classifier. One representative cGAN without a classifier is Projection GAN [ProjGAN; 34], which learns an embedding for each class to form a projection-based conditional discriminator. ProjGAN not only generates higher-quality images than ACGAN, but also accurately generates images in target classes without relying on an explicit classifier. In fact, it was found that ProjGAN usually cannot be further improved by adding a classification loss [34]. The finding, along with the success of ProjGAN and other cGANs without classifiers [15, 4], seems to suggest that including a classifier is not helpful for improving cGANs.

In this work, we challenge the belief that classifiers are not helpful for cGANs, with the conjecture that leveraging the classifiers appropriately can benefit conditional generation. We propose a framework that pins down the roles of the classifier and the conditional discriminator by first decomposing the joint target distribution with Bayes' rule. We then model the conditional discriminator as an energy function, which is an unnormalized log probability. Under the energy function, we derive the corresponding optimization terms for the classifier and the conditional discriminator with the help of Fenchel duality to form the unified framework. The framework reveals that a model of the joint distribution can be trained via two routes, from the aspect of the classifier and the conditional discriminator, respectively. We name our framework Energy-based Conditional Generative Adversarial Networks (ECGAN), which not only justifies the use of classifiers for cGANs in a principled manner, but also explains several popular cGAN variants, such as ACGAN [39], ProjGAN [34], and ContraGAN [16], as special cases with different approximations. After properly combining the objectives from the two routes of the framework, we empirically find that ECGAN outperforms other cGAN variants across different backbone architectures on benchmark datasets, including the most challenging ImageNet.

We summarize the contributions of this paper as:
• We justify the principled use of classifiers for cGANs by decomposing the joint distribution.
• We propose a cGAN framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), which explains several popular cGAN variants in a unified view.
• We experimentally demonstrate that ECGAN consistently outperforms other state-of-the-art cGANs across different backbone architectures on benchmark datasets.

The paper is organized as follows. Section 2 derives the unified framework that establishes the role of the classifiers for cGANs. The framework is used to explain ACGAN [39], ProjGAN [34], and ContraGAN [16] in Section 3. Then, we demonstrate the effectiveness of our framework by experiments in Section 4. We discuss related work in Section 5 before concluding in Section 6.

2 Method

Given a K-class dataset (x, y) ∼ p_d, where y ∈ {1, …, K} is the class of x and p_d is the underlying data distribution, our goal is to train a generator G to generate a sample G(z, y) following p_d(x|y), where z is sampled from a known distribution such as N(0, 1). To solve the problem, a typical cGAN framework can be formulated by extending an unconditional GAN as:

max_D min_G ∑_y E_{p_d(x|y)}[D(x, y)] − E_{p(z)}[D(G(z, y), y)],   (1)

where G is the generator and D is a discriminator that outputs higher values for real data. The choice of D leads to different types of GANs [10, 2, 29, 8].
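For concreteness, a minimal PyTorch-style sketch of one alternating update under Eq. (1) is given below; the function names and optimizer handling are ours, and D, G stand for any conditional discriminator and generator:

```python
import torch

def discriminator_step(D, G, x_real, y, z, opt_D):
    # ascend on Eq. (1) w.r.t. D (implemented as descending the negation);
    # the generator output is detached so only D is updated here
    loss_D = -(D(x_real, y).mean() - D(G(z, y).detach(), y).mean())
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

def generator_step(D, G, y, z, opt_G):
    # descend on Eq. (1) w.r.t. G: raise D's score on generated samples
    loss_G = -D(G(z, y), y).mean()
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```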
At first glance, there is no classifier in Eq. (1). However, because of the success of leveraging label information via classification, it is hypothesized that a better classifier can improve conditional generation [39]. Motivated by this, in this section, we show how we bridge classifiers to cGANs by Bayes' rule and Fenchel duality.

2.1 Bridge Classifiers to Discriminators with Joint Distribution

A classifier, when viewed from a probabilistic perspective, is a function that approximates p_d(y|x), the probability that x belongs to class y. On the other hand, a conditional discriminator, telling whether x is real data in class y, can be viewed as a function approximating p_d(x|y). To connect p_d(y|x) and p_d(x|y), an important observation is through the joint probability:

log p(x, y) = log p(x|y) + log p(y)   (2)
            = log p(y|x) + log p(x).   (3)

The observation illustrates that we can approximate log p(x, y) in two directions: one containing p(x|y) for conditional discriminators and one containing p(y|x) for classifiers. The finding reveals that by sharing the parameterization, updating the parameters in one direction may optimize the other implicitly. Therefore, we link the classifier to the conditional discriminator by training both objectives jointly.

2.2 Learning Joint Distribution via Optimizing Conditional Discriminators

Since p(y) is usually known a priori (e.g., uniform) or can be easily estimated (e.g., by empirical counting), we focus on learning p(x|y) in Eq. (2). Specifically, since log p(x, y) ∈ R, we parameterize it via f_θ(x), such as a neural network with K real-valued outputs, where exp(f_θ(x)[y]) ∝ p(x, y). Similar parameterizations are also used in the exponential family [48] and energy-based models [23]. Therefore, the log-likelihood log p(x|y) can be modeled as:

log p_θ(x|y) = log( exp(f_θ(x)[y]) / Z_y(θ) ) = f_θ(x)[y] − log Z_y(θ),   (4)

where Z_y(θ) = ∫_{x′} exp(f_θ(x′)[y]) dx′. Optimizing Eq. (4) is challenging because of the intractable partition function Z_y(θ). Here we introduce the Fenchel duality [48] of the partition function Z_y(θ):

log Z_y(θ) = max_{q_y} [ E_{q_y(x)}[f_θ(x)[y]] + H(q_y) ],

where q_y is a distribution of x conditioned on y and H(q_y) = −E_{x∼q_y(x)}[log q_y(x)] is the entropy of q_y. The derivation is provided in Appendix A. By the Fenchel duality, we obtain our maximum likelihood estimation in Eq. (4) as:

max_θ [ E_{p_d(x,y)}[f_θ(x)[y]] − max_{q_y} ( E_{q_y(x)}[f_θ(x)[y]] + H(q_y) ) ].   (5)

To approximate the solution of q_y, in addition to density models, we can train an auxiliary generator q_φ as in cGANs to estimate E_{q_y(x)} via sampling. That is, we can sample x from q_φ by x = q_φ(z, y), where z ∼ N(0, 1). The objective (5) then becomes:

max_θ min_φ ∑_y E_{p_d(x|y)}[f_θ(x)[y]] − E_{p(z)}[f_θ(q_φ(z, y))[y]] − H(q_φ(·, y)),   (6)

which is almost in the form of Eq. (1) except for the entropy H(q_φ(·, y)). We leave the discussion about the entropy estimation to Section 2.4. Currently, the loss functions to optimize the objective without the entropy can be formulated as:

L_d1(x, z, y; θ) = −f_θ(x)[y] + f_θ(q_φ(z, y))[y],
L_g1(z, y; φ) = −f_θ(q_φ(z, y))[y].
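As a concrete illustration, here is a minimal PyTorch sketch of L_d1 and L_g1, assuming f is a network with K scalar outputs so that f(x)[y] plays the role of the unnormalized log p(x, y) (function and argument names are ours):

```python
import torch

def loss_d1(f, x_real, x_fake, y):
    """L_d1 of Sec. 2.2: -f_theta(x)[y] + f_theta(q_phi(z, y))[y], averaged
    over a batch; y is a (m,) tensor of integer class labels."""
    real = f(x_real).gather(1, y[:, None]).squeeze(1)
    fake = f(x_fake).gather(1, y[:, None]).squeeze(1)
    return (fake - real).mean()

def loss_g1(f, x_fake, y):
    """L_g1 of Sec. 2.2: -f_theta(q_phi(z, y))[y]."""
    return -f(x_fake).gather(1, y[:, None]).squeeze(1).mean()
```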
2.3 Learning Joint Distributions via Optimizing Unconditional Discriminators & Classifiers

Following Eq. (3), we can approximate log p(x, y) by approximating log p(y|x) and log p(x). With our energy function f_θ, p_θ(y|x) can be formulated as:

p_θ(y|x) = p_θ(x, y) / p_θ(x) = exp(f_θ(x)[y]) / ∑_{y′} exp(f_θ(x)[y′]),

which is equivalent to the y-th output of SOFTMAX(f_θ(x)). Therefore, we can maximize the log-likelihood of p_θ(y|x) by considering f_θ as a softmax classifier minimizing the cross-entropy loss:

L_clf(x, y; θ) = −log(SOFTMAX(f_θ(x))[y]).

On the other hand, to maximize the log-likelihood of p(x), we introduce a reparameterization h_θ(x) = log ∑_y exp(f_θ(x)[y]):

log p_θ(x) = log( ∑_y p_θ(x, y) )
           = log( ∑_y exp(f_θ(x)[y]) / ∫_{x′} ∑_{y′} exp(f_θ(x′)[y′]) dx′ )
           = log( exp(h_θ(x)) / ∫_{x′} exp(h_θ(x′)) dx′ )
           = h_θ(x) − log Z′(θ),   (7)

where Z′(θ) = ∫_x exp(h_θ(x)) dx. Similar to Eq. (5), we can rewrite log Z′(θ) by its Fenchel duality:

log Z′(θ) = max_q [ E_{q(x)}[h_θ(x)] + H(q) ],   (8)

where q is a distribution of x and H(q) is the entropy of q. Combining Eq. (7) and Eq. (8) and reusing the generator in Section 2.2, we obtain the optimization problem:

max_θ min_φ E_{p_d(x,y)}[h_θ(x)] − E_{p(z)}[h_θ(q_φ(z, y))] − H(q_φ).   (9)

Similar to Eq. (6), the objective of the unconditional discriminator is equivalent to that of typical GANs augmented with an entropy term. The loss functions without the entropy can be formulated as:

L_d2(x, z, y; θ) = −h_θ(x) + h_θ(q_φ(z, y)),
L_g2(z, y; φ) = −h_θ(q_φ(z, y)).

2.4 Entropy Approximation in cGANs

In Section 2.2 and Section 2.3, we propose two approaches to train cGANs with and without classification. The unsolved problems in Eq. (6) and Eq. (9) are the entropy terms H(q_φ(·, y)) and H(q_φ). In previous work, various estimators have been proposed to estimate entropy or its gradient [46, 42, 21, 27]. One can freely choose any approach to estimate the entropy in the proposed framework. In this work, we consider two entropy estimators, and we show how they connect with existing cGANs.

The first approach is the naive constant approximation. Since entropy is always non-negative, we naturally have the constant zero as a lower bound. Therefore, we can maximize the objective by replacing the entropy term with its lower bound, which is zero in this case. This approach is simple, but we will show its effectiveness in Section 4 and how it links our framework to ProjGAN and ContraGAN in Section 3.

The second approach is estimating a variational lower bound. Informally, given a batch of data {(x_1, y_1), …, (x_m, y_m)}, an encoder function l, and a class embedding function e(y), the negative 2C loss used in ContraGAN [16],

L_C(x_i, y_i; t) = log( ( d(l(x_i), e(y_i)) + ∑_{k=1}^{m} ⟦y_k = y_i⟧ d(l(x_i), l(x_k)) ) / ( d(l(x_i), e(y_i)) + ∑_{k=1}^{m} ⟦k ≠ i⟧ d(l(x_i), l(x_k)) ) ),   (10)

is an empirical estimate of a proper lower bound of H(X) [40], where d(a, b) = exp(aᵀb/t) with a temperature t. We provide the proof in Appendix B. The 2C loss heavily relies on the embeddings l(x) and e(y). Although we only need to estimate the entropy of generated data in Eq. (6) and Eq. (9), we still rely on true data to learn the embeddings in practice. Therefore, the loss functions of Eq. (6) can be written as:

L_D1(x, z, y; θ) = L_d1(x, z, y; θ) + λ_c L_C^real,
L_G1(z, y; φ) = L_g1(z, y; φ) + λ_c L_C^fake,

where λ_c is a hyperparameter controlling the weight of the contrastive loss, and L_C^real, L_C^fake are the contrastive losses calculated on a batch of real data and generated data, respectively. Similarly, the loss functions of Eq. (9) become:

L_D2(x, z, y; θ) = L_d2(x, z, y; θ) + λ_c L_C^real,
L_G2(z, y; φ) = L_g2(z, y; φ) + λ_c L_C^fake.

The introduction of the 2C loss allows us to accommodate ContraGAN into our framework.
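A batch-level sketch of Eq. (10) in PyTorch follows; the embeddings l(x) and e(y) are assumed precomputed, the names are ours, and the index conventions (self-inclusion in the numerator, self-exclusion in the denominator) follow Eq. (10) exactly as stated:

```python
import torch

def loss_2c(feats, class_emb, y, t=1.0):
    """Batch estimate of L_C in Eq. (10). feats = l(x) with shape (m, d),
    class_emb = e(y_i) for each sample with shape (m, d), y = labels (m,).
    d(a, b) = exp(a^T b / t)."""
    d_xy = torch.exp((feats * class_emb).sum(1) / t)    # d(l(x_i), e(y_i))
    d_xx = torch.exp(feats @ feats.T / t)               # d(l(x_i), l(x_k))
    same = (y[:, None] == y[None, :]).float()           # [y_k = y_i]
    not_self = 1.0 - torch.eye(len(y), device=feats.device)
    num = d_xy + (d_xx * same).sum(1)                   # k with y_k = y_i
    den = d_xy + (d_xx * not_self).sum(1)               # k != i
    return torch.log(num / den).mean()
```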
2.5 Energy-based Conditional Generative Adversarial Network

Previous work has shown that multitask training benefits representation learning [30] and that training discriminative and generative models jointly outperforms their purely generative or purely discriminative counterparts [11, 28]. Therefore, we propose a framework named Energy-based Conditional Generative Adversarial Network (ECGAN), which combines the two approaches in Section 2.2 and Section 2.3 to learn the joint distribution better. The loss functions can be summarized as:

L_D(x, z, y; θ) = L_d1(x, z, y; θ) + α L_d2(x, z, y; θ) + λ_c L_C^real + λ_clf L_clf(x, y; θ),   (11)
L_G(z, y; φ) = L_g1(z, y; φ) + α L_g2(z, y; φ) + λ_c L_C^fake,   (12)

where α is a weight parameter for the unconditional GAN loss. The discriminator's design is illustrated in Fig. 1. Here we discuss the intuition behind each component in Eq. (11). L_d1 is the loss for the conditional discriminator: it updates the y-th output when given a data pair (x, y). L_d2 corresponds to an unconditional discriminator: it updates all outputs according to whether x is real. L_clf learns a classifier: it increases the y-th output and decreases the other outputs for data belonging to class y. Finally, L_C^real and L_C^fake improve the latent embeddings by pulling the embeddings of data with the same class closer.

Previously, we derived the loss functions L_d1 and L_d2 in the form of the Wasserstein GAN loss [2]. In practice, we use the hinge loss as proposed in Geometric GAN [26] for better stability and convergence. We use the following combination of L_d1 and L_d2:

Hinge( f_θ(x_real)[y] + α·h_θ(x_real),  f_θ(x_fake)[y] + α·h_θ(x_fake) ).   (13)

For more discussion of the implementation of the hinge loss, please check Appendix C. The overall training procedure of ECGAN is presented in Appendix E.
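As a concrete illustration of Eq. (11) and Eq. (13), here is a minimal PyTorch sketch of the discriminator loss of the ECGAN-UC variant (the contrastive term weighted by λ_c is omitted, and the function names and the default λ_clf are ours, not from the paper):

```python
import torch
import torch.nn.functional as F

def ecgan_uc_d_loss(f, x_real, x_fake, y, alpha=1.0, lambda_clf=0.1):
    """Hinge form (Eq. 13) of L_d1 + alpha * L_d2, plus the classification
    term of Eq. (11). f maps x to K logits; h(x) = logsumexp of f(x)."""
    lr, lf = f(x_real), f(x_fake)
    s_real = lr.gather(1, y[:, None]).squeeze(1) + alpha * lr.logsumexp(1)
    s_fake = lf.gather(1, y[:, None]).squeeze(1) + alpha * lf.logsumexp(1)
    hinge = F.relu(1.0 - s_real).mean() + F.relu(1.0 + s_fake).mean()
    return hinge + lambda_clf * F.cross_entropy(lr, y)
```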
3 Accommodation to Existing cGANs

In this section, we show that our framework covers several representative cGAN algorithms, including ACGAN [39], ProjGAN [34], and ContraGAN [16]. Through the ECGAN framework, we obtain a unified view of cGANs, which allows us to fairly compare and understand the pros and cons of existing cGANs. We name the ECGAN counterparts ECGAN-0, ECGAN-C, and ECGAN-E, corresponding to ProjGAN, ACGAN, and ContraGAN, respectively. We summarize the settings in Table 1 and illustrate the discriminator designs in Appendix F.

3.1 ProjGAN

ProjGAN [34] is the most representative cGAN design and is commonly used in state-of-the-art research [3, 50]. Let the output of the penultimate layer in the discriminator be g(x). The output of ProjGAN's discriminator is:

D(x, y) = w_uᵀ g(x) + b_u + w_yᵀ g(x) = (w_u + w_y)ᵀ g(x) + b_u,   (14)

where w_u, b_u are the parameters of the unconditional linear layer, and w_y is the class embedding of y. On the other hand, the output of a discriminator in ECGAN is:

D(x, y) = f(x)[y] = (Wᵀ g(x) + b)[y] = w_yᵀ g(x) + b_y,   (15)

where W, b are the parameters of the linear output layer in f_θ. As shown in Eq. (14) and Eq. (15), the architectures of ProjGAN and ECGAN are almost equivalent. In addition, the loss functions of ProjGAN can be formulated as:

L_G = −D(G(z), y),
L_D = −D(x, y) + D(G(z), y),

which is a special case of ECGAN with α = λ_c = λ_clf = 0. We name this case ECGAN-0, the simplest version of ECGAN. Compared with ProjGAN, ECGAN-0 has additional bias terms for the output of each class, as sketched after this section.

3.2 ACGAN

ACGAN [39] is the most well-known cGAN algorithm that leverages a classifier to achieve conditional generation. Given a K-class dataset, the discriminator of ACGAN is parameterized by a network with K + 1 outputs. The first output, denoted as D(x), is an unconditional discriminator distinguishing between real and fake images. The remaining K outputs, denoted as C(x), form a classifier that predicts logits for every class. The loss functions of ACGAN can be formulated as:

L_G = −D(G(z)) + λ_g L_clf(G(z), y; C),
L_D = −D(x) + D(G(z)) + λ_d (L_clf(x, y; C) + L_clf(G(z), y; C)),

where G is the generator, and λ_g and λ_d are hyperparameters controlling the weight of the cross-entropy loss. The formulation of ACGAN is similar to our ECGAN with α = λ_c = 0 and λ_clf > 0. We call this special case ECGAN-C, with the suffix 'C' for classification loss. ECGAN-C uses a conditional discriminator which plays the role of a classifier at the same time. Hence the generator in ECGAN-C learns from the conditional discriminator rather than from the cross-entropy loss, which is biased for generative objectives.

3.3 ContraGAN

ContraGAN [16] proposed the 2C loss, which we mentioned in Eq. (10), to capture the data-to-data and data-to-label relationships. The 2C loss is applied in both the discriminator and the generator to achieve conditional generation. That is:

L_G = −D(G(z), y) + λ_c L_C^fake,
L_D = −D(x, y) + D(G(z), y) + λ_c L_C^real.

The loss functions are similar to those of ECGAN with α = λ_clf = 0 and λ_c > 0. We call this case ECGAN-E, where 'E' stands for entropy estimation. The main difference between ContraGAN and ECGAN-E is the output layer of their discriminators: while ContraGAN uses a single-output network, ECGAN-E uses a K-output network f_θ, which has higher capacity.

We keep Eq. (11) and Eq. (12) as simple as possible to reduce the burden of hyperparameter tuning. Under the simple equations of the current framework, ECGAN-C and ECGAN-E are the closest counterparts to ACGAN and ContraGAN. The subtle difference (in addition to the underlying network architecture) is that ACGAN uses L_d2 instead of L_d1 (ECGAN-C), and ContraGAN uses L_d2, L_g2 instead of L_d1, L_g1 (ECGAN-E). One future direction is to introduce more hyperparameters in Eq. (11) and Eq. (12) to obtain closer counterparts.
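To make the relation between Eq. (14) and Eq. (15) concrete, the following PyTorch sketch contrasts the two discriminator heads; the class names and dimensions are ours, not from the paper:

```python
import torch
import torch.nn as nn

class ProjGANHead(nn.Module):
    """Eq. (14): D(x, y) = (w_u + w_y)^T g(x) + b_u."""
    def __init__(self, dim, K):
        super().__init__()
        self.unconditional = nn.Linear(dim, 1)   # w_u, b_u
        self.embed = nn.Embedding(K, dim)        # class embeddings w_y
    def forward(self, g, y):
        return self.unconditional(g).squeeze(1) + (self.embed(y) * g).sum(1)

class ECGANHead(nn.Module):
    """Eq. (15): D(x, y) = f(x)[y] = (W^T g(x) + b)[y]; note the
    per-class biases b_y, which are absent from ProjGAN."""
    def __init__(self, dim, K):
        super().__init__()
        self.linear = nn.Linear(dim, K)          # W and per-class biases b
    def forward(self, g, y):
        return self.linear(g).gather(1, y[:, None]).squeeze(1)
```

The only architectural difference of ECGAN-0 from ProjGAN is thus the per-class bias term.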
4 Experiment

We conduct our experiments on CIFAR-10 [20] and Tiny ImageNet [22] for analysis, and on ImageNet [6] for a large-scale empirical study. Table 2 shows the statistics of the datasets. All datasets are publicly available for research use. They were not constructed for human-related study, and we do not specifically take any personal information from the datasets in our experiments. We use two common metrics, Fréchet Inception Distance [FID; 14] and Inception Score [IS; 44], to evaluate generation quality and diversity. Besides, we use Intra-FID, which is the average FID over the classes, to evaluate the performance of conditional generation.

4.1 Experimental Setup

We use StudioGAN (https://github.com/POSTECH-CVLab/PyTorch-StudioGAN) [16] to conduct our experiments. StudioGAN is a PyTorch-based project distributed under the MIT license that provides implementations and benchmarks of several popular GAN architectures and techniques. To provide reliable evaluation, we conduct experiments on CIFAR-10 and Tiny ImageNet with 4 different random seeds and report the mean and standard deviation of each metric. We evaluate the model with the lowest FID for each trial. The default backbone architecture is BigGAN [3]. We fix the learning rates for the generator and the discriminator to 0.0001 and 0.0004, respectively, and tune λ_clf in {1, 0.1, 0.05, 0.01}. We follow the setting λ_c = 1 in [16] when using the 2C loss, and set α = 1 when applying the unconditional GAN loss. The experiments take 1-2 days on single-GPU (Nvidia Tesla V100) machines for CIFAR-10 and Tiny ImageNet, and 6 days on 8-GPU machines for ImageNet. More details are described in Appendix D.

4.2 Ablation Study

We start our empirical studies by investigating the effectiveness of each component in ECGAN. We use the symbol 'U' to represent the unconditional GAN loss, 'C' to represent the classification loss, and 'E' to represent the entropy estimation loss, which is the 2C loss in our implementation. The concatenation of the symbols indicates the combination of losses. For example, ECGAN-UC means ECGAN with both the unconditional GAN loss and the classification loss (α > 0 and λ_clf > 0). Table 3 shows the results of ECGAN from the simplest ECGAN-0 to the most complicated ECGAN-UCE. On CIFAR-10, ECGAN-0 already achieves decent results. Adding the unconditional loss, the classification loss, or the contrastive loss provides slightly better or on-par performance. On the harder Tiny ImageNet, the benefit of the unconditional loss and the classification loss becomes more significant. While ECGAN-U already shows advantages over ECGAN-0, adding the classification loss to ECGAN-U further improves all metrics considerably. We also observe that directly adding the classification loss alone is not sufficient to improve the cGAN, which is consistent with the finding in [34]. This fact reveals that the unconditional GAN loss is a crucial component for bridging classifiers and discriminators in cGANs. We also find that adding the contrastive loss does not improve ECGAN-UC. An explanation is that the entropy estimation lower bound provided by the contrastive loss is too loose to benefit the training. Furthermore, the additional parameters introduced by the 2C loss make the optimization problem more complicated. As a result, we use the combination ECGAN-UC as the default option of ECGAN in the following experiments.

4.3 Comparison with Existing cGANs

We compare ECGAN to several representative cGANs, including ACGAN [39], ProjGAN [34], and ContraGAN [16], with three representative backbone architectures: DCGAN [41], ResNet [13], and BigGAN [3]. Table 4 compares the results of each combination of cGAN algorithm and backbone architecture. The results show that ECGAN-UC outperforms the other cGANs significantly with all backbone architectures on both CIFAR-10 and Tiny ImageNet. We also noticed that ContraGAN, though it achieves decent image quality and diversity, learns a conditional generator that interchanges some classes while generating, hence has a poor Intra-FID. Overall, the experiments indicate that ECGAN-UC can be a preferred choice for cGANs in general situations.

4.4 Comparisons between Existing cGANs and their ECGAN Counterparts

Table 5 compares ProjGAN, ContraGAN, and ACGAN to their ECGAN counterparts. As described in Section 3, each of these representative cGANs can be viewed as a special case under our ECGAN framework. As mentioned in Section 3, ECGAN-0 has additional bias terms in the output layer compared to ProjGAN. The results in Table 5 show that this subtle difference still brings significant improvement to the generation quality, especially on the harder Tiny ImageNet. Compared to ContraGAN, ECGAN-E has the same loss but a different design of the discriminator's output layer: while the discriminator of ContraGAN has only a single output, ECGAN-E has one output per class.
This difference lets ECGAN-E avoid the label-mismatching problem of ContraGAN mentioned in Section 4.3 and benefits generation on CIFAR-10, but it does not work well on Tiny ImageNet, probably because of the scarcity of training data in each class: only 50 samples are available for updating the parameters corresponding to each class. Last, we compare ECGAN-C to ACGAN. Both optimize a GAN loss and a classification loss. However, ECGAN-C combines the discriminator and the classifier, so the generator can directly optimize the cGAN loss rather than the classification loss. As a result, ECGAN-C demonstrates better performance on both CIFAR-10 and Tiny ImageNet. In sum, the comparisons show that, through the unified view provided by ECGAN, we can improve the existing methods with minimal modifications.

4.5 Evaluation on ImageNet

We compare our ECGAN-UC and ECGAN-UCE with BigGAN [3] and ContraGAN [16] on ImageNet. We follow all configurations of BigGAN with batch size 256 in StudioGAN. The numbers in Table 6 are reported after 200,000 training steps unless specified otherwise. The results show that ECGAN-UCE outperforms the other cGANs dramatically. The comparison between ECGAN-UC and ECGAN-UCE indicates that the 2C loss brings more significant improvement in the ECGAN framework than in ContraGAN. The proposed ECGAN-UCE achieves an FID of 8.49 and an Inception Score of 80.69. To the best of our knowledge, this is a state-of-the-art result for GANs with batch size 256 on ImageNet. Selected generated images are shown in Appendix G.

5 Related Work

The development of cGANs started from feeding label embeddings to the inputs of GANs or to the feature vector at some middle layer [33, 7]. To improve generation quality, ACGAN [39] proposed to leverage classifiers and successfully generated high-resolution images. The use of classifiers in GANs is also studied in Triple GAN [24] for semi-supervised learning and in Triangle GAN [9] for cross-domain distribution matching. However, Shu [45] and Miyato and Koyama [34] pointed out that the auxiliary classifier in ACGAN misleads the generator into generating images that are easier to classify. Thus, whether classifiers can help conditional generation remained questionable. In this work, we connect cGANs with and without classifiers via an energy-model parameterization from the joint-probability perspective. [12] use similar ideas but focus on sampling from the trained classifier via Markov chain Monte Carlo [MCMC; 1]. Our work is also similar to a concurrent work [11], which improves [12] by introducing Fenchel duality to replace computationally intensive MCMC. They use a variational approach [19] to formulate the objective for tractable entropy estimation. In contrast, we study the GAN perspective and estimate the entropy via contrastive learning. The proposed ECGAN can therefore be treated as a complement to [12, 11] by studying the GAN perspective. We note that the studied cGAN approaches also yield better generation quality than their variational alternative [11]. Last, [5] study the connection between the exponential family and unconditional GANs. Different from [5], we study conditional GANs with a focus on providing a unified view of common cGANs and an insight into the role of classifiers in cGANs.

6 Conclusion

In this work, we present a general framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), to train cGANs with classifiers.
With the framework, we can explain representative cGANs, including ACGAN, ProjGAN, and ContraGAN, in a unified view. The experiments demonstrate that ECGAN outperforms state-of-the-art cGANs on benchmark datasets, especially on the most challenging ImageNet. Further investigation can be conducted to find better entropy approximations or to improve cGANs with advanced techniques for classifiers. We hope this work can pave the way to more advanced cGAN algorithms in the future.

7 Limitations and Potential Negative Impacts

There are two main limitations in the current study. One is the investigation on ImageNet. Ideally, more experiments and analysis on ImageNet could further strengthen the contribution, but training on such a large dataset is barely affordable with our computational resources, so we can only draw conclusions from the current results. The other limitation is whether metrics such as FID truly reflect generation quality; this, however, is an open problem for the whole community. As with any work on generative models, there is a potential risk of the proposed model being misused to create malicious content, much as printing technology can be misused to forge bills. In this sense, more anti-forgery methods will be needed to mitigate such misuse in the future.

Acknowledgement

We thank the anonymous reviewers for their valuable suggestions. This work is partially supported by the Ministry of Science and Technology of Taiwan via grants MOST 107-2628-E-002-008-MY3 and 110-2628-E-002-013. We also thank the National Center for High-performance Computing (NCHC) of National Applied Research Laboratories (NARLabs) in Taiwan for providing computational resources.
1. What is the main contribution of the paper regarding cGANs? 2. What are the strengths and weaknesses of the proposed approach compared to previous cGANs? 3. Do you have any concerns or questions about the experimental results or conclusions drawn from them? 4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

The authors propose a unified view of conditional generative adversarial networks (cGANs). To this end, they analyze the log joint probability distribution p(x, y) from two different perspectives: via a conditional discriminator using Eq. (2), and via an unconditional discriminator and a classifier using Eq. (3). Inspired by multitask learning, they suggest Energy-based Conditional Generative Adversarial Networks (ECGAN), in which the two approaches are combined to approximate the joint probability distribution. Under this framework, previous cGANs such as ProjGAN, ACGAN, and ContraGAN can be closely explained by variants of ECGAN with the right choice of hyperparameters, namely ECGAN-0, ECGAN-C, and ECGAN-E, respectively. Experiments on CIFAR-10 and Tiny ImageNet show that these ECGAN variants outperform ProjGAN, ACGAN, and ContraGAN in most settings. Furthermore, ECGAN-UC outperforms these ECGAN variants as well as previous cGANs with various backbone architectures such as DCGAN, ResGAN, and BigGAN. From these results, the authors conclude that adding a classifier helps improve cGANs when paired with an unconditional discriminator, as opposed to the previous findings in Shu et al. [42] and ProjGAN [34].

Review

Originality: Previous cGANs have, broadly speaking, taken one of the approaches explained in Sections 2.2 and 2.3. By combining them, this paper indeed provides a unified picture of cGANs. Although the concurrent work [11] is somewhat similar in that both try to solve Eq. (5), the tasks they are trying to solve and the approaches they take are quite different, as mentioned in the related work.

Quality: Lines 249-250 - The authors mention that the entropy estimation lower bound provided by the contrastive loss is too loose to benefit training. But doesn't ECGAN-UC correspond to using 0 as a lower bound for the entropy (the first approach mentioned for estimating the entropy), which is an even looser bound? Lines 247-248 - The authors claim that the unconditional GAN loss is a crucial component for bridging classifiers and discriminators in cGANs, according to the results in Table 2. Could the authors provide any intuition for why this is the case, beyond the empirical evidence? The experiments are somewhat limited, as the authors did not provide results on ImageNet, which was used by all of ProjGAN, ACGAN, and ContraGAN.

Clarity: Line 101 - It would be better to specify a section of the book or to provide a proof in the appendix, as done in Appendix A.1 of [11]. Line 141 - Could the authors elaborate on why Eq. (10) is an empirical estimate of a proper lower bound of the entropy? Lines 164-165 - It would be better if the authors could provide experimental results comparing the original Wasserstein loss with the hinge loss, since it is hard to tell how much of the improvement originates from using the hinge loss. Lines 197-198 - The authors say the main difference between ACGAN and ECGAN-C is that ECGAN-C uses a conditional discriminator. But, if my understanding of the equations in Section 3.2 is correct, another difference is that ACGAN has an additional cross-entropy loss for the generated images. Do the authors think it is negligible? Also, wouldn't ECGAN-C be closer to ACGAN if an unconditional discriminator were used instead of a conditional one?
Given the claim in lines 247-248 that the unconditional GAN loss is a crucial component for bridging classifiers and discriminators in cGANs, I am not sure why ECGAN-C is set up that way.

Typos: It seems the denominator of Eq. (10) is missing an indicator function (k != i), according to Eq. (8) in ContraGAN. Line 161 - increase -> increases, decrease -> decreases. Eq. (14) - third term in the middle: h(x) -> g(x). Table 4 - bolding is missing for some cells. Citations [11] and [12] are mixed up in the related work section.

Significance: Although there are some remaining questions as specified above and the experiments are somewhat limited, this work can benefit the community by providing a unified framework for cGANs.
NIPS
Title

A Unified View of cGANs with and without Classifiers

Abstract

Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow sampling from class-conditional distributions. Existing cGANs are based on a wide range of different discriminator designs and training objectives. One popular design in earlier works is to include a classifier during training, with the assumption that good classifiers can help eliminate samples generated with wrong classes. Nevertheless, including classifiers in cGANs often comes with a side effect of only generating easy-to-classify samples. Recently, some representative cGANs avoid the shortcoming and reach state-of-the-art performance without having classifiers. Somehow it remains unanswered whether the classifiers can be resurrected to design better cGANs. In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs. We start by using the decomposition of the joint probability distribution to connect the goals of cGANs and classification as a unified framework. The framework, along with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximations, which provides a unified view and brings new insights to understanding cGANs. Experimental results demonstrate that the design inspired by the proposed framework outperforms state-of-the-art cGANs on multiple benchmark datasets, especially on the most challenging ImageNet. The code is available at https://github.com/sian-chen/PyTorch-ECGAN.

1 Introduction

Generative Adversarial Networks [GANs; 10] are a family of generative models trained through the duel of a generator and a discriminator. The generator aims to generate data from a target distribution, where the fidelity of the generated data is “screened” by the discriminator. Recent studies on the objectives [2, 37, 29, 25, 36, 26, 38], backbone architectures [41, 50], and regularization techniques [13, 35, 51] for GANs have achieved impressive progress on image generation, making GANs the state-of-the-art approach for generating high-fidelity and diverse images [3]. Conditional GANs (cGANs) extend GANs to generate data from class-conditional distributions [33, 39, 34, 16]. The capability of conditional generation extends the application horizon of GANs to conditional image generation based on labels [39] or texts [43], speech enhancement [32], and image style transformation [18, 53]. One representative cGAN is the Auxiliary Classifier GAN [ACGAN; 39], which decomposes the conditional discriminator into a classifier and an unconditional discriminator. The generator of ACGAN is expected to generate images that convince the unconditional discriminator while being classified to the right class. The classifier plays a pivotal role in laying down the law of conditional generation for ACGAN, making it the very first cGAN that can learn to generate the 1000 classes of ImageNet images [6]. That is, ACGAN used to be a leading cGAN design. While the classifier in ACGAN indeed improves the quality of conditional generation, deeper studies revealed that the classifier biases the generator towards generating easier-to-classify images [45], which in turn decreases the capability to match the target distribution.
Unlike ACGAN, most state-of-the-art cGANs are designed without a classifier. One representative cGAN without a classifier is Projection GAN [ProjGAN; 34], which learns an embedding for each class to form a projection-based conditional discriminator. ProjGAN not only generates higher-quality images than ACGAN, but also accurately generates images in target classes without relying on an explicit classifier. In fact, it was found that ProjGAN usually cannot be further improved by adding a classification loss [34]. This finding, along with the success of ProjGAN and other cGANs without classifiers [15, 4], seems to suggest that including a classifier is not helpful for improving cGANs.

In this work, we challenge the belief that classifiers are not helpful for cGANs, with the conjecture that leveraging classifiers appropriately can benefit conditional generation. We propose a framework that pins down the roles of the classifier and the conditional discriminator by first decomposing the joint target distribution with Bayes rule. We then model the conditional discriminator as an energy function, which is an unnormalized log probability. Under the energy function, we derive the corresponding optimization terms for the classifier and the conditional discriminator with the help of Fenchel duality to form the unified framework. The framework reveals that a joint generative model can be trained via two routes, from the aspect of the classifier and of the conditional discriminator, respectively. We name our framework Energy-based Conditional Generative Adversarial Networks (ECGAN), which not only justifies the use of classifiers for cGANs in a principled manner, but also explains several popular cGAN variants, such as ACGAN [39], ProjGAN [34], and ContraGAN [16], as special cases with different approximations. After properly combining the objectives from the two routes of the framework, we empirically find that ECGAN outperforms other cGAN variants across different backbone architectures on benchmark datasets, including the most challenging ImageNet. We summarize the contributions of this paper as follows:

• We justify the principled use of classifiers for cGANs by decomposing the joint distribution.
• We propose a cGAN framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), which explains several popular cGAN variants in a unified view.
• We experimentally demonstrate that ECGAN consistently outperforms other state-of-the-art cGANs across different backbone architectures on benchmark datasets.

The paper is organized as follows. Section 2 derives the unified framework that establishes the role of classifiers for cGANs. The framework is used to explain ACGAN [39], ProjGAN [34], and ContraGAN [16] in Section 3. We then demonstrate the effectiveness of our framework by experiments in Section 4. We discuss related work in Section 5 before concluding in Section 6.

2 Method

Consider a K-class dataset (x, y) ∼ p_d, where y ∈ {1, . . . , K} is the class of x and p_d is the underlying data distribution. Our goal is to train a generator G to generate a sample G(z, y) following p_d(x|y), where z is sampled from a known distribution such as N(0, 1). To solve the problem, a typical cGAN framework can be formulated by extending an unconditional GAN as:

$\max_D \min_G \sum_y \mathbb{E}_{p_d(x|y)} D(x, y) - \mathbb{E}_{p(z)} D(G(z, y), y) \qquad (1)$

where G is the generator and D is a discriminator that outputs higher values for real data. The choice of D leads to different types of GANs [10, 2, 29, 8].
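To make Eq. (1) concrete, here is a minimal sketch of one alternating update under this objective, using plain Wasserstein-style scores as the choice of D; all names are illustrative rather than the paper's implementation.

```python
import torch

def discriminator_loss(D, G, x_real, y_real, z, y_fake):
    """Ascent direction on Eq. (1) for D, written as a loss to minimize."""
    d_real = D(x_real, y_real).mean()                 # E_{p_d(x|y)} D(x, y)
    d_fake = D(G(z, y_fake).detach(), y_fake).mean()  # E_{p(z)} D(G(z, y), y)
    return -(d_real - d_fake)

def generator_loss(D, G, z, y_fake):
    """Descent direction on Eq. (1) for G: raise D's score on generated pairs."""
    return -D(G(z, y_fake), y_fake).mean()
```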
At first glance, there is no classifier in Eq. (1). However, because of the success of leveraging label information via classification, it has been hypothesized that a better classifier can improve conditional generation [39]. Motivated by this, in this section we show how we bridge classifiers to cGANs by Bayes rule and Fenchel duality.

2.1 Bridging Classifiers to Discriminators with the Joint Distribution

A classifier, when viewed from a probabilistic perspective, is a function that approximates p_d(y|x), the probability that x belongs to class y. On the other hand, a conditional discriminator, telling whether x is real data in class y, can be viewed as a function approximating p_d(x|y). To connect p_d(y|x) and p_d(x|y), an important observation is through the joint probability:

$\log p(x, y) = \log p(x|y) + \log p(y) \qquad (2)$
$\log p(x, y) = \log p(y|x) + \log p(x) \qquad (3)$

The observation illustrates that we can approximate log p(x, y) in two directions: one containing p(x|y) for conditional discriminators and one containing p(y|x) for classifiers. The finding reveals that, by sharing the parameterization, updating the parameters in one direction may optimize the other implicitly. Therefore, we link the classifier to the conditional discriminator by training both objectives jointly.

2.2 Learning the Joint Distribution via Optimizing Conditional Discriminators

Since p(y) is usually known a priori (e.g., uniform) or easily estimated (e.g., by empirical counting), we focus on learning p(x|y) in Eq. (2). Specifically, since log p(x, y) ∈ R, we parameterize it via fθ(x), such as a neural network with K real-valued outputs, where exp(fθ(x)[y]) ∝ p(x, y). A similar parameterization is also used in the exponential family [48] and in energy-based models [23]. Therefore, the log-likelihood log p(x|y) can be modeled as:

$\log p_\theta(x|y) = \log \left( \frac{\exp(f_\theta(x)[y])}{Z_y(\theta)} \right) = f_\theta(x)[y] - \log Z_y(\theta), \qquad (4)$

where $Z_y(\theta) = \int_{x'} \exp(f_\theta(x')[y]) \, dx'$. Optimizing Eq. (4) is challenging because of the intractable partition function Z_y(θ). Here we introduce the Fenchel duality [48] of the partition function Z_y(θ):

$\log Z_y(\theta) = \max_{q_y} \left[ \mathbb{E}_{q_y(x)}[f_\theta(x)[y]] + H(q_y) \right],$

where q_y is a distribution of x conditioned on y and $H(q_y) = -\mathbb{E}_{x \sim q_y(x)}[\log q_y(x)]$ is the entropy of q_y. The derivation is provided in Appendix A. By Fenchel duality, we obtain our maximum likelihood estimation of Eq. (4) as:

$\max_\theta \left[ \mathbb{E}_{p_d(x,y)}[f_\theta(x)[y]] - \max_{q_y} \left[ \mathbb{E}_{q_y(x)}[f_\theta(x)[y]] + H(q_y) \right] \right]. \qquad (5)$

To approximate the solution of q_y, in addition to density models, we can train an auxiliary generator q_φ as in cGANs to estimate the expectation over q_y(x) via sampling. That is, we can sample x from q_φ by x = q_φ(z, y), where z ∼ N(0, 1). The objective (5) then becomes:

$\max_\theta \min_\phi \sum_y \mathbb{E}_{p_d(x|y)}[f_\theta(x)[y]] - \mathbb{E}_{p(z)}[f_\theta(q_\phi(z, y))[y]] - H(q_\phi(\cdot, y)), \qquad (6)$

which is almost in the form of Eq. (1) except for the entropy H(q_φ(·, y)). We defer the discussion of entropy estimation to Section 2.4. For now, the loss functions that optimize the objective without the entropy term can be formulated as:

$\mathcal{L}_{d_1}(x, z, y; \theta) = -f_\theta(x)[y] + f_\theta(q_\phi(z, y))[y]$
$\mathcal{L}_{g_1}(z, y; \phi) = -f_\theta(q_\phi(z, y))[y]$

2.3 Learning the Joint Distribution via Optimizing Unconditional Discriminators & Classifiers

Following Eq. (3), we can approximate log p(x, y) by approximating log p(y|x) and log p(x). With our energy function fθ, pθ(y|x) can be formulated as:

$p_\theta(y|x) = \frac{p_\theta(x, y)}{p_\theta(x)} = \frac{\exp(f_\theta(x)[y])}{\sum_{y'} \exp(f_\theta(x)[y'])},$

which is equivalent to the y-th output of SOFTMAX(fθ(x)).
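This equivalence is a two-line check in code, assuming f_theta is any network with K real-valued outputs (a sketch, not the paper's code):

```python
import torch
import torch.nn.functional as F

f_theta = torch.nn.Linear(128, 10)       # stand-in energy network with K = 10 outputs
x = torch.randn(4, 128)                  # a batch of 4 inputs
y = torch.randint(0, 10, (4,))

logits = f_theta(x)                      # f_theta(x)[y'] for all classes y'
p_y_given_x = F.softmax(logits, dim=1)   # p_theta(y|x): row-normalized energies
loss_clf = F.cross_entropy(logits, y)    # the classifier loss L_clf discussed next
```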
Therefore, we can maximize the log-likelihood of pθ(y|x) by considering fθ as a softmax classifier minimizing the cross-entropy loss:

$\mathcal{L}_{\text{clf}}(x, y; \theta) = -\log(\mathrm{SOFTMAX}(f_\theta(x))[y])$

On the other hand, to maximize the log-likelihood of p(x), we introduce a reparameterization $h_\theta(x) = \log \sum_y \exp(f_\theta(x)[y])$:

$\log p_\theta(x) = \log\left(\sum_y p_\theta(x, y)\right) = \log\left(\frac{\sum_y \exp(f_\theta(x)[y])}{\int_{x'} \sum_{y'} \exp(f_\theta(x')[y']) \, dx'}\right) = \log\left(\frac{\exp(h_\theta(x))}{\int_{x'} \exp(h_\theta(x')) \, dx'}\right) = h_\theta(x) - \log Z'(\theta), \qquad (7)$

where $Z'(\theta) = \int_x \exp(h_\theta(x)) \, dx$. Similar to Eq. (5), we can rewrite log Z'(θ) via its Fenchel duality:

$\log Z'(\theta) = \max_q \left[ \mathbb{E}_{q(x)}[h_\theta(x)] + H(q) \right] \qquad (8)$

where q is a distribution of x and H(q) is the entropy of q. Combining Eq. (7) and Eq. (8) and reusing the generator from Section 2.2, we obtain the optimization problem:

$\max_\theta \min_\phi \mathbb{E}_{p_d(x,y)}[h_\theta(x)] - \mathbb{E}_{p(z)}[h_\theta(q_\phi(z, y))] - H(q_\phi) \qquad (9)$

Similar to Eq. (6), the objective of the unconditional discriminator is equivalent to that of a typical GAN augmented with an entropy term. The loss functions without the entropy term can be formulated as:

$\mathcal{L}_{d_2}(x, z, y; \theta) = -h_\theta(x) + h_\theta(q_\phi(z, y))$
$\mathcal{L}_{g_2}(z, y; \phi) = -h_\theta(q_\phi(z, y))$

2.4 Entropy Approximation in cGANs

In Sections 2.2 and 2.3, we propose two approaches to train cGANs with and without classification. The unsolved problems in Eq. (6) and Eq. (9) are the entropy terms H(qφ(·, y)) and H(qφ). In previous work, various estimators have been proposed to estimate the entropy or its gradient [46, 42, 21, 27]. One can freely choose any approach to estimate the entropy in the proposed framework. In this work, we consider two entropy estimators and show how they connect with existing cGANs.

The first approach is the naive constant approximation. Since the entropy is always non-negative, we naturally have the constant zero as a lower bound. Therefore, we can maximize the objective by replacing the entropy term with this lower bound. The approach is simple, but we will show its effectiveness in Section 4 and how it links our framework to ProjGAN and ContraGAN in Section 3.

The second approach estimates a variational lower bound. Informally, given a batch of data {(x1, y1), . . . , (xm, ym)}, an encoder function l, and a class embedding function e(y), the negative 2C loss used in ContraGAN [16],

$\mathcal{L}_C(x_i, y_i; t) = \log\left( \frac{d(l(x_i), e(y_i)) + \sum_{k=1}^m [\![y_k = y_i]\!] \, d(l(x_i), l(x_k))}{d(l(x_i), e(y_i)) + \sum_{k=1}^m [\![k \neq i]\!] \, d(l(x_i), l(x_k))} \right), \qquad (10)$

is an empirical estimate of a proper lower bound of H(X) [40], where $d(a, b) = \exp(a^\top b / t)$ is a similarity function with temperature t. We provide the proof in Appendix B.

The 2C loss heavily relies on the embeddings l(x) and e(y). Although we only need to estimate the entropy of generated data in Eq. (6) and Eq. (9), in practice we still rely on real data to learn the embeddings. Therefore, the loss function of Eq. (6) can be written as:

$\mathcal{L}_{D_1}(x, z, y; \theta) = \mathcal{L}_{d_1}(x, z, y; \theta) + \lambda_c \mathcal{L}_C^{\text{real}}$
$\mathcal{L}_{G_1}(z, y; \phi) = \mathcal{L}_{g_1}(z, y; \phi) + \lambda_c \mathcal{L}_C^{\text{fake}},$

where λc is a hyperparameter controlling the weight of the contrastive loss, and $\mathcal{L}_C^{\text{real}}, \mathcal{L}_C^{\text{fake}}$ are the contrastive losses calculated on a batch of real data and of generated data, respectively. Similarly, the loss function of Eq. (9) becomes:

$\mathcal{L}_{D_2}(x, z, y; \theta) = \mathcal{L}_{d_2}(x, z, y; \theta) + \lambda_c \mathcal{L}_C^{\text{real}}$
$\mathcal{L}_{G_2}(z, y; \phi) = \mathcal{L}_{g_2}(z, y; \phi) + \lambda_c \mathcal{L}_C^{\text{fake}}.$

The introduction of the 2C loss allows us to accommodate ContraGAN in our framework.
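As a concrete reference, Eq. (10) can be transcribed for a single anchor as follows; this is a sketch that follows the equation as printed (ContraGAN's original indicator functions differ slightly, as a review below also notes), with l(x_k) and e(y_k) assumed to be precomputed embeddings.

```python
import torch

def neg_2c_loss(l_x, e_y, y, i, t=1.0):
    """Negative 2C loss of Eq. (10) for anchor i.

    l_x: (m, d) sample embeddings l(x_k); e_y: (m, d) class embeddings e(y_k);
    y: (m,) integer labels; t: temperature in d(a, b) = exp(a^T b / t).
    """
    sims = torch.exp(l_x @ l_x[i] / t)                # d(l(x_i), l(x_k)) for all k
    anchor_class = torch.exp(l_x[i] @ e_y[i] / t)     # d(l(x_i), e(y_i))
    same_class = (y == y[i]).float()                  # Iverson bracket [y_k = y_i]
    not_self = torch.ones_like(sims)
    not_self[i] = 0.0                                 # Iverson bracket [k != i]
    num = anchor_class + (same_class * sims).sum()
    den = anchor_class + (not_self * sims).sum()
    return torch.log(num / den)
```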
2.5 Energy-based Conditional Generative Adversarial Network

Previous work has shown that multitask training benefits representation learning [30] and that training discriminative and generative models jointly outperforms their purely generative or purely discriminative counterparts [11, 28]. Therefore, we propose a framework named Energy-based Conditional Generative Adversarial Network (ECGAN), which combines the two approaches of Sections 2.2 and 2.3 to learn the joint distribution better. The loss functions can be summarized as:

$\mathcal{L}_D(x, z, y; \theta) = \mathcal{L}_{d_1}(x, z, y; \theta) + \alpha \mathcal{L}_{d_2}(x, z, y; \theta) + \lambda_c \mathcal{L}_C^{\text{real}} + \lambda_{\text{clf}} \mathcal{L}_{\text{clf}}(x, y; \theta) \qquad (11)$
$\mathcal{L}_G(z, y; \phi) = \mathcal{L}_{g_1}(z, y; \phi) + \alpha \mathcal{L}_{g_2}(z, y; \phi) + \lambda_c \mathcal{L}_C^{\text{fake}} \qquad (12)$

where α is a weight parameter for the unconditional GAN loss. The discriminator's design is illustrated in Fig. 1. Here we discuss the intuition behind each component of Eq. (11). Ld1 is the loss function for the conditional discriminator: it updates the y-th output when given a data pair (x, y). Ld2 guides the unconditional discriminator: it updates all outputs according to whether x is real. Lclf learns a classifier: it increases the y-th output and decreases the other outputs for data belonging to class y. Finally, LrealC and LfakeC improve the latent embeddings by pulling the embeddings of data with the same class closer together.

Above, we derived the loss functions Ld1 and Ld2 as in Wasserstein GAN [2]. In practice, we use the hinge loss, as proposed in Geometric GAN [26], for better stability and convergence. We use the following combination of Ld1 and Ld2:

$\mathrm{Hinge}\big(f_\theta(x_{\text{real}})[y] + \alpha \cdot h_\theta(x_{\text{real}}),\; f_\theta(x_{\text{fake}})[y] + \alpha \cdot h_\theta(x_{\text{fake}})\big). \qquad (13)$

For more discussion of the implementation of the hinge loss, please see Appendix C. The overall training procedure of ECGAN is presented in Appendix E.

3 Accommodation to Existing cGANs

In this section, we show that our framework covers several representative cGAN algorithms, including ACGAN [39], ProjGAN [34], and ContraGAN [16]. Through the ECGAN framework, we obtain a unified view of cGANs, which allows us to fairly compare and understand the pros and cons of existing cGANs. We name the ECGAN counterparts ECGAN-0, ECGAN-C, and ECGAN-E, corresponding to ProjGAN, ACGAN, and ContraGAN, respectively. We summarize the settings in Table 1 and illustrate the discriminator designs in Appendix F.

3.1 ProjGAN

ProjGAN [34] is the most representative cGAN design and is commonly used in state-of-the-art research [3, 50]. Let the output of the penultimate layer in the discriminator be g(x). The output of ProjGAN's discriminator is:

$D(x, y) = w_u^\top g(x) + b_u + w_y^\top g(x) = (w_u + w_y)^\top g(x) + b_u \qquad (14)$

where w_u, b_u are the parameters of the unconditional linear layer, and w_y is the class embedding of y. On the other hand, the output of a discriminator in ECGAN is:

$D(x, y) = f(x)[y] = (W^\top g(x) + b)[y] = w_y^\top g(x) + b_y \qquad (15)$

where W, b are the parameters of the linear output layer in fθ. As shown in Eq. (14) and Eq. (15), the architectures of ProjGAN and ECGAN are almost equivalent. In addition, the loss function of ProjGAN can be formulated as:

$\mathcal{L}_G = -D(G(z), y)$
$\mathcal{L}_D = -D(x, y) + D(G(z), y),$

which is a special case of ECGAN with α = λc = λclf = 0. We name this case ECGAN-0, the simplest version of ECGAN. Compared with ProjGAN, ECGAN-0 has additional bias terms for the output of each class.

3.2 ACGAN

ACGAN [39] is the most well-known cGAN algorithm that leverages a classifier to achieve conditional generation.
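The bias-term difference between Eq. (14) and Eq. (15) noted above can be seen directly in code (a sketch with illustrative dimensions; g(x) stands for the penultimate features):

```python
import torch
import torch.nn as nn

feat_dim, K = 128, 10
g_x = torch.randn(4, feat_dim)                  # penultimate features g(x)
y = torch.randint(0, K, (4,))

# ProjGAN head, Eq. (14): one shared unconditional bias b_u
w_u = torch.randn(feat_dim)
b_u = torch.randn(())
class_emb = nn.Embedding(K, feat_dim)           # class embeddings w_y
proj_score = ((w_u + class_emb(y)) * g_x).sum(dim=1) + b_u

# ECGAN-0 head, Eq. (15): a K-output linear layer, hence a per-class bias b_y
fc = nn.Linear(feat_dim, K)                     # parameters W, b
ecgan_score = fc(g_x).gather(1, y.unsqueeze(1)).squeeze(1)  # f(x)[y] = w_y^T g(x) + b_y
```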
1. What is the focus and contribution of the paper on generative adversarial networks (GANs)? 2. What are the strengths of the proposed approach, particularly in terms of its ability to leverage classifiers and unify them in cGAN? 3. What are the weaknesses of the paper, especially regarding the experiment section? 4. Do you have any concerns about the limitation of the proposed method in handling complex datasets? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

The authors aim to leverage classifiers in a principled manner and unify them within cGANs, by using the decomposition of the joint probability distribution and a classic energy model to parameterize the distribution. Experiments show new state-of-the-art generation quality in terms of FID and IS over several datasets (CIFAR-10, Tiny ImageNet), on several GAN backbones (DCGAN, ResGAN, BigGAN), and against several baselines (ACGAN, ProjGAN, ContraGAN).

Review

Strengths
- Clear exposition: good writing and good result demos. The research problem is clearly defined.
- Reproducible.
- Experiments are dense, and the results are informative, setting a new state-of-the-art.

Weaknesses
- The experiments would be more convincing if they were not run only on toy datasets like CIFAR-10 or Tiny ImageNet. Repeating the quantitative comparisons on the entire ImageNet dataset at full resolution would strengthen the paper.
NIPS
Title A Unified View of cGANs with and without Classifiers Abstract Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow to sample from class-conditional distributions. Existing cGANs are based on a wide range of different discriminator designs and training objectives. One popular design in earlier works is to include a classifier during training with the assumption that good classifiers can help eliminate samples generated with wrong classes. Nevertheless, including classifiers in cGANs often comes with a side effect of only generating easy-to-classify samples. Recently, some representative cGANs avoid the shortcoming and reach state-of-the-art performance without having classifiers. Somehow it remains unanswered whether the classifiers can be resurrected to design better cGANs. In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs. We start by using the decomposition of the joint probability distribution to connect the goals of cGANs and classification as a unified framework. The framework, along with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximations, which provides a unified view and brings new insights to understanding cGANs. Experimental results demonstrate that the design inspired by the proposed framework outperforms state-of-the-art cGANs on multiple benchmark datasets, especially on the most challenging ImageNet. The code is available at https://github.com/sian-chen/PyTorch-ECGAN. 1 Introduction Generative Adversarial Networks [GANs; 10] is a family of generative models that are trained from the duel of a generator and a discriminator. The generator aims to generate data from a target distribution, where the fidelity of the generated data is “screened” by the discriminator. Recent studies on the objectives [2, 37, 29, 25, 36, 26, 38], backbone architectures [41, 50], and regularization techniques [13, 35, 51] for GANs have achieved impressive progress on image generation, making GANs the state-of-the-art approach to generate high fidelity and diverse images [3]. Conditional GANs (cGANs) extend GANs to generate data from class-conditional distributions [33, 39, 34, 16]. The capability of conditional generation extends the application horizon of GANs to conditional image generation based on labels [39] or texts [43], speech enhancement [32], and image style transformation [18, 53]. One representative cGAN is Auxiliary Classifier GAN [ACGAN; 39], which decomposes the conditional discriminator to a classifier and an unconditional discriminator. The generator of ACGAN is expected to generate images that convince the unconditional discriminator while being classified to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the right class. The classifier plays a pivotal role in laying down the law of conditional generation for ACGAN, making it the very first cGAN that can learn to generate 1000 classes of ImageNet images [6]. That is, ACGAN used to be a leading cGAN design. While the classifier in ACGAN indeed improves the quality of conditional generation, deeper studies revealed that the classifier biases the generator to generate easier-to-classify images [45], which in term decreases the capability to match the target distribution. 
Unlike ACGAN, most state-of-the-art cGANs are designed without a classifier. One representative cGAN without a classifier is Projection GAN [ProjGAN; 34], which learns an embedding for each class to form a projection-based conditional discriminator. ProjGAN not only generates higher-quality images than ACGAN, but also accurately generates images in target classes without relying on an explicit classifier. In fact, it was found that ProjGAN usually cannot be further improved by adding a classification loss [34]. The finding, along with the success of ProjGAN and other cGANs without classifiers [15, 4], seem to suggest that including a classifier is not helpful for improving cGANs. In this work, we challenge the belief that classifiers are not helpful for cGANs, with the conjecture that leveraging the classifiers appropriately can benefit conditional generation. We propose a framework that pins down the roles of the classifier and the conditional discriminator by first decomposing the joint target distribution with Bayes rule. We then model the conditional discriminator as an energy function, which is an unnormalized log probability. Under the energy function, we derive the corresponding optimization term for the classifier and the conditional discriminator with the help of Fenchel duality to form the unified framework. The framework reveals that a jointly generative model can be trained via two routes, from the aspect of the classifier and the conditional discriminator, respectively. We name our framework Energy-based Conditional Generative Adversarial Networks (ECGAN), which not only justifies the use of classifiers for cGANs in a principled manner, but also explains several popular cGAN variants, such as ACGAN [39], ProjGAN [34], and ContraGAN [16] as special cases with different approximations. After properly combining the objectives from the two routes of the framework, we empirically find that ECGAN outperforms other cGAN variants across different backbone architectures on benchmark datasets, including the most challenging ImageNet. We summarize the contributions of this paper as: • We justify the principled use of classifiers for cGANs by decomposing the joint distribution. • We propose a cGAN framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), which explains several popular cGAN variants in a unified view. • We experimentally demonstrate that ECGAN consistently outperforms other state-of-the-art cGANs across different backbone architectures on benchmark datasets. The paper is organized as follows. Section 2 derives the unified framework that establishes the role of the classifiers for cGANs. The framework is used to explain ACGAN [39], ProjGAN [34], and ContraGAN [16] in Section 3. Then, we demonstrate the effectiveness of our framework by experiments in Section 4. We discuss related work in Section 5 before concluding in Section 6. 2 Method Given a K-class dataset (x, y) ∼ pd, where y ∈ {1 . . .K} is the class of x and pd is the underlying data distribution. Our goal is to train a generator G to generate a sample G(z, y) following pd(x|y), where z is sampled from a known distribution such as N (0, 1). To solved the problem, a typical cGAN framework can be formulated by extending an unconditional GAN as: max D min G ∑ y E pd(x|y) D(x, y)− E p(z) D(G(z, y), y) (1) where G is the generator and D is a discriminator that outputs higher values for real data. The choice of D leads to different types of GANs [10, 2, 29, 8]. 
At first glance, there is no classifier in Eq. (1). However, because of the success of leveraging label information via classification, it is hypothesized that a better classifier can improve conditional generation [39]. Motivated by this, in this section, we show how we bridge classifiers to cGANs by Bayes rule and Fenchel duality. 2.1 Bridge Classifiers to Discriminators with Joint Distribution A classifier, when viewed from a probabilistic perspective, is a function that approximates pd(y|x), the probability that x belongs to class y. On the other hand, a conditional discriminator, telling whether x is real data in class y, can be viewed as a function approximate pd(x|y). To connect pd(y|x) and pd(x|y), an important observation is through the joint probability: log p(x, y) = log p(x|y) + log p(y) (2) = log p(y|x) + log p(x). (3) The observation illustrates that we can approximate log p(x, y) in two directions: one containing p(x|y) for conditional discriminators and one containing p(y|x) for classifiers. The finding reveals that by sharing the parameterization, updating the parameters in one direction may optimize the other implicitly. Therefore, we link the classifier to the conditional discriminator by training both objectives jointly. 2.2 Learning Joint Distribution via Optimizing Conditional Discriminators Since p(y) is usually known a priori (e.g., uniform) or able to easily estimated (e.g., empirical counting), we focus on learning p(x|y) in Eq.(2). Specifically, since log p(x, y) ∈ R, we parameterize it via fθ(x), such as a neural network with K real value outputs, where exp(fθ(x)[y]) ∝ p(x, y) . Similar parameterization is also used in exponential family [48] and energy based model [23]. Therefore, the log-likelihood log p(x|y) can be modeled as: log pθ(x|y) = log ( exp (fθ(x)[y]) Zy(θ) ) = fθ(x)[y]− logZy(θ), (4) where Zy(θ) = ∫ x′ exp (fθ(x ′)[y]) dx′. Optimizing Eq. (4) is challenging because of the intractable partition function Zy(θ). Here we introduce the Fenchel duality [48] of the partition function Zy(θ): logZy(θ) = max qy [ E qy(x) [fθ(x)[y]] +H(qy) ] where qy is a distribution of x conditioned on y and H(qy) = −Exf∼qy(x) [log qy(x)] is the entropy of qy. The derivation is provided in Appendix A. By the Fenchel duality, we obtain our maximum likelihood estimation in Eq. (4) as: max θ [ E pd(x,y) [fθ(x)[y]]−max qy [ E qy(x) [fθ(x)[y]] +H(qy) ]] . (5) To approximate the solution of qy , in additional to density models, we can train an auxiliary generator qφ as in cGANs to estimate Eqy(x) via sampling. That is, we can sample x from qφ by x = qφ(z, y), where z ∼ N (0, 1). The objective (5) then becomes: max θ min φ ∑ y E pd(x|y) [fθ(x)[y]]− E p(z) [fθ(qφ(z, y))[y]]−H(qφ(·, y)), (6) which is almost in the form of Eq (1) except the entropy H(qφ(·, y)). We leave the discussion about the entropy estimation in Section 2.4. Currently, the loss function to optimize the objective without the entropy can be formulated as: Ld1(x, z, y; θ) = −fθ(x)[y] + fθ(qφ(z))[y] Lg1(z, y;φ) = −fθ(qφ(z, y))[y] 2.3 Learning Joint Distributions via Optimizing Unconditional Discriminators & Classifiers Following Eq. (3), we can approximate log p(x, y) by approximating log p(y|x) and log p(x). With our energy function fθ, pθ(y|x) can be formulated as: pθ(y|x) = pθ(x, y) pθ(x) = exp(fθ(x)[y])∑ y′ exp(fθ(x)[y ′]) , which is equivalent to the y’th output of SOFTMAX(fθ(x)). 
Therefore, we can maximize the loglikelihood of pθ(y|x) by consider fθ as a softmax classifier minimizing the cross-entropy loss: Lclf(x, y; θ) = − log (SOFTMAX (fθ(x)) [y]) On the other hand, to maximize the log-likelihood of p(x), we introduce a reparameterization hθ(x) = log ∑ y exp(fθ(x)[y]): log pθ(x) = log (∑ y pθ(x, y) ) = log (∑ y exp(fθ(x)[y])∫ x′ ∑ y′ exp(fθ(x ′)[y′]) dx′ ) = log ( exp(log( ∑ y exp(fθ(x)[y])))∫ x′ exp(log( ∑ y′ exp(fθ(x ′)[y′]))) dx′ ) = log ( exp(hθ(x))∫ x′ exp(hθ(x′)) dx′ ) = hθ(x)− log(Z ′(θ)), (7) where Z ′(θ) = ∫ x exp(hθ(x)) dx. Similar to Eq. (5), we can rewrite logZ ′(θ) by its Fenchel duality: logZ ′(θ) = max q [ E q(x) [hθ(x)] +H(q) ] (8) where q is a distribution of x and H(q) is the entropy of q. Combining Eq. (7) and Eq. (8) and reusing the generator in Section 2.2, we obtain the optimization problem: max θ min φ E pd(x,y) [hθ(x)]− E p(z) [hθ(qφ(z, y))]−H(qφ) (9) Similar to Eq. (6), the objective of the unconditional discriminator is equivalent to typical GANs augmented with an entropy term. The loss function without considering the entropy can be formulated as: Ld2(x, z, y; θ) = −hθ(x) + hθ(qφ(z)) Lg2(z, y;φ) = −hθ(qφ(z, y)) 2.4 Entropy Approximation in cGANs In Section 2.2 and Section 2.3, we propose two approaches to train cGANs with and without classification. Unsolved problems in Eq. (6) and Eq. (9) are the entropy termsH(qφ(·, y)) andH(qφ). In previous work, various estimators have been proposed to estimate entropy or its gradient [46, 42, 21, 27]. One can freely choose any approach to estimate the entropy in the proposed framework. In this work, we consider two entropy estimators, and we will show how they connect with existing cGANs. The first approach is the naive constant approximation. Since entropy is always non-negative, we naturally have the constant zero as a lower bound. Therefore, we can maximize the objective by replacing the entropy term with its lower bound, which is zero in this case. This approach is simple but we will show its effectiveness in Section 4 and how it links our framework to ProjGAN and ContraGAN in Section 3. The second approach is estimating a variational lower bound. Informally, given a batch of data {(x1, y1), . . . , (xm, ym)}, an encoder function l, and a class embedding function e(y), the negative 2C loss used in ContraGAN [16], LC(xi, yi; t) = log ( d(l(xi), e(yi)) + ∑m k=1 Jyk = yiK d(l(xi), l(xk)) d(l(xi), e(yi)) + ∑m k=1 Jk 6= iK d(l(xi), (l(xk)) ) , (10) is an empirical estimate of a proper lower bound of H(X) [40], where d(a, b) = exp(a>b/t) is a distance function with a temperature t. We provide the proof in Appendix B. The 2C loss heavily relies on the embeddings l(x) and e(y). Although we only need to estimate the entropy of generated data in Eq. (6) and Eq. (9), we still rely on true data to learn the embeddings in practice. Therefore, the loss function of Eq. (6) can be written as: LD1(x, z, y; θ) = Ld1(x, z, y; θ) + λcLrealC LG1(z, y;φ) = Lg1(x, y;φ) + λcLfakeC , where λc is a hyperparameter controlling the weight of the contrastive loss, and LrealC ,LfakeC are the contrastive loss calculated on a batch of real data and generated data respectively. Similarly, the loss function of Eq. (9) becomes: LD2(x, z, y; θ) = Ld2(x, z, y; θ) + λcLrealC LG2(z, y;φ) = Lg2(x, y;φ) + λcLfakeC , The introduction of 2C loss allows us to accommodate ContraGAN into our framework. 
2.5 Energy-based Conditional Generative Adversarial Network Previous work has shown that multitask training benefits representation learning [30] and training discriminative and generative models jointly outperforms their purely generative or purely discriminative counterparts [11, 28]. Therefore, we propose a framework named Energy-based Conditional Generative Adversarial Network (ECGAN), which combines the two approaches in Section 2.2 and Section 2.3 to learn the joint distribution better. The loss function can be summarized as: LD(x, z, y; θ) = Ld1(x, z, y; θ) + αLd2(x, z, y; θ) + λcLrealC + λclfLclf(x, y; θ) (11) LG(z, y;φ) = Lg1(z, y;φ) + αLg2(z, y;φ) + λcLfakeC (12) where α is a weight parameter for the unconditional GAN loss. The discriminator’s design is illustrated in Fig 1. Here we discuss the intuition of the mechanisms behind each component in Eq. (11). Ld1 is a loss function for conditional discriminator. It updates the y-th output when given a data pair (x, y). Ld2 guides to an unconditional discriminator. It updates all outputs according to whether x is real. Lclf learns a classifier. It increases the y-th output and decreases the other outputs for data belonging to class y. Finally, LrealC and L fake C play the roles to improve the latent embeddings by pulling the embeddings of data with the same class closer. Previously, we derive the loss functions Ld1 and Ld2 as the loss in Wasserstein GAN [2]. In practice, we use the hinge loss as proposed in Geometric GAN [26] for better stability and convergence. We use the following combination of Ld1 and Ld2 : Hinge(fθ(xreal, y) + α · hθ(xreal), fθ(xfake, y) + α · hθ(xfake)). (13) For more discussion of the implementation of hinge loss, please check Appendix C. The overall training procedure of ECGAN is presented in Appendix E. 3 Accommodation to Existing cGANs In this section, we show that our framework covers several representative cGAN algorithms, including ACGAN [39], ProjGAN [35], and ContraGAN [16]. Through the ECGAN framework, we obtain a unified view of cGANs, which allows us to fairly compare and understand the pros and cons of existing cGANs. We name the ECGAN counterparts ECGAN-0, ECGAN-C, and ECGAN-E, corresponding to ProjGAN, ACGAN, and ContraGAN, respectively. We summarize the settings in Table 1 and illustrate the discriminator designs in Appendix F. 3.1 ProjGAN ProjGAN [34] is the most representative cGAN design that is commonly used in state-of-the-art research [3, 50]. Let the output of the penultimate layer in the discriminator be g(x). The output of ProjGAN’s discriminator is: D(x, y) = wTu g(x) + bu + w T y g(x) = (wu + wy) T g(x) + bu (14) where wu, bu are the parameters for the unconditional linear layer, and wy is the class embedding of y. On the other hand, the output of a discriminator in ECGAN is: D(x, y) = f(x)[y] = (WT g(x) + b)[y] = wTy g(x) + by (15) where W,b are the parameters of the linear output layer in fθ. As shown in Eq. (14) and Eq. (15), the architectures of ProjGAN and ECGAN are almost equivalent. In addition, the loss function of ProjGAN can be formulated as: LG = −D(G(z), y) LD = −D(x, y) +D(G(z), y), which is a special case of ECGAN while α = λc = λclf = 0. We name this case ECGAN-0, which is the simplest version of ECGAN. Compared with ProjGAN, ECGAN-0 has additional bias terms for the output of each class. 3.2 ACGAN ACGAN [39] is the most well-known cGAN algorithm that leverages a classifier to achieve conditional generation. 
3.2 ACGAN

ACGAN [39] is the most well-known cGAN algorithm that leverages a classifier to achieve conditional generation. Given a K-class dataset, the discriminator of ACGAN is parameterized by a network with K + 1 outputs. The first output, denoted D(x), is an unconditional discriminator distinguishing between real and fake images. The remaining K outputs, denoted C(x), form a classifier that predicts logits for every class. The loss functions of ACGAN can be formulated as:

LG = −D(G(z)) + λg Lclf(G(z), y; C)
LD = −D(x) + D(G(z)) + λd (Lclf(x, y; C) + Lclf(G(z), y; C))

where G is the generator and λg, λd are hyperparameters controlling the weight of the cross-entropy losses. The formulation of ACGAN is similar to that of our ECGAN with α = λc = 0 and λclf > 0. We call this special case ECGAN-C, with the suffix 'C' for classification loss. ECGAN-C uses a conditional discriminator that plays the role of a classifier at the same time. Hence, the generator in ECGAN-C learns from the conditional discriminator rather than from the cross-entropy loss, which is biased for generative objectives.

3.3 ContraGAN

ContraGAN [16] proposed the 2C loss, mentioned in Eq. (10), to capture data-to-data and data-to-label relationships. The 2C loss is applied in both the discriminator and the generator to achieve conditional generation. That is:

LG = −D(G(z), y) + λc L_C^fake
LD = −D(x, y) + D(G(z), y) + λc L_C^real

These loss functions are similar to those of ECGAN with α = λclf = 0 and λc > 0. We call this case ECGAN-E, where 'E' stands for entropy estimation. The main difference between ContraGAN and ECGAN-E is the output layer of their discriminators: while ContraGAN uses a single-output network, ECGAN uses a K-output network fθ, which has higher capacity.

We keep Eq. (11) and Eq. (12) as simple as possible to reduce the burden of hyperparameter tuning. Under the simple equations of the current framework, ECGAN-C and ECGAN-E are the closest counterparts to ACGAN and ContraGAN. The subtle difference (in addition to the underlying network architecture) is that ACGAN uses Ld2 instead of Ld1 (ECGAN-C), and ContraGAN uses Ld2, Lg2 instead of Ld1, Lg1 (ECGAN-E). One future direction is to introduce more hyperparameters in Eq. (11) and Eq. (12) to obtain closer counterparts.

4 Experiment

We conduct our experiments on CIFAR-10 [20] and Tiny ImageNet [22] for analysis, and on ImageNet [6] for a large-scale empirical study. Table 2 shows the statistics of the datasets. All datasets are publicly available for research use. They were not constructed for human-related studies, and we do not specifically take any personal information from the datasets in our experiments. We use two common metrics, Fréchet Inception Distance [FID; 14] and Inception Score [IS; 44], to evaluate generation quality and diversity. In addition, we use Intra-FID, the average of the per-class FIDs, to evaluate the performance of conditional generation.

4.1 Experimental Setup

We use StudioGAN [16] (https://github.com/POSTECH-CVLab/PyTorch-StudioGAN) to conduct our experiments. StudioGAN is a PyTorch-based project distributed under the MIT license that provides implementations and benchmarks of several popular GAN architectures and techniques. To provide reliable evaluation, we conduct experiments on CIFAR-10 and Tiny ImageNet with 4 different random seeds and report the mean and standard deviation of each metric. We evaluate the model with the lowest FID for each trial. The default backbone architecture is BigGAN [3]. We fix the learning rates of the generator and the discriminator to 0.0001 and 0.0004, respectively, and tune λclf in {1, 0.1, 0.05, 0.01}. We follow the setting λc = 1 from [16] when using the 2C loss, and set α = 1 when applying the unconditional GAN loss. The experiments take 1-2 days on single-GPU (Nvidia Tesla V100) machines for CIFAR-10 and Tiny ImageNet, and 6 days on 8-GPU machines for ImageNet. More details are described in Appendix D.
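The special cases above, together with the combinations studied in Section 4.2 below, map directly onto the hyperparameters of Eq. (11) and Eq. (12). A hypothetical configuration table could look like the following; the values are assumptions for illustration (λclf is shown at one value from the tuned grid), not released defaults.

# Hypothetical mapping from variant names (Sections 3 and 4.2) to the weights of
# Eq. (11)/(12); values are assumed for illustration, not taken from released code.
ECGAN_VARIANTS = {
    "ECGAN-0":   dict(alpha=0.0, lambda_c=0.0, lambda_clf=0.0),  # ProjGAN counterpart
    "ECGAN-C":   dict(alpha=0.0, lambda_c=0.0, lambda_clf=0.1),  # ACGAN counterpart
    "ECGAN-E":   dict(alpha=0.0, lambda_c=1.0, lambda_clf=0.0),  # ContraGAN counterpart
    "ECGAN-U":   dict(alpha=1.0, lambda_c=0.0, lambda_clf=0.0),
    "ECGAN-UC":  dict(alpha=1.0, lambda_c=0.0, lambda_clf=0.1),  # default in Sec. 4
    "ECGAN-UCE": dict(alpha=1.0, lambda_c=1.0, lambda_clf=0.1),
}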
4.2 Ablation Study

We start our empirical studies by investigating the effectiveness of each component in ECGAN. We use the symbol 'U' for the unconditional GAN loss, 'C' for the classification loss, and 'E' for the entropy estimation loss, which is the 2C loss in our implementation. The concatenation of the symbols indicates the combination of losses; for example, ECGAN-UC means ECGAN with both the unconditional GAN loss and the classification loss (α > 0 and λclf > 0). Table 3 shows the results of ECGAN from the simplest ECGAN-0 to the most complicated ECGAN-UCE. On CIFAR-10, ECGAN-0 already achieves decent results, and adding the unconditional, classification, or contrastive loss provides slightly better or on-par performance. On the harder Tiny ImageNet, the benefit of the unconditional and classification losses becomes more significant: while ECGAN-U already shows advantages over ECGAN-0, adding the classification loss to ECGAN-U further improves all metrics considerably. We also observe that directly adding the classification loss alone is not sufficient to improve a cGAN, which is consistent with the finding in [34]. This reveals that the unconditional GAN loss is a crucial component for bridging classifiers and discriminators in cGANs. We also find that adding the contrastive loss does not improve ECGAN-UC. One explanation is that the entropy lower bound provided by the contrastive loss is too loose to benefit training; furthermore, the additional parameters introduced by the 2C loss make the optimization problem more complicated. As a result, we use the combination ECGAN-UC as the default option of ECGAN in the following experiments.

4.3 Comparison with Existing cGANs

We compare ECGAN to several representative cGANs, including ACGAN [39], ProjGAN [34], and ContraGAN [16], with three representative backbone architectures: DCGAN [41], ResNet [13], and BigGAN [3]. Table 4 compares the results of each combination of cGAN algorithm and backbone architecture. The results show that ECGAN-UC outperforms the other cGANs significantly with all backbone architectures on both CIFAR-10 and Tiny ImageNet. We also notice that ContraGAN, though it achieves decent image quality and diversity, learns a conditional generator that interchanges some classes while generating, and hence performs poorly on Intra-FID. Overall, the experiment indicates that ECGAN-UC can be a preferred choice for cGANs in general situations.

4.4 Comparisons between Existing cGANs and their ECGAN Counterparts

Table 5 compares ProjGAN, ContraGAN, and ACGAN to their ECGAN counterparts. As described in Section 3, each of these representative cGANs can be viewed as a special case under our ECGAN framework. As mentioned in Section 3, ECGAN-0 has additional bias terms in the output layer compared to ProjGAN. The results in Table 5 show that this subtle difference still brings significant improvement to the generation quality, especially on the harder Tiny ImageNet. Compared to ContraGAN, ECGAN-E has the same loss but a different design of the discriminator's output layer: while the discriminator of ContraGAN has only a single output, ECGAN-E has one output per class.
This difference allows ECGAN-E to solve the label mismatching problem of ContraGAN mentioned in Section 4.3 and benefits generation on CIFAR-10, but it does not work well on Tiny ImageNet, probably because of the scarcity of training data per class in Tiny ImageNet: only 50 samples are available for updating the parameters corresponding to each class. Last, we compare ECGAN-C to ACGAN. Both optimize a GAN loss and a classification loss. However, ECGAN-C combines the discriminator and the classifier, so the generator can directly optimize the cGAN loss rather than the classification loss. As a result, ECGAN-C demonstrates better performance on both CIFAR-10 and Tiny ImageNet. In sum, the comparisons show that, through the unified view provided by ECGAN, we can improve the existing methods with minimal modifications.

4.5 Evaluation on ImageNet

We compare our ECGAN-UC and ECGAN-UCE with BigGAN [3] and ContraGAN [16] on ImageNet. We follow all configurations of BigGAN with batch size 256 in StudioGAN. The numbers in Table 6 are reported after 200,000 training steps unless specified otherwise. The results show that ECGAN-UCE outperforms the other cGANs dramatically. The comparison between ECGAN-UC and ECGAN-UCE indicates that the 2C loss brings more significant improvement within the ECGAN framework than within ContraGAN. The proposed ECGAN-UCE achieves an FID of 8.49 and an Inception Score of 80.69. To the best of our knowledge, this is a state-of-the-art result for GANs with batch size 256 on ImageNet. Selected generated images are shown in Appendix G.

5 Related Work

The development of cGANs started from feeding label embeddings to the inputs of GANs or to the feature vector at some middle layer [33, 7]. To improve generation quality, ACGAN [39] proposed leveraging classifiers and successfully generated high-resolution images. The use of classifiers in GANs is also studied in Triple GAN [24] for semi-supervised learning and in Triangle GAN [9] for cross-domain distribution matching. However, Shu [45] and Miyato and Koyama [34] pointed out that the auxiliary classifier in ACGAN misleads the generator to generate images that are easier to classify. Thus, whether classifiers can help conditional generation remained questionable. In this work, we connect cGANs with and without classifiers via an energy-model parameterization from the joint-probability perspective. The work of [12] uses similar ideas but focuses on sampling from a trained classifier via Markov chain Monte Carlo [MCMC; 1]. Our work is also similar to a concurrent work [11], which improves on [12] by introducing Fenchel duality to replace computationally intensive MCMC; they use a variational approach [19] to formulate the objective for tractable entropy estimation. In contrast, we study the GAN perspective and estimate the entropy via contrastive learning. The proposed ECGAN can therefore be viewed as complementary to [12, 11] by studying the problem from a GAN perspective. We note that the studied cGAN approaches also achieve better generation quality than the variational alternative [11]. Last, [5] studies the connection between the exponential family and unconditional GANs. Different from [5], we study conditional GANs, with a focus on providing a unified view of common cGANs and insight into the role of classifiers in cGANs.

6 Conclusion

In this work, we present a general framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), to train cGANs with classifiers.
With the framework, we can explain representative cGANs, including ACGAN, ProjGAN, and ContraGAN, in a unified view. The experiments demonstrate that ECGAN outperforms state-of-the-art cGANs on benchmark datasets, especially on the most challenging ImageNet. Further investigation can be conducted to find better entropy approximations or to improve cGANs with advanced techniques for classifiers. We hope this work can pave the way for more advanced cGAN algorithms in the future.

7 Limitations and Potential Negative Impacts

There are two main limitations in the current study. The first is the investigation on ImageNet: ideally, more experiments and analysis on ImageNet could further strengthen the contribution, but training on such a large dataset is barely affordable with our computational resources, so we can only draw conclusions from the current results. The second is whether metrics such as FID truly reflect generation quality; this, however, is an open problem for the community. As with any work on generative models, there is a potential risk of the proposed model being misused to create malicious content, much as other technologies can be misused, for example, to forge bills. In this sense, more anti-forgery methods will be needed to mitigate such misuse in the future.

Acknowledgement

We thank the anonymous reviewers for valuable suggestions. This work is partially supported by the Ministry of Science and Technology of Taiwan via the grants MOST 107-2628-E-002-008-MY3 and 110-2628-E-002-013. We also thank the National Center for High-performance Computing (NCHC) of National Applied Research Laboratories (NARLabs) in Taiwan for providing computational resources.
1. What is the focus and contribution of the paper on conditional image generation?
2. What are the strengths of the proposed approach, particularly in its ability to unify various cGAN methods?
3. What are the weaknesses of the paper, especially regarding its lack of novelty and experimental concerns?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

This paper introduced a general framework for conditional image generation, and existing cGAN methods such as ACGAN, ProjGAN, and ContraGAN are included in the proposed algorithm. Experimental results on CIFAR and Tiny ImageNet verified the effectiveness of the different components of the proposed method.

Review

This work presented a detailed discussion of the differences among existing conditional image generation methods. The unified view of cGANs is interesting. However, the authors did not propose new loss functions for cGANs.

Thanks for the author feedback. Some issues about the experimental evaluation have been addressed well. However, I still have concerns about the originality of this work, which is also indicated by Reviewer gvcB. In addition, the new results on ImageNet are also confusing and not satisfactory. According to the table below, your reimplementations of ACGAN, ProjGAN, and ContraGAN are better. Then the comparisons with BigGAN and ContraGAN based on StudioGAN are not convincing.
NIPS
Title A Unified View of cGANs with and without Classifiers

Abstract Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow sampling from class-conditional distributions. Existing cGANs are based on a wide range of different discriminator designs and training objectives. One popular design in earlier works is to include a classifier during training, with the assumption that good classifiers can help eliminate samples generated with wrong classes. Nevertheless, including classifiers in cGANs often comes with the side effect of only generating easy-to-classify samples. Recently, some representative cGANs avoid this shortcoming and reach state-of-the-art performance without having classifiers. It remains unanswered, however, whether classifiers can be resurrected to design better cGANs. In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs. We start by using the decomposition of the joint probability distribution to connect the goals of cGANs and classification as a unified framework. The framework, along with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximation, which provides a unified view and brings new insights to understanding cGANs. Experimental results demonstrate that the design inspired by the proposed framework outperforms state-of-the-art cGANs on multiple benchmark datasets, especially on the most challenging ImageNet. The code is available at https://github.com/sian-chen/PyTorch-ECGAN.

1 Introduction

Generative Adversarial Networks [GANs; 10] are a family of generative models trained from the duel of a generator and a discriminator. The generator aims to generate data from a target distribution, where the fidelity of the generated data is "screened" by the discriminator. Recent studies on the objectives [2, 37, 29, 25, 36, 26, 38], backbone architectures [41, 50], and regularization techniques [13, 35, 51] for GANs have achieved impressive progress on image generation, making GANs the state-of-the-art approach to generate high-fidelity and diverse images [3]. Conditional GANs (cGANs) extend GANs to generate data from class-conditional distributions [33, 39, 34, 16]. The capability of conditional generation extends the application horizon of GANs to conditional image generation based on labels [39] or texts [43], speech enhancement [32], and image style transformation [18, 53].

One representative cGAN is Auxiliary Classifier GAN [ACGAN; 39], which decomposes the conditional discriminator into a classifier and an unconditional discriminator. The generator of ACGAN is expected to generate images that convince the unconditional discriminator while being classified to the right class. The classifier plays a pivotal role in laying down the law of conditional generation for ACGAN, making it the very first cGAN that can learn to generate 1000 classes of ImageNet images [6]. That is, ACGAN used to be a leading cGAN design. While the classifier in ACGAN indeed improves the quality of conditional generation, deeper studies revealed that the classifier biases the generator toward generating easier-to-classify images [45], which in turn decreases the capability to match the target distribution.
Unlike ACGAN, most state-of-the-art cGANs are designed without a classifier. One representative cGAN without a classifier is Projection GAN [ProjGAN; 34], which learns an embedding for each class to form a projection-based conditional discriminator. ProjGAN not only generates higher-quality images than ACGAN, but also accurately generates images in the target classes without relying on an explicit classifier. In fact, it was found that ProjGAN usually cannot be further improved by adding a classification loss [34]. This finding, along with the success of ProjGAN and other cGANs without classifiers [15, 4], seems to suggest that including a classifier is not helpful for improving cGANs.

In this work, we challenge the belief that classifiers are not helpful for cGANs, with the conjecture that leveraging classifiers appropriately can benefit conditional generation. We propose a framework that pins down the roles of the classifier and the conditional discriminator by first decomposing the joint target distribution with the Bayes rule. We then model the conditional discriminator as an energy function, i.e., an unnormalized log probability. Under the energy function, we derive the corresponding optimization terms for the classifier and the conditional discriminator with the help of Fenchel duality to form the unified framework. The framework reveals that a joint generative model can be trained via two routes, from the aspect of the classifier and of the conditional discriminator, respectively. We name our framework Energy-based Conditional Generative Adversarial Networks (ECGAN); it not only justifies the use of classifiers for cGANs in a principled manner, but also explains several popular cGAN variants, such as ACGAN [39], ProjGAN [34], and ContraGAN [16], as special cases with different approximations. After properly combining the objectives from the two routes of the framework, we empirically find that ECGAN outperforms other cGAN variants across different backbone architectures on benchmark datasets, including the most challenging ImageNet.

We summarize the contributions of this paper as:
• We justify the principled use of classifiers for cGANs by decomposing the joint distribution.
• We propose a cGAN framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), which explains several popular cGAN variants in a unified view.
• We experimentally demonstrate that ECGAN consistently outperforms other state-of-the-art cGANs across different backbone architectures on benchmark datasets.

The paper is organized as follows. Section 2 derives the unified framework that establishes the role of classifiers for cGANs. The framework is used to explain ACGAN [39], ProjGAN [34], and ContraGAN [16] in Section 3. Then, we demonstrate the effectiveness of our framework by experiments in Section 4. We discuss related work in Section 5 before concluding in Section 6.

2 Method

Consider a K-class dataset (x, y) ∼ pd, where y ∈ {1, . . . , K} is the class of x and pd is the underlying data distribution. Our goal is to train a generator G to generate a sample G(z, y) following pd(x|y), where z is sampled from a known distribution such as N(0, 1). To solve this problem, a typical cGAN framework can be formulated by extending an unconditional GAN as:

max_D min_G Σ_y E_{pd(x|y)}[D(x, y)] − E_{p(z)}[D(G(z, y), y)]    (1)

where G is the generator and D is a discriminator that outputs higher values for real data. The choice of D leads to different types of GANs [10, 2, 29, 8].
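As a minimal sketch of how the min-max objective of Eq. (1) is estimated from minibatches (`D` and `G` are placeholder modules with assumed signatures, not the authors' code):

import torch

def d_objective(D, G, x_real, y, z):
    # max_D  E_{p_d(x|y)}[D(x, y)] - E_{p(z)}[D(G(z, y), y)], estimated on a batch.
    with torch.no_grad():
        x_fake = G(z, y)                 # freeze G while updating D
    return D(x_real, y).mean() - D(x_fake, y).mean()

def g_objective(D, G, y, z):
    # min_G  -E_{p(z)}[D(G(z, y), y)]: G tries to raise D's score on its samples.
    return -D(G(z, y), y).mean()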
At first glance, there is no classifier in Eq. (1). However, because of the success of leveraging label information via classification, it has been hypothesized that a better classifier can improve conditional generation [39]. Motivated by this, in this section we show how to bridge classifiers to cGANs by the Bayes rule and Fenchel duality.

2.1 Bridge Classifiers to Discriminators with Joint Distribution

A classifier, viewed from a probabilistic perspective, is a function that approximates pd(y|x), the probability that x belongs to class y. On the other hand, a conditional discriminator, telling whether x is real data in class y, can be viewed as a function that approximates pd(x|y). To connect pd(y|x) and pd(x|y), an important observation is through the joint probability:

log p(x, y) = log p(x|y) + log p(y)    (2)
            = log p(y|x) + log p(x).    (3)

The observation illustrates that we can approximate log p(x, y) in two directions: one containing p(x|y) for conditional discriminators, and one containing p(y|x) for classifiers. It reveals that, by sharing the parameterization, updating the parameters in one direction may optimize the other implicitly. Therefore, we link the classifier to the conditional discriminator by training both objectives jointly.

2.2 Learning Joint Distribution via Optimizing Conditional Discriminators

Since p(y) is usually known a priori (e.g., uniform) or can easily be estimated (e.g., by empirical counting), we focus on learning p(x|y) in Eq. (2). Specifically, since log p(x, y) ∈ R, we parameterize it via fθ(x), such as a neural network with K real-valued outputs, where exp(fθ(x)[y]) ∝ p(x, y). A similar parameterization is also used in exponential families [48] and energy-based models [23]. The log-likelihood log p(x|y) can then be modeled as:

log pθ(x|y) = log( exp(fθ(x)[y]) / Zy(θ) ) = fθ(x)[y] − log Zy(θ),    (4)

where Zy(θ) = ∫_{x′} exp(fθ(x′)[y]) dx′. Optimizing Eq. (4) is challenging because of the intractable partition function Zy(θ). Here we introduce the Fenchel duality [48] of the partition function Zy(θ):

log Zy(θ) = max_{qy} [ E_{qy(x)}[fθ(x)[y]] + H(qy) ]

where qy is a distribution of x conditioned on y and H(qy) = −E_{x∼qy(x)}[log qy(x)] is the entropy of qy. The derivation is provided in Appendix A. By the Fenchel duality, we obtain the maximum likelihood estimation of Eq. (4) as:

max_θ [ E_{pd(x,y)}[fθ(x)[y]] − max_{qy}[ E_{qy(x)}[fθ(x)[y]] + H(qy) ] ].    (5)

To approximate the solution of qy, in addition to density models, we can train an auxiliary generator qφ as in cGANs to estimate E_{qy(x)} via sampling. That is, we can sample x from qφ by x = qφ(z, y), where z ∼ N(0, 1). The objective (5) then becomes:

max_θ min_φ Σ_y E_{pd(x|y)}[fθ(x)[y]] − E_{p(z)}[fθ(qφ(z, y))[y]] − H(qφ(·, y)),    (6)

which is almost in the form of Eq. (1) except for the entropy H(qφ(·, y)). We leave the discussion of entropy estimation to Section 2.4. The loss functions for optimizing the objective without the entropy term can be formulated as:

Ld1(x, z, y; θ) = −fθ(x)[y] + fθ(qφ(z, y))[y]
Lg1(z, y; φ) = −fθ(qφ(z, y))[y]

2.3 Learning Joint Distributions via Optimizing Unconditional Discriminators & Classifiers

Following Eq. (3), we can approximate log p(x, y) by approximating log p(y|x) and log p(x). With our energy function fθ, pθ(y|x) can be formulated as:

pθ(y|x) = pθ(x, y) / pθ(x) = exp(fθ(x)[y]) / Σ_{y′} exp(fθ(x)[y′]),

which is equivalent to the y-th output of SOFTMAX(fθ(x)).
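Under the assumed K-logit parameterization (illustrative names, not the released code), the first-route losses Ld1 and Lg1 act only on the y-th logit, and the softmax of the same logits recovers pθ(y|x):

import torch

def pick(logits, y):
    # f_theta(x)[y] per sample; logits has shape (batch, K).
    return logits.gather(1, y.unsqueeze(1)).squeeze(1)

def loss_d1(logits_real, logits_fake, y):
    # L_d1 = -f(x)[y] + f(q_phi(z, y))[y]
    return (-pick(logits_real, y) + pick(logits_fake, y)).mean()

def loss_g1(logits_fake, y):
    # L_g1 = -f(q_phi(z, y))[y]
    return -pick(logits_fake, y).mean()

# The classifier of Section 2.3 reuses the same logits:
# p_theta(y|x) = softmax(f_theta(x))[y] = exp(f(x)[y]) / sum_y' exp(f(x)[y']).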
1. What is the focus of the paper regarding conditional GANs?
2. What are the strengths and weaknesses of the proposed ECGAN formulation compared to prior works?
3. How does the reviewer assess the clarity, quality, significance, and originality of the paper's content?
4. Does the reviewer have any concerns or questions regarding the comparisons made in the paper between ECGAN and other variations of conditional GANs?
5. Are there any suggestions for additional experiments or improvements to strengthen the paper's results?
Summary Of The Paper Review
Summary Of The Paper

This submission proposes to analyze the most popular variations of conditional GANs (ACGAN, ProjGAN, ContraGAN) under a unified, energy-based formulation (ECGAN). Specifically, ECGAN is composed of terms derived from two different decompositions of the joint probability p(x, y). The paper then links each term to one of the popular cGAN approaches. ECGAN is evaluated against the traditional models, and an extensive ablation study tests the impact of each component of the proposed model.

Review

Originality: The paper builds very heavily on existing work. The ECGAN formulations are only marginally different from previous works.

Quality: The submission seems technically sound, supported by an adequate theoretical framework and experiments. It is, however, difficult to draw conclusive results from only CIFAR-10 and Tiny ImageNet.

Clarity: The paper is clear and well organized. The association between the different cGANs and their corresponding ECGAN variants could be debatable, however (see questions below).

Significance: The proposed formulation provides a unified view of the different loss terms used in conditional GANs, which allows for a valuable principled evaluation. Moreover, considering that ECGAN-UC has very encouraging reported performance and that its implementation is only a slight departure from previous cGANs, it could become a staple for conditional GANs if future work confirms its good performance.

Overall, I believe the paper is interesting to read and of good quality. I still have some questions and remarks that I hope can be clarified:

Since ProjGAN sums both an unconditional and a conditional output, wouldn't it make more sense to compare it to ECGAN-U instead of ECGAN-0? Especially since the used hinge-loss implementation, described in Appendix A, sums the conditional and unconditional terms before the hinge operation, just like in ProjGAN. I would appreciate it if the authors could comment on and discuss the differences in model and performance between ProjGAN and ECGAN-U.

Similarly, ACGAN is compared to ECGAN-C in Section 3.2. But in Section 2.2, ACGAN is already viewed as an unnamed variant of ECGAN (without the ECGAN-0 term and with alpha=1 instead). Could the authors comment on why ACGAN is not directly considered a specific variant of ECGAN?

Considering both previous points, my opinion is that the comparisons in Section 3 seem forced and actually make the paper less clear.

Additional experiments on diverse datasets would strengthen the results, even if performed only with ECGAN-UC and a baseline.

Post-rebuttal: I'd like to thank the authors for their detailed responses and for providing additional results. After reading the different reviews and responses, I believe the proposed framework is promising and the comparison between the different variants can provide interesting insights for GANs. However, I also believe the paper needs to be very clear about exactly how the classical cGANs fit or don't fit into the ECGAN formulation. Without the ability to assess a revised version, I remain unconvinced that small modifications would be sufficient in that regard. I therefore keep my previous rating (6: Marginally above the acceptance threshold).
NIPS
Title A Unified View of cGANs with and without Classifiers Abstract Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow to sample from class-conditional distributions. Existing cGANs are based on a wide range of different discriminator designs and training objectives. One popular design in earlier works is to include a classifier during training with the assumption that good classifiers can help eliminate samples generated with wrong classes. Nevertheless, including classifiers in cGANs often comes with a side effect of only generating easy-to-classify samples. Recently, some representative cGANs avoid the shortcoming and reach state-of-the-art performance without having classifiers. Somehow it remains unanswered whether the classifiers can be resurrected to design better cGANs. In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs. We start by using the decomposition of the joint probability distribution to connect the goals of cGANs and classification as a unified framework. The framework, along with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximations, which provides a unified view and brings new insights to understanding cGANs. Experimental results demonstrate that the design inspired by the proposed framework outperforms state-of-the-art cGANs on multiple benchmark datasets, especially on the most challenging ImageNet. The code is available at https://github.com/sian-chen/PyTorch-ECGAN. 1 Introduction Generative Adversarial Networks [GANs; 10] is a family of generative models that are trained from the duel of a generator and a discriminator. The generator aims to generate data from a target distribution, where the fidelity of the generated data is “screened” by the discriminator. Recent studies on the objectives [2, 37, 29, 25, 36, 26, 38], backbone architectures [41, 50], and regularization techniques [13, 35, 51] for GANs have achieved impressive progress on image generation, making GANs the state-of-the-art approach to generate high fidelity and diverse images [3]. Conditional GANs (cGANs) extend GANs to generate data from class-conditional distributions [33, 39, 34, 16]. The capability of conditional generation extends the application horizon of GANs to conditional image generation based on labels [39] or texts [43], speech enhancement [32], and image style transformation [18, 53]. One representative cGAN is Auxiliary Classifier GAN [ACGAN; 39], which decomposes the conditional discriminator to a classifier and an unconditional discriminator. The generator of ACGAN is expected to generate images that convince the unconditional discriminator while being classified to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the right class. The classifier plays a pivotal role in laying down the law of conditional generation for ACGAN, making it the very first cGAN that can learn to generate 1000 classes of ImageNet images [6]. That is, ACGAN used to be a leading cGAN design. While the classifier in ACGAN indeed improves the quality of conditional generation, deeper studies revealed that the classifier biases the generator to generate easier-to-classify images [45], which in term decreases the capability to match the target distribution. 
Unlike ACGAN, most state-of-the-art cGANs are designed without a classifier. One representative cGAN without a classifier is Projection GAN [ProjGAN; 34], which learns an embedding for each class to form a projection-based conditional discriminator. ProjGAN not only generates higher-quality images than ACGAN, but also accurately generates images in target classes without relying on an explicit classifier. In fact, it was found that ProjGAN usually cannot be further improved by adding a classification loss [34]. The finding, along with the success of ProjGAN and other cGANs without classifiers [15, 4], seem to suggest that including a classifier is not helpful for improving cGANs. In this work, we challenge the belief that classifiers are not helpful for cGANs, with the conjecture that leveraging the classifiers appropriately can benefit conditional generation. We propose a framework that pins down the roles of the classifier and the conditional discriminator by first decomposing the joint target distribution with Bayes rule. We then model the conditional discriminator as an energy function, which is an unnormalized log probability. Under the energy function, we derive the corresponding optimization term for the classifier and the conditional discriminator with the help of Fenchel duality to form the unified framework. The framework reveals that a jointly generative model can be trained via two routes, from the aspect of the classifier and the conditional discriminator, respectively. We name our framework Energy-based Conditional Generative Adversarial Networks (ECGAN), which not only justifies the use of classifiers for cGANs in a principled manner, but also explains several popular cGAN variants, such as ACGAN [39], ProjGAN [34], and ContraGAN [16] as special cases with different approximations. After properly combining the objectives from the two routes of the framework, we empirically find that ECGAN outperforms other cGAN variants across different backbone architectures on benchmark datasets, including the most challenging ImageNet. We summarize the contributions of this paper as: • We justify the principled use of classifiers for cGANs by decomposing the joint distribution. • We propose a cGAN framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), which explains several popular cGAN variants in a unified view. • We experimentally demonstrate that ECGAN consistently outperforms other state-of-the-art cGANs across different backbone architectures on benchmark datasets. The paper is organized as follows. Section 2 derives the unified framework that establishes the role of the classifiers for cGANs. The framework is used to explain ACGAN [39], ProjGAN [34], and ContraGAN [16] in Section 3. Then, we demonstrate the effectiveness of our framework by experiments in Section 4. We discuss related work in Section 5 before concluding in Section 6. 2 Method Given a K-class dataset (x, y) ∼ pd, where y ∈ {1 . . .K} is the class of x and pd is the underlying data distribution. Our goal is to train a generator G to generate a sample G(z, y) following pd(x|y), where z is sampled from a known distribution such as N (0, 1). To solved the problem, a typical cGAN framework can be formulated by extending an unconditional GAN as: max D min G ∑ y E pd(x|y) D(x, y)− E p(z) D(G(z, y), y) (1) where G is the generator and D is a discriminator that outputs higher values for real data. The choice of D leads to different types of GANs [10, 2, 29, 8]. 
At first glance, there is no classifier in Eq. (1). However, because of the success of leveraging label information via classification, it is hypothesized that a better classifier can improve conditional generation [39]. Motivated by this, in this section, we show how we bridge classifiers to cGANs by Bayes rule and Fenchel duality. 2.1 Bridge Classifiers to Discriminators with Joint Distribution A classifier, when viewed from a probabilistic perspective, is a function that approximates pd(y|x), the probability that x belongs to class y. On the other hand, a conditional discriminator, telling whether x is real data in class y, can be viewed as a function approximate pd(x|y). To connect pd(y|x) and pd(x|y), an important observation is through the joint probability: log p(x, y) = log p(x|y) + log p(y) (2) = log p(y|x) + log p(x). (3) The observation illustrates that we can approximate log p(x, y) in two directions: one containing p(x|y) for conditional discriminators and one containing p(y|x) for classifiers. The finding reveals that by sharing the parameterization, updating the parameters in one direction may optimize the other implicitly. Therefore, we link the classifier to the conditional discriminator by training both objectives jointly. 2.2 Learning Joint Distribution via Optimizing Conditional Discriminators Since p(y) is usually known a priori (e.g., uniform) or able to easily estimated (e.g., empirical counting), we focus on learning p(x|y) in Eq.(2). Specifically, since log p(x, y) ∈ R, we parameterize it via fθ(x), such as a neural network with K real value outputs, where exp(fθ(x)[y]) ∝ p(x, y) . Similar parameterization is also used in exponential family [48] and energy based model [23]. Therefore, the log-likelihood log p(x|y) can be modeled as: log pθ(x|y) = log ( exp (fθ(x)[y]) Zy(θ) ) = fθ(x)[y]− logZy(θ), (4) where Zy(θ) = ∫ x′ exp (fθ(x ′)[y]) dx′. Optimizing Eq. (4) is challenging because of the intractable partition function Zy(θ). Here we introduce the Fenchel duality [48] of the partition function Zy(θ): logZy(θ) = max qy [ E qy(x) [fθ(x)[y]] +H(qy) ] where qy is a distribution of x conditioned on y and H(qy) = −Exf∼qy(x) [log qy(x)] is the entropy of qy. The derivation is provided in Appendix A. By the Fenchel duality, we obtain our maximum likelihood estimation in Eq. (4) as: max θ [ E pd(x,y) [fθ(x)[y]]−max qy [ E qy(x) [fθ(x)[y]] +H(qy) ]] . (5) To approximate the solution of qy , in additional to density models, we can train an auxiliary generator qφ as in cGANs to estimate Eqy(x) via sampling. That is, we can sample x from qφ by x = qφ(z, y), where z ∼ N (0, 1). The objective (5) then becomes: max θ min φ ∑ y E pd(x|y) [fθ(x)[y]]− E p(z) [fθ(qφ(z, y))[y]]−H(qφ(·, y)), (6) which is almost in the form of Eq (1) except the entropy H(qφ(·, y)). We leave the discussion about the entropy estimation in Section 2.4. Currently, the loss function to optimize the objective without the entropy can be formulated as: Ld1(x, z, y; θ) = −fθ(x)[y] + fθ(qφ(z))[y] Lg1(z, y;φ) = −fθ(qφ(z, y))[y] 2.3 Learning Joint Distributions via Optimizing Unconditional Discriminators & Classifiers Following Eq. (3), we can approximate log p(x, y) by approximating log p(y|x) and log p(x). With our energy function fθ, pθ(y|x) can be formulated as: pθ(y|x) = pθ(x, y) pθ(x) = exp(fθ(x)[y])∑ y′ exp(fθ(x)[y ′]) , which is equivalent to the y’th output of SOFTMAX(fθ(x)). 
2.3 Learning the Joint Distribution via Optimizing Unconditional Discriminators & Classifiers

Following Eq. (3), we can approximate log p(x, y) by approximating log p(y|x) and log p(x). With our energy function fθ, pθ(y|x) can be formulated as:

$$p_\theta(y|x) = \frac{p_\theta(x, y)}{p_\theta(x)} = \frac{\exp(f_\theta(x)[y])}{\sum_{y'} \exp(f_\theta(x)[y'])},$$

which is equivalent to the y-th output of SOFTMAX(fθ(x)). Therefore, we can maximize the log-likelihood of pθ(y|x) by considering fθ as a softmax classifier minimizing the cross-entropy loss:

$$\mathcal{L}_{clf}(x, y; \theta) = -\log\big(\mathrm{SOFTMAX}(f_\theta(x))[y]\big).$$

On the other hand, to maximize the log-likelihood of p(x), we introduce a reparameterization $h_\theta(x) = \log \sum_y \exp(f_\theta(x)[y])$:

$$\log p_\theta(x) = \log\Big(\sum_y p_\theta(x, y)\Big) = \log\left(\frac{\sum_y \exp(f_\theta(x)[y])}{\int_{x'} \sum_{y'} \exp(f_\theta(x')[y'])\, dx'}\right) = \log\left(\frac{\exp(h_\theta(x))}{\int_{x'} \exp(h_\theta(x'))\, dx'}\right) = h_\theta(x) - \log Z'(\theta), \quad (7)$$

where $Z'(\theta) = \int_x \exp(h_\theta(x)) \, dx$. Similar to Eq. (5), we can rewrite log Z′(θ) by its Fenchel duality:

$$\log Z'(\theta) = \max_q \left[ \mathbb{E}_{q(x)}[h_\theta(x)] + H(q) \right], \quad (8)$$

where q is a distribution of x and H(q) is the entropy of q. Combining Eq. (7) and Eq. (8) and reusing the generator in Section 2.2, we obtain the optimization problem:

$$\max_\theta \min_\phi \mathbb{E}_{p_d(x,y)}[h_\theta(x)] - \mathbb{E}_{p(z)}[h_\theta(q_\phi(z, y))] - H(q_\phi). \quad (9)$$

Similar to Eq. (6), the objective of the unconditional discriminator is equivalent to that of a typical GAN augmented with an entropy term. The loss functions without the entropy can be formulated as:

$$\mathcal{L}_{d_2}(x, z, y; \theta) = -h_\theta(x) + h_\theta(q_\phi(z, y))$$
$$\mathcal{L}_{g_2}(z, y; \phi) = -h_\theta(q_\phi(z, y))$$

2.4 Entropy Approximation in cGANs

In Section 2.2 and Section 2.3, we propose two approaches to train cGANs with and without classification. The unsolved problems in Eq. (6) and Eq. (9) are the entropy terms H(q_φ(·, y)) and H(q_φ). In previous work, various estimators have been proposed to estimate entropy or its gradient [46, 42, 21, 27]. One can freely choose any approach to estimate the entropy in the proposed framework. In this work, we consider two entropy estimators, and we show how they connect with existing cGANs.

The first approach is the naive constant approximation. Since entropy is always non-negative, we naturally have the constant zero as a lower bound. Therefore, we can maximize the objective by replacing the entropy term with its lower bound, which is zero in this case. This approach is simple, but we will show its effectiveness in Section 4 and how it links our framework to ProjGAN and ContraGAN in Section 3.

The second approach is estimating a variational lower bound. Informally, given a batch of data {(x_1, y_1), . . . , (x_m, y_m)}, an encoder function l, and a class embedding function e(y), the negative 2C loss used in ContraGAN [16],

$$\mathcal{L}_C(x_i, y_i; t) = \log\left(\frac{d(l(x_i), e(y_i)) + \sum_{k=1}^m [\![y_k = y_i]\!]\, d(l(x_i), l(x_k))}{d(l(x_i), e(y_i)) + \sum_{k=1}^m [\![k \neq i]\!]\, d(l(x_i), l(x_k))}\right), \quad (10)$$

is an empirical estimate of a proper lower bound of H(X) [40], where $d(a, b) = \exp(a^\top b / t)$ is a similarity function with a temperature t. We provide the proof in Appendix B.

The 2C loss heavily relies on the embeddings l(x) and e(y). Although we only need to estimate the entropy of generated data in Eq. (6) and Eq. (9), we still rely on real data to learn the embeddings in practice. Therefore, the loss functions of Eq. (6) can be written as:

$$\mathcal{L}_{D_1}(x, z, y; \theta) = \mathcal{L}_{d_1}(x, z, y; \theta) + \lambda_c \mathcal{L}_C^{real}$$
$$\mathcal{L}_{G_1}(z, y; \phi) = \mathcal{L}_{g_1}(z, y; \phi) + \lambda_c \mathcal{L}_C^{fake},$$

where λ_c is a hyperparameter controlling the weight of the contrastive loss, and $\mathcal{L}_C^{real}$, $\mathcal{L}_C^{fake}$ are the contrastive losses calculated on a batch of real data and generated data respectively. Similarly, the loss functions of Eq. (9) become:

$$\mathcal{L}_{D_2}(x, z, y; \theta) = \mathcal{L}_{d_2}(x, z, y; \theta) + \lambda_c \mathcal{L}_C^{real}$$
$$\mathcal{L}_{G_2}(z, y; \phi) = \mathcal{L}_{g_2}(z, y; \phi) + \lambda_c \mathcal{L}_C^{fake}.$$

The introduction of the 2C loss allows us to accommodate ContraGAN into our framework.
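A sketch of the route-2 pieces, assuming the same K-output `f_theta`: `h_theta` is the logsumexp reparameterization from Eq. (7), and the classifier loss is the standard cross-entropy on the K energies; module names are placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

# Route 2: h_theta(x) = logsumexp_y f_theta(x)[y] acts as an unconditional
# discriminator, and the same K outputs double as a softmax classifier.

def h_theta(f_theta, x):
    return torch.logsumexp(f_theta(x), dim=1)

def loss_d2(f_theta, q_phi, x, y, z):
    x_fake = q_phi(z, y).detach()
    return (-h_theta(f_theta, x) + h_theta(f_theta, x_fake)).mean()

def loss_g2(f_theta, q_phi, y, z):
    return -h_theta(f_theta, q_phi(z, y)).mean()

def loss_clf(f_theta, x, y):
    # -log SOFTMAX(f_theta(x))[y], i.e., the cross-entropy loss
    return F.cross_entropy(f_theta(x), y)
```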
2.5 Energy-based Conditional Generative Adversarial Network

Previous work has shown that multitask training benefits representation learning [30] and that training discriminative and generative models jointly outperforms their purely generative or purely discriminative counterparts [11, 28]. Therefore, we propose a framework named Energy-based Conditional Generative Adversarial Network (ECGAN), which combines the two approaches in Section 2.2 and Section 2.3 to learn the joint distribution better. The loss functions can be summarized as:

$$\mathcal{L}_D(x, z, y; \theta) = \mathcal{L}_{d_1}(x, z, y; \theta) + \alpha \mathcal{L}_{d_2}(x, z, y; \theta) + \lambda_c \mathcal{L}_C^{real} + \lambda_{clf} \mathcal{L}_{clf}(x, y; \theta) \quad (11)$$
$$\mathcal{L}_G(z, y; \phi) = \mathcal{L}_{g_1}(z, y; \phi) + \alpha \mathcal{L}_{g_2}(z, y; \phi) + \lambda_c \mathcal{L}_C^{fake} \quad (12)$$

where α is a weight parameter for the unconditional GAN loss. The discriminator's design is illustrated in Fig 1. Here we discuss the intuition behind each component in Eq. (11). $\mathcal{L}_{d_1}$ is the loss for the conditional discriminator: it updates the y-th output when given a data pair (x, y). $\mathcal{L}_{d_2}$ corresponds to an unconditional discriminator: it updates all outputs according to whether x is real. $\mathcal{L}_{clf}$ learns a classifier: it increases the y-th output and decreases the other outputs for data belonging to class y. Finally, $\mathcal{L}_C^{real}$ and $\mathcal{L}_C^{fake}$ improve the latent embeddings by pulling the embeddings of data with the same class closer.

Above, we derived the loss functions $\mathcal{L}_{d_1}$ and $\mathcal{L}_{d_2}$ in the form of the Wasserstein GAN loss [2]. In practice, we use the hinge loss as proposed in Geometric GAN [26] for better stability and convergence. We use the following combination of $\mathcal{L}_{d_1}$ and $\mathcal{L}_{d_2}$:

$$\mathrm{Hinge}\big(f_\theta(x_{real})[y] + \alpha \cdot h_\theta(x_{real}),\ f_\theta(x_{fake})[y] + \alpha \cdot h_\theta(x_{fake})\big). \quad (13)$$

For more discussion of the implementation of the hinge loss, please see Appendix C. The overall training procedure of ECGAN is presented in Appendix E.

3 Accommodation to Existing cGANs

In this section, we show that our framework covers several representative cGAN algorithms, including ACGAN [39], ProjGAN [34], and ContraGAN [16]. Through the ECGAN framework, we obtain a unified view of cGANs, which allows us to fairly compare and understand the pros and cons of existing cGANs. We name the ECGAN counterparts ECGAN-0, ECGAN-C, and ECGAN-E, corresponding to ProjGAN, ACGAN, and ContraGAN, respectively. We summarize the settings in Table 1 and illustrate the discriminator designs in Appendix F.

3.1 ProjGAN

ProjGAN [34] is the most representative cGAN design and is commonly used in state-of-the-art research [3, 50]. Let the output of the penultimate layer in the discriminator be g(x). The output of ProjGAN's discriminator is:

$$D(x, y) = w_u^\top g(x) + b_u + w_y^\top g(x) = (w_u + w_y)^\top g(x) + b_u \quad (14)$$

where w_u, b_u are the parameters of the unconditional linear layer, and w_y is the class embedding of y. On the other hand, the output of a discriminator in ECGAN is:

$$D(x, y) = f(x)[y] = (W^\top g(x) + b)[y] = w_y^\top g(x) + b_y \quad (15)$$

where W, b are the parameters of the linear output layer in fθ. As shown in Eq. (14) and Eq. (15), the architectures of ProjGAN and ECGAN are almost equivalent. In addition, the loss functions of ProjGAN can be formulated as:

$$\mathcal{L}_G = -D(G(z), y)$$
$$\mathcal{L}_D = -D(x, y) + D(G(z), y),$$

which is a special case of ECGAN with α = λ_c = λ_clf = 0. We name this case ECGAN-0, the simplest version of ECGAN. Compared with ProjGAN, ECGAN-0 has an additional bias term for the output of each class.
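To make the architectural difference in Eq. (14) and Eq. (15) concrete, here is a sketch of the two discriminator heads operating on penultimate features g(x); class and layer names are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn as nn

class ProjGANHead(nn.Module):
    """D(x, y) = (w_u + w_y)^T g(x) + b_u, as in Eq. (14)."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.linear = nn.Linear(dim, 1)                # w_u, b_u
        self.embed = nn.Embedding(num_classes, dim)    # class embeddings w_y
    def forward(self, g, y):
        return self.linear(g).squeeze(1) + (self.embed(y) * g).sum(1)

class ECGANHead(nn.Module):
    """D(x, y) = f(x)[y] = w_y^T g(x) + b_y, as in Eq. (15): a K-output
    linear layer whose per-class bias b_y is the extra term over ProjGAN."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.linear = nn.Linear(dim, num_classes)      # W, b
    def forward(self, g, y):
        return self.linear(g).gather(1, y.view(-1, 1)).squeeze(1)
```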
3.2 ACGAN

ACGAN [39] is the most well-known cGAN algorithm that leverages a classifier to achieve conditional generation. Given a K-class dataset, the discriminator of ACGAN is parameterized by a network with K + 1 outputs. The first output, denoted as D(x), is an unconditional discriminator distinguishing between real and fake images. The remaining K outputs, denoted as C(x), form a classifier that predicts logits for every class. The loss functions of ACGAN can be formulated as:

$$\mathcal{L}_G = -D(G(z)) + \lambda_g \mathcal{L}_{clf}(G(z), y; C)$$
$$\mathcal{L}_D = -D(x) + D(G(z)) + \lambda_d\big(\mathcal{L}_{clf}(x, y; C) + \mathcal{L}_{clf}(G(z), y; C)\big)$$

where G is the generator, and λ_g and λ_d are hyperparameters controlling the weight of the cross-entropy loss. The formulation of ACGAN is similar to our ECGAN with α = λ_c = 0 and λ_clf > 0. We call this special case ECGAN-C, with the suffix 'C' for classification loss. ECGAN-C uses a conditional discriminator that plays the role of a classifier at the same time. Hence the generator in ECGAN-C learns from the conditional discriminator rather than from the cross-entropy loss, which is biased for generative objectives.

3.3 ContraGAN

ContraGAN [16] proposed the 2C loss, which we mentioned in Eq. (10), to capture the data-to-data and data-to-label relationships. The 2C loss is applied in both the discriminator and the generator to achieve conditional generation. That is:

$$\mathcal{L}_G = -D(G(z), y) + \lambda_c \mathcal{L}_C^{fake}$$
$$\mathcal{L}_D = -D(x, y) + D(G(z), y) + \lambda_c \mathcal{L}_C^{real}$$

The loss functions are similar to the ones in ECGAN with α = λ_clf = 0 and λ_c > 0. We call this case ECGAN-E, where 'E' stands for entropy estimation. The main difference between ContraGAN and ECGAN-E is the output layer of their discriminators. While ContraGAN uses a single-output network, ECGAN uses a K-output network fθ, which has higher capacity.

We keep Eq. (11) and Eq. (12) as simple as possible to reduce the burden of hyperparameter tuning. Under these simple equations, ECGAN-C and ECGAN-E are the closest counterparts to ACGAN and ContraGAN. The subtle difference (in addition to the underlying network architecture) is that ACGAN uses $\mathcal{L}_{d_2}$ instead of $\mathcal{L}_{d_1}$ (ECGAN-C), and ContraGAN uses $\mathcal{L}_{d_2}, \mathcal{L}_{g_2}$ instead of $\mathcal{L}_{d_1}, \mathcal{L}_{g_1}$ (ECGAN-E). One future direction is to introduce more hyperparameters in Eq. (11) and Eq. (12) to obtain closer counterparts.

4 Experiment

We conduct our experiments on CIFAR-10 [20] and Tiny ImageNet [22] for analysis, and on ImageNet [6] for a large-scale empirical study. Table 2 shows the statistics of the datasets. All datasets are publicly available for research use. They were not constructed for human-related study, and we do not use any personal information from the datasets in our experiments. We use two common metrics, Fréchet Inception Distance [FID; 14] and Inception Score [IS; 44], to evaluate generation quality and diversity. In addition, we use Intra-FID, the average FID over classes, to evaluate the performance of conditional generation.

4.1 Experimental Setup

We use StudioGAN¹ [16] to conduct our experiments. StudioGAN is a PyTorch-based project distributed under the MIT license that provides implementations and benchmarks of several popular GAN architectures and techniques. To provide a reliable evaluation, we conduct experiments on CIFAR-10 and Tiny ImageNet with 4 different random seeds and report the means and standard deviations for each metric. We evaluate the model with the lowest FID for each trial. The default backbone architecture is BigGAN [3]. We fix the learning rates for generators and discriminators to 0.0001 and 0.0004, respectively, and tune λ_clf in {1, 0.1, 0.05, 0.01}.

¹ https://github.com/POSTECH-CVLab/PyTorch-StudioGAN
We follow the setting λ_c = 1 in [16] when using the 2C loss, and set α = 1 when applying the unconditional GAN loss. The experiments take 1-2 days on single-GPU (NVIDIA Tesla V100) machines for CIFAR-10 and Tiny ImageNet, and 6 days on 8-GPU machines for ImageNet. More details are described in Appendix D.

4.2 Ablation Study

We start our empirical studies by investigating the effectiveness of each component in ECGAN. We use the symbol 'U' to represent the unconditional GAN loss, 'C' the classification loss, and 'E' the entropy estimation loss, which is the 2C loss in our implementation. The concatenation of the symbols indicates the combination of losses. For example, ECGAN-UC means ECGAN with both the unconditional GAN loss and the classification loss (α > 0 and λ_clf > 0). Table 3 shows the results of ECGAN from the simplest ECGAN-0 to the most complicated ECGAN-UCE.

On CIFAR-10, ECGAN-0 already achieves decent results. Adding the unconditional loss, classification loss, or contrastive loss provides slightly better or on-par performance. On the harder Tiny ImageNet, the benefit of the unconditional loss and the classification loss becomes more significant. While ECGAN-U already shows an advantage over ECGAN-0, adding the classification loss to ECGAN-U further improves all metrics considerably. We also observe that directly adding the classification loss alone is not sufficient to improve a cGAN, which is consistent with the finding in [34]. This fact reveals that the unconditional GAN loss is a crucial component for bridging classifiers and discriminators in cGANs.

We also find that adding the contrastive loss does not improve ECGAN-UC. An explanation is that the entropy estimation lower bound provided by the contrastive loss is too loose to benefit training. Furthermore, the additional parameters introduced by the 2C loss make the optimization problem more complicated. As a result, we use the combination ECGAN-UC as the default option of ECGAN in the following experiments.

4.3 Comparison with Existing cGANs

We compare ECGAN with several representative cGANs, including ACGAN [39], ProjGAN [34], and ContraGAN [16], on three representative backbone architectures: DCGAN [41], ResNet [13], and BigGAN [3]. Table 4 compares the results of each combination of cGAN algorithm and backbone architecture. The results show that ECGAN-UC outperforms other cGANs significantly with all backbone architectures on both CIFAR-10 and Tiny ImageNet. We also notice that ContraGAN, though it achieves decent image quality and diversity, learns a conditional generator that interchanges some classes while generating, and hence has a poor Intra-FID. Overall, the experiments indicate that ECGAN-UC can be a preferred choice for cGANs in general situations.

4.4 Comparisons between Existing cGANs and their ECGAN Counterparts

Table 5 compares ProjGAN, ContraGAN, and ACGAN with their ECGAN counterparts. As described in Section 3, each of these representative cGANs can be viewed as a special case under our ECGAN framework. As mentioned in Section 3, ECGAN-0 has additional bias terms in the output layer compared to ProjGAN. The results in Table 5 show that this subtle difference still brings significant improvements in generation quality, especially on the harder Tiny ImageNet. Compared to ContraGAN, ECGAN-E has the same loss but a different design of the discriminator's output layer. While the discriminator of ContraGAN has only a single output, ECGAN-E has one output for every class.
This difference allows ECGAN-E to solve the label mismatching problem of ContraGAN mentioned in Section 4.3 and benefits generation on CIFAR-10, but it does not work well on Tiny ImageNet. This is probably because of the scarcity of training data per class in Tiny ImageNet: only 50 samples are available for updating the parameters corresponding to each class. Last, we compare ECGAN-C to ACGAN. Both of them optimize a GAN loss and a classification loss. However, ECGAN-C combines the discriminator and the classifier, so the generator can directly optimize the cGAN loss rather than the classification loss. As a result, ECGAN-C demonstrates better performance on both CIFAR-10 and Tiny ImageNet. In sum, the comparisons show that through the unified view provided by ECGAN, we can improve existing methods with minimal modifications.

4.5 Evaluation on ImageNet

We compare our ECGAN-UC and ECGAN-UCE with BigGAN [3] and ContraGAN [16] on ImageNet. We follow all configurations of BigGAN with batch size 256 in StudioGAN. The numbers in Table 6 are reported after 200,000 training steps unless otherwise specified. The results show that ECGAN-UCE outperforms the other cGANs dramatically. The comparison between ECGAN-UC and ECGAN-UCE indicates that the 2C loss brings a more significant improvement in the ECGAN framework than in ContraGAN. The proposed ECGAN-UCE achieves an 8.49 FID and an 80.69 Inception Score. To the best of our knowledge, this is a state-of-the-art result for GANs with batch size 256 on ImageNet. Selected generated images are shown in Appendix G.

5 Related Work

The development of cGANs started from feeding label embeddings to the inputs of GANs or to feature vectors at intermediate layers [33, 7]. To improve generation quality, ACGAN [39] proposed leveraging classifiers and successfully generated high-resolution images. The use of classifiers in GANs is also studied in Triple GAN [24] for semi-supervised learning and Triangle GAN [9] for cross-domain distribution matching. However, Shu [45] and Miyato and Koyama [34] pointed out that the auxiliary classifier in ACGAN misleads the generator into generating images that are easier to classify. Thus, whether classifiers can help conditional generation remained questionable.

In this work, we connect cGANs with and without classifiers via an energy-model parameterization from the joint probability perspective. [12] use similar ideas but focus on sampling from the trained classifier via Markov chain Monte Carlo [MCMC; 1]. Our work is also similar to a concurrent work [11], which improves [12] by introducing Fenchel duality to replace the computationally intensive MCMC. They use a variational approach [19] to formulate the objective for tractable entropy estimation. In contrast, we study the GAN perspective and estimate the entropy via contrastive learning. Therefore, the proposed ECGAN can be treated as a complement to [12, 11] by studying the GAN perspective. We note that the studied cGAN approaches also result in better generation quality than their variational alternative [11]. Last, [5] study the connection between the exponential family and unconditional GANs. Different from [5], we study conditional GANs with a focus on providing a unified view of common cGANs and insight into the role of classifiers in cGANs.

6 Conclusion

In this work, we present a general framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), to train cGANs with classifiers.
With the framework, we can explain representative cGANs, including ACGAN, ProjGAN, and ContraGAN, in a unified view. The experiments demonstrate that ECGAN outperforms state-of-the-art cGANs on benchmark datasets, especially on the most challenging ImageNet. Further investigation can be conducted to find a better entropy approximation or to improve cGANs with advanced classifier techniques. We hope this work can pave the way to more advanced cGAN algorithms in the future.

7 Limitations and Potential Negative Impacts

There are two main limitations in the current study. One is the investigation on ImageNet. Ideally, more experiments and analysis on ImageNet could further strengthen the contribution, but training on such a large dataset is barely affordable with our computational resources, so we can only rely on the conclusive findings in the current results. The other limitation is whether metrics such as FID truly reflect generation quality, but this is considered an open problem in the community anyway.

As with any work on generative models, there is a potential risk of the proposed model being misused to create malicious content, much as other technologies can be misused, for instance, to forge bills. In this sense, more anti-forgery methods will be needed to mitigate such misuse in the future.

Acknowledgement

We thank the anonymous reviewers for valuable suggestions. This work is partially supported by the Ministry of Science and Technology of Taiwan via the grants MOST 107-2628-E-002-008-MY3 and 110-2628-E-002-013. We also thank the National Center for High-performance Computing (NCHC) of National Applied Research Laboratories (NARLabs) in Taiwan for providing computational resources.
1. What is the main contribution of the paper in improving the performance of cGANs?
2. What are the strengths of the proposed Energy-based cGAN (ECGAN) method?
3. Do you have any concerns regarding the additional cross-entropy loss in the ECGAN method?
4. How does the reviewer assess the clarity and explanation of the general architecture of the method?
5. What are the weaknesses of the paper regarding the linking of classifiers and conditional discriminators?
6. Would it be beneficial to include a training stability analysis and visualization of generated images across different cGAN variants?
7. How does the computational cost of the ECGAN method compare to other methods like ContraGAN and ProjGAN?
8. Should the authors experiment with more challenging datasets to demonstrate consistency in their findings?
Summary Of The Paper Review
Summary Of The Paper

The authors discuss that existing methods of including classifiers in a cGAN bias the generator toward generating easy-to-classify images. Therefore, they propose a way to include classifiers in a cGAN that improves its performance in a principled manner. To do so, they decompose the joint probability distribution with Bayes rule, which links classifiers to conditional discriminators. The proposed formulation shows that a joint generative model can be trained from two directions: a conditional discriminator, and an unconditional discriminator with a classifier. They combine the formulations of these two routes and propose a new method called Energy-based cGAN (ECGAN). ECGAN shows how to use a classifier for cGANs, and it explains other variants of cGANs such as ContraGAN, ACGAN, and ProjGAN. They empirically show that ECGAN outperforms existing cGANs by achieving better (or comparable) FID scores on two datasets (CIFAR-10, Tiny ImageNet).

Review

Positives
• The paper's motivation and the overall idea are interesting.
• Principled way to use classifiers in a conditional GAN. The authors present a new view to explore cGANs and use classifiers in a way that is beneficial to cGANs.
• The presented framework covers and explains other variants of cGANs (ContraGAN, ProjGAN, ACGAN).
• Experimental results confirm previous findings, such as that adding a classifier directly to ProjGAN does not result in improvements.
• Based on the experiments, the ECGAN method achieves higher scores compared to the existing methods.
• The experimental findings are consistent across different GAN architectures.

Negatives
• The addition of the cross-entropy loss to equation (11) appears ad hoc, done to cover ACGAN as one of the variants of ECGAN.
• The authors may add more explanation of the general architecture of the method. Are there two discriminators with a classifier and a generator? There is some confusion regarding whether the classifier and the conditional discriminator are the same neural network or separate ones.
• The authors explain how they link conditional discriminators and classifiers. It is, however, unclear why the loss components are helping: what kinds of behaviors are promoted by the loss components?
• It would be beneficial if the authors also presented a training stability analysis (similar to figure 3 of ContraGAN [1]). This analysis could reveal how the formulation and losses are helpful.
• It would be great if the authors could visualize and compare the diversity of generated images across different existing cGANs (ContraGAN, ProjGAN, ACGAN).
• What is the computational cost of this method compared to ContraGAN and ProjGAN?
• Since the results between the various approaches are not significantly different for one of the 2 datasets used (CIFAR-10), the authors could experiment with more challenging datasets (e.g., ImageNet) to demonstrate that their findings are consistent.

[1] Kang, Minguk, and Jaesik Park. "ContraGAN: Contrastive learning for conditional image generation." arXiv preprint arXiv:2006.12681 (2020).
NIPS
Title: Compact Generalized Non-local Network

Abstract

The non-local module [27] is designed for capturing long-range spatio-temporal dependencies in images and videos. Although it has shown excellent performance, it lacks the mechanism to model the interactions between positions across channels, which are of vital importance in recognizing fine-grained objects and actions. To address this limitation, we generalize the non-local module and take the correlations between the positions of any two channels into account. This extension uses a compact representation for multiple kernel functions with Taylor expansion, which gives the generalized non-local module a fast and low-complexity computation flow. Moreover, we implement our generalized non-local method within channel groups to ease the optimization. Experimental results illustrate the clear-cut improvements and practical applicability of the generalized non-local module on both fine-grained object recognition and video classification. Code is available at: https://github.com/KaiyuYue/cgnl-network.pytorch.

1 Introduction

Capturing spatio-temporal dependencies between spatial pixels or temporal frames plays a key role in the tasks of fine-grained object and action classification. Modeling such interactions in images and videos is the major topic of various feature extraction techniques, including SIFT, LBP, Dense Trajectory [26], etc. In the past few years, deep neural networks have automated the feature design pipeline by stacking multiple end-to-end convolutional or recurrent modules, each of which processes correlations within spatially or temporally local regions. In general, capturing the long-range dependencies in images or videos still requires stacking many of these modules, which greatly hinders learning and inference efficiency. A recent work [16] also suggests that stacking more layers cannot always increase the effective receptive field enough to capture sufficient local relations.

Inspired by the classical non-local means for image filtering, the recently proposed non-local neural network [27] addresses this challenge by directly modeling the correlation between any two positions in the feature maps in a single module. Without bells and whistles, the non-local method can greatly improve the performance of existing networks on many video classification benchmarks. Despite its great performance, the original non-local network only considers the global spatio-temporal correlation by merging channels, and it might miss the subtle but important cross-channel clues for discriminating fine-grained objects or actions. For instance, the body, the ball, and their interaction are all necessary for describing the action of kicking the ball in Fig. 1, while the original non-local operation learns to focus on the body-part relations but neglects the body-ball interactions that usually correspond to different channels of the input features.

To improve the effectiveness in fine-grained object and action recognition tasks, this work extends the non-local module by learning explicit correlations among all of the elements across the channels. First, this extension scales up the representation power of the non-local operation to attend to the interactions between subtle object parts (e.g., the body and ball in Fig. 1). Second, we propose its compact representation for various kernel functions to address the high computation burden.
We show that as a self-contained module, the compact generalized non-local (CGNL) module provides steady improvements in classification tasks. Third, we also investigate grouped CGNL blocks, which model the correlations across channels within each group.

We evaluate the proposed CGNL method on the tasks of fine-grained classification and action recognition. Extensive experimental results show that: 1) the CGNL network is as easy to optimize as the original non-local network; 2) compared with the non-local module, the CGNL module captures richer features and denser clues for prediction, as shown in Figure 1, which leads to results substantially better than those of the original non-local module. Moreover, in the appendix of additional experiments, the CGNL network also achieves higher accuracy than the baseline on the large-scale ImageNet dataset [20].

2 Related Works

Channel Correlations: The mechanism of sharing the same conv kernel among channels of a layer in a ConvNet [12] can be seen as a basic way to capture correlations among channels, which aggregates the channels of feature maps by sum pooling. The SENet [10] may be the first work that explicitly models the interdependencies between the channels of its spatial features. It aims to select the useful feature maps and suppress the others, and it only considers the global information of each channel. Inspired by [27], we present the generalized non-local (GNL) module, which generalizes the non-local (NL) module to learn the correlations between any two positions across the channels. Compared to the SENet, we model the interdependencies among channels in an explicit and dense manner.

Compact Representation: After further investigation, we find that the non-local module contains a second-order feature space (Sect. 3.1), which has been used widely in previous computer vision tasks, e.g., SIFT [15], Fisher encoding [17], bilinear models [14][5], and segmentation [2]. However, such a second-order feature space involves high dimensions and heavy computational burdens. In the area of kernel learning [21], there are prior works such as compact bilinear pooling (CBP) [5], which uses Tensor Sketching [18] to address this problem. But this type of method is not perfect yet, because it cannot keep the computation light for various sizes of the sketching vectors. Fortunately, in mathematics, the whole non-local operation can be viewed as a trilinear form, which can be computed quickly with the associative law of matrix multiplication. For other types of pairwise functions, such as embedded Gaussian or RBF [19], we propose a tight approximation using the Taylor expansion.

3 Approach

In this section, we introduce a general formulation of the proposed generalized non-local operation. We then show that the original non-local operation and bilinear pooling are special cases of this formulation. After that, we illustrate that the generalized non-local operation can be seen as a form of trilinear matrix multiplication and show how to implement our generalized non-local (GNL) module in a compact representation.

3.1 Review of Non-local Operation

We begin by briefly reviewing the original non-local operation [27] in matrix form. Suppose that an image or video is given to the network, and let X ∈ R^{N×C} denote (see notation¹) the input feature map of the non-local module, where C is the number of channels.
For the sake of notational clarity, we collapse all the spatial (width W and height H) and temporal (video length T) positions into one dimension, i.e., N = HW or N = HWT. To capture long-range dependencies across the whole feature map, the original non-local operation computes the response Y ∈ R^{N×C} as the weighted sum of the features at all positions,

$$Y = f\big(\theta(X), \phi(X)\big)\, g(X), \quad (1)$$

where θ(·), φ(·), g(·) are learnable transformations of the input. In [27], the authors suggest using 1×1 or 1×1×1 convolutions for simplicity, i.e., the transformations can be written as

$$\theta(X) = XW_\theta \in \mathbb{R}^{N\times C}, \quad \phi(X) = XW_\phi \in \mathbb{R}^{N\times C}, \quad g(X) = XW_g \in \mathbb{R}^{N\times C}, \quad (2)$$

parameterized by the weight matrices $W_\theta, W_\phi, W_g \in \mathbb{R}^{C\times C}$ respectively. The pairwise function $f(\cdot,\cdot): \mathbb{R}^{N\times C} \times \mathbb{R}^{N\times C} \to \mathbb{R}^{N\times N}$ computes the affinity between all positions (space or space-time). There are multiple choices for f, among which the dot product is perhaps the simplest one, i.e.,

$$f\big(\theta(X), \phi(X)\big) = \theta(X)\phi(X)^\top. \quad (3)$$

Plugging Eq. 2 and Eq. 3 into Eq. 1 yields a trilinear interpretation of the non-local operation,

$$Y = XW_\theta W_\phi^\top X^\top X W_g, \quad (4)$$

where the pairwise matrix $XW_\theta W_\phi^\top X^\top \in \mathbb{R}^{N\times N}$ encodes the similarity between any locations of the input feature. The effect of the non-local operation can be related to the self-attention module [1], based on the fact that each position (row) in the result Y is a linear combination of all the positions (rows) of $XW_g$, weighted by the corresponding row of the pairwise matrix.

3.2 Review of Bilinear Pooling

Analogous to the conventional kernel trick [21], the idea of bilinear pooling [14] has recently been adopted in ConvNets for enhancing the feature representation in various tasks, such as fine-grained classification, person re-id, and action recognition. At a glance, bilinear pooling models pairwise feature interactions using an explicit outer product at the final classification layer:

$$Z = X^\top X \in \mathbb{R}^{C\times C}, \quad (5)$$

where X ∈ R^{N×C} is the input feature map generated by the last convolutional layer. Each element of the final descriptor, $z_{c_1 c_2} = \sum_n x_{n c_1} x_{n c_2}$, sum-pools over the locations n = 1, ..., N the bilinear product $x_{n c_1} x_{n c_2}$ of the corresponding channel pair $c_1, c_2 = 1, \dots, C$. Despite the distinct design motivation, it is interesting to see that bilinear pooling (Eq. 5) can be viewed as a special case of the second-order term (Eq. 3) in the non-local operation if we consider

$$\theta(X) = X^\top \in \mathbb{R}^{C\times N}, \quad \phi(X) = X^\top \in \mathbb{R}^{C\times N}. \quad (6)$$

3.3 Generalized Non-local Operation

The original non-local operation aims to directly capture long-range dependencies between any two positions in one layer. However, such dependencies are encoded in a joint location-wise matrix f(θ(X), φ(X)) by aggregating all channel information together. On the other hand, channel-wise correlation has recently been explored in both discriminative [14] and generative [24] models through covariance analysis across channels. Inspired by these works, we generalize the original non-local operation to model long-range dependencies between any positions of any channels.

¹ Bold capital letters denote a matrix X, bold lower-case letters a column vector x. x_i represents the ith column of the matrix X. x_ij denotes the scalar in the ith row and jth column of the matrix X. All non-bold letters represent scalars. 1_m ∈ R^m is a vector of ones. I_n ∈ R^{n×n} is an identity matrix. vec(X) denotes the vectorization of matrix X. X ◦ Y and X ⊗ Y are the Hadamard and Kronecker products of matrices.
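Before generalizing, the original dot-product non-local operation (Eq. 1-4) can be sketched in a few lines on a flattened feature map; this is a minimal illustration on matrices, not the authors' released implementation.

```python
import torch

# Original non-local operation with the dot-product kernel (Eq. 1-4):
# X is (N, C) with positions flattened; W_theta, W_phi, W_g are (C, C).

def non_local(X, W_theta, W_phi, W_g):
    theta, phi, g = X @ W_theta, X @ W_phi, X @ W_g   # each (N, C), Eq. (2)
    affinity = theta @ phi.T                          # (N, N) pairwise matrix
    return affinity @ g                               # response Y, (N, C)
```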
We first reshape the outputs of the transformations (Eq. 2) on X by merging channel into position:

$$\theta(X) = \mathrm{vec}(XW_\theta) \in \mathbb{R}^{NC}, \quad \phi(X) = \mathrm{vec}(XW_\phi) \in \mathbb{R}^{NC}, \quad g(X) = \mathrm{vec}(XW_g) \in \mathbb{R}^{NC}. \quad (7)$$

By lifting the row space of the underlying transformations, our generalized non-local (GNL) operation pursues the same goal as Eq. 1 and computes the response Y ∈ R^{N×C} as:

$$\mathrm{vec}(Y) = f\big(\mathrm{vec}(XW_\theta), \mathrm{vec}(XW_\phi)\big)\, \mathrm{vec}(XW_g). \quad (8)$$

Compared to the original non-local operation (Eq. 4), GNL utilizes a more general pairwise function $f(\cdot,\cdot): \mathbb{R}^{NC} \times \mathbb{R}^{NC} \to \mathbb{R}^{NC\times NC}$ that can differentiate between pairs at the same location but in different channels. This richer similarity greatly augments the non-local operation in discriminating fine-grained object parts or action snippets that usually correspond to channels of the input feature. Compared to bilinear pooling (Eq. 5), which can only be used after the last convolutional layer, GNL maintains the input size and can thus be flexibly plugged between any network blocks. In addition, bilinear pooling neglects the spatial correlation, which is preserved in GNL.

Recently, the idea of dividing channels into groups has been established as a very effective technique for increasing the capacity of ConvNets. Well-known examples include Xception [3], MobileNet [9], ShuffleNet [31], ResNeXt [29], and Group Normalization [28]. Given its simplicity and independence, we also realize the channel grouping idea in GNL by grouping all C channels into G groups, each of which contains C′ = C/G channels of the input feature. We then perform the GNL operation independently for each group to compute Y′ and concatenate the results along the channel dimension to restore the full response Y.

3.4 Compact Representation

A straightforward implementation of GNL (Eq. 8) is prohibitive because of the quadratic increase with respect to the channel number C caused by the NC × NC pairwise matrix. Although the channel grouping technique can reduce the channel number from C to C/G, the overall computational complexity is still much higher than that of the original non-local operation. To mitigate this problem, this section proposes a compact representation that leads to an affordable approximation of GNL.

Let us denote θ = vec(XW_θ), φ = vec(XW_φ), and g = vec(XW_g), each of which is an NC-dimensional column vector. Without loss of generality, we assume f is a general kernel function (e.g., RBF, bilinear, etc.) that computes an NC × NC matrix composed of the elements

$$\big[f(\boldsymbol{\theta}, \boldsymbol{\phi})\big]_{ij} \approx \sum_{p=0}^{P} \alpha_p^2 (\theta_i \phi_j)^p, \quad (9)$$

which can be approximated by a Taylor series up to a certain order P. The coefficient α_p can be computed in closed form once the kernel function is known. Taking the RBF kernel for example,

$$\big[f(\boldsymbol{\theta}, \boldsymbol{\phi})\big]_{ij} = \exp(-\gamma\|\theta_i - \phi_j\|^2) \approx \sum_{p=0}^{P} \beta\, \frac{(2\gamma)^p}{p!} (\theta_i \phi_j)^p, \quad (10)$$

where $\alpha_p^2 = \beta\,\frac{(2\gamma)^p}{p!}$ and $\beta = \exp\big(-\gamma(\|\boldsymbol{\theta}\|^2 + \|\boldsymbol{\phi}\|^2)\big)$ is a constant, with β = exp(−2γ) if the input vectors θ and φ are ℓ2-normalized. By introducing the two matrices

$$\boldsymbol{\Theta} = [\alpha_0\boldsymbol{\theta}^0, \cdots, \alpha_P\boldsymbol{\theta}^P] \in \mathbb{R}^{NC\times(P+1)}, \quad \boldsymbol{\Phi} = [\alpha_0\boldsymbol{\phi}^0, \cdots, \alpha_P\boldsymbol{\phi}^P] \in \mathbb{R}^{NC\times(P+1)}, \quad (11)$$

our compact generalized non-local (CGNL) operation approximates Eq. 8 via a trilinear equation,

$$\mathrm{vec}(Y) \approx \boldsymbol{\Theta}\boldsymbol{\Phi}^\top \mathbf{g}. \quad (12)$$

At first glance, the above approximation still involves the computation of a large pairwise matrix $\boldsymbol{\Theta}\boldsymbol{\Phi}^\top \in \mathbb{R}^{NC\times NC}$. Fortunately, the order of the Taylor series is usually relatively small, P ≪ NC. By the associative law, we can alternatively compute the vector $\mathbf{z} = \boldsymbol{\Phi}^\top\mathbf{g} \in \mathbb{R}^{P+1}$ first and then calculate $\boldsymbol{\Theta}\mathbf{z}$ with a much smaller complexity of O(NC(P+1)). From another view, the process of squeezing the bilinear form $\boldsymbol{\Phi}^\top\mathbf{g}$ into scalars can be treated as a concept related to the SE module [10].
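A minimal sketch of the compact computation in Eq. (11)-(12), assuming θ, φ, g are already flattened to length NC and `alphas` holds the P+1 kernel coefficients; the point is that associativity lets us form the (P+1)-vector Φ⊤g first and never materialize the NC × NC matrix.

```python
import torch

# Compact CGNL operation (Eq. 11-12): O(NC(P+1)) time and space.

def cgnl(theta, phi, g, alphas):
    # theta, phi, g: (N*C,) vectors; alphas: list of P+1 Taylor coefficients
    Theta = torch.stack([a * theta.pow(p) for p, a in enumerate(alphas)], dim=1)
    Phi = torch.stack([a * phi.pow(p) for p, a in enumerate(alphas)], dim=1)
    z = Phi.T @ g              # (P+1,) vector, computed first by associativity
    return Theta @ z           # approximates vec(Y), shape (N*C,)
```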
Complexity analysis: Table 1 compares the computational complexity of the CGNL network with the GNL one. We cannot afford to directly compute the GNL operation because of its huge complexity of O(2(NC)²) in both time and space. Instead, our compact method dramatically eases the heavy calculation to O(NC(P+1)).

Table 1: Complexity comparison of GNL and CGNL operations, where N and C indicate the number of positions and channels respectively.

              GNL Method    CGNL Method
  Strategy    f(ΘΦ⊤)g       ΘΦ⊤g
  Time        O(2(NC)²)     O(NC(P+1))
  Space       O(2(NC)²)     O(NC(P+1))

3.5 Implementation Details

Fig. 2 illustrates the workflow of how the CGNL module processes a feature map X of size N × C, where N = H × W or N = T × H × W. X is first fed into three 1×1×1 convolutional layers described by the weights W_θ, W_φ, W_g respectively in Eq. 7. To improve the capacity of neural networks, the channel grouping idea [29, 28] is then applied to divide the transformed features along the channel dimension into G groups. As shown in Fig. 2, we approximate for each group the GNL operation (Eq. 8) using the Taylor series according to Eq. 12. To achieve generality and compatibility with existing neural network blocks, the CGNL block is implemented by wrapping Eq. 8 in an identity mapping of the input as in residual learning [8]:

$$Z = \mathrm{concat}(\mathrm{BN}(Y'W_z)) + X, \quad (13)$$

where $W_z \in \mathbb{R}^{C\times C}$ denotes a 1×1 or 1×1×1 convolution layer followed by Batch Normalization [11] in each group.

4 Experiments

4.1 Datasets

We evaluate the CGNL network on multiple tasks, including fine-grained classification and action recognition. For fine-grained classification, we experiment on the Birds-200-2011 (CUB) dataset [25], which contains 11788 images of 200 bird categories. For action recognition, we experiment on two challenging datasets, Mini-Kinetics [30] and UCF101 [22]. The Mini-Kinetics dataset contains 200 action categories. Because some video links are unavailable for download, we use 78265 videos for training and 4986 videos for validation. The UCF101 dataset contains 101 actions, which are separated into 25 groups with 4-7 videos of each action in a group.

4.2 Baselines

Given their steady performance and efficiency, the ResNet [8] series (ResNet-50 and ResNet-101) are adopted as our baselines. For video tasks, we keep the same architecture configuration as [27], where the temporal dimension is trivially addressed by max pooling. Following [27], the convolutional layers in the baselines are implemented with 1×k×k kernels, and we insert our CGNL blocks into the network to turn them into compact generalized non-local (CGNL) networks.

Table 2: Ablations. Top1 and top5 accuracy (%) on various datasets.

(a) Results of adding 1 CGNL block on CUB. The dot-product kernel achieves the best result. The accuracies of the others are at the edge of the baselines.

  model                 top1    top5
  R-50                  84.05   96.00
  + Dot Product         85.14   96.88
  + Gaussian RBF        84.10   95.78
  + Embedded Gaussian   84.01   96.08

(b) Results of comparison on UCF101. Note that the CGNL network is not grouped in channels.

  model            top1    top5
  R-50             81.62   94.62
  + 1 NL block     82.88   95.74
  + 1 CGNL block   83.38   95.42

(c) Results of channel-grouped CGNL networks on CUB. A few groups can boost the performance, but more groups tend to prevent the CGNL block from capturing the correlations between positions across channels.
  model            groups   top1    top5
  R-101            -        85.05   96.70
  + 1 CGNL block   1        86.17   97.82
                   4        86.24   97.05
                   8        86.35   97.86
                   16       86.13   96.75
                   32       86.04   96.69

  model            groups   top1    top5
  R-101            -        85.05   96.70
  + 5 CGNL block   1        86.01   95.97
                   4        86.19   96.07
                   8        86.24   97.23
                   16       86.43   98.89
                   32       86.10   97.13

(d) Results of grouped CGNL networks on Mini-Kinetics. More groups help the CGNL networks improve top1 accuracy obviously.

  model            groups   top1    top5
  R-50             -        75.54   92.16
  + 1 CGNL block   1        77.16   93.56
                   4        77.56   93.00
                   8        77.76   93.18

  model            groups   top1    top5
  R-101            -        77.44   93.18
  + 1 CGNL block   1        78.79   93.64
                   4        79.06   93.54
                   8        79.54   93.84

We investigate the configurations of adding 1 and 5 blocks. [27] suggests that adding 1 block on res4 is slightly better than the other choices, so our experiments of adding 1 block all target res4 of ResNet. The experiments of adding 5 blocks, on the other hand, are configured by inserting 2 blocks on res3 and 3 blocks on res4, into every other residual block, in ResNet-50 and ResNet-101.

Training: We use models pretrained on ImageNet [20] to initialize the weights. The frames of a video are extracted in a dense manner. Following [27], we generate 32-frame input clips for the models: we first randomly crop out 64 consecutive frames from the full-length video and then drop every other frame. The way these 32-frame input clips are chosen can be viewed as a temporal augmentation. The crop size for each clip is distributed evenly between 0.08 and 1.25 of the original image, and its aspect ratio is chosen randomly between 3/4 and 4/3. Finally we resize it to 224. We use a weight decay of 0.0001 and momentum of 0.9 by default. The strategy of gradual warmup is used in the first ten epochs. Dropout [23] with ratio 0.5 is inserted between the average pooling layer and the last fully-connected layer. To keep the same setting as [27], we use zero to initialize the weight and bias of the BatchNorm (BN) layer in both the CGNL and NL blocks [6]. To train the networks on the CUB dataset, we follow the same training strategy as above, but with a final crop size of 448.

Inference: The models are tested immediately after training is finished. In [27], spatially fully-convolutional inference² is used for NL networks. For the video clips, the shorter side is resized to 256 pixels, and we use 3 crops to cover the entire spatial size along the longer side. The final prediction is the averaged softmax score of all clips. For fine-grained classification, we do one center-crop test at size 448.

4.3 Ablation Experiments

Kernel Functions: We use three popular kernel functions, namely dot product, embedded Gaussian, and Gaussian RBF, in our ablation studies. For the dot product, Eq. 12 holds for direct computation. For embedded Gaussian, α_p² becomes 1/p! in Eq. 9. And for Gaussian RBF, the corresponding formula is defined by Eq. 10. We expand the Taylor series to the third order, and the hyperparameter γ for RBF is set to 1e-4 [4]. Table 2a suggests that the dot product is the best kernel function for CGNL networks. Such experimental observations are consistent with [27]. The other kernel functions we used, embedded Gaussian and Gaussian RBF, yield accuracies at the edge of the baseline. Therefore, we choose the dot product as our main experimental configuration for the other tasks.

² https://github.com/facebookresearch/video-nonlocal-net

Grouping: The grouping strategy is another important technique. On Mini-Kinetics, Table 2d shows that grouping can bring higher accuracy.
The improvements brought by adding groups are larger than those from reducing the channel reduction ratio. The best top1 accuracy is achieved by splitting into 8 groups for the CGNL networks. On the other hand, it is worthwhile to see whether more groups can always improve the results, and Table 2c gives the answer that too many groups hamper the performance improvements. This is actually expected, as the affinity in the CGNL block considers the points across channels. When we split the channels into a few groups, it can facilitate the restricted optimization and ease the training. However, if too many groups are adopted, it hinders the affinity from capturing the rich correlations between elements across the channels.

Comparison of the CGNL Block to a Simple Residual Block: There is a concern about the efficiency, caused by the possibility that the scalars from Φ⊤g in Eq. 12 could be wiped out by the BN layer, because according to Algorithm 1 in [11], the output of the input Θ weighted by the scalar s = Φ⊤g can be approximated as

$$O = \frac{s\Theta - E(s\Theta)}{\sqrt{Var(s\Theta)}} \cdot \gamma + \beta = \frac{s\Theta - sE(\Theta)}{\sqrt{s^2\, Var(\Theta)}} \cdot \gamma + \beta = \frac{\Theta - E(\Theta)}{\sqrt{Var(\Theta)}} \cdot \gamma + \beta.$$

At first glance, the scalar s is totally erased by BN in this mathematical process. However, the de facto operation of a convolutional module has a processing order for aggregating the features: before passing into the BN layer, the scalar s has already been absorbed into the input features Θ and then transformed into a different feature space by the learnable parameter W_z. In other words, it is W_z that "protects" s from being erased by BN via the convolutional operation. To eliminate this concern, we further compare adding 1 CGNL block (with the dot-product kernel, Fig 3) and adding 1 simple residual block (Fig 4) on the CUB dataset in Table 3. The top1 accuracy of 84.11% from adding a simple residual block is slightly better than the 84.05% of the baseline, but still worse than the 85.14% of adding a linearly kernelized CGNL module. We think that the marginal improvement (84.05% → 84.11%) is due to the extra parameters from the added simple residual block.

Table 4: Main results. Top1 and top5 accuracy (%) on various datasets.

(a) Main validation results on Mini-Kinetics. The CGNL networks are built with 8 groups.

  model            top1    top5
  R-50             75.54   92.16
  + 1 NL block     76.53   92.90
  + 1 CGNL block   77.76   93.18
  + 5 NL block     77.53   94.00
  + 5 CGNL block   78.79   94.37
  R-101            77.44   93.18
  + 1 NL block     78.02   93.86
  + 1 CGNL block   79.54   93.84
  + 5 NL block     79.21   93.21
  + 5 CGNL block   79.88   93.37

(b) Results on CUB. The CGNL networks are set with 8 channel groups.

  model            top1    top5
  R-50             84.05   96.00
  + 1 NL block     84.79   96.76
  + 1 CGNL block   85.14   96.88
  + 5 NL block     85.10   96.18
  + 5 CGNL block   85.68   96.69

  model            top1    top5
  R-101            85.05   96.70
  + 1 NL block     85.49   97.04
  + 1 CGNL block   86.35   97.86
  + 5 NL block     86.10   96.35
  + 5 CGNL block   86.24   97.23

(c) Results on COCO. 1 NL or 1 CGNL block is added in Mask R-CNN.

  model            APbox   APbox50   APbox75   APmask   APmask50   APmask75
  Baseline         34.47   54.87     36.58     30.44    51.55      31.95
  + 1 NL block     35.02   55.79     37.54     30.23    52.40      32.77
  + 1 CGNL block   35.70   56.07     38.69     31.22    52.44      32.67

4.4 Main Results

Table 4a shows that although adding 5 NL or CGNL blocks to the baseline networks can both improve the accuracy, the improvement of the CGNL network is larger. The same applies to Table 2b and Table 4b. In the experiments on the UCF101 and CUB datasets, similar results are observed: adding 5 CGNL blocks provides the optimal results for both R-50 and R-101. Table 4a shows the main results on the Mini-Kinetics dataset.
Compared to the baseline R-50, whose top1 accuracy is 75.54%, adding 1 NL block brings an improvement of about 1.0%. Similar results can be found in the experiments based on R-101, where adding 1 CGNL block provides more than a 2% improvement, which is larger than that of adding 1 NL block. Table 2b shows the main results on the UCF101 dataset, where adding 1 CGNL block achieves higher accuracy than adding 1 NL block. And Table 4b shows the main results on the CUB dataset. To understand the effects brought by the CGNL network, we show visualization analyses in Fig 5 and Fig 6. Additionally, to investigate the capacity and the generalization ability of our CGNL network, we test it on the tasks of object detection and instance segmentation. We add 1 NL or 1 CGNL block to the R-50 backbone of Mask R-CNN [7]. Table 4c shows the main results on the COCO2017 dataset [13] obtained by adopting our 1 CGNL block in the backbone of Mask R-CNN [7]. It shows that the performance of adding 1 CGNL block is still better than that of adding 1 NL block. We observe that adding a CGNL block always obtains better results than adding an NL block with the same number of blocks. These experiments suggest that considering the correlations between any two positions across the channels can improve the performance significantly over the original non-local method.

5 Conclusion

We have introduced a simple approximated formulation of the compact generalized non-local operation and have validated it on the tasks of fine-grained classification and action recognition from RGB images. Our formulation allows for explicit modeling of rich interdependencies between any positions across channels in the feature space. To ease the heavy computation of the generalized non-local operation, we propose a compact representation based on simple matrix multiplication, using Taylor expansions for multiple kernel functions. It is easy to implement and requires few additional parameters, making it an attractive alternative to the original non-local block, which only considers the correlations between two positions along specific channels. Our model produces competitive or state-of-the-art results on various benchmark datasets.

Appendix: Experiments on ImageNet

As a general method, the CGNL block is compatible with complementary techniques developed for the image task of fine-grained classification, the temporally-dependent task of action recognition, and the basic task of object detection. In this appendix, we further report the results of our spatial CGNL network on the large-scale ImageNet [20] dataset, which has 1.2 million training images and 50000 validation images in 1000 object categories. The training strategy and configurations of our CGNL networks are kept the same as in Sec 4, except that the input crop size here is 224. For a better demonstration of the generality of our CGNL network, we investigate adding both 1 dot-product CGNL block and 1 Gaussian RBF CGNL block (denoted CGNLx) in Table 5. We compare these models with two strong baselines, R-50 and R-152. In Table 5, all the best top1 and top5 accuracies are reported under single center-crop testing. The CGNL networks beat the base models by more than 1 point, regardless of whether the dot product or Gaussian RBF serves as the kernel function in the CGNL module.
1. What is the main contribution of the paper?
2. What inspired the proposed method?
3. How does the proposed method differ from the non-local module?
4. What are the limitations of the proposed method?
5. How does the compact representation help in approximating GNL?
6. Are the technical aspects of the paper sound?
7. Is the novelty of the paper sufficient?
8. What is the significance of the problem addressed by the paper?
9. Are the experiments comprehensive and reproducible?
10. Are there any minor issues with the paper?
Review
Review

This paper proposes a method to capture long-range relations in images and videos. It is done by modeling interactions between any positions of any channels of a set of feature maps. It is inspired by the non-local module (NL) [31]. While NL aggregates all channel information together to encode position dependencies, the proposed method encodes position dependencies between any channels. I like the paper flow: it addresses a valid drawback of the non-local module, with a clear visualization in fig. 1, and proposes a generalized non-local (GNL) module to tackle the problem. Then, it accounts for the limitation of a naive implementation of the proposed method and tries to overcome it by proposing a compact representation to approximate GNL.

The paper seems to be technically correct; the formulations are correct as far as I checked them. It is well-written, well-structured, easy to follow, and detailed. The related work is okay and up to date.

The novelty of this paper is sufficient. It addresses the valid problem of NL in capturing fine-grained interactions of objects and actions. The paper proposes a generalized extension of NL where all the interactions between every position of every channel are modeled. As this generalization is computationally prohibitive, the paper approximates it by a Taylor series up to a certain order. I think this paper is a useful contribution to the community.

The experiments are conducted on well-known datasets in both the image and video domains. Experiments are comprehensive on the three tasks of fine-grained bird classification, action recognition, and object detection, and in most of the cases the proposed method outperforms the others. An ablation study is there and informative. It seems the experiments are reproducible.

Minor: missing closing parenthesis in Table 1; L69-76 contain some repetitions.
NIPS
Title Compact Generalized Non-local Network Abstract The non-local module [27] is designed for capturing long-range spatio-temporal dependencies in images and videos. Although having shown excellent performance, it lacks the mechanism to model the interactions between positions across channels, which are of vital importance in recognizing fine-grained objects and actions. To address this limitation, we generalize the non-local module and take the correlations between the positions of any two channels into account. This extension utilizes the compact representation for multiple kernel functions with Taylor expansion that makes the generalized non-local module in a fast and low-complexity computation flow. Moreover, we implement our generalized non-local method within channel groups to ease the optimization. Experimental results illustrate the clear-cut improvements and practical applicability of the generalized non-local module on both fine-grained object recognition and video classification. Code is available at: https://github.com/KaiyuYue/cgnl-network.pytorch. 1 Introduction Capturing spatio-temporal dependencies between spatial pixels or temporal frames plays a key role in the tasks of fine-grained object and action classification. Modeling such interactions among images and videos is the major topic of various feature extraction techniques, including SIFT, LBP, Dense Trajectory [26], etc. In the past few years, deep neural network automates the feature designing pipeline by stacking multiple end-to-end convolutional or recurrent modules, where each of them processes correlation within spatial or temporal local regions. In general, capturing the long-range dependencies among images or videos still requires multiple stacking of these modules, which greatly hinders the learning and inference efficiency. A recent work [16] also suggests that stacking more layers cannot always increase the effective receptive fields to capture enough local relations. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. Inspired by the classical non-local means for image filtering, the recently proposed non-local neural network [27] addresses this challenge by directly modeling the correlation between any two positions in the feature maps in a single module. Without bells and whistles, the non-local method can greatly improve the performances of existing networks on many video classification benchmarks. Despite its great performances, the original non-local network only considers the global spatio-temporal correlation by merging channels, and it might miss the subtle but important cross-channel clues for discriminating fine-grained objects or actions. For instance, the body, the ball and their interaction are all necessary for describing the action of kicking the ball in Fig. 1, while the original non-local operation learns to focus on the body part relations but neglect the body-ball interactions that usually correspond to different channels of the input features. To improve the effectiveness in fine-grained object and action recognition tasks, this work extends the non-local module by learning explicit correlations among all of the elements across the channels. First, this extension scale-ups the representation power of the non-local operation to attend the interaction between subtle object parts (e.g., the body and ball in Fig. 1). Second, we propose its compact representation for various kernel functions to address the high computation burden issue. 
We show that as a self-contained module, the compact generalized non-local (CGNL) module provides steady improvements in classification tasks. Third, we also investigate the grouped CGNL blocks, which model the correlations across channels within each group. We evaluate the proposed CGNL method on the task of fine-grained classification and action recognition. Extensive experimental results show that: 1) The CGNL network are easy to optimize as the original non-local network; 2) Compared with the non-local module, CGNL module enjoys capturing richer features and dense clues for prediction, as shown in Figure 1, which leads to results substantially better than those of the original non-local module. Moreover, in the appendix of extensional experiments, the CGNL network can also promise a higher accuracy than the baseline on the large-scale ImageNet dataset [20]. 2 Related Works Channel Correlations: The mechanism of sharing the same conv kernel among channels of a layer in a ConvNet [12] can be seen as a basic way to capture correlations among channels, which aggregates the channels of feature maps by the operation of sum pooling. The SENet [10] may be the first work that explicitly models the interdependencies between the channels of its spatial features. It aims to select the useful feature maps and suppress the others, and only considers the global information of each channel. Inspired by [27], we present the generalized non-local (GNL) module, which generalizes the non-local (NL) module to learn the correlations between any two positions across the channels. Compared to the SENet, we model the interdependencies among channels in an explicit and dense manner. Compact Representation: After further investigation, we find that the non-local module contains a second-order feature space (Sect.3.1), which is used widely in previous computer vision tasks, e.g., SIFT [15], Fisher encoding [17], Bilinear model [14] [5] and segmentation task [2]. However, such second-order feature space involves high dimensions and heavy computational burdens. In the area of kernel learning [21], there are many prior works such as compact bilinear pooling (CBP) [5] that uses the Tensor Sketching [18] to address this problem. But this type of method is not perfect yet. Because the it cannot produce a light computation to the various size of sketching vectors. Fortunately, in mathematics, the whole non-local operation can be viewed as a trilinear formation. It can be fast computed with the associative law of matrix production. To the other types of pairwise function, such as Embedded Gaussian or RBF [19], we propose a tight approximation for them by using the Taylor expansion. 3 Approach In this section, we introduce a general formulation of the proposed general non-local operation. We then show that the original non-local and the bilinear pooling are special cases of this formulation. After that, we illustrate that the general non-local operation can be seen as a modality in the trilinear matrix production and show how to implement our generalized non-local (GNL) module in a compact representations. 3.1 Review of Non-local Operation We begin by briefly reviewing the original non-local operation [27] in matrix form. Suppose that an image or video is given to the network and let X ∈ RN×C denote (see notation1) the input feature map of the non-local module, where C is the number of channels. 
For notational clarity, we collapse all the spatial (width W and height H) and temporal (video length T) positions into one dimension, i.e., N = HW or N = HWT. To capture long-range dependencies across the whole feature map, the original non-local operation computes the response Y ∈ R^{N×C} as a weighted sum of the features at all positions,

Y = f(θ(X), φ(X)) g(X),   (1)

where θ(·), φ(·), g(·) are learnable transformations of the input. In [27], the authors suggest using 1×1 or 1×1×1 convolutions for simplicity, i.e., the transformations can be written as

θ(X) = X W_θ ∈ R^{N×C},  φ(X) = X W_φ ∈ R^{N×C},  g(X) = X W_g ∈ R^{N×C},   (2)

parameterized by the weight matrices W_θ, W_φ, W_g ∈ R^{C×C} respectively. The pairwise function f(·,·): R^{N×C} × R^{N×C} → R^{N×N} computes the affinity between all positions (space or space-time). There are multiple choices for f, among which the dot product is perhaps the simplest, i.e.,

f(θ(X), φ(X)) = θ(X) φ(X)^⊤.   (3)

Plugging Eq. 2 and Eq. 3 into Eq. 1 yields a trilinear interpretation of the non-local operation,

Y = X W_θ W_φ^⊤ X^⊤ X W_g,   (4)

where the pairwise matrix X W_θ W_φ^⊤ X^⊤ ∈ R^{N×N} encodes the similarity between any two locations of the input feature. The effect of the non-local operation can be related to the self-attention module [1]: each position (row) in the result Y is a linear combination of all the positions (rows) of X W_g, weighted by the corresponding row of the pairwise matrix.

3.2 Review of Bilinear Pooling

Analogous to the conventional kernel trick [21], the idea of bilinear pooling [14] has recently been adopted in ConvNets to enhance feature representations in various tasks, such as fine-grained classification, person re-identification, and action recognition. At a glance, bilinear pooling models pairwise feature interactions using an explicit outer product at the final classification layer:

Z = X^⊤ X ∈ R^{C×C},   (5)

where X ∈ R^{N×C} is the input feature map generated by the last convolutional layer. Each element of the final descriptor, z_{c1c2} = Σ_n x_{nc1} x_{nc2}, sum-pools over the locations n = 1, ..., N the bilinear product x_{nc1} x_{nc2} of the corresponding channel pair c1, c2 = 1, ..., C. Despite the distinct design motivation, it is interesting to see that bilinear pooling (Eq. 5) can be viewed as a special case of the second-order term (Eq. 3) in the non-local operation if we set

θ(X) = X^⊤ ∈ R^{C×N},  φ(X) = X^⊤ ∈ R^{C×N}.   (6)

¹ Bold capital letters denote a matrix X, bold lower-case letters a column vector x. x_i denotes the i-th column of the matrix X, and x_{ij} the scalar in the i-th row and j-th column of X. All non-bold letters denote scalars. 1_m ∈ R^m is a vector of ones and I_n ∈ R^{n×n} is an identity matrix. vec(X) denotes the vectorization of the matrix X. X ◦ Y and X ⊗ Y are the Hadamard and Kronecker products of matrices.
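To make the matrix form above concrete, the following is a minimal PyTorch sketch of the dot-product non-local operation of Eqs. 1-4 (our illustrative code, not the authors' released implementation; the class and variable names are ours):

    import torch
    import torch.nn as nn

    class NonLocalDotProduct(nn.Module):
        """Minimal dot-product non-local operation (Eqs. 1-4), image case."""
        def __init__(self, channels):
            super().__init__()
            # 1x1 convolutions realize the linear maps W_theta, W_phi, W_g
            self.theta = nn.Conv2d(channels, channels, kernel_size=1)
            self.phi = nn.Conv2d(channels, channels, kernel_size=1)
            self.g = nn.Conv2d(channels, channels, kernel_size=1)

        def forward(self, x):
            b, c, h, w = x.shape
            n = h * w
            # Flatten spatial positions so that each map is X W_* in R^{N x C}
            t = self.theta(x).view(b, c, n).transpose(1, 2)  # (B, N, C)
            p = self.phi(x).view(b, c, n).transpose(1, 2)    # (B, N, C)
            g = self.g(x).view(b, c, n).transpose(1, 2)      # (B, N, C)
            # Pairwise affinity f = theta(X) phi(X)^T in R^{N x N} (Eq. 3)
            f = torch.bmm(t, p.transpose(1, 2))              # (B, N, N)
            y = torch.bmm(f, g)                              # (B, N, C), Eq. 1
            return y.transpose(1, 2).reshape(b, c, h, w)

Note that materializing f explicitly costs O(N²C) time and O(N²) space per sample; the compact representation of Sect. 3.4 is designed to avoid forming such pairwise matrices.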
3.3 Generalized Non-local Operation

The original non-local operation aims to directly capture long-range dependencies between any two positions in one layer. However, such dependencies are encoded in a joint location-wise matrix f(θ(X), φ(X)) that aggregates all channel information together. On the other hand, channel-wise correlation has recently been explored in both discriminative [14] and generative [24] models through covariance analysis across channels. Inspired by these works, we generalize the original non-local operation to model long-range dependencies between any positions of any channels. We first reshape the outputs of the transformations (Eq. 2) on X by merging channel into position:

θ(X) = vec(X W_θ) ∈ R^{NC},  φ(X) = vec(X W_φ) ∈ R^{NC},  g(X) = vec(X W_g) ∈ R^{NC}.   (7)

By lifting the row space of the underlying transformations, our generalized non-local (GNL) operation pursues the same goal as Eq. 1 and computes the response Y ∈ R^{N×C} as:

vec(Y) = f(vec(X W_θ), vec(X W_φ)) vec(X W_g).   (8)

Compared to the original non-local operation (Eq. 4), GNL uses a more general pairwise function f(·,·): R^{NC} × R^{NC} → R^{NC×NC} that can differentiate between pairs at the same location but in different channels. This richer similarity greatly strengthens the non-local operation in discriminating fine-grained object parts or action snippets, which usually correspond to particular channels of the input feature. Compared to bilinear pooling (Eq. 5), which can only be used after the last convolutional layer, GNL preserves the input size and can thus be flexibly plugged between any network blocks. In addition, bilinear pooling neglects the spatial correlation, which is preserved in GNL.

Recently, the idea of dividing channels into groups has been established as a very effective technique for increasing the capacity of ConvNets. Well-known examples include Xception [3], MobileNet [9], ShuffleNet [31], ResNeXt [29], and Group Normalization [28]. Given its simplicity and independence, we also apply the channel-grouping idea to GNL by splitting all C channels into G groups, each of which contains C′ = C/G channels of the input feature. We then perform the GNL operation independently within each group to compute the per-group response Y′ and concatenate the results along the channel dimension to restore the full response Y.

3.4 Compact Representation

A straightforward implementation of GNL (Eq. 8) is prohibitive because of the quadratic growth with the channel number C introduced by the NC × NC pairwise matrix. Although channel grouping reduces the channel number from C to C/G, the overall computational complexity is still much higher than that of the original non-local operation. To mitigate this problem, this section proposes a compact representation that leads to an affordable approximation of GNL.

Let us denote θ = vec(X W_θ), φ = vec(X W_φ) and g = vec(X W_g), each of which is an NC-dimensional column vector. Without loss of generality, we assume f is a general kernel function (e.g., RBF, bilinear, etc.) that produces an NC × NC matrix whose elements can be approximated by a Taylor series up to a certain order P:

[f(θ, φ)]_{ij} ≈ Σ_{p=0}^{P} α_p² (θ_i φ_j)^p.   (9)

The coefficients α_p can be computed in closed form once the kernel function is known. Taking the RBF kernel for example,

[f(θ, φ)]_{ij} = exp(−γ‖θ_i − φ_j‖²) ≈ Σ_{p=0}^{P} β (2γ)^p / p! · (θ_i φ_j)^p,   (10)

where α_p² = β (2γ)^p / p! and β = exp(−γ(‖θ‖² + ‖φ‖²)) is a constant; β = exp(−2γ) if the input vectors θ and φ are ℓ2-normalized. By introducing the two matrices

Θ = [α_0 θ^0, ..., α_P θ^P] ∈ R^{NC×(P+1)},  Φ = [α_0 φ^0, ..., α_P φ^P] ∈ R^{NC×(P+1)},   (11)

our compact generalized non-local (CGNL) operation approximates Eq. 8 via a trilinear equation,

vec(Y) ≈ Θ Φ^⊤ g.   (12)

At first glance, this approximation still involves a large pairwise matrix ΘΦ^⊤ ∈ R^{NC×NC}. Fortunately, the order of the Taylor series is usually small, P ≪ NC. By the associative law, we can instead compute the vector z = Φ^⊤ g ∈ R^{P+1} first and then calculate Θz, at the much smaller complexity of O(NC(P+1)). From another viewpoint, the way the bilinear form Φ^⊤ g is squeezed into scalars can be seen as conceptually related to the SE module [10].
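The associativity trick of Eq. 12 is easy to express in code. Below is a sketch (our illustrative code) that builds Θ and Φ from element-wise Taylor powers for the RBF kernel of Eq. 10 and computes z = Φ^⊤g before the final product Θz; the constant β is omitted for brevity, since it can be folded into the result when the inputs are ℓ2-normalized:

    import torch

    def cgnl_compact(theta, phi, g, order=3, gamma=1e-4):
        """Compact GNL via a Taylor expansion of the RBF kernel (Eqs. 9-12).
        theta, phi, g: flattened NC-dimensional vectors vec(X W_*).
        Returns vec(Y) ~= Theta Phi^T g in O(NC * (order + 1))."""
        # Taylor coefficients: alpha_p^2 = (2*gamma)^p / p!  (beta omitted)
        ps = torch.arange(order + 1, dtype=theta.dtype)
        coeff = (2 * gamma) ** ps / torch.exp(torch.lgamma(ps + 1))  # (2g)^p/p!
        a = coeff.sqrt()                                             # alpha_p
        # Theta, Phi in R^{NC x (P+1)}: columns are alpha_p * elementwise powers
        Theta = torch.stack([a[p] * theta ** p for p in range(order + 1)], dim=1)
        Phi = torch.stack([a[p] * phi ** p for p in range(order + 1)], dim=1)
        z = Phi.t() @ g          # (P+1,) -- computed first by associativity
        return Theta @ z         # (NC,) -- never materializes Theta Phi^T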
Complexity analysis: Table 1 compares the computational complexity of the CGNL operation with that of GNL. Directly computing the GNL operation is unaffordable because of its huge complexity of O(2(NC)²) in both time and space. Our compact method reduces this heavy calculation dramatically, to O(NC(P+1)).

Table 1: Complexity comparison of the GNL and CGNL operations, where N and C denote the numbers of positions and channels respectively.

               General NL method   CGNL method
    Strategy   f(ΘΦ^⊤) g           Θ Φ^⊤ g
    Time       O(2(NC)²)           O(NC(P+1))
    Space      O(2(NC)²)           O(NC(P+1))

3.5 Implementation Details

Fig. 2 illustrates how the CGNL module processes a feature map X of size N × C, where N = H × W or N = T × H × W. X is first fed into three 1×1×1 convolutional layers, described by the weights W_θ, W_φ, W_g of Eq. 7. To improve the capacity of the network, the channel-grouping idea [29, 28] is then applied to divide the transformed features into G groups along the channel dimension. As shown in Fig. 2, we approximate the GNL operation (Eq. 8) for each group with the Taylor series, following Eq. 12. To achieve generality and compatibility with existing neural network blocks, the CGNL block wraps Eq. 8 in an identity mapping of the input, as in residual learning [8]:

Z = concat(BN(Y′ W_z)) + X,   (13)

where W_z ∈ R^{C×C} denotes a 1×1 or 1×1×1 convolution layer followed by Batch Normalization [11] in each group.
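Putting Eqs. 7, 12, and 13 together, a grouped CGNL block with the dot-product kernel might look as follows (our illustrative sketch, not the reference code; with this kernel the trilinear form of Eq. 12 is exact with P = 1, and for brevity BN is applied jointly over all groups rather than per group):

    import torch
    import torch.nn as nn

    class CGNLBlock(nn.Module):
        """Compact generalized non-local block with channel groups (Eq. 13),
        dot-product kernel. Assumes channels is divisible by groups."""
        def __init__(self, channels, groups=8):
            super().__init__()
            self.groups = groups
            self.theta = nn.Conv2d(channels, channels, 1)
            self.phi = nn.Conv2d(channels, channels, 1)
            self.g = nn.Conv2d(channels, channels, 1)
            self.z = nn.Conv2d(channels, channels, 1)   # W_z in Eq. 13
            self.bn = nn.BatchNorm2d(channels)
            nn.init.zeros_(self.bn.weight)   # zero-init BN as in [27]
            nn.init.zeros_(self.bn.bias)

        def forward(self, x):
            b, c, h, w = x.shape
            gs = self.groups
            # vec() per group: each group holds an (N*C')-dimensional vector
            t = self.theta(x).view(b, gs, -1)   # (B, G, N*C')
            p = self.phi(x).view(b, gs, -1)
            g = self.g(x).view(b, gs, -1)
            # Dot-product kernel: vec(Y') = theta * (phi^T g), z computed first
            s = (p * g).sum(dim=2, keepdim=True)   # (B, G, 1), scalar per group
            y = (t * s).view(b, c, h, w)
            return self.bn(self.z(y)) + x          # residual wrapper of Eq. 13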
4 Experiments

4.1 Datasets

We evaluate the CGNL network on multiple tasks, including fine-grained classification and action recognition. For fine-grained classification, we experiment on the Birds-200-2011 (CUB) dataset [25], which contains 11788 images of 200 bird categories. For action recognition, we experiment on two challenging datasets, Mini-Kinetics [30] and UCF101 [22]. The Mini-Kinetics dataset contains 200 action categories; because some video links are unavailable for download, we use 78265 videos for training and 4986 videos for validation. The UCF101 dataset contains 101 actions, separated into 25 groups with 4-7 videos of each action per group.

4.2 Baselines

Given their steady performance and efficiency, the ResNet [8] series (ResNet-50 and ResNet-101) are adopted as our baselines. For video tasks, we keep the same architecture configuration as [27], where the temporal dimension is trivially addressed by max pooling. Following [27], the convolutional layers in the baselines are implemented as 1×k×k kernels, and we insert our CGNL blocks into the network to turn them into compact generalized non-local (CGNL) networks. We investigate configurations with 1 and 5 added blocks. [27] suggests that adding 1 block on res4 is slightly better than the alternatives, so our experiments with 1 added block all target res4 of the ResNet. The experiments with 5 added blocks insert 2 blocks on res3 and 3 blocks on res4, to every other residual block, in ResNet-50 and ResNet-101.

Table 2: Ablations. Top1 and top5 accuracy (%) on various datasets.

(a) Results of adding 1 CGNL block on CUB. The dot-product kernel achieves the best result; the accuracies of the other kernels are on a par with the baseline.

    model                 top1    top5
    R-50                  84.05   96.00
    + Dot product         85.14   96.88
    + Gaussian RBF        84.10   95.78
    + Embedded Gaussian   84.01   96.08

(b) Comparison on UCF101. Note that the CGNL network here is not channel-grouped.

    model            top1    top5
    R-50             81.62   94.62
    + 1 NL block     82.88   95.74
    + 1 CGNL block   83.38   95.42

(c) Results of channel-grouped CGNL networks on CUB. A few groups boost the performance, but too many groups tend to prevent the CGNL block from capturing the correlations between positions across channels.

    model             groups   top1    top5
    R-101             -        85.05   96.70
    + 1 CGNL block    1        86.17   97.82
                      4        86.24   97.05
                      8        86.35   97.86
                      16       86.13   96.75
                      32       86.04   96.69
    + 5 CGNL blocks   1        86.01   95.97
                      4        86.19   96.07
                      8        86.24   97.23
                      16       86.43   98.89
                      32       86.10   97.13

(d) Results of grouped CGNL networks on Mini-Kinetics. More groups help the CGNL networks improve top1 accuracy markedly.

    model             groups   top1    top5
    R-50              -        75.54   92.16
    + 1 CGNL block    1        77.16   93.56
                      4        77.56   93.00
                      8        77.76   93.18
    R-101             -        77.44   93.18
    + 1 CGNL block    1        78.79   93.64
                      4        79.06   93.54
                      8        79.54   93.84

Training: We use models pretrained on ImageNet [20] to initialize the weights. The frames of a video are extracted densely. Following [27], we generate 32-frame input clips by first randomly cropping 64 consecutive frames from the full-length video and then dropping every other frame; this way of choosing the 32-frame input clips can be viewed as a temporal augmentation (see the sketch after this section). The crop size for each clip is distributed uniformly between 0.08 and 1.25 of the original image, its aspect ratio is chosen randomly between 3/4 and 4/3, and the result is finally resized to 224. We use a weight decay of 0.0001 and a momentum of 0.9 by default. Gradual warmup is used in the first ten epochs. Dropout [23] with ratio 0.5 is inserted between the average pooling layer and the last fully-connected layer. Following [27], we initialize the weight and bias of the BatchNorm (BN) layer in both the CGNL and NL blocks to zero [6]. To train the networks on the CUB dataset, we follow the same training strategy but with a final crop size of 448.

Inference: The models are tested immediately after training finishes. As in [27], spatially fully-convolutional inference² is used for the NL networks. For video clips, the shorter side is resized to 256 pixels and 3 crops are used to cover the entire spatial extent along the longer side. The final prediction is the averaged softmax score of all clips. For fine-grained classification, we use a single center crop of size 448.

²https://github.com/facebookresearch/video-nonlocal-net
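As a concrete illustration of the temporal augmentation described in the Training paragraph above, a minimal sketch (our illustrative code; the frame-indexing details are an assumption):

    import numpy as np

    def sample_clip_indices(num_frames, rng=np.random.default_rng()):
        """Temporal augmentation from the training setup: randomly crop 64
        consecutive frames, then drop every other one -> a 32-frame clip."""
        start = rng.integers(0, max(num_frames - 64, 0) + 1)
        return np.arange(start, start + 64)[::2]   # indices of kept frames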
4.3 Ablation Experiments

Kernel Functions: We use three popular kernel functions, namely the dot product, the embedded Gaussian, and the Gaussian RBF, in our ablation studies. For the dot product, Eq. 12 holds exactly and is computed directly. For the embedded Gaussian, α_p² becomes 1/p! in Eq. 9, and for the Gaussian RBF the corresponding formula is Eq. 10. We expand the Taylor series to third order, and the hyperparameter γ for the RBF is set to 1e-4 [4]. Table 2a suggests that the dot product is the best kernel function for CGNL networks, an observation consistent with [27]. The other kernel functions, the embedded Gaussian and the Gaussian RBF, bring only small performance changes. We therefore choose the dot product as the main configuration for the other tasks.

Grouping: The grouping strategy is another important technique. On Mini-Kinetics, Table 2d shows that grouping brings higher accuracy, and the improvements from adding groups are larger than those from reducing the channel reduction ratio. The best top1 accuracy is achieved by splitting into 8 groups. On the other hand, it is worth asking whether more groups always improve the results, and Table 2c answers that too many groups hamper the improvements. This is expected, since the affinity in the CGNL block couples positions across channels: splitting the channels into a few groups restricts the optimization and eases training, but if too many groups are adopted, the affinity can no longer capture the rich correlations between elements across channels.

Comparison of the CGNL Block to a Simple Residual Block: One possible concern is that the scalars produced by Φ^⊤g in Eq. 12 could be wiped out by the BN layer. Indeed, following Algorithm 1 in [11], the output for the input Θ weighted by the scalar s = Φ^⊤g can be written as

O = (sΘ − E(sΘ)) / √Var(sΘ) · γ + β = (sΘ − s E(Θ)) / √(s² Var(Θ)) · γ + β = (Θ − E(Θ)) / √Var(Θ) · γ + β.

At first glance, the scalar s is erased entirely by BN in this calculation. However, the actual operation of the convolutional module aggregates features in a particular order: before reaching the BN layer, the scalar s has already been absorbed into the input features Θ and transformed into a different feature space by the learnable parameter W_z. In other words, it is W_z that "protects" s from being erased by BN via the convolution. To settle this concern empirically, we compare adding 1 CGNL block (with the dot-product kernel, Fig. 3) against adding 1 simple residual block (Fig. 4) on the CUB dataset in Table 3. The top1 accuracy of 84.11% for the simple residual block is slightly better than the 84.05% of the baseline, but still well below the 85.14% of the linear-kernel CGNL module. We attribute the marginal improvement (84.05% → 84.11%) to the extra parameters of the added residual block.
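The BN derivation above treats s as a single constant. A small toy check (our illustrative code, not from the paper) makes the distinction concrete: batch normalization cancels one scalar shared by the whole batch, but per-sample scalars such as s = Φ^⊤g, which differ across examples, survive normalization even before W_z is considered:

    import torch

    torch.manual_seed(0)
    bn = torch.nn.BatchNorm1d(4, affine=False)  # plain normalization
    theta = torch.randn(8, 4)                   # a batch of 8 feature vectors
    s = torch.rand(8, 1) + 0.5                  # hypothetical per-sample scalars
    # One scalar shared across the batch is erased by normalization:
    print(torch.allclose(bn(2.0 * theta), bn(theta), atol=1e-5))   # True
    # Per-sample scalars change the output, i.e. they are not erased:
    print(torch.allclose(bn(s * theta), bn(theta), atol=1e-5))     # False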
Table 4: Main results. Top1 and top5 accuracy (%) on various datasets.

(a) Main validation results on Mini-Kinetics. The CGNL networks are built with 8 groups.

    model             top1    top5
    R-50              75.54   92.16
    + 1 NL block      76.53   92.90
    + 1 CGNL block    77.76   93.18
    + 5 NL blocks     77.53   94.00
    + 5 CGNL blocks   78.79   94.37
    R-101             77.44   93.18
    + 1 NL block      78.02   93.86
    + 1 CGNL block    79.54   93.84
    + 5 NL blocks     79.21   93.21
    + 5 CGNL blocks   79.88   93.37

(b) Results on CUB. The CGNL networks use 8 channel groups.

    model             top1    top5
    R-50              84.05   96.00
    + 1 NL block      84.79   96.76
    + 1 CGNL block    85.14   96.88
    + 5 NL blocks     85.10   96.18
    + 5 CGNL blocks   85.68   96.69
    R-101             85.05   96.70
    + 1 NL block      85.49   97.04
    + 1 CGNL block    86.35   97.86
    + 5 NL blocks     86.10   96.35
    + 5 CGNL blocks   86.24   97.23

(c) Results on COCO. 1 NL or 1 CGNL block is added to Mask R-CNN.

    model             AP^box   AP^box_50   AP^box_75   AP^mask   AP^mask_50   AP^mask_75
    Baseline          34.47    54.87       36.58       30.44     51.55        31.95
    + 1 NL block      35.02    55.79       37.54       30.23     52.40        32.77
    + 1 CGNL block    35.70    56.07       38.69       31.22     52.44        32.67

4.4 Main Results

Table 4a shows that although adding 5 NL blocks and adding 5 CGNL blocks to the baseline networks both improve accuracy, the improvement of the CGNL network is larger. The same holds in Table 2b and Table 4b: on the UCF101 and CUB datasets we similarly observe that adding 5 CGNL blocks gives the best results for both R-50 and R-101. On Mini-Kinetics (Table 4a), compared to the R-50 baseline at 75.54% top1, adding 1 NL block brings an improvement of about 1.0%. Similar results are found with R-101, where adding 1 CGNL block provides more than a 2% improvement, larger than that of adding 1 NL block. Table 2b reports the main results on UCF101, where adding 1 CGNL block achieves higher accuracy than adding 1 NL block, and Table 4b reports the main results on CUB. To understand the effects brought by the CGNL network, we show visualization analyses in Fig 5 and Fig 6.

Additionally, to investigate the capacity and generalization ability of our CGNL network, we test it on object detection and instance segmentation: we add 1 NL or 1 CGNL block to the R-50 backbone of Mask R-CNN [7]. Table 4c shows the results on the COCO2017 dataset [13]; the performance of adding 1 CGNL block is again better than that of adding 1 NL block. Overall, we observe that adding CGNL blocks consistently yields better results than adding the same number of NL blocks. These experiments suggest that considering the correlations between any two positions across the channels significantly improves the performance over the original non-local method.

5 Conclusion

We have introduced a simple approximate formulation of the compact generalized non-local operation, and have validated it on fine-grained classification and action recognition from RGB images. Our formulation allows explicit modeling of the rich interdependencies between any positions across channels in the feature space. To ease the heavy computation of the generalized non-local operation, we propose a compact representation based on simple matrix products, using Taylor expansions for multiple kernel functions. It is easy to implement and requires few additional parameters, making it an attractive alternative to the original non-local block, which only considers correlations between two positions with the channels merged. Our model produces competitive or state-of-the-art results on various benchmark datasets.

Appendix: Experiments on ImageNet

As a general method, the CGNL block is compatible with the complementary techniques developed for the image task of fine-grained classification, the temporally demanding task of action recognition, and the basic task of object detection. In this appendix, we further report results of our spatial CGNL network on the large-scale ImageNet [20] dataset, which has 1.2 million training images and 50000 validation images in 1000 object categories. The training strategy and configuration of our CGNL networks are kept the same as in Sec 4, except that the input crop size is 224. To better demonstrate the generality of our CGNL network, we investigate adding both 1 dot-product CGNL block and 1 Gaussian RBF CGNL block (denoted CGNLx) in Table 5, and compare these models with two strong baselines, R-50 and R-152. In Table 5, all the best top1 and top5 accuracies are reported under single center-crop testing. The CGNL networks beat the base models by more than 1 point, regardless of whether the dot product or the Gaussian RBF serves as the kernel function in the CGNL module.
1. What is the main contribution of the paper regarding ConvNets?
2. What are the strengths of the proposed module, particularly in its ability to capture global correlations?
3. What are some concerns or weaknesses of the paper, especially regarding comparisons with other methods?
4. How does the author address the issue of less discussion on the linear version of CGNL using dot product for f?
5. How does the author respond to the concern about missing fundamental comparison to the simple ResBlock?
Review
Review This paper proposes a novel network module to exploit global (non-local) correlations in the feature map for improving ConvNets. The authors focus on a weakness of the non-local (NL) module [31], namely that correlations across channels are largely ignored, and formulate the compact generalized non-local (CGNL) module to remedy the issue, summarizing the previous NL and bilinear pooling [14] methods in a unified manner. CGNL is evaluated in thorough experiments on action and fine-grained classification tasks, exhibiting promising performance competitive with the state of the art.

Positives:
+ The paper is well organized and easy to follow.
+ The generalized formulation (8, 9) unifying bilinear pooling and the non-local module is theoretically sound.
+ Good performance.

Negatives:
- Too little discussion of the linear version of CGNL using the dot product for f.
- Missing a fundamental comparison to a simple ResBlock.

The authors nicely present the generalized formulation of CGNL by unifying the two previous works of bilinear pooling and the non-local module. Although the kernelized (non-linear) correlation function f is well motivated theoretically, the form of f that achieves the best empirical performance is a linear one (the dot product). In this regard, the reviewer has the following concerns.

- Insufficient discussion of the linear form. If the reviewer understands the CGNL formulation correctly, the linear (dot-product) f (line 204) greatly simplifies CGNL into Y = X * W_theta * tr[(X * W_phi)' * (X * W_g)] = X * W_theta * tr[(X'X) * W_g * W_phi'] = s * X * W_theta, where s = tr[(X'X) * W_g * W_phi'] = tr[(X'X) * W] is just a scalar and W = W_g * W_phi'. This reformulation would be beneficial from the following viewpoints. > It reduces the parameters from {W_theta, W_phi, W_g} to {W_theta, W}, which simplifies the implementation. > It is closely related to the squeeze-and-excitation (SE) module [9]: the above formulation can be regarded as a bilinear extension of SE from the "squeeze" viewpoint, since it "squeezes" the feature map X into the bilinear form X'X, whereas SE simply employs average pooling. Such a discussion would help readers understand the method further and extend it.

- Missing comparison. Based on the above discussion, a natural baseline for the linear CGNL is a simple ResBlock, Z = BatchNorm(X * W_z) + X, while the linear CGNL is Z = BatchNorm(s * X * W_theta * W_z) + X = BatchNorm(s * X * W_tz) + X. The only difference is the scaling factor s, which is itself built from X. Through batch normalization, such a scaling might be less effective during training, so by comparing these closely related methods the authors should clarify the empirical effectiveness of CGNL. Because of this concern, the reviewer cannot fairly evaluate the impact of the method on classification performance.

[After Rebuttal] The reviewer appreciates the authors' efforts in performing the comparison experiments within such a short rebuttal period. The comparison with the standard ResBlock clarifies the effectiveness of the proposed method and helps us further understand how it works.
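The reviewer's closed-form simplification can be checked numerically; a small toy verification (our illustrative code, not part of the original review; names are ours):

    import numpy as np

    rng = np.random.default_rng(0)
    N, C = 7, 5
    X = rng.standard_normal((N, C))
    W_theta, W_phi, W_g = (rng.standard_normal((C, C)) for _ in range(3))

    # Linear (dot-product) CGNL: vec(Y) = theta * (phi^T g)
    theta = (X @ W_theta).ravel()
    s = np.dot((X @ W_phi).ravel(), (X @ W_g).ravel())
    Y = (s * theta).reshape(N, C)

    # Reviewer's closed form: Y = s * X * W_theta, s = tr[(X'X) W_g W_phi']
    s2 = np.trace(X.T @ X @ W_g @ W_phi.T)
    print(np.allclose(Y, s2 * (X @ W_theta)))   # True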
NIPS
1. What is the main contribution of the paper, and how does it extend previous works in non-local neural networks?
2. What are the strengths of the paper, particularly in its motivation and writing style?
3. What are the weaknesses of the paper, especially regarding its technical novelty and performance improvement?
4. How does the reviewer assess the significance of the proposed dense correlations in capturing subtle relationships between objects and scenes?
5. Are there any concerns or questions regarding the combination of NL module and group idea, as well as the application of compact bilinear pooling?
Review
Review This work is an extension of the non-local (NL) neural network [31]. Inspired by SENet [9], the authors generalize the non-local module to take into account the correlations between the positions of any two channels. The dense correlations bring improvements on fine-grained visual tasks.

Strengths:
1. The motivation of the paper is clear: in order to capture subtle relationships between objects and scenes, the authors propose dense correlations that extract information from any channels. The writing is good and concise.

Weaknesses:
1. Technical novelty is limited. There are two contributions in the paper: the generalized NL module and the compact representation. The first is essentially the combination of the NL module [31] with the grouping idea [2, 8, 32, 33, 35]; the second is an application of compact bilinear pooling from CVPR 2017.
2. The performance improvement is not consistent and is usually quite marginal (below 0.5%). From the visualizations in Figure 4, one may argue that the better performance comes from the dense connections rather than from the generalized NL module.
NIPS
Title AdaGAN: Boosting Generative Models

Abstract Generative Adversarial Networks (GAN) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes, where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a re-weighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove analytically that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.

1 Introduction

Imagine we have a large corpus containing unlabeled pictures of animals, and our task is to build a generative probabilistic model of the data. We run a recently proposed algorithm and end up with a model which produces impressive pictures of cats and dogs, but not a single giraffe. A natural way to fix this would be to manually remove all cats and dogs from the training set and run the algorithm on the updated corpus. The algorithm would then have no choice but to produce new animals and, by iterating this process until there are only giraffes left in the training set, we would arrive at a model generating giraffes (assuming a sufficient sample size). At the end, we aggregate the models obtained into a mixture model. Unfortunately, the described meta-algorithm requires manually removing certain pictures from the unlabeled training set at every iteration. Let us turn this into an automatic approach: rather than including or excluding a picture outright, we put continuous weights on the pictures. To this end, we train a binary classifier to separate the "true" pictures of the original corpus from the "synthetic" pictures generated by the mixture of all the models trained so far. We would expect the classifier to make confident predictions for the true pictures of animals missed by the model (giraffes), because there are no synthetic pictures nearby to be confused with them. By a similar argument, the classifier should make less confident predictions for the true pictures containing animals already generated by one of the trained models (cats and dogs).
For each picture in the corpus, we can thus use the classifier's confidence to compute a weight which we use for that picture in the next iteration, to be performed on the re-weighted dataset. The present work provides a principled way to perform this re-weighting, with theoretical guarantees showing that the resulting mixture models indeed approach the true data distribution.1
Footnote 1: Note that the term "mixture" should not be interpreted to imply that each component models only one mode: the models to be combined into a mixture can themselves cover multiple modes.

ALGORITHM 1 AdaGAN, a meta-algorithm to construct a "strong" mixture of T individual generative models (e.g. GANs), trained sequentially.
Input: Training sample $S_N := \{X_1, \ldots, X_N\}$.
Output: Mixture generative model $G = G_T$.
Train a vanilla GAN $G_1 = \mathrm{GAN}(S_N, W_1)$ with a uniform weight $W_1 = (1/N, \ldots, 1/N)$ over the training points.
for $t = 2, \ldots, T$ do
    $\beta_t = \mathrm{ChooseMixtureWeight}(t)$  # choose the overall weight of the next mixture component
    $W_t = \mathrm{UpdateTrainingWeights}(G_{t-1}, S_N, \beta_t)$  # update the weight of each training example
    $G^c_t = \mathrm{GAN}(S_N, W_t)$  # train the t-th "weak" component generator
    $G_t = (1 - \beta_t) G_{t-1} + \beta_t G^c_t$  # update the overall model: a mixture of $G_{t-1}$ and $G^c_t$
end for

Before discussing how to build the mixture, let us consider the question of building a single generative model. A recent trend in modelling high-dimensional data such as natural images is to use neural networks [1, 2]. One popular approach is Generative Adversarial Networks (GAN) [2], where the generator is trained adversarially against a classifier, which tries to differentiate the true from the generated data. While the original GAN algorithm often produces realistic-looking data, several issues have been reported in the literature, among them the missing modes problem, where the generator converges to only one or a few modes of the data distribution, thus not providing enough variability in the generated data. This seems to match the situation described earlier, which is why we will most often illustrate our algorithm with a GAN as the underlying base generator. We call it AdaGAN, for Adaptive GAN, but we could actually use any other generator: a Gaussian mixture model, a VAE [1], a WGAN [3], or even an unrolled [4] or mode-regularized GAN [5], both of which were specifically developed to tackle the missing modes problem. Thus, we do not aim at improving the original GAN or any other generative algorithm. We rather propose and analyse a meta-algorithm that can be used on top of any of them. This meta-algorithm is similar in spirit to AdaBoost in the sense that each iteration corresponds to learning a "weak" generative model (e.g., a GAN) with respect to a re-weighted data distribution. The weights change over time to focus on the "hard" examples, i.e. those that the mixture has not been able to properly generate so far.
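For concreteness, the meta-loop of Algorithm 1 can be sketched in a few lines. This is a minimal sketch, not the authors' released code: train_gan and update_training_weights are assumed black boxes, and generator objects are assumed to expose a sample method.

```python
import numpy as np

def adagan(train_gan, update_training_weights, data, T):
    """Sketch of Algorithm 1: sequentially train T "weak" generators on
    re-weighted data and aggregate them into an additive mixture."""
    weights = np.full(len(data), 1.0 / len(data))    # W_1: uniform over training points
    generators, mix_weights = [], []
    for t in range(1, T + 1):
        if t > 1:
            beta_t = 1.0 / t                          # ChooseMixtureWeight: beta_t = 1/t
            weights = update_training_weights(generators, mix_weights, data, beta_t)
            mix_weights = [(1.0 - beta_t) * a for a in mix_weights]
            mix_weights.append(beta_t)                # mixture becomes (1-b)*G_{t-1} + b*G_t^c
        else:
            mix_weights = [1.0]
        generators.append(train_gan(data, weights))   # t-th "weak" component G_t^c
    return generators, mix_weights

def sample_mixture(generators, mix_weights, n, rng=None):
    """Two-step sampling from the mixture G_T: pick components, then sample."""
    rng = np.random.default_rng(rng)
    counts = rng.multinomial(n, mix_weights)
    return np.concatenate([g.sample(k) for g, k in zip(generators, counts) if k > 0])
```

With $\beta_t = 1/t$, every component ends up with weight $1/T$ in the final mixture, which matches the heuristic discussed in Section 3.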
Related Work Several authors [6, 7, 8] have proposed to use boosting techniques in the context of density estimation by incrementally adding components in the log domain. This idea was applied to GANs in [8]. A major downside of these approaches is that the resulting mixture is a product of components, and sampling from such a model is nontrivial (at least when applied to GANs, where the model density is not expressed analytically) and requires techniques such as Annealed Importance Sampling [9] for the normalization. When the log-likelihood can be computed, [10] proposed to use an additive mixture model. They derived the update rule by computing the steepest-descent direction when adding a component with infinitesimal weight. However, their results do not apply once the weight β becomes non-infinitesimal. In contrast, for any fixed weight of the new component, our approach gives the overall optimal update (rather than just the best direction) for a specified f-divergence. In both theories, improvements of the mixture are guaranteed only if the new "weak" learner is still good enough (see Conditions (10) and (11)). Similarly, [11] studied the construction of mixtures minimizing the Kullback divergence and proposed a greedy procedure for doing so. They also proved that, under certain conditions, finite mixtures can approximate arbitrary mixtures at a rate 1/k, where k is the number of components in the mixture, when the weight of each newly added component is 1/k. These results are specific to the Kullback divergence but are consistent with our more general results. An additive procedure similar to ours was proposed in [12], but with a different re-weighting scheme, which is not motivated by a theoretical analysis of optimality conditions. On every new iteration the authors run a GAN on the k training examples with maximal values of the discriminator from the last iteration. Finally, many papers investigate completely different approaches to addressing the same issue by directly modifying the training objective of an individual GAN. For instance, [5] add an autoencoding cost to the training objective of the GAN, while [4] allow the generator to "look a few steps ahead" when making a gradient step.

The paper is organized as follows. In Section 2 we present our main theoretical results regarding iterative optimization of mixture models under general f-divergences. In Section 2.4 we show that if the optimization at each step is perfect, the process converges to the true data distribution at an exponential rate (or even in a finite number of steps, for which we provide a necessary and sufficient condition). Then we show in Section 2.5 that imperfect solutions still lead to an exponential rate of convergence under certain "weak learnability" conditions. These results naturally lead to a new boosting-style iterative procedure for constructing generative models. When used with GANs, it results in our AdaGAN algorithm, detailed in Section 3. Finally, we report initial empirical results in Section 4, where we compare AdaGAN with several benchmarks, including the original GAN and a uniform mixture of multiple independently trained GANs. Some of the new theoretical results are reported without proofs, which can be found in the appendices.

2 Minimizing f-divergence with Mixtures
2.1 Preliminaries and notations
Generative Density Estimation In density estimation, one tries to approximate a real data distribution $P_d$, defined over the data space $\mathcal{X}$, by a model distribution $P_{model}$. In the generative approach, one builds a function $G : \mathcal{Z} \to \mathcal{X}$ that transforms a fixed probability distribution $P_Z$ (often called the noise distribution) over a latent space $\mathcal{Z}$ into a distribution over $\mathcal{X}$. Hence $P_{model}$ is the pushforward of $P_Z$, i.e. $P_{model}(A) = P_Z(G^{-1}(A))$. With this approach it is in general impossible to compute the density $dP_{model}(x)$ and the log-likelihood of the training data under the model, but one can easily sample from $P_{model}$ by sampling from $P_Z$ and applying $G$.
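The sampling recipe just described is literally one line. A minimal sketch, assuming G is any callable mapping latent codes to data points and taking $P_Z$ to be a standard Gaussian (a common choice, though the text leaves $P_Z$ generic):

```python
import numpy as np

def sample_pushforward(G, n_samples, latent_dim, rng=None):
    """Sample from P_model, the pushforward of P_Z under G:
    draw z ~ P_Z and return x = G(z)."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((n_samples, latent_dim))  # z ~ P_Z (here: N(0, I))
    return G(z)                                       # x ~ P_model
```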
Thus, to construct G, instead of comparing $P_{model}$ directly with $P_d$, one compares their samples. To do so, one uses a similarity measure $D(P_{model}\,\|\,P_d)$ which can be estimated from samples of those distributions, and thus approximately minimized over a class $\mathcal{G}$ of functions.

f-Divergences In order to measure the agreement between the model distribution and the true distribution we will use an f-divergence defined in the following way:
$$D_f(Q\,\|\,P) := \int f\!\left(\frac{dQ}{dP}(x)\right) dP(x) \qquad (1)$$
for any pair of distributions P, Q with densities dP, dQ with respect to some dominating reference measure µ (we refer to Appendix D for more details about such divergences and their domain of definition). Here we assume that f is convex, defined on (0, ∞), and satisfies f(1) = 0. We will denote by $\mathcal{F}$ the set of such functions.2 As demonstrated in [16, 17], several commonly used symmetric f-divergences are Hilbertian metrics, which in particular means that their square root satisfies the triangle inequality. This is true for the Jensen–Shannon divergence,3 the Hellinger distance, and the Total Variation, among others. We will denote by $\mathcal{F}_H$ the set of functions f such that $D_f$ is a Hilbertian metric.
Footnote 2: Examples of f-divergences include the Kullback–Leibler divergence (obtained for $f(x) = x \log x$) and the Jensen–Shannon divergence ($f(x) = -(x+1)\log\frac{x+1}{2} + x \log x$). Other examples can be found in [13]. For further details we refer to Section 1.3 of [14] and [15].
Footnote 3: Which means such a property can be used in the context of the original GAN algorithm.

GAN and f-divergences The original GAN algorithm [2] optimizes the following criterion:
$$\min_G \max_D \; \mathbb{E}_{P_d}[\log D(X)] + \mathbb{E}_{P_Z}[\log(1 - D(G(Z)))], \qquad (2)$$
where D and G are two functions represented by neural networks. This optimization is performed on a pair of samples (a training sample from $P_d$ and a "fake" sample from $P_Z$), which corresponds to approximating the above criterion by using the empirical distributions. In the non-parametric limit for D, this is equivalent to minimizing the Jensen–Shannon divergence [2]. This point of view can be generalized to any other f-divergence [13]. Because of this strong connection between adversarial training of generative models and minimization of f-divergences, we cast the results of this section into the context of general f-divergences.

Generative Mixture Models In order to model complex data distributions, it can be convenient to use a mixture model of the following form: $P^T_{model} := \sum_{i=1}^T \alpha_i P_i$, where $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$, and each of the T components is a generative density model. This is natural in the generative context, since sampling from a mixture corresponds to a two-step sampling, where one first picks the mixture component (according to the multinomial distribution with parameters $\alpha_i$) and then samples from it. Also, this allows one to construct complex models from simpler ones.
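To make definition (1) concrete, here is a toy numerical sketch evaluating $D_f$ for discrete distributions, using the KL and Jensen–Shannon generators from footnote 2 (an illustration only; it assumes the two supports overlap):

```python
import numpy as np

def f_divergence(f, q, p, eps=1e-12):
    """D_f(Q || P) = sum_x p(x) * f(q(x) / p(x)) for discrete distributions."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    return float(np.sum(p * f(q / np.maximum(p, eps))))

f_kl = lambda x: x * np.log(np.maximum(x, 1e-12))            # f(x) = x log x
f_js = lambda x: -(x + 1) * np.log((x + 1) / 2) + f_kl(x)    # JS generator (footnote 2)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(f_divergence(f_kl, q, p), f_divergence(f_js, q, p))    # both are 0 iff q == p
```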
2.2 Incremental Mixture Building
We restrict ourselves to the case of f-divergences and assume that, given an i.i.d. sample from any unknown distribution P, we can construct a simple model $Q \in \mathcal{G}$ which approximately minimizes4
$$\min_{Q \in \mathcal{G}} D_f(Q\,\|\,P). \qquad (3)$$
Footnote 4: One example of such a setting is running GANs.
Instead of modelling the data with a single distribution, we now want to model it with a mixture of distributions $P_i$, where each $P_i$ is obtained by a training procedure of the form (3) with (possibly) different target distributions P for each i. A natural way to build a mixture is to do it incrementally: we train the first model $P_1$ to minimize $D_f(P_1\,\|\,P_d)$ and set the corresponding weight to $\alpha_1 = 1$, leading to $P^1_{model} = P_1$. Then, after having trained t components $P_1, \ldots, P_t \in \mathcal{G}$, we can form the (t+1)-st mixture model by adding a new component Q with weight β as follows:
$$P^{t+1}_{model} := \sum_{i=1}^{t} (1-\beta)\,\alpha_i P_i + \beta Q, \qquad (4)$$
where $\beta \in [0, 1]$ and $Q \in \mathcal{G}$ is computed by minimizing
$$\min_Q D_f((1-\beta)P_g + \beta Q\,\|\,P_d), \qquad (5)$$
where we denoted $P_g := P^t_{model}$ the current generative mixture model before adding the new component. We do not expect to find the optimal Q that minimizes (5) at each step, but we aim at constructing some Q that slightly improves our current approximation of $P_d$, i.e. such that, for some c < 1,
$$D_f((1-\beta)P_g + \beta Q\,\|\,P_d) \leq c \cdot D_f(P_g\,\|\,P_d). \qquad (6)$$
This greedy approach has a significant drawback in practice. As we build up the mixture, we need to make β decrease (as $P^t_{model}$ approximates $P_d$ better and better, one should make the correction at each step smaller and smaller). Since we are approximating (5) using samples from both distributions, this means that the sample from the mixture will only contain a fraction β of examples from Q. So, as t increases, getting meaningful information from a sample so as to tune Q becomes harder and harder (the information is "diluted"). To address this issue, we propose to optimize an upper bound on (5) which involves a term of the form $D_f(Q\,\|\,R)$ for some distribution R, which can be computed as a re-weighting of the original data distribution $P_d$.
This procedure is reminiscent of the AdaBoost algorithm [18], which combines multiple weak predictors into one strong composition. On each step AdaBoost adds a new predictor to the current composition, trained to minimize the binary loss on the re-weighted training set. The weights are constantly updated to bias the next weak learner towards the "hard" examples, which were incorrectly classified during previous stages.
In the following we will analyze the properties of (5) and derive upper bounds that provide practical optimization criteria for building the mixture. We will also show that, under certain assumptions, the minimization of the upper bound leads to the optimum of the original criterion.

2.3 Upper Bounds
We provide two upper bounds on the divergence of the mixture in terms of the divergence of the additive component Q with respect to some reference distribution R.

Lemma 1 Given two distributions $P_d$, $P_g$ and some $\beta \in [0,1]$, then, for any Q and R, and $f \in \mathcal{F}_H$:
$$\sqrt{D_f((1-\beta)P_g + \beta Q\,\|\,P_d)} \leq \sqrt{\beta D_f(Q\,\|\,R)} + \sqrt{D_f((1-\beta)P_g + \beta R\,\|\,P_d)}. \qquad (7)$$
If, more generally, $f \in \mathcal{F}$ but $\beta\,dR \leq dP_d$, then:
$$D_f((1-\beta)P_g + \beta Q\,\|\,P_d) \leq \beta D_f(Q\,\|\,R) + (1-\beta)\,D_f\!\left(P_g\,\Big\|\,\frac{P_d - \beta R}{1-\beta}\right). \qquad (8)$$
We can thus exploit those bounds by introducing some well-chosen distribution R and then minimizing them with respect to Q. A natural choice for R is a distribution that minimizes the last term of the upper bound (which does not depend on Q). Our main result indicates the shape of the distributions minimizing the right-most terms in those bounds.

Theorem 1 For any f-divergence $D_f$, with $f \in \mathcal{F}$ and f differentiable, any fixed distributions $P_d$, $P_g$, and any $\beta \in (0,1]$, the minimizer of (5) over all probability distributions has density
$$dQ^*_\beta(x) = \frac{1}{\beta}\big(\lambda^* dP_d(x) - (1-\beta)\,dP_g(x)\big)_+ = \frac{dP_d}{\beta}\left(\lambda^* - (1-\beta)\frac{dP_g}{dP_d}\right)_+ \qquad (9)$$
for the unique $\lambda^* \in [\beta, 1]$ satisfying $\int dQ^*_\beta = 1$. Also, $\lambda^* = 1$ if and only if $P_d((1-\beta)\,dP_g > dP_d) = 0$, which is equivalent to $\beta\,dQ^*_\beta = dP_d - (1-\beta)\,dP_g$.

Theorem 2 Given two distributions $P_d$, $P_g$ and some $\beta \in (0,1]$, assume $P_d(dP_g = 0) < \beta$, and let $f \in \mathcal{F}$.
The problem
$$\min_{Q:\,\beta dQ \leq dP_d} D_f\!\left(P_g\,\Big\|\,\frac{P_d - \beta Q}{1-\beta}\right)$$
has a solution with density
$$dQ^\dagger_\beta(x) = \frac{1}{\beta}\big(dP_d(x) - \lambda^\dagger(1-\beta)\,dP_g(x)\big)_+$$
for the unique $\lambda^\dagger \geq 1$ that satisfies $\int dQ^\dagger_\beta = 1$.

Surprisingly, in both Theorems 1 and 2 the solutions do not depend on the choice of the function f, which means that the solution is the same for any f-divergence.5 Note that $\lambda^*$ is implicitly defined by a fixed-point equation. In Section 3 we will show how it can be computed efficiently in the case of empirical distributions.
Footnote 5: In particular, by replacing f with $f^\circ(x) := x f(1/x)$, we get the same solution for the criterion written in the other direction. Hence the order in which we write the divergence does not matter, and the optimal solution is optimal for both orders.

2.4 Convergence Analysis for Optimal Updates
In the previous section we derived analytical expressions for the distributions R minimizing the last terms in the upper bounds (8) and (7). Assuming Q can perfectly match R, i.e. $D_f(Q\,\|\,R) = 0$, we are now interested in the convergence of the mixture (4) to the true data distribution $P_d$ when $Q = Q^*_\beta$ or $Q = Q^\dagger_\beta$. We start with simple results showing that adding $Q^*_\beta$ or $Q^\dagger_\beta$ to the current mixture would yield a strict improvement of the divergence.

Lemma 2 (exponential improvements, cf. (6)) Under the conditions of Theorem 1, we have
$$D_f\big((1-\beta)P_g + \beta Q^*_\beta\,\|\,P_d\big) \leq D_f\big((1-\beta)P_g + \beta P_d\,\|\,P_d\big) \leq (1-\beta)\,D_f(P_g\,\|\,P_d).$$
Under the conditions of Theorem 2, we have
$$D_f\!\left(P_g\,\Big\|\,\frac{P_d - \beta Q^\dagger_\beta}{1-\beta}\right) \leq D_f(P_g\,\|\,P_d) \quad \text{and} \quad D_f\big((1-\beta)P_g + \beta Q^\dagger_\beta\,\|\,P_d\big) \leq (1-\beta)\,D_f(P_g\,\|\,P_d).$$
Imagine repeatedly adding T new components to the current mixture $P_g$, where on every step we use the same weight β and choose the components described in Theorem 1. In this case Lemma 2 guarantees that the original objective value $D_f(P_g\,\|\,P_d)$ would be reduced at least to $(1-\beta)^T D_f(P_g\,\|\,P_d)$. This exponential rate of convergence, which at first may look surprisingly good, is simply explained by the fact that $Q^*_\beta$ depends on the true distribution $P_d$, which is of course unknown. Lemma 2 also suggests setting β as large as possible, since we assume we can compute the optimal mixture component (which for β = 1 is $P_d$). However, in practice we may prefer to keep β relatively small, preserving what we learned so far through $P_g$: for instance, when $P_g$ already covered part of the modes of $P_d$ and we want Q to cover the remaining ones. We provide further discussion on choosing β in Section 3.

2.5 Weak to Strong Learnability
In practice the component Q that we add to the mixture is not exactly $Q^*_\beta$ or $Q^\dagger_\beta$, but rather an approximation to them. In this section we show that if this approximation is good enough, then we retain the property (6) of exponential improvements. Looking again at Lemma 1, we notice that the first upper bound is less tight than the second one. Indeed, take the optimal distributions provided by Theorems 1 and 2 and plug them back as R into the upper bounds of Lemma 1. Also assume that Q can match R exactly, i.e. $D_f(Q\,\|\,R) = 0$. In this case both sides of (7) are equal to $D_f((1-\beta)P_g + \beta Q^*_\beta\,\|\,P_d)$, which is the optimal value for the original objective (5). On the other hand, (8) does not become an equality, and the r.h.s. is not optimal for (5). However, earlier we agreed that our aim is to reach the modest goal (6), and next we show that this is indeed possible. Corollaries 1 and 2 provide sufficient conditions for strict improvements when we use the upper bounds (8) and (7), respectively.

Corollary 1 Given $P_d$, $P_g$, and some $\beta \in (0,1]$, assume $P_d\!\left(\frac{dP_g}{dP_d} = 0\right) < \beta$.
Let $Q^\dagger_\beta$ be as defined in Theorem 2. If Q is such that
$$D_f(Q\,\|\,Q^\dagger_\beta) \leq \gamma D_f(P_g\,\|\,P_d) \qquad (10)$$
for some $\gamma \in [0,1]$, then
$$D_f((1-\beta)P_g + \beta Q\,\|\,P_d) \leq \big(1 - \beta(1-\gamma)\big)\,D_f(P_g\,\|\,P_d).$$

Corollary 2 Let $f \in \mathcal{F}_H$. Take any $\beta \in (0,1]$, $P_d$, $P_g$, and let $Q^*_\beta$ be as defined in Theorem 1. If Q is such that
$$D_f(Q\,\|\,Q^*_\beta) \leq \gamma D_f(P_g\,\|\,P_d) \qquad (11)$$
for some $\gamma \in [0,1]$, then
$$D_f((1-\beta)P_g + \beta Q\,\|\,P_d) \leq C_{\gamma,\beta} \cdot D_f(P_g\,\|\,P_d),$$
where $C_{\gamma,\beta} = \big(\sqrt{\gamma\beta} + \sqrt{1-\beta}\big)^2$ is strictly smaller than 1 as soon as $\gamma < \beta/4$ (and $\beta > 0$).

Conditions (10) and (11) may be compared to the "weak learnability" condition of AdaBoost. As long as our weak learner is able to solve the surrogate problem (3) of matching respectively $Q^\dagger_\beta$ or $Q^*_\beta$ accurately enough, the original objective (5) is guaranteed to decrease as well. It should however be noted that Condition (11) with $\gamma < \beta/4$ is perhaps too strong to call "weak learnability". Indeed, as already mentioned before, the weight β is expected to decrease to zero as the number of components in the mixture distribution $P_g$ increases. This leads to $\gamma \to 0$, making it harder to meet Condition (11). This obstacle may be partially resolved by the fact that we will use a GAN to fit Q, which corresponds to a relatively rich6 class of models $\mathcal{G}$ in (3). In other words, our weak learner is not so weak. On the other hand, Condition (10) of Corollary 1 is milder. No matter what $\gamma \in [0,1]$ and $\beta \in (0,1]$ are, the new component Q is guaranteed to strictly improve the objective functional. This comes at the price of the additional condition $P_d(dP_g/dP_d = 0) < \beta$, which asserts that β should be larger than the mass of true data $P_d$ missed by the current model $P_g$. We argue that this is a rather reasonable condition: if $P_g$ misses many modes of $P_d$, we would prefer assigning a relatively large weight β to the new component Q. However, in practice, both Conditions (10) and (11) are difficult to check. A rigorous analysis of situations in which they are guaranteed is a direction for future research.
Footnote 6: The hardness of meeting Condition (11) of course largely depends on the class of models $\mathcal{G}$ used to fit Q in (3). For now we ignore this question and leave it for future research.

3 AdaGAN
We now describe the functions ChooseMixtureWeight and UpdateTrainingWeights of Algorithm 1. The complete AdaGAN meta-algorithm, with the details of UpdateTrainingWeights and ChooseMixtureWeight, is summarized in Algorithm 3 of Appendix A.

UpdateTrainingWeights At each iteration we add a new component Q to the current mixture $P_g$ with weight β. The component Q should approach the "optimal target" $Q^*_\beta$ provided by (9) in Theorem 1. This distribution depends on the density ratio $dP_g/dP_d$, which is not directly accessible, but it can be estimated using adversarial training. Indeed, we can train a separate mixture discriminator $D_M$ to distinguish between samples from $P_d$ and samples from the current mixture $P_g$. It is known [13] that for an arbitrary f-divergence there exists a corresponding function h such that the values of the optimal discriminator $D_M$ are related to the density ratio by
$$\frac{dP_g}{dP_d}(x) = h\big(D_M(x)\big). \qquad (12)$$
We can thus replace $dP_g(x)/dP_d(x)$ in (9) with $h(D_M(x))$. For the Jensen–Shannon divergence, used by the original GAN algorithm, $h(z) = \frac{1-z}{z}$. In practice, when we compute $dQ^*_\beta$ on the training sample $S_N = (X_1, \ldots, X_N)$, each example $X_i$ receives weight
$$w_i = \frac{1}{\beta N}\big(\lambda^* - (1-\beta)\,h(d_i)\big)_+, \quad \text{where } d_i = D_M(X_i). \qquad (13)$$
The only remaining task is to determine $\lambda^*$.
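Before the closed-form search derived next, note that $\lambda^*$ is pinned down by the normalization $\sum_i w_i = 1$, and the total weight in (13) is nondecreasing in λ, so a simple bisection over $[\beta, 1]$ already works. A minimal sketch for the Jensen–Shannon case $h(z) = (1-z)/z$, assuming discriminator outputs $d_i \in (0, 1]$ and uniform initial weights $p_i = 1/N$ (illustrative, not the authors' Algorithm 2):

```python
import numpy as np

def example_weights(d, beta, lam):
    """Per-example weights w_i of Eq. (13) for a given lambda;
    d holds the discriminator outputs d_i = D_M(X_i), assumed in (0, 1]."""
    h = (1.0 - d) / d                     # JS case: h(z) = (1 - z) / z
    return np.maximum(lam - (1.0 - beta) * h, 0.0) / (beta * len(d))

def solve_lambda(d, beta, tol=1e-10):
    """Bisection for lambda* in [beta, 1] such that sum_i w_i = 1
    (such a lambda* exists and is unique by Theorem 1)."""
    lo, hi = beta, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if example_weights(d, beta, mid).sum() < 1.0:
            lo = mid
        else:
            hi = mid
    return hi
```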
As the weights $w_i$ in (13) must sum to 1, we get:
$$\lambda^* = \frac{\beta}{\sum_{i \in I(\lambda^*)} p_i}\left(1 + \frac{1-\beta}{\beta}\sum_{i \in I(\lambda^*)} p_i\,h(d_i)\right), \qquad (14)$$
where $I(\lambda) := \{i : \lambda > (1-\beta)\,h(d_i)\}$. To find $I(\lambda^*)$, we sort the $h(d_i)$ in increasing order: $h(d_1) \leq \ldots \leq h(d_N)$. Then $I(\lambda^*)$ is a set consisting of the first k indices. We then successively test all values of k until the λ given by (14) verifies $(1-\beta)\,h(d_k) < \lambda \leq (1-\beta)\,h(d_{k+1})$. This procedure is guaranteed to converge by Theorem 1. It is summarized in Algorithm 2 of Appendix A.

ChooseMixtureWeight For every β there is an optimal re-weighting scheme with weights given by (13). If the GAN could perfectly approximate its target $Q^*_\beta$, then choosing β = 1 would be optimal, because $Q^*_1 = P_d$. But in practice GANs cannot do that, so we propose to choose β heuristically by imposing that each generator of the final mixture model has the same weight. This yields $\beta_t = 1/t$, where t is the iteration index. Other heuristics are proposed in Appendix B, but did not lead to any significant difference.

The optimal discriminator In practice it is of course hard to find the optimal discriminator $D_M$ achieving the global maximum of the variational representation of the f-divergence and verifying (12). For the JS-divergence this would mean that $D_M$ is the classifier achieving the minimal expected cross-entropy loss in the binary classification between $P_g$ and $P_d$. In practice, we observed that the re-weighting (13) leads to the desired property of emphasizing at least some of the missing modes, as long as $D_M$ distinguishes reasonably well between data points already covered by the current model $P_g$ and those which are still missing. We found early stopping (while training $D_M$) sufficient to achieve this. In the worst case, when $D_M$ overfits and returns 1 for all true data points, the re-weighting simply leads to the uniform distribution over the training set.

4 Experiments
We ran AdaGAN7 on toy datasets, for which we can interpret the missing modes in a clear and reproducible way, and on MNIST, which is a high-dimensional dataset. The goal of these experiments was not to evaluate the visual quality of individual sample points, but to demonstrate that the re-weighting scheme of AdaGAN promotes diversity and effectively covers the missing modes.
Footnote 7: Code available online at https://github.com/tolstikhin/adagan

Toy Datasets Our target distribution is a mixture of isotropic Gaussians over $\mathbb{R}^2$. The distances between the means are large enough to roughly avoid overlaps between different Gaussian components. We vary the number of modes to test how well each algorithm performs when there are fewer or more expected modes. We compare the baseline GAN algorithm with AdaGAN variations, and with other meta-algorithms that all use the same underlying GAN procedure. For details on these algorithms and on the architectures of the underlying generator and discriminator, see Appendix B. To evaluate how well the generated distribution matches the target distribution, we use a coverage metric C: the probability mass of the true data "covered" by the model $P_{model}$. More precisely, we compute $C := P_d(dP_{model} > t)$ with t such that $P_{model}(dP_{model} > t) = 0.95$. This metric is more interpretable than the likelihood, making it easier to assess the difference in performance of the algorithms. To approximate the density of $P_{model}$ we use kernel density estimation, with the bandwidth chosen by cross-validation. We repeat the run 35 times with the same parameters (but different random seeds).
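The coverage metric C just described is straightforward to approximate from samples. A minimal sketch using scikit-learn's KDE with a cross-validated bandwidth (the function name and bandwidth grid are illustrative choices, not taken from the paper):

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

def coverage(true_data, model_samples, mass=0.95,
             bandwidths=np.logspace(-2, 0, 10)):
    """Coverage C = P_d(dP_model > t), with t chosen so that
    P_model(dP_model > t) = mass; dP_model is approximated by a KDE
    fit on model samples, bandwidth picked by cross-validation."""
    grid = GridSearchCV(KernelDensity(), {"bandwidth": bandwidths}, cv=5)
    grid.fit(model_samples)
    kde = grid.best_estimator_
    # Log-density of the model at its own samples; pick t as the
    # (1 - mass) quantile so that a `mass` fraction lies above it.
    log_t = np.quantile(kde.score_samples(model_samples), 1.0 - mass)
    # Fraction of true data falling in the model's high-density region.
    return float(np.mean(kde.score_samples(true_data) > log_t))
```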
For each run, the learning rate is optimized using a grid search on a validation set. We report the median over those multiple runs, together with the interval corresponding to the 5% and 95% percentiles. Figure 2 summarizes the performance of the algorithms as a function of the number of iterations T. Both the ensemble and the boosting approaches significantly outperform the vanilla GAN and the "best of T" algorithm. Interestingly, the improvements are significant even after just one or two additional iterations (T = 2 or 3). Our boosting approach converges much faster. In addition, its variance is much lower, improving the likelihood that a given run gives good results. On this setup, the vanilla GAN approach has a significant number of catastrophic failures (visible in the lower bounds of the intervals). Further empirical results are available in Appendix B, where we compare AdaGAN variations to several other baseline meta-algorithms in more detail (Table 1) and combine AdaGAN with unrolled GANs (UGAN) [4] (Figure 3). Interestingly, Figure 3 shows that AdaGAN run with UGAN outperforms the vanilla UGAN on the toy datasets, demonstrating the advantage of using AdaGAN as a way to further improve the mode coverage of any existing GAN implementation.

MNIST and MNIST3 We ran experiments both on the original MNIST and on the 3-digit MNIST (MNIST3) [5, 4] dataset, obtained by concatenating 3 randomly chosen MNIST images to form a 3-digit number between 0 and 999. According to [5, 4], MNIST contains 10 modes, while MNIST3 contains 1000 modes, and these modes can be detected using a pre-trained MNIST classifier. We combined AdaGAN both with simple MLP GANs and with DCGANs [19]. We used T ∈ {5, 10}, tried models of various sizes, and performed a reasonable amount of hyperparameter search. Similarly to [4, Sec 3.3.1], we failed to reproduce the missing modes problem for MNIST3 reported in [5] and found that simple GAN architectures are capable of generating all 1000 numbers. The authors of [4] proposed to artificially reintroduce the missing modes by limiting the generators' flexibility. In our experiments, GANs trained with the architectures reported in [4] were often generating poor-looking digits. As a result, the pre-trained MNIST classifier was outputting random labels, which again led to full coverage of the 1000 numbers. We tried to threshold the confidence of the pre-trained classifier, but decided that this metric was too ad hoc. For MNIST we noticed that the re-weighted distribution was often concentrating its mass on digits with very specific strokes: on different rounds it could highlight thick, thin, vertical, or diagonal digits, indicating that these traits were underrepresented in the generated samples (see Figure 2). This suggests that AdaGAN does a reasonable job at picking up different modes of the dataset, but also that there are more than 10 modes in MNIST (and more than 1000 in MNIST3). It is not clear how to evaluate the quality of generative models in this context. We also tried to use the "inversion" metric discussed in Section 3.4.1 of [4]. For MNIST3 we noticed that a single GAN was capable of reconstructing most of the training points very accurately, both visually and in the $\ell_2$-reconstruction sense. The "inversion" metric tests whether the trained model can generate certain examples or not, but unfortunately it does not take into account the probabilities of doing so.
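As a side note, the MNIST3 dataset used above is easy to reproduce. A minimal sketch, assuming horizontal concatenation of three 28×28 digits (the exact construction in [5, 4] may differ in details):

```python
import numpy as np

def make_mnist3(images, labels, n_samples, rng=None):
    """Build an MNIST3-style dataset by concatenating three randomly
    chosen MNIST digits into a 3-digit number in [0, 999].
    images: (N, 28, 28) array; labels: (N,) integer array."""
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(images), size=(n_samples, 3))
    # Each sample is a 28x84 image; its label encodes the 3-digit number.
    x = np.concatenate([images[idx[:, j]] for j in range(3)], axis=2)
    y = 100 * labels[idx[:, 0]] + 10 * labels[idx[:, 1]] + labels[idx[:, 2]]
    return x, y
```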
5 Conclusion
We studied the problem of minimizing general f-divergences with additive mixtures of distributions. The main contribution of this work is a detailed theoretical analysis, which naturally leads to an iterative greedy procedure: on every iteration the mixture is updated with a new component, which minimizes the f-divergence to a re-weighted target distribution. We provided conditions under which this procedure is guaranteed to converge to the target distribution at an exponential rate. While our results can be combined with any generative modelling technique, we focused on GANs and provided a boosting-style algorithm, AdaGAN. Preliminary experiments show that AdaGAN successfully produces a mixture which iteratively covers the missing modes.
1. How does the paper approach the problem of missing modes in data?
2. What are the strengths of the proposed meta-algorithm, particularly in its ability to incorporate various sub-models?
3. Are there any concerns regarding the optimization of the discriminator D_M in the calculation of data point weights?
4. How does the paper analyze the optimal and suboptimal distributions added as mixture components?
5. Are there any limitations in applying AdaBoost-style and other mixture-model ideas in the context of GANs?
Review
Review This paper builds a meta-algorithm that can incorporate various sub-models to tackle the missing modes problem. It skillfully applies AdaBoost and other mixture-model ideas in the context of GANs. The paper theoretically analyzes the optimal and suboptimal distributions that are added as mixture components, and derives the corresponding convergence results. The paper is well organized and the details are clearly elaborated. There are some places that would be better explained more clearly:
1. Does the mixture model weight match the corresponding mode height well? Many of the (median) results in Table 1 are larger than 0.95, which may suggest some mode drop. On the toy datasets, this could be probed by changing the 0.95 used in calculating C to 0.1 or even 0.05 and inspecting the results.
2. In the calculation of the data point weights, the algorithm requires an optimal discriminator D_M between the original data and the current mixture. Can this optimal discriminator be obtained during training in practice? If not, how does this influence the data point weights and the new model component?
In general, the paper is theoretically sound and the results are supportive.
NIPS
Title AdaGAN: Boosting Generative Models Abstract Generative Adversarial Networks (GAN) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a re-weighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove analytically that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. N/A Generative Adversarial Networks (GAN) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a re-weighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove analytically that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. 1 Introduction Imagine we have a large corpus, containing unlabeled pictures of animals, and our task is to build a generative probabilistic model of the data. We run a recently proposed algorithm and end up with a model which produces impressive pictures of cats and dogs, but not a single giraffe. A natural way to fix this would be to manually remove all cats and dogs from the training set and run the algorithm on the updated corpus. The algorithm would then have no choice but to produce new animals and, by iterating this process until there’s only giraffes left in the training set, we would arrive at a model generating giraffes (assuming sufficient sample size). At the end, we aggregate the models obtained by building a mixture model. Unfortunately, the described meta-algorithm requires manual work for removing certain pictures from the unlabeled training set at every iteration. Let us turn this into an automatic approach, and rather than including or excluding a picture, put continuous weights on them. To this end, we train a binary classifier to separate “true” pictures of the original corpus from the set of “synthetic” pictures generated by the mixture of all the models trained so far. We would expect the classifier to make confident predictions for the true pictures of animals missed by the model (giraffes), because there are no synthetic pictures nearby to be confused with them. By a similar argument, the classifier should make less confident predictions for the true pictures containing animals already generated by one of the trained models (cats and dogs). 
For each picture in the corpus, we can thus use the classifier’s confidence to compute a weight which we use for that picture in the next iteration, to be performed on the re-weighted dataset. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. The present work provides a principled way to perform this re-weighting, with theoretical guarantees showing that the resulting mixture models indeed approach the true data distribution.1 ALGORITHM 1 AdaGAN, a meta-algorithm to construct a “strong” mixture of T individual generative models (f.ex. GANs), trained sequentially. Input: Training sample SN := {X1, . . . , XN}. Output: Mixture generative model G = GT . Train vanilla GAN G1 = GAN(SN ,W1) with a uniform weight W1 = (1/N, . . . , 1/N) over the training points for t = 2, . . . , T do #Choose the overall weight of the next mixture component βt = ChooseMixtureWeight(t) #Update the weight of each training example Wt = UpdateTrainingWeights(Gt−1, SN , βt) #Train t-th “weak” component generator Gct Gct = GAN(SN ,Wt) #Update the overall generative model: #Form a mixture of Gt−1 and Gct . Gt = (1− βt)Gt−1 + βtGct end for Before discussing how to build the mixture, let us consider the question of building a single generative model. A recent trend in modelling high dimensional data such as natural images is to use neural networks [1, 2]. One popular approach are Generative Adversarial Networks (GAN) [2], where the generator is trained adversarially against a classifier, which tries to differentiate the true from the generated data. While the original GAN algorithm often produces realistically looking data, several issues were reported in the literature, among which the missing modes problem, where the generator converges to only one or a few modes of the data distribution, thus not providing enough variability in the generated data. This seems to match the situation described earlier, which is why we will most often illustrate our algorithm with a GAN as the underlying base generator. We call it AdaGAN, for Adaptive GAN, but we could actually use any other generator: a Gaussian mixture model, a VAE [1], a WGAN [3], or even an unrolled [4] or mode-regularized GAN [5], which were both already specifically developed to tackle the missing mode problem. Thus, we do not aim at improving the original GAN or any other generative algorithm. We rather propose and analyse a meta-algorithm that can be used on top of any of them. This meta-algorithm is similar in spirit to AdaBoost in the sense that each iteration corresponds to learning a “weak” generative model (e.g., GAN) with respect to a re-weighted data distribution. The weights change over time to focus on the “hard” examples, i.e. those that the mixture has not been able to properly generate so far. Related Work Several authors [6, 7, 8] have proposed to use boosting techniques in the context of density estimation by incrementally adding components in the log domain. This idea was applied to GANs in [8]. A major downside of these approaches is that the resulting mixture is a product of components and sampling from such a model is nontrivial (at least when applied to GANs where the model density is not expressed analytically) and requires techniques such as Annealed Importance Sampling [9] for the normalization. When the log likelihood can be computed, [10] proposed to use an additive mixture model. 
They derived the update rule via computing the steepest descent direction when adding a component with infinitesimal weight. However, their results do not apply once the weight β becomes non-infinitesimal. In contrast, for any fixed weight of the new component our approach gives the overall optimal update (rather than just the best direction) for a specified f -divergence. In both theories, improvements of the mixture are guaranteed only if the new “weak” learner is still good enough (see Conditions 10&11) Similarly, [11] studied the construction of mixtures minimizing the Kullback divergence and proposed a greedy procedure for doing so. They also proved that under certain conditions, finite mixtures can approximate arbitrary mixtures at a rate 1/k where k is the number of components in the mixture when the weight of each newly added component is 1/k. These results are specific to the Kullback divergence but are consistent with our more general results. An additive procedure similar to ours was proposed in [12] but with a different re-weighting scheme, which is not motivated by a theoretical analysis of optimality conditions. On every new iteration the authors run GAN on the k training examples with maximal values of the discriminator from the last iteration. 1Note that the term “mixture” should not be interpreted to imply that each component models only one mode: the models to be combined into a mixture can themselves cover multiple modes. Finally, many papers investigate completely different approaches for addressing the same issue by directly modifying the training objective of an individual GAN. For instance, [5] add an autoencoding cost to the training objective of GAN, while [4] allow the generator to “look few steps ahead” when making a gradient step. The paper is organized as follows. In Section 2 we present our main theoretical results regarding iterative optimization of mixture models under general f -divergences. In Section 2.4 we show that if optimization at each step is perfect, the process converges to the true data distribution at exponential rate (or even in a finite number of steps, for which we provide a necessary and sufficient condition). Then we show in Section 2.5 that imperfect solutions still lead to the exponential rate of convergence under certain “weak learnability” conditions. These results naturally lead to a new boosting-style iterative procedure for constructing generative models. When used with GANs, it results in our AdaGAN algorithm, detailed in Section 3 . Finally, we report initial empirical results in Section 4, where we compare AdaGAN with several benchmarks, including original GAN and uniform mixture of multiple independently trained GANs. Part of new theoretical results are reported without proofs, which can be found in appendices. 2 Minimizing f -divergence with Mixtures 2.1 Preliminaries and notations Generative Density Estimation In density estimation, one tries to approximate a real data distribution Pd, defined over the data space X , by a model distribution Pmodel. In the generative approach one builds a function G : Z → X that transforms a fixed probability distribution PZ (often called the noise distribution) over a latent space Z into a distribution over X . Hence Pmodel is the pushforward of PZ , i.e. Pmodel(A) = PZ(G−1(A)). With this approach it is in general impossible to compute the density dPmodel(x) and the log-likelihood of the training data under the model, but one can easily sample from Pmodel by sampling from PZ and applying G. 
Thus, to construct G, instead of comparing Pmodel directly with Pd, one compares their samples. To do so, one uses a similarity measure D(Pmodel‖Pd) which can be estimated from samples of those distributions, and thus approximately minimized over a class G of functions. f -Divergences In order to measure the agreement between the model distribution and the true distribution we will use an f -divergence defined in the following way: Df (Q‖P ) := ∫ f ( dQ dP (x) ) dP (x) (1) for any pair of distributions P,Q with densities dP , dQ with respect to some dominating reference measure µ (we refer to Appendix D for more details about such divergences and their domain of definition). Here we assume that f is convex, defined on (0,∞), and satisfies f(1) = 0. We will denote by F the set of such functions. 2 As demonstrated in [16, 17], several commonly used symmetric f -divergences are Hilbertian metrics, which in particular means that their square root satisfies the triangle inequality. This is true for the Jensen-Shannon divergence3, the Hellinger distance and the Total Variation among others. We will denote by FH the set of functions f such that Df is a Hilbertian metric. GAN and f -divergences The original GAN algorithm [2] optimizes the following criterion: min G max D EPd [logD(X)] + EPZ [log(1−D(G(Z)))] , (2) where D and G are two functions represented by neural networks. This optimization is performed on a pair of samples (a training sample from Pd and a “fake” sample from PZ), which corresponds to approximating the above criterion by using the empirical distributions. In the non-parametric limit for D, this is equivalent to minimizing the Jensen-Shannon divergence [2]. This point of view can be generalized to any other f -divergence [13]. Because of this strong connection between adversarial 2Examples of f -divergences include the Kullback-Leibler divergence (obtained for f(x) = x log x) and Jensen-Shannon divergence (f(x) = −(x+ 1) log x+1 2 + x log x). Other examples can be found in [13]. For further details we refer to Section 1.3 of [14] and [15]. 3which means such a property can be used in the context of the original GAN algorithm. training of generative models and minimization of f -divergences, we cast the results of this section into the context of general f -divergences. Generative Mixture Models In order to model complex data distributions, it can be convenient to use a mixture model of the following form: PTmodel := ∑T i=1 αiPi, where αi ≥ 0, ∑ i αi = 1, and each of the T components is a generative density model. This is natural in the generative context, since sampling from a mixture corresponds to a two-step sampling, where one first picks the mixture component (according to the multinomial distribution with parameters αi) and then samples from it. Also, this allows to construct complex models from simpler ones. 2.2 Incremental Mixture Building We restrict ourselves to the case of f -divergences and assume that, given an i.i.d. sample from any unknown distribution P , we can construct a simple model Q ∈ G which approximately minimizes4 min Q∈G Df (Q ‖P ). (3) Instead of modelling the data with a single distribution, we now want to model it with a mixture of distributions Pi,where each Pi is obtained by a training procedure of the form (3) with (possibly) different target distributions P for each i. A natural way to build a mixture is to do it incrementally: we train the first model P1 to minimize Df (P1 ‖Pd) and set the corresponding weight to α1 = 1, leading to P 1model = P1. 
Then after having trained t components P1, . . . , Pt ∈ G we can form the (t+ 1)-st mixture model by adding a new component Q with weight β as follows: P t+1model := t∑ i=1 (1− β)αiPi + βQ. (4) where β ∈ [0, 1] and Q ∈ G is computed by minimizing: min Q Df ((1− β)Pg + βQ ‖Pd), (5) where we denoted Pg := P tmodel the current generative mixture model before adding the new component. We do not expect to find the optimal Q that minimizes (5) at each step, but we aim at constructing some Q that slightly improves our current approximation of Pd, i.e. such that for c < 1 Df ((1− β)Pg + βQ ‖Pd) ≤ c ·Df (Pg ‖Pd) . (6) This greedy approach has a significant drawback in practice. As we build up the mixture, we need to make β decrease (as P tmodel approximates Pd better and better, one should make the correction at each step smaller and smaller). Since we are approximating (5) using samples from both distributions, this means that the sample from the mixture will only contain a fraction β of examples from Q. So, as t increases, getting meaningful information from a sample so as to tune Q becomes harder and harder (the information is “diluted”). To address this issue, we propose to optimize an upper bound on (5) which involves a term of the form Df (Q ‖R) for some distribution R, which can be computed as a re-weighting of the original data distribution Pd. This procedure is reminiscent of the AdaBoost algorithm [18], which combines multiple weak predictors into one strong composition. On each step AdaBoost adds new predictor to the current composition, which is trained to minimize the binary loss on the re-weighted training set. The weights are constantly updated to bias the next weak learner towards “hard” examples, which were incorrectly classified during previous stages. In the following we will analyze the properties of (5) and derive upper bounds that provide practical optimization criteria for building the mixture. We will also show that under certain assumptions, the minimization of the upper bound leads to the optimum of the original criterion. 2.3 Upper Bounds We provide two upper bounds on the divergence of the mixture in terms of the divergence of the additive component Q with respect to some reference distribution R. 4One example of such a setting is running GANs. Lemma 1 Given two distributions Pd, Pg and some β ∈ [0, 1], then, for any Q and R, and f ∈ FH :√ Df ((1− β)Pg + βQ ‖Pd) ≤ √ βDf (Q ‖R) + √ Df ((1− β)Pg + βR ‖Pd) . (7) If, more generally, f ∈ F , but βdR ≤ dPd, then: Df ((1− β)Pg + βQ ‖Pd) ≤ βDf (Q ‖R) + (1− β)Df ( Pg ‖ Pd − βR 1− β ) . (8) We can thus exploit those bounds by introducing some well-chosen distributionR and then minimizing them with respect to Q. A natural choice for R is a distribution that minimizes the last term of the upper bound (which does not depend on Q). Our main result indicates the shape of the distributions minimizing the right-most terms in those bounds. Theorem 1 For any f -divergence Df , with f ∈ F and f differentiable, any fixed distributions Pd, Pg , and any β ∈ (0, 1], the minimizer of (5) over all probability distributions P has density dQ∗β(x) = 1 β (λ∗dPd(x)− (1− β)dPg(x))+ = dPd β ( λ∗ − (1− β)dPg dPd ) + . (9) for the unique λ∗ ∈ [β, 1] satisfying ∫ dQ∗β = 1. Also, λ ∗ = 1 if and only if Pd((1− β)dPg > dPd) = 0, which is equivalent to βdQ∗β = dPd − (1− β)dPg . Theorem 2 Given two distributions Pd, Pg and some β ∈ (0, 1], assume Pd (dPg = 0) < β. Let f ∈ F . 
The problem min Q:βdQ≤dPd Df ( Pg ‖ Pd − βQ 1− β ) has a solution with the density dQ†β(x) = 1 β ( dPd(x)− λ†(1− β)dPg(x) ) + for the unique λ† ≥ 1 that satisfies ∫ dQ†β = 1. Surprisingly, in both Theorems 1 and 2, the solutions do not depend on the choice of the function f , which means that the solution is the same for any f -divergence5. Note that λ∗ is implicitly defined by a fixed-point equation. In Section 3 we will show how it can be computed efficiently in the case of empirical distributions. 2.4 Convergence Analysis for Optimal Updates In previous section we derived analytical expressions for the distributions R minimizing last terms in upper bounds (8) and (7). Assuming Q can perfectly match R, i.e.Df (Q ‖R) = 0, we are now interested in the convergence of the mixture (4) to the true data distribution Pd when Q = Q∗β or Q = Q†β . We start with simple results showing that adding Q ∗ β or Q † β to the current mixture would yield a strict improvement of the divergence. Lemma 2 (Property 6: exponential improvements) Under the conditions of Theorem 1, we have Df ( (1− β)Pg + βQ∗β ∥∥Pd) ≤ Df((1− β)Pg + βPd ∥∥Pd) ≤ (1− β)Df (Pg ‖Pd). Under the conditions of Theorem 2, we have Df ( Pg ∥∥ Pd − βQ†β 1− β ) ≤ Df (Pg ‖Pd) and Df ( (1− β)Pg + βQ†β ∥∥Pd) ≤ (1− β)Df (Pg ‖Pd). Imagine repeatedly adding T new components to the current mixture Pg , where on every step we use the same weight β and choose the components described in Theorem 1. In this case Lemma 2 guarantees that the original objective value Df (Pg ‖Pd) would be reduced at least to (1− β)TDf (Pg ‖Pd). 5in particular, by replacing f with f◦(x) := xf(1/x), we get the same solution for the criterion written in the other direction. Hence the order in which we write the divergence does not matter and the optimal solution is optimal for both orders. This exponential rate of convergence, which at first may look surprisingly good, is simply explained by the fact that Q∗β depends on the true distribution Pd, which is of course unknown. Lemma 2 also suggests setting β as large as possible since we assume we can compute the optimal mixture component (which for β = 1 is Pd). However, in practice we may prefer to keep β relatively small, preserving what we learned so far through Pg: for instance, when Pg already covered part of the modes of Pd and we want Q to cover the remaining ones. We provide further discussions on choosing β in Section 3. 2.5 Weak to Strong Learnability In practice the component Q that we add to the mixture is not exactly Q∗β or Q † β , but rather an approximation to them. In this section we show that if this approximation is good enough, then we retain the property (6) (exponential improvements). Looking again at Lemma 1 we notice that the first upper bound is less tight than the second one. Indeed, take the optimal distributions provided by Theorems 1 and 2 and plug them back as R into the upper bounds of Lemma 1. Also assume that Q can match R exactly, i.e.Df (Q ‖R) = 0. In this case both sides of (7) are equal to Df ((1− β)Pg + βQ∗β ‖Pd), which is the optimal value for the original objective (5). On the other hand, (8) does not become an equality and the r.h.s. is not the optimal one for (5). However, earlier we agreed that our aim is to reach the modest goal (6) and next we show that this is indeed possible.Corollaries 1 and 2 provide sufficient conditions for strict improvements when we use the upper bounds (8) and (7) respectively. Corollary 1 Given Pd, Pg, and some β ∈ (0, 1], assume Pd ( dPg dPd = 0 ) < β. 
Let Q†β be as defined in Theorem 2. If Q is such that Df (Q ‖Q†β) ≤ γDf (Pg ‖Pd) (10) for γ ∈ [0, 1], then Df ((1− β)Pg + βQ ‖Pd) ≤ (1− β(1− γ))Df (Pg ‖Pd). Corollary 2 Let f ∈ FH . Take any β ∈ (0, 1], Pd, Pg , and let Q∗β be as defined in Theorem 1. If Q is such that Df (Q ‖Q∗β) ≤ γDf (Pg ‖Pd) (11) for some γ ∈ [0, 1], then Df ((1− β)Pg + βQ ‖Pd) ≤ Cγ,β · Df (Pg ‖Pd) , where Cγ,β =(√ γβ + √ 1− β )2 is strictly smaller than 1 as soon as γ < β/4 (and β > 0). Conditions 10 and 11 may be compared to the “weak learnability” condition of AdaBoost. As long as our weak learner is able to solve the surrogate problem (3) of matching respectively Q†β or Q ∗ β accurately enough, the original objective (5) is guaranteed to decrease as well. It should be however noted that Condition 11 with γ < β/4 is perhaps too strong to call it “weak learnability”. Indeed, as already mentioned before, the weight β is expected to decrease to zero as the number of components in the mixture distribution Pg increases. This leads to γ → 0, making it harder to meet Condition 11. This obstacle may be partially resolved by the fact that we will use a GAN to fit Q, which corresponds to a relatively rich6 class of models G in (3). In other words, our weak learner is not so weak. On the other hand, Condition 10 of Corollary 1 is milder. No matter what γ ∈ [0, 1] and β ∈ (0, 1] are, the new component Q is guaranteed to strictly improve the objective functional. This comes at the price of the additional condition Pd(dPg/dPd = 0) < β, which asserts that β should be larger than the mass of true data Pd missed by the current model Pg. We argue that this is a rather reasonable condition: if Pg misses many modes of Pd we would prefer assigning a relatively large weight β to the new component Q. However, in practice, both Conditions 10 and 11 are difficult to check. A rigorous analysis of situations when they are guaranteed is a direction for future research. 6The hardness of meeting Condition 11 of course largely depends on the class of models G used to fit Q in (3). For now we ignore this question and leave it for future research. 3 AdaGAN We now describe the functions ChooseMixtureWeight and UpdateTrainingWeights of Algorithm 1. The complete AdaGAN meta-algorithm with the details of UpdateTrainingWeight and ChooseMixtureWeight, is summarized in Algorithm 3 of Appendix A. UpdateTrainingWeights At each iteration we add a new component Q to the current mixture Pg with weight β. The component Q should approach the “optimal target” Q∗β provided by (9) in Theorem 1. This distribution depends on the density ratio dPg/dPd, which is not directly accessible, but it can be estimated using adversarial training. Indeed, we can train a separate mixture discriminator DM to distinguish between samples from Pd and samples from the current mixture Pg. It is known [13] that for an arbitrary f -divergence, there exists a corresponding function h such that the values of the optimal discriminator DM are related to the density ratio by dPg dPd (x) = h ( DM (x) ) . (12) We can replace dPg(x)/dPd(x) in (9) with h ( DM (x) ) . For the Jensen-Shannon divergence, used by the original GAN algorithm, h(z) = 1−zz . In practice, when we compute dQ ∗ β on the training sample SN = (X1, . . . , XN ), each example Xi receives weight wi = 1 βN ( λ∗ − (1− β)h(di) ) + , where di = DM (Xi) . (13) The only remaining task is to determine λ∗. 
As the weights wi in (13) must sum to 1, we get: λ∗ = β∑ i∈I(λ∗) pi 1 + (1− β) β ∑ i∈I(λ∗) pih(di) (14) where I(λ) := {i : λ > (1− β)h(di)}. To find I(λ∗), we sort h(di) in increasing order: h(d1) ≤ . . . ≤ h(dN ). Then I(λ∗) is a set consisting of the first k indices. We then successively test all k-s until the λ given by (14) verifies (1−β)h(dk) < λ ≤ (1−β)h(dk+1) . This procedure is guaranteed to converge by Theorem 1. It is summarized in Algorithm 2 of Appendix A ChooseMixtureWeight For every β there is an optimal re-weighting scheme with weights given by (13). If the GAN could perfectly approximate its target Q∗β , then choosing β = 1 would be optimal, because Q∗1 = Pd. But in practice, GANs cannot do that. So we propose to choose β heuristically by imposing that each generator of the final mixture model has same weight. This yields βt = 1/t, where t is the iteration index. Other heuristics are proposed in Appendix B, but did not lead to any significant difference. The optimal discriminator In practice it is of course hard to find the optimal discriminator DM achieving the global maximum of the variational representation for the f-divergence and verifying (12). For the JS-divergence this would mean that DM is the classifier achieving minimal expected crossentropy loss in the binary classification between Pg and Pd. In practice, we observed that the reweighting (13) leads to the desired property of emphasizing at least some of the missing modes as long as DM distinguishes reasonably between data points already covered by the current model Pg and those which are still missing. We found an early stopping (while training DM ) sufficient to achieve this. In the worst case, when DM overfits and returns 1 for all true data points, the reweighting simply leads to the uniform distribution over the training set. 4 Experiments We ran AdaGAN7 on toy datasets, for which we can interpret the missing modes in a clear and reproducible way, and on MNIST, which is a high-dimensional dataset. The goal of these experiments was not to evaluate the visual quality of individual sample points, but to demonstrate that the re-weighting scheme of AdaGAN promotes diversity and effectively covers the missing modes. 7Code available online at https://github.com/tolstikhin/adagan Toy Datasets Our target distribution is a mixture of isotropic Gaussians over R2. The distances between the means are large enough to roughly avoid overlaps between different Gaussian components. We vary the number of modes to test how well each algorithm performs when there are fewer or more expected modes. We compare the baseline GAN algorithm with AdaGAN variations, and with other meta-algorithms that all use the same underlying GAN procedure. For details on these algorithms and on the architectures of the underlying generator and discriminator, see Appendix B. To evaluate how well the generated distribution matches the target distribution, we use a coverage metric C. We compute the probability mass of the true data “covered” by the model Pmodel. More precisely, we compute C := Pd(dPmodel > t) with t such that Pmodel(dPmodel > t) = 0.95. This metric is more interpretable than the likelihood, making it easier to assess the difference in performance of the algorithms. To approximate the density of Pmodel we use a kernel density estimation, where the bandwidth is chosen by cross validation. We repeat the run 35 times with the same parameters (but different random seeds). 
For each run, the learning rate is optimized using a grid search on a validation set. We report the median over those runs, and the interval corresponding to the 5th and 95th percentiles. Figure 2 summarizes the performance of the algorithms as a function of the number of iterations T. Both the ensemble and the boosting approaches significantly outperform the vanilla GAN and the "best of T" algorithm. Interestingly, the improvements are significant even after just one or two additional iterations (T = 2 or 3). Our boosting approach converges much faster. In addition, its variance is much lower, improving the likelihood that a given run gives good results. On this setup, the vanilla GAN approach has a significant number of catastrophic failures (visible in the lower bounds of the intervals). Further empirical results are available in Appendix B, where we compare AdaGAN variations to several other baseline meta-algorithms in more detail (Table 1) and combine AdaGAN with unrolled GANs (UGAN) [4] (Figure 3). Interestingly, Figure 3 shows that AdaGAN run with UGAN outperforms the vanilla UGAN on the toy datasets, demonstrating the advantage of using AdaGAN as a way to further improve the mode coverage of any existing GAN implementation.

MNIST and MNIST3 We ran experiments both on the original MNIST and on the 3-digit MNIST (MNIST3) [5, 4] dataset, obtained by concatenating 3 randomly chosen MNIST images to form a 3-digit number between 0 and 999. According to [5, 4], MNIST contains 10 modes, while MNIST3 contains 1000 modes, and these modes can be detected using a pre-trained MNIST classifier. We combined AdaGAN both with simple MLP GANs and with DCGANs [19]. We used T ∈ {5, 10}, tried models of various sizes, and performed a reasonable amount of hyperparameter search. Similarly to [4, Sec 3.3.1], we failed to reproduce the missing-modes problem for MNIST3 reported in [5] and found that simple GAN architectures are capable of generating all 1000 numbers. The authors of [4] proposed to artificially re-introduce the missing modes by limiting the generators' flexibility. In our experiments, GANs trained with the architectures reported in [4] often generated poor-looking digits. As a result, the pre-trained MNIST classifier was outputting random labels, which again led to full coverage of the 1000 numbers. We tried to threshold the confidence of the pre-trained classifier, but decided that this metric was too ad hoc. For MNIST we noticed that the re-weighted distribution was often concentrating its mass on digits having very specific strokes: on different rounds it could highlight thick, thin, vertical, or diagonal digits, indicating that these traits were underrepresented in the generated samples (see Figure 2). This suggests that AdaGAN does a reasonable job at picking up different modes of the dataset, but also that there are more than 10 modes in MNIST (and more than 1000 in MNIST3). It is not clear how to evaluate the quality of generative models in this context. We also tried the "inversion" metric discussed in Section 3.4.1 of [4]. For MNIST3 we noticed that a single GAN was capable of reconstructing most of the training points very accurately, both visually and in the $\ell_2$-reconstruction sense. The "inversion" metric tests whether the trained model can generate certain examples or not, but unfortunately it does not take into account the probabilities of doing so.
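For reference, a sketch of the mode-counting protocol of [5, 4] as we understand it (names and array shapes are our assumptions; `classifier` stands for the pre-trained MNIST classifier returning digit labels):

```python
import numpy as np

def count_covered_modes(sample_fn, classifier, num_samples=25600):
    """Count the MNIST3 modes (3-digit numbers between 0 and 999) a generator
    covers: classify each of the three digits and count distinct numbers."""
    images = sample_fn(num_samples)              # shape (N, 3, 28, 28): three stacked digits
    digits = np.stack([classifier(images[:, k]) for k in range(3)], axis=1)
    numbers = digits[:, 0] * 100 + digits[:, 1] * 10 + digits[:, 2]
    return len(np.unique(numbers))               # at most 1000
```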
5 Conclusion

We studied the problem of minimizing general f-divergences with additive mixtures of distributions. The main contribution of this work is a detailed theoretical analysis, which naturally leads to an iterative greedy procedure. On every iteration the mixture is updated with a new component, which minimizes the f-divergence to a re-weighted target distribution. We provided conditions under which this procedure is guaranteed to converge to the target distribution at an exponential rate. While our results can be combined with any generative modelling technique, we focused on GANs and provided a boosting-style algorithm, AdaGAN. Preliminary experiments show that AdaGAN successfully produces a mixture which iteratively covers the missing modes.
1. What is the focus of the paper, and what problem does it aim to solve in GAN training? 2. What are the strengths of the proposed method, particularly in its theoretical analysis and empirical results? 3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any concerns or suggestions for improving the practical applicability of the proposed method?
Review
Review The paper proposes a new method inspired by AdaBoost to address the missing-mode problem that often occurs in GAN training. In general, the paper is quite well written. The studied problem is interesting and important, and the theoretical analyses seem solid. The proposed algorithm is novel and shows good empirical results on synthetic datasets. Below are my minor comments: 1. I know it is probably due to the space limit, but it would be good if the authors could provide more explanation of the intuition behind theoretical results such as Theorems 1 and 2. This would make the paper much easier to understand. 2. The proposed method does not seem to have a significant advantage over the standard GAN on real datasets such as MNIST and MNIST3. It would be good if the authors could try more datasets such as ImageNet. Otherwise the practical applicability remains in question.
NIPS
Title AdaGAN: Boosting Generative Models

Abstract Generative Adversarial Networks (GAN) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes, where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a re-weighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove analytically that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.

1 Introduction

Imagine we have a large corpus containing unlabeled pictures of animals, and our task is to build a generative probabilistic model of the data. We run a recently proposed algorithm and end up with a model which produces impressive pictures of cats and dogs, but not a single giraffe. A natural way to fix this would be to manually remove all cats and dogs from the training set and run the algorithm on the updated corpus. The algorithm would then have no choice but to produce new animals and, by iterating this process until only giraffes are left in the training set, we would arrive at a model generating giraffes (assuming a sufficient sample size). At the end, we aggregate the models obtained by building a mixture model. Unfortunately, the described meta-algorithm requires manual work for removing certain pictures from the unlabeled training set at every iteration.

Let us turn this into an automatic approach, and rather than including or excluding a picture, put continuous weights on the pictures. To this end, we train a binary classifier to separate "true" pictures of the original corpus from the set of "synthetic" pictures generated by the mixture of all the models trained so far. We would expect the classifier to make confident predictions for the true pictures of animals missed by the model (giraffes), because there are no synthetic pictures nearby to be confused with them. By a similar argument, the classifier should make less confident predictions for the true pictures containing animals already generated by one of the trained models (cats and dogs).
For each picture in the corpus, we can thus use the classifier's confidence to compute a weight which we use for that picture in the next iteration, to be performed on the re-weighted dataset. The present work provides a principled way to perform this re-weighting, with theoretical guarantees showing that the resulting mixture models indeed approach the true data distribution.¹

ALGORITHM 1 AdaGAN, a meta-algorithm to construct a "strong" mixture of T individual generative models (e.g., GANs), trained sequentially.
Input: Training sample $S_N := \{X_1, \ldots, X_N\}$.
Output: Mixture generative model $G = G_T$.
Train vanilla GAN $G_1 = GAN(S_N, W_1)$ with a uniform weight $W_1 = (1/N, \ldots, 1/N)$ over the training points.
for t = 2, . . . , T do
  # Choose the overall weight of the next mixture component
  $\beta_t$ = ChooseMixtureWeight(t)
  # Update the weight of each training example
  $W_t$ = UpdateTrainingWeights($G_{t-1}$, $S_N$, $\beta_t$)
  # Train the t-th "weak" component generator $G^c_t$
  $G^c_t$ = GAN($S_N$, $W_t$)
  # Update the overall generative model: form a mixture of $G_{t-1}$ and $G^c_t$
  $G_t = (1 - \beta_t) G_{t-1} + \beta_t G^c_t$
end for

Before discussing how to build the mixture, let us consider the question of building a single generative model. A recent trend in modelling high-dimensional data such as natural images is to use neural networks [1, 2]. One popular approach is Generative Adversarial Networks (GAN) [2], where the generator is trained adversarially against a classifier which tries to differentiate the true from the generated data. While the original GAN algorithm often produces realistic-looking data, several issues have been reported in the literature, among which the missing-modes problem, where the generator converges to only one or a few modes of the data distribution, thus not providing enough variability in the generated data. This seems to match the situation described earlier, which is why we will most often illustrate our algorithm with a GAN as the underlying base generator. We call it AdaGAN, for Adaptive GAN, but we could actually use any other generator: a Gaussian mixture model, a VAE [1], a WGAN [3], or even an unrolled [4] or mode-regularized GAN [5], which were both specifically developed to tackle the missing-mode problem. Thus, we do not aim at improving the original GAN or any other generative algorithm. We rather propose and analyse a meta-algorithm that can be used on top of any of them. This meta-algorithm is similar in spirit to AdaBoost in the sense that each iteration corresponds to learning a "weak" generative model (e.g., a GAN) with respect to a re-weighted data distribution. The weights change over time to focus on the "hard" examples, i.e. those that the mixture has not been able to properly generate so far.

Related Work Several authors [6, 7, 8] have proposed to use boosting techniques in the context of density estimation by incrementally adding components in the log domain. This idea was applied to GANs in [8]. A major downside of these approaches is that the resulting mixture is a product of components; sampling from such a model is nontrivial (at least when applied to GANs, where the model density is not expressed analytically) and requires techniques such as Annealed Importance Sampling [9] for the normalization. When the log-likelihood can be computed, [10] proposed to use an additive mixture model.
They derived the update rule by computing the steepest descent direction when adding a component with infinitesimal weight. However, their results do not apply once the weight β becomes non-infinitesimal. In contrast, for any fixed weight of the new component our approach gives the overall optimal update (rather than just the best direction) for a specified f-divergence. In both theories, improvements of the mixture are guaranteed only if the new "weak" learner is still good enough (see Conditions 10 and 11). Similarly, [11] studied the construction of mixtures minimizing the Kullback divergence and proposed a greedy procedure for doing so. They also proved that under certain conditions, finite mixtures can approximate arbitrary mixtures at a rate 1/k, where k is the number of components in the mixture, when the weight of each newly added component is 1/k. These results are specific to the Kullback divergence but are consistent with our more general results. An additive procedure similar to ours was proposed in [12], but with a different re-weighting scheme, which is not motivated by a theoretical analysis of optimality conditions. On every new iteration the authors run GAN on the k training examples with maximal values of the discriminator from the last iteration.

¹Note that the term "mixture" should not be interpreted to imply that each component models only one mode: the models to be combined into a mixture can themselves cover multiple modes.

Finally, many papers investigate completely different approaches for addressing the same issue by directly modifying the training objective of an individual GAN. For instance, [5] add an autoencoding cost to the training objective of GAN, while [4] allow the generator to "look a few steps ahead" when making a gradient step.

The paper is organized as follows. In Section 2 we present our main theoretical results regarding iterative optimization of mixture models under general f-divergences. In Section 2.4 we show that if the optimization at each step is perfect, the process converges to the true data distribution at an exponential rate (or even in a finite number of steps, for which we provide a necessary and sufficient condition). Then we show in Section 2.5 that imperfect solutions still lead to an exponential rate of convergence under certain "weak learnability" conditions. These results naturally lead to a new boosting-style iterative procedure for constructing generative models. When used with GANs, it results in our AdaGAN algorithm, detailed in Section 3. Finally, we report initial empirical results in Section 4, where we compare AdaGAN with several benchmarks, including the original GAN and a uniform mixture of multiple independently trained GANs. Some of the new theoretical results are reported without proofs, which can be found in the appendices.

2 Minimizing f-divergence with Mixtures

2.1 Preliminaries and notations

Generative Density Estimation In density estimation, one tries to approximate a real data distribution $P_d$, defined over the data space X, by a model distribution $P_{model}$. In the generative approach one builds a function $G : Z \to X$ that transforms a fixed probability distribution $P_Z$ (often called the noise distribution) over a latent space Z into a distribution over X. Hence $P_{model}$ is the pushforward of $P_Z$, i.e. $P_{model}(A) = P_Z(G^{-1}(A))$. With this approach it is in general impossible to compute the density $dP_{model}(x)$ and the log-likelihood of the training data under the model, but one can easily sample from $P_{model}$ by sampling from $P_Z$ and applying G.
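For instance, a minimal sketch of this pushforward sampling (the Gaussian choice of $P_Z$ and the names are our assumptions):

```python
import numpy as np

def sample_from_model(G, num_samples, latent_dim, rng=np.random):
    """Sample from Pmodel, the pushforward of PZ under G: draw z ~ PZ, return G(z)."""
    z = rng.normal(size=(num_samples, latent_dim))  # PZ taken to be standard Gaussian
    return G(z)
```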
Thus, to construct G, instead of comparing $P_{model}$ directly with $P_d$, one compares their samples. To do so, one uses a similarity measure $D(P_{model} \| P_d)$ which can be estimated from samples of those distributions, and thus approximately minimized over a class G of functions.

f-Divergences In order to measure the agreement between the model distribution and the true distribution we will use an f-divergence, defined in the following way:

$D_f(Q \| P) := \int f\Big(\frac{dQ}{dP}(x)\Big)\, dP(x)$ (1)

for any pair of distributions P, Q with densities dP, dQ with respect to some dominating reference measure µ (we refer to Appendix D for more details about such divergences and their domain of definition). Here we assume that f is convex, defined on (0, ∞), and satisfies f(1) = 0. We will denote by F the set of such functions.² As demonstrated in [16, 17], several commonly used symmetric f-divergences are Hilbertian metrics, which in particular means that their square root satisfies the triangle inequality. This is true for the Jensen-Shannon divergence³, the Hellinger distance, and the Total Variation, among others. We will denote by $F_H$ the set of functions f such that $D_f$ is a Hilbertian metric.

GAN and f-divergences The original GAN algorithm [2] optimizes the following criterion:

$\min_G \max_D \; \mathbb{E}_{P_d}[\log D(X)] + \mathbb{E}_{P_Z}[\log(1 - D(G(Z)))]$, (2)

where D and G are two functions represented by neural networks. This optimization is performed on a pair of samples (a training sample from $P_d$ and a "fake" sample from $P_Z$), which corresponds to approximating the above criterion by using the empirical distributions. In the non-parametric limit for D, this is equivalent to minimizing the Jensen-Shannon divergence [2]. This point of view can be generalized to any other f-divergence [13]. Because of this strong connection between adversarial training of generative models and minimization of f-divergences, we cast the results of this section into the context of general f-divergences.

²Examples of f-divergences include the Kullback-Leibler divergence (obtained for f(x) = x log x) and the Jensen-Shannon divergence ($f(x) = -(x+1)\log\frac{x+1}{2} + x\log x$). Other examples can be found in [13]. For further details we refer to Section 1.3 of [14] and [15].
³Which means such a property can be used in the context of the original GAN algorithm.

Generative Mixture Models In order to model complex data distributions, it can be convenient to use a mixture model of the following form:

$P_{model}^T := \sum_{i=1}^T \alpha_i P_i$, where $\alpha_i \ge 0$ and $\sum_i \alpha_i = 1$,

and each of the T components is a generative density model. This is natural in the generative context, since sampling from a mixture corresponds to a two-step sampling, where one first picks the mixture component (according to the multinomial distribution with parameters $\alpha_i$) and then samples from it. This also allows one to construct complex models from simpler ones.

2.2 Incremental Mixture Building We restrict ourselves to the case of f-divergences and assume that, given an i.i.d. sample from any unknown distribution P, we can construct a simple model Q ∈ G which approximately minimizes⁴

$\min_{Q \in G} D_f(Q \| P)$. (3)

Instead of modelling the data with a single distribution, we now want to model it with a mixture of distributions $P_i$, where each $P_i$ is obtained by a training procedure of the form (3) with (possibly) different target distributions P for each i. A natural way to build a mixture is to do it incrementally: we train the first model $P_1$ to minimize $D_f(P_1 \| P_d)$ and set the corresponding weight to $\alpha_1 = 1$, leading to $P_{model}^1 = P_1$.
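The two-step sampling described above (first pick a component, then sample from it) can be sketched as follows (a sketch; the function name is ours):

```python
import numpy as np

def sample_from_mixture(generators, alphas, num_samples, latent_dim, rng=np.random):
    """Two-step sampling from sum_i alpha_i P_i: component counts are multinomial."""
    counts = rng.multinomial(num_samples, alphas)   # how many points each component emits
    batches = []
    for G, c in zip(generators, counts):
        if c > 0:
            z = rng.normal(size=(c, latent_dim))    # latent noise for this component
            batches.append(G(z))
    return np.concatenate(batches, axis=0)
```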
Then, after having trained t components $P_1, \ldots, P_t \in G$, we can form the (t+1)-st mixture model by adding a new component Q with weight β as follows:

$P_{model}^{t+1} := \sum_{i=1}^t (1-\beta)\,\alpha_i P_i + \beta Q$, (4)

where β ∈ [0, 1] and Q ∈ G is computed by minimizing

$\min_Q D_f((1-\beta) P_g + \beta Q \,\|\, P_d)$, (5)

where we denoted by $P_g := P_{model}^t$ the current generative mixture model before adding the new component. We do not expect to find the optimal Q that minimizes (5) at each step, but we aim at constructing some Q that slightly improves our current approximation of $P_d$, i.e. such that, for some c < 1,

$D_f((1-\beta) P_g + \beta Q \,\|\, P_d) \le c \cdot D_f(P_g \,\|\, P_d)$. (6)

This greedy approach has a significant drawback in practice. As we build up the mixture, we need to make β decrease (as $P_{model}^t$ approximates $P_d$ better and better, one should make the correction at each step smaller and smaller). Since we are approximating (5) using samples from both distributions, this means that the sample from the mixture will only contain a fraction β of examples from Q. So, as t increases, getting meaningful information from a sample so as to tune Q becomes harder and harder (the information is "diluted"). To address this issue, we propose to optimize an upper bound on (5) which involves a term of the form $D_f(Q \| R)$ for some distribution R, which can be computed as a re-weighting of the original data distribution $P_d$. This procedure is reminiscent of the AdaBoost algorithm [18], which combines multiple weak predictors into one strong composition. On each step AdaBoost adds a new predictor to the current composition, trained to minimize the binary loss on the re-weighted training set. The weights are constantly updated to bias the next weak learner towards the "hard" examples, which were incorrectly classified during previous stages. In the following we will analyze the properties of (5) and derive upper bounds that provide practical optimization criteria for building the mixture. We will also show that, under certain assumptions, minimization of the upper bound leads to the optimum of the original criterion.

2.3 Upper Bounds We provide two upper bounds on the divergence of the mixture in terms of the divergence of the additive component Q with respect to some reference distribution R.

⁴One example of such a setting is running GANs.

Lemma 1 Given two distributions $P_d$, $P_g$ and some β ∈ [0, 1], then, for any Q and R, and $f \in F_H$:

$\sqrt{D_f((1-\beta) P_g + \beta Q \,\|\, P_d)} \le \sqrt{\beta\, D_f(Q \| R)} + \sqrt{D_f((1-\beta) P_g + \beta R \,\|\, P_d)}$. (7)

If, more generally, f ∈ F, but $\beta\, dR \le dP_d$, then:

$D_f((1-\beta) P_g + \beta Q \,\|\, P_d) \le \beta\, D_f(Q \| R) + (1-\beta)\, D_f\Big(P_g \,\Big\|\, \frac{P_d - \beta R}{1-\beta}\Big)$. (8)

We can thus exploit those bounds by introducing some well-chosen distribution R and then minimizing them with respect to Q. A natural choice for R is a distribution that minimizes the last term of the upper bound (which does not depend on Q). Our main result indicates the shape of the distributions minimizing the right-most terms in those bounds.

Theorem 1 For any f-divergence $D_f$, with f ∈ F and f differentiable, any fixed distributions $P_d$, $P_g$, and any β ∈ (0, 1], the minimizer of (5) over all probability distributions P has density

$dQ^*_\beta(x) = \frac{1}{\beta}\big(\lambda^* dP_d(x) - (1-\beta)\, dP_g(x)\big)_+ = \frac{dP_d}{\beta}\Big(\lambda^* - (1-\beta)\frac{dP_g}{dP_d}\Big)_+$ (9)

for the unique λ* ∈ [β, 1] satisfying $\int dQ^*_\beta = 1$. Also, λ* = 1 if and only if $P_d\big((1-\beta)\, dP_g > dP_d\big) = 0$, which is equivalent to $\beta\, dQ^*_\beta = dP_d - (1-\beta)\, dP_g$.

Theorem 2 Given two distributions $P_d$, $P_g$ and some β ∈ (0, 1], assume $P_d(dP_g = 0) < \beta$. Let f ∈ F.
The problem

$\min_{Q : \beta\, dQ \le dP_d} D_f\Big(P_g \,\Big\|\, \frac{P_d - \beta Q}{1-\beta}\Big)$

has a solution with the density

$dQ^\dagger_\beta(x) = \frac{1}{\beta}\big(dP_d(x) - \lambda^\dagger (1-\beta)\, dP_g(x)\big)_+$

for the unique λ† ≥ 1 that satisfies $\int dQ^\dagger_\beta = 1$.

Surprisingly, in both Theorems 1 and 2 the solutions do not depend on the choice of the function f, which means that the solution is the same for any f-divergence.⁵ Note that λ* is implicitly defined by a fixed-point equation. In Section 3 we will show how it can be computed efficiently in the case of empirical distributions.

2.4 Convergence Analysis for Optimal Updates In the previous section we derived analytical expressions for the distributions R minimizing the last terms in the upper bounds (8) and (7). Assuming Q can perfectly match R, i.e. $D_f(Q \| R) = 0$, we are now interested in the convergence of the mixture (4) to the true data distribution $P_d$ when $Q = Q^*_\beta$ or $Q = Q^\dagger_\beta$. We start with simple results showing that adding $Q^*_\beta$ or $Q^\dagger_\beta$ to the current mixture yields a strict improvement of the divergence.

Lemma 2 (Property 6: exponential improvements) Under the conditions of Theorem 1, we have

$D_f\big((1-\beta) P_g + \beta Q^*_\beta \,\|\, P_d\big) \le D_f\big((1-\beta) P_g + \beta P_d \,\|\, P_d\big) \le (1-\beta)\, D_f(P_g \| P_d)$.

Under the conditions of Theorem 2, we have

$D_f\Big(P_g \,\Big\|\, \frac{P_d - \beta Q^\dagger_\beta}{1-\beta}\Big) \le D_f(P_g \| P_d)$ and $D_f\big((1-\beta) P_g + \beta Q^\dagger_\beta \,\|\, P_d\big) \le (1-\beta)\, D_f(P_g \| P_d)$.

Imagine repeatedly adding T new components to the current mixture $P_g$, where on every step we use the same weight β and choose the components described in Theorem 1. In this case Lemma 2 guarantees that the original objective value $D_f(P_g \| P_d)$ would be reduced at least to $(1-\beta)^T D_f(P_g \| P_d)$.

⁵In particular, by replacing f with $f^\circ(x) := x f(1/x)$, we get the same solution for the criterion written in the other direction. Hence the order in which we write the divergence does not matter, and the optimal solution is optimal for both orders.

This exponential rate of convergence, which at first may look surprisingly good, is simply explained by the fact that $Q^*_\beta$ depends on the true distribution $P_d$, which is of course unknown. Lemma 2 also suggests setting β as large as possible, since we assume we can compute the optimal mixture component (which for β = 1 is $P_d$). However, in practice we may prefer to keep β relatively small, preserving what we learned so far through $P_g$: for instance, when $P_g$ already covers part of the modes of $P_d$ and we want Q to cover the remaining ones. We provide further discussion on choosing β in Section 3.

2.5 Weak to Strong Learnability In practice the component Q that we add to the mixture is not exactly $Q^*_\beta$ or $Q^\dagger_\beta$, but rather an approximation to them. In this section we show that if this approximation is good enough, then we retain property (6) (exponential improvements). Looking again at Lemma 1, we notice that the first upper bound is less tight than the second one. Indeed, take the optimal distributions provided by Theorems 1 and 2 and plug them back as R into the upper bounds of Lemma 1, and assume that Q can match R exactly, i.e. $D_f(Q \| R) = 0$. In this case both sides of (7) are equal to $D_f((1-\beta) P_g + \beta Q^*_\beta \| P_d)$, which is the optimal value for the original objective (5). On the other hand, (8) does not become an equality, and its r.h.s. is not the optimal one for (5). However, earlier we agreed that our aim is to reach the modest goal (6), and next we show that this is indeed possible. Corollaries 1 and 2 provide sufficient conditions for strict improvements when we use the upper bounds (8) and (7), respectively.

Corollary 1 Given $P_d$, $P_g$, and some β ∈ (0, 1], assume $P_d\big(\frac{dP_g}{dP_d} = 0\big) < \beta$.
Let $Q^\dagger_\beta$ be as defined in Theorem 2. If Q is such that

$D_f(Q \,\|\, Q^\dagger_\beta) \le \gamma\, D_f(P_g \,\|\, P_d)$ (10)

for γ ∈ [0, 1], then $D_f((1-\beta)P_g + \beta Q \,\|\, P_d) \le (1 - \beta(1-\gamma))\, D_f(P_g \,\|\, P_d)$.

Corollary 2 Let $f \in F_H$. Take any β ∈ (0, 1], $P_d$, $P_g$, and let $Q^*_\beta$ be as defined in Theorem 1. If Q is such that

$D_f(Q \,\|\, Q^*_\beta) \le \gamma\, D_f(P_g \,\|\, P_d)$ (11)

for some γ ∈ [0, 1], then $D_f((1-\beta)P_g + \beta Q \,\|\, P_d) \le C_{\gamma,\beta} \cdot D_f(P_g \,\|\, P_d)$, where $C_{\gamma,\beta} = (\sqrt{\gamma\beta} + \sqrt{1-\beta})^2$ is strictly smaller than 1 as soon as γ < β/4 (and β > 0).

Conditions 10 and 11 may be compared to the "weak learnability" condition of AdaBoost. As long as our weak learner is able to solve the surrogate problem (3) of matching $Q^\dagger_\beta$ or $Q^*_\beta$ accurately enough, the original objective (5) is guaranteed to decrease as well. It should however be noted that Condition 11 with γ < β/4 is perhaps too strong to be called "weak learnability". Indeed, as already mentioned, the weight β is expected to decrease to zero as the number of components in the mixture distribution $P_g$ increases. This forces γ → 0, making it harder to meet Condition 11. This obstacle may be partially resolved by the fact that we will use a GAN to fit Q, which corresponds to a relatively rich class of models G in (3).⁶ In other words, our weak learner is not so weak.

On the other hand, Condition 10 of Corollary 1 is milder: no matter what γ ∈ [0, 1] and β ∈ (0, 1] are, the new component Q is guaranteed to strictly improve the objective functional. This comes at the price of the additional condition $P_d(dP_g/dP_d = 0) < \beta$, which asserts that β should be larger than the mass of true data $P_d$ missed by the current model $P_g$. We argue that this is a rather reasonable condition: if $P_g$ misses many modes of $P_d$, we would prefer to assign a relatively large weight β to the new component Q. In practice, however, both Conditions 10 and 11 are difficult to check. A rigorous analysis of situations in which they are guaranteed is a direction for future research.

⁶The hardness of meeting Condition 11 of course largely depends on the class of models G used to fit Q in (3). For now we ignore this question and leave it for future research.

3 AdaGAN

We now describe the functions ChooseMixtureWeight and UpdateTrainingWeights of Algorithm 1. The complete AdaGAN meta-algorithm, with the details of UpdateTrainingWeights and ChooseMixtureWeight, is summarized in Algorithm 3 of Appendix A.

UpdateTrainingWeights At each iteration we add a new component Q to the current mixture $P_g$ with weight β. The component Q should approach the "optimal target" $Q^*_\beta$ provided by (9) in Theorem 1. This distribution depends on the density ratio $dP_g/dP_d$, which is not directly accessible, but it can be estimated using adversarial training. Indeed, we can train a separate mixture discriminator $D_M$ to distinguish between samples from $P_d$ and samples from the current mixture $P_g$. It is known [13] that for an arbitrary f-divergence there exists a corresponding function h such that the values of the optimal discriminator $D_M$ are related to the density ratio by

$\frac{dP_g}{dP_d}(x) = h\big(D_M(x)\big)$. (12)

We can replace $dP_g(x)/dP_d(x)$ in (9) with $h(D_M(x))$. For the Jensen-Shannon divergence, used by the original GAN algorithm, $h(z) = \frac{1-z}{z}$. In practice, when we compute $dQ^*_\beta$ on the training sample $S_N = (X_1, \ldots, X_N)$, each example $X_i$ receives the weight

$w_i = \frac{1}{\beta N}\big(\lambda^* - (1-\beta)\, h(d_i)\big)_+$, where $d_i = D_M(X_i)$. (13)

The only remaining task is to determine $\lambda^*$.
As the weights $w_i$ in (13) must sum to 1, we get

$\lambda^* = \frac{\beta}{\sum_{i \in I(\lambda^*)} p_i}\Big(1 + \frac{1-\beta}{\beta}\sum_{i \in I(\lambda^*)} p_i\, h(d_i)\Big)$, (14)

where $I(\lambda) := \{i : \lambda > (1-\beta)\, h(d_i)\}$. To find $I(\lambda^*)$, we sort the values $h(d_i)$ in increasing order: $h(d_1) \le \ldots \le h(d_N)$. Then $I(\lambda^*)$ is a set consisting of the first k indices. We successively test all values of k until the λ given by (14) satisfies $(1-\beta)h(d_k) < \lambda \le (1-\beta)h(d_{k+1})$. This procedure is guaranteed to converge by Theorem 1. It is summarized in Algorithm 2 of Appendix A.

ChooseMixtureWeight For every β there is an optimal re-weighting scheme with weights given by (13). If the GAN could perfectly approximate its target $Q^*_\beta$, then choosing β = 1 would be optimal, because $Q^*_1 = P_d$. In practice, GANs cannot do that, so we propose to choose β heuristically by imposing that each generator of the final mixture model have the same weight. This yields $\beta_t = 1/t$, where t is the iteration index. Other heuristics are proposed in Appendix B, but did not lead to any significant difference.

The optimal discriminator In practice it is of course hard to find the optimal discriminator $D_M$ achieving the global maximum of the variational representation of the f-divergence and verifying (12). For the JS-divergence this would mean that $D_M$ is the classifier achieving minimal expected cross-entropy loss in the binary classification between $P_g$ and $P_d$. In practice, we observed that the re-weighting (13) leads to the desired property of emphasizing at least some of the missing modes as long as $D_M$ distinguishes reasonably between data points already covered by the current model $P_g$ and those which are still missing. We found early stopping (while training $D_M$) sufficient to achieve this. In the worst case, when $D_M$ overfits and returns 1 for all true data points, the re-weighting simply leads to the uniform distribution over the training set.

4 Experiments

We ran AdaGAN⁷ on toy datasets, for which we can interpret the missing modes in a clear and reproducible way, and on MNIST, which is a high-dimensional dataset. The goal of these experiments was not to evaluate the visual quality of individual sample points, but to demonstrate that the re-weighting scheme of AdaGAN promotes diversity and effectively covers the missing modes.

⁷Code available online at https://github.com/tolstikhin/adagan

Toy Datasets Our target distribution is a mixture of isotropic Gaussians over R². The distances between the means are large enough to roughly avoid overlaps between different Gaussian components. We vary the number of modes to test how well each algorithm performs when there are fewer or more expected modes. We compare the baseline GAN algorithm with AdaGAN variations, and with other meta-algorithms that all use the same underlying GAN procedure. For details on these algorithms and on the architectures of the underlying generator and discriminator, see Appendix B. To evaluate how well the generated distribution matches the target distribution, we use a coverage metric C: the probability mass of the true data "covered" by the model $P_{model}$. More precisely, we compute $C := P_d(dP_{model} > t)$ with t such that $P_{model}(dP_{model} > t) = 0.95$. This metric is more interpretable than the likelihood, making it easier to assess the difference in performance of the algorithms. To approximate the density of $P_{model}$ we use kernel density estimation, with the bandwidth chosen by cross-validation. We repeat each run 35 times with the same parameters (but different random seeds).
For each run, the learning rate is optimized using a grid search on a validation set. We report the median over those runs, and the interval corresponding to the 5th and 95th percentiles. Figure 2 summarizes the performance of the algorithms as a function of the number of iterations T. Both the ensemble and the boosting approaches significantly outperform the vanilla GAN and the "best of T" algorithm. Interestingly, the improvements are significant even after just one or two additional iterations (T = 2 or 3). Our boosting approach converges much faster. In addition, its variance is much lower, improving the likelihood that a given run gives good results. On this setup, the vanilla GAN approach has a significant number of catastrophic failures (visible in the lower bounds of the intervals). Further empirical results are available in Appendix B, where we compare AdaGAN variations to several other baseline meta-algorithms in more detail (Table 1) and combine AdaGAN with unrolled GANs (UGAN) [4] (Figure 3). Interestingly, Figure 3 shows that AdaGAN run with UGAN outperforms the vanilla UGAN on the toy datasets, demonstrating the advantage of using AdaGAN as a way to further improve the mode coverage of any existing GAN implementation.

MNIST and MNIST3 We ran experiments both on the original MNIST and on the 3-digit MNIST (MNIST3) [5, 4] dataset, obtained by concatenating 3 randomly chosen MNIST images to form a 3-digit number between 0 and 999. According to [5, 4], MNIST contains 10 modes, while MNIST3 contains 1000 modes, and these modes can be detected using a pre-trained MNIST classifier. We combined AdaGAN both with simple MLP GANs and with DCGANs [19]. We used T ∈ {5, 10}, tried models of various sizes, and performed a reasonable amount of hyperparameter search. Similarly to [4, Sec 3.3.1], we failed to reproduce the missing-modes problem for MNIST3 reported in [5] and found that simple GAN architectures are capable of generating all 1000 numbers. The authors of [4] proposed to artificially re-introduce the missing modes by limiting the generators' flexibility. In our experiments, GANs trained with the architectures reported in [4] often generated poor-looking digits. As a result, the pre-trained MNIST classifier was outputting random labels, which again led to full coverage of the 1000 numbers. We tried to threshold the confidence of the pre-trained classifier, but decided that this metric was too ad hoc. For MNIST we noticed that the re-weighted distribution was often concentrating its mass on digits having very specific strokes: on different rounds it could highlight thick, thin, vertical, or diagonal digits, indicating that these traits were underrepresented in the generated samples (see Figure 2). This suggests that AdaGAN does a reasonable job at picking up different modes of the dataset, but also that there are more than 10 modes in MNIST (and more than 1000 in MNIST3). It is not clear how to evaluate the quality of generative models in this context. We also tried the "inversion" metric discussed in Section 3.4.1 of [4]. For MNIST3 we noticed that a single GAN was capable of reconstructing most of the training points very accurately, both visually and in the $\ell_2$-reconstruction sense. The "inversion" metric tests whether the trained model can generate certain examples or not, but unfortunately it does not take into account the probabilities of doing so.
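Putting the pieces together — Algorithm 1 with the UpdateTrainingWeights of Eqs. (12)–(14) and the β_t = 1/t heuristic — a compact end-to-end sketch might look as follows. Here `train_gan` and `train_discriminator` stand in for any GAN and binary-classifier training routines (they are assumptions of the sketch, not part of the paper's code), and the uniform p_i = 1/N case of (14) is assumed:

```python
import numpy as np

def find_lambda_star(h, beta):
    """Solve Eq. (14) for lambda* with uniform p_i = 1/N (Algorithm 2 style search)."""
    h = np.sort(h)                              # h_(1) <= ... <= h_(N)
    n = len(h)
    csum = np.cumsum(h)
    for k in range(1, n + 1):
        # candidate lambda when I(lambda*) consists of the k smallest h(d_i)
        lam = (beta * n / k) * (1.0 + (1.0 - beta) / (beta * n) * csum[k - 1])
        upper = (1.0 - beta) * h[k] if k < n else np.inf
        if (1.0 - beta) * h[k - 1] < lam <= upper:
            return lam
    return 1.0                                  # by Theorem 1 the loop should not fail

def update_training_weights(d, beta):
    """Eqs. (12)-(14): from discriminator outputs d_i = D_M(X_i) to example weights."""
    n = len(d)
    h = (1.0 - d) / d                           # JS-divergence density ratio, Eq. (12)
    lam = find_lambda_star(h, beta)
    w = np.maximum(lam - (1.0 - beta) * h, 0.0) / (beta * n)   # Eq. (13)
    return w / w.sum() if w.sum() > 0 else np.ones(n) / n      # overfit D_M -> uniform

def adagan(data, T, train_gan, train_discriminator, rng=np.random):
    """Algorithm 1: sequentially train T "weak" components on re-weighted data.

    train_gan(data, w)              -> sampler: num_samples -> generated batch
    train_discriminator(fake, real) -> callable returning D_M(x) in (0, 1)
    """
    n = len(data)
    w = np.ones(n) / n                          # W_1: uniform over training points
    samplers, alphas = [], []
    for t in range(1, T + 1):
        G = train_gan(data, w)                  # t-th component G_t^c
        beta = 1.0 / t                          # ChooseMixtureWeight: equal final weights
        alphas = [a * (1.0 - beta) for a in alphas] + [beta]
        samplers.append(G)
        if t < T:                               # re-weight the data for the next round
            counts = rng.multinomial(n, alphas)
            fake = np.concatenate([s(c) for s, c in zip(samplers, counts) if c > 0])
            d_m = train_discriminator(fake, data)
            w = update_training_weights(d_m(data), 1.0 / (t + 1))
    return samplers, alphas
```

The returned pair (samplers, alphas) defines the final mixture; sampling from it is the two-step procedure sketched in Section 2.1.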
5 Conclusion

We studied the problem of minimizing general f-divergences with additive mixtures of distributions. The main contribution of this work is a detailed theoretical analysis, which naturally leads to an iterative greedy procedure. On every iteration the mixture is updated with a new component, which minimizes the f-divergence to a re-weighted target distribution. We provided conditions under which this procedure is guaranteed to converge to the target distribution at an exponential rate. While our results can be combined with any generative modelling technique, we focused on GANs and provided a boosting-style algorithm, AdaGAN. Preliminary experiments show that AdaGAN successfully produces a mixture which iteratively covers the missing modes.
1. What is the focus and contribution of the paper on Generative Adversarial Networks? 2. What are the strengths and weaknesses of the proposed AdaGAN approach, particularly in its ability to address the mode-missing problem? 3. Do you have any concerns regarding the paper's claims and assumptions, especially when compared to other state-of-the-art generative models like WGAN or mode-regularized GAN? 4. How do the theoretical guarantees provided in Corollaries 1 and 2 relate to practical applications, considering the issue of disjoint manifolds in GAN training?
Review
Review AdaGAN is a meta-algorithm proposed for GANs. The key idea of AdaGAN is that at each step it reweights the samples and fits a generative model on the reweighted samples. The final model is a weighted addition of the learned generative models. The main motivation is to reduce the mode-missing problem of GANs by reweighting samples at each step. It is claimed in the Introduction that AdaGAN can also use WGAN or mode-regularized GAN as base generators (line 55). This claim seems to be an overstatement. The key assumption of AdaGAN is that the base generator aims at minimizing the f-divergence (as mentioned in lines 136-137). This assumption does not hold for WGAN or mode-regularized GAN: WGAN minimizes the Wasserstein distance, and the mode-regularized GAN has an additional mode regularizer in its objective. WGAN and mode-regularized GAN are state-of-the-art generative models targeting the mode-missing problem. It would be more convincing if the authors could demonstrate that AdaGAN outperforms these two algorithms. Corollaries 1 and 2 assume that the supports of the learned distribution P_g and the true distribution P_d should overlap for at least 1-beta. This is usually not true in practice. A common situation when training GANs is that the data manifold and the generation manifold are disjoint (see, e.g., the WGAN paper).
NIPS
Title Robust Spectral Detection of Global Structures in the Data by Learning a Regularization

Abstract Spectral methods are popular for detecting global structures in data that can be represented as a matrix. However, when the data matrix is sparse or noisy, classic spectral methods usually fail to work, due to localization of eigenvectors (or singular vectors) induced by the sparsity or noise. In this work, we propose a general method to solve the localization problem by learning a regularization matrix from the localized eigenvectors. Using matrix perturbation analysis, we demonstrate that the learned regularizations suppress the eigenvalues associated with localized eigenvectors and enable us to recover the informative eigenvectors representing the global structure. We show applications of our method in several inference problems: community detection in networks, clustering from pairwise similarities, rank estimation, and matrix completion. Using extensive experiments, we illustrate that our method solves the localization problem and works down to the theoretical detectability limits in different kinds of synthetic data. This is in contrast with existing spectral algorithms based on the data matrix, the non-backtracking matrix, Laplacians, and those with rank-one regularizations, which perform poorly in the sparse case with noise.

1 Introduction

In many statistical inference problems, the task is to detect, from given data, a global structure such as a low-rank structure or a clustering. The task is usually hard to solve since modern datasets usually have a large dimensionality. When the dataset can be represented as a matrix, spectral methods are popular, as they give a natural way to reduce the dimensionality of the data using eigenvectors or singular vectors. From the point of view of inference, data can be seen as measurements of the underlying structure, so more data gives more precise information about that structure. However, in many situations we do not have enough measurements, i.e. the data matrix is sparse, and standard spectral methods then have localization problems and do not work well.

One example is community detection in sparse networks, where the task is to partition nodes into groups such that there are many edges connecting nodes within the same group and comparatively few edges connecting nodes in different groups. It is well known that when the graph has a large connectivity c, simply using the first few eigenvectors of the adjacency matrix $A \in \{0, 1\}^{n \times n}$ (with $A_{ij} = 1$ denoting an edge between node i and node j, and $A_{ij} = 0$ otherwise) gives a good result. In this case, like that of a sufficiently dense Erdős-Rényi (ER) random graph with average degree c, the spectral density follows Wigner's semicircle rule, $P(\lambda) = \sqrt{4c - \lambda^2}/(2\pi c)$, and there is a gap between the edge of the bulk of eigenvalues and the informative eigenvalue that represents the underlying community structure. However, when the network is large and sparse, the spectral density of the adjacency matrix deviates from the semicircle, and the informative eigenvalue is hidden in the bulk of eigenvalues, as displayed in Fig. 1 left. The eigenvectors associated with the largest eigenvalues (which are roughly proportional to log n / log log n for ER random graphs) are localized on the large-degree nodes, and thus reveal only local structure around large degrees rather than the underlying global structure.
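This localization is easy to reproduce numerically; a small sketch (entirely our construction) that builds a sparse ER adjacency matrix and measures how concentrated its leading eigenvector is:

```python
import numpy as np

def leading_eigvec_concentration(n=2000, c=3.0, rng=np.random):
    """Sparse ER graph with mean degree c: return sum_i v_i^4 for the leading
    eigenvector v (about 1/n if spread out, approaching 1 if localized)."""
    A = (rng.random((n, n)) < c / n).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                 # symmetric adjacency, no self-loops
    _, vecs = np.linalg.eigh(A)
    v = vecs[:, -1]                             # eigenvector of the largest eigenvalue
    return np.sum(v ** 4)                       # large => localized on high-degree hubs
```

For small c this concentration tends to sit far above 1/n, reflecting the localization on large-degree nodes described above; for large c it approaches 1/n.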
Other standard matrices for spectral clustering [19, 22], e.g. the Laplacian, the random walk matrix, and the normalized Laplacian, all have localization problems, but on different local structures such as dangling trees. Another example is the matrix completion problem, which asks to infer missing entries of a matrix $A \in \mathbb{R}^{m \times n}$ with rank $r \ll \sqrt{mn}$ from only a few observed entries. A popular method for this problem is based on the singular value decomposition (SVD) of the data matrix. However, it is well known that when the matrix is sparse, the SVD-based method performs very poorly, because the singular vectors corresponding to the largest singular values are localized, i.e. highly concentrated on high-weight column or row indices. A simple way to ease the pain of localization induced by high degrees or weights is trimming [6, 13], which sets to zero the columns or rows with a large degree or weight. However, trimming throws away part of the information and thus does not work all the way down to the theoretical limit in the community detection problem [6, 15]. It also performs worse than other methods in the matrix completion problem [25]. In recent years, many methods have been proposed for the sparsity problem. One kind of method uses new linear operators related to belief propagation and the Bethe free energy, such as the non-backtracking matrix [15] and the Bethe Hessian [24]. Another kind adds a rank-one regularization matrix to the data matrix or a variant of it [2, 11, 16–18, 23]. These methods are quite successful in some inference problems in the sparse regime. However, in our understanding none of them works in a general way to solve the localization problem. For instance, the non-backtracking matrix and the Bethe Hessian work very well when the graph has a locally tree-like structure, but they again have localization problems when the system has short loops or sub-structures like triangles and cliques. Moreover, their performance is sensitive to noise in the data [10]. Rank-one regularizations have been used for a long time in practice; the most famous example is the "teleportation" term in the Google matrix. However, there is no satisfactory way to determine the optimal amount of regularization in general. Moreover, analogous to the non-backtracking matrix and the Bethe Hessian, the rank-one regularization approach is also sensitive to noise, as we will show in the paper. The main contribution of this paper is to illustrate how to solve the localization problem of spectral methods for general inference problems in the sparse regime and with noise, by learning a proper regularization that is specific to the given data matrix from its localized eigenvectors. In the following text we will first discuss in Sec. 2 how all three methods for community detection in sparse graphs can be put into the framework of regularization; the drawbacks of the existing methods can thus be seen as improper choices of regularization. In Sec. 3 we investigate how to choose a good regularization that is dedicated to the given data, rather than taking a fixed-form regularization as in the existing approaches. We use matrix perturbation analysis to illustrate how the regularization works in penalizing the localized eigenvectors, making the informative eigenvectors that correlate with the global structure float to the top positions in the spectrum. In Sec.
4 we use extensive numerical experiments to validate our approach on several well-studied inference problems, including community detection in sparse graphs, clustering from sparse pairwise entries, and rank estimation and matrix completion from few entries.

2 Regularization as a unified framework

We see that the above three methods for the community detection problem in sparse graphs, i.e. trimming, the non-backtracking matrix/Bethe Hessian, and rank-one regularizations, can be understood as different ways of performing regularization. In this framework, we consider a regularized matrix

$L = \hat{A} + \hat{R}$. (1)

Here matrix $\hat{A}$ is the data matrix or a (symmetric) variant of it, such as $\tilde{A} = D^{-1/2} A D^{-1/2}$ with D denoting the diagonal matrix of degrees, and matrix $\hat{R}$ is a regularization matrix. The rank-one regularization approaches [2, 11, 16–18, 23] fall naturally into this framework, as they set $\hat{R}$ to be a rank-one matrix, $-\zeta \mathbf{1}\mathbf{1}^T$, with ζ a tunable parameter controlling the strength of the regularization. It is also easy to see that in trimming, $\hat{A}$ is set to be the adjacency matrix and $\hat{R}$ contains entries that remove the columns or rows with high degrees from A. For spectral algorithms using the non-backtracking matrix, the relation to the form of Eq. (1) is not straightforward. However, we can link them using the theory of the graph zeta function [8], which says that an eigenvalue µ of the non-backtracking operator satisfies the quadratic eigenvalue equation $\det[\mu^2 I - \mu A + (D - I)] = 0$, where I is the identity matrix. It indicates that a particular vector v, related to an eigenvector of the non-backtracking matrix, satisfies $\big(A - \frac{D - I}{\mu}\big)v = \mu v$. Thus the spectral clustering algorithm using the non-backtracking matrix is equivalent to a spectral clustering algorithm using a matrix of the form of Eq. (1), with $\hat{A} = A$, $\hat{R} = -\frac{D - I}{\mu}$, and µ acting as a parameter. We note here that the parameter does not necessarily have to be an eigenvalue of the non-backtracking matrix; a range of parameters actually works well in practice, like those estimated from the spin-glass transition of the system [24].

So we have related the different approaches for resolving localization of spectral algorithms in sparse graphs to the framework of regularization. Although this relation is stated in the context of community detection in networks, we think it is a general point of view that also applies when the data matrix has a general form rather than being a {0, 1} matrix. As we have argued in the introduction, the above three ways of regularization work only case by case and have different problems, especially when the system has noise. This means that, in the framework of regularization, the effective regularization matrix $\hat{R}$ added by these methods does not work in a general way and is not robust. In our understanding, the problem arises from the fact that in all these methods the form of the regularization is fixed for all kinds of data, regardless of the different reasons for the localization. Thus one way to solve the problem would be to look for regularizations that are specific to the given data, as a feature. In the following section we introduce our method, which explicitly addresses how to learn such regularizations from the localized eigenvectors of the data matrix.

3 Learning regularizations from localized eigenvectors

The reason that the informative eigenvectors are hidden in the bulk is that some random eigenvectors have large eigenvalues, due to localization, which represents the local structures of the system.
On the complementary side, if these eigenvectors are not localized, they are supposed to have smaller eigenvalues than the informative ones which reveal the global structures of the graph. This is the main assumption that our idea is based on. In this work we use the Inverse Participation Ratio (IPR), $I(v) = \sum_{i=1}^n v_i^4$, to quantify the amount of localization of a (normalized) eigenvector v. The IPR has been used frequently in physics, for example for distinguishing the extended state from the localized state when applied to a wave function [3]. It is easy to check that I(v) ranges from 1/n for the vector $\{\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}}\}$ to 1 for the vector $\{0, \ldots, 0, 1, 0, \ldots, 0\}$. That is, a larger I(v) indicates more localization in vector v. Our idea is to create a matrix $L_X$ with a structure similar to A, but with non-localized leading eigenvectors. We call the resulting matrix the X-Laplacian, and define it as $L_X = A + X$, where matrix A is the data matrix (or its variant), and X is learned using the procedure detailed below:

Algorithm 1: Regularization Learning
Input: real symmetric matrix A, number of eigenvectors q, learning rate η = O(1), threshold ∆.
Output: X-Laplacian, $L_X$, whose leading eigenvectors reveal the global structures in A.
1. Set X to be the all-zero matrix.
2. Find the set of eigenvectors $U = \{u_1, u_2, \ldots, u_q\}$ associated with the q largest eigenvalues (in algebra) of $L_X$.
3. Identify the eigenvector v with the largest inverse participation ratio among the q eigenvectors in U, i.e. find $v = \arg\max_{u \in U} I(u)$.
4. If I(v) < ∆, return $L_X = A + X$; otherwise, set $X_{ii} \leftarrow X_{ii} - \eta v_i^2$ for all i, and go to step 2.

We can see that the regularization matrix X is a diagonal matrix whose diagonal entries are learned gradually from the most localized vector among the first several eigenvectors. The effect of X is to penalize the localized eigenvectors by suppressing the eigenvalues associated with them. The learning continues until all q leading eigenvectors are delocalized, and are thus supposed to correlate with the global structure rather than the local structures. As an example, we show the effect of X on the spectrum in Fig. 1, where we plot the spectral density of the adjacency matrix (i.e. before learning X, left panel) and of the X-Laplacian (i.e. after learning X, right panel) for a sparse network generated by the stochastic block model with q = 2 groups. For the adjacency matrix in the left panel, localized eigenvectors have large eigenvalues and contribute a tail to the semicircle, covering the informative eigenvalue and leaving only one eigenvalue out of the bulk, which corresponds to the eigenvector that essentially sorts vertices according to their degree. The spectral density of the X-Laplacian is shown in the right panel of Fig. 1. We can see that the right corner of the continuous part of the spectral density appearing in the spectrum of the adjacency matrix is missing here. This is because, due to the effect of X, the eigenvalues that are associated with localized eigenvectors in the adjacency matrix are pushed into the bulk, maintaining a gap between the edge of the bulk and the informative eigenvalue (pointed to by the left red arrow in the figure).

The key procedure of the algorithm is the learning part in step 4, which updates the diagonal terms of matrix X using the most localized eigenvector v. Throughout the paper, by default we use learning rate η = 10 and threshold ∆ = 5/n. As η = O(1) and $v_i^2 = O(1/n)$, we can treat the entries learned in each step, $\hat{L}$, as a perturbation to matrix $L_X$.
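Before turning to the perturbation analysis, a direct NumPy transcription of Algorithm 1 might look as follows (a sketch under the stated defaults η = 10 and ∆ = 5/n; a dense eigensolver is used for clarity, though a sparse solver for the q leading eigenpairs would be the natural choice at scale):

```python
import numpy as np

def learn_x_laplacian(A, q, eta=10.0, delta=None, max_iter=1000):
    """Algorithm 1: learn a diagonal regularization X such that the q leading
    eigenvectors of L_X = A + X are delocalized."""
    n = A.shape[0]
    if delta is None:
        delta = 5.0 / n                    # default threshold from the text
    x = np.zeros(n)                        # diagonal entries of X
    for _ in range(max_iter):
        _, vecs = np.linalg.eigh(A + np.diag(x))
        U = vecs[:, -q:]                   # eigenvectors of the q largest eigenvalues
        ipr = np.sum(U ** 4, axis=0)       # inverse participation ratio of each
        j = np.argmax(ipr)                 # most localized of the q leading eigenvectors
        if ipr[j] < delta:                 # all q leading eigenvectors delocalized
            break
        x -= eta * U[:, j] ** 2            # step 4: X_ii <- X_ii - eta * v_i^2
    return A + np.diag(x)
```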
After applying this perturbation, we anticipate that an eigenvalue of L changes from $\lambda_i$ to $\lambda_i + \hat{\lambda}_i$, and an eigenvector changes from $u_i$ to $u_i + \hat{u}_i$. If we assume that matrix $L_X$ is not ill-conditioned, and that the first few eigenvalues we care about are distinct, then we have

$\hat{\lambda}_i = u_i^T \hat{L} u_i$.

The derivation of this expression is straightforward, but for completeness we give it in the SI text. In our algorithm, $\hat{L}$ is a diagonal matrix with entries $\hat{L}_{ii} = -\eta v_i^2$, with v denoting the identified eigenvector which has the largest inverse participation ratio, so the last equation can be written as $\hat{\lambda}_i = -\eta \sum_k v_k^2 u_{ik}^2$. For the identified vector v itself, we further have

$\hat{\lambda}_v = -\eta \sum_i v_i^4 = -\eta I(v)$. (2)

This means the eigenvalue of the identified eigenvector with inverse participation ratio I(v) is decreased by the amount ηI(v). That is, the more localized the eigenvector is, the larger the penalty on its eigenvalue.

In addition to the penalty on the localized eigenvalues, we see that the leading eigenvectors delocalize during learning. We have analyzed the change of the eigenvectors after the perturbation given by the identified vector v, and obtained (see the SI for the derivations) the change of an eigenvector $\hat{u}_i$ as a function of all the other eigenvalues and eigenvectors,

$\hat{u}_i = -\eta \sum_{j \neq i} \frac{\sum_k u_{jk} v_k^2 u_{ik}}{\lambda_i - \lambda_j}\, u_j$.

Then the inverse participation ratio of the new vector $u_i + \hat{u}_i$ can be written as

$I(u_i + \hat{u}_i) = I(u_i) - 4\eta \sum_{l=1}^n \sum_{j \neq i} \frac{u_{jl}^2 v_l^2 u_{il}^4}{\lambda_i - \lambda_j} - 4\eta \sum_{l=1}^n \sum_{j \neq i} \sum_{k \neq l} \frac{u_{il}^3 v_k^2 u_{jk} u_{ik} u_{jl}}{\lambda_i - \lambda_j}$. (3)

As the eigenvectors $u_i$ and $u_j$ are orthogonal to each other, the term $4\eta \sum_{l=1}^n \sum_{j \neq i} \frac{u_{jl}^2 v_l^2 u_{il}^4}{\lambda_i - \lambda_j}$ can be seen as a signal term, and the last term can be seen as a cross-talk noise with zero mean. The cross-talk noise has a small variance, and empirically its effect can be neglected. For the leading eigenvector corresponding to the largest eigenvalue $\lambda_i = \lambda_1$, it is straightforward to see that the signal term is strictly positive. Thus, if the learning is slow enough, the perturbation will always decrease the inverse participation ratio of the leading eigenvector. This is essentially an argument for the convergence of the algorithm. For the other top eigenvectors, i.e. the second and third eigenvectors and so on, $\lambda_i - \lambda_j$ is not strictly positive, but there are many more positive terms than negative terms in the sum, so the signal should be positive with high probability. One can thus conclude that the process of learning X delocalizes the first few eigenvectors.

An example illustrating the learning process is shown in Fig. 2, where we plot the second eigenvector vs. the third eigenvector at several time steps during learning, for a network generated by the stochastic block model with q = 3 groups. At t = 0, i.e. without learning, both eigenvectors are localized, with a large range of entry values. The color of the eigenvector entries encodes the group membership in the planted partition. At t = 0 the three colors are mixed together, indicating that the two eigenvectors are not correlated with the planted partition. At t = 4 the three colors begin to separate, and the range of the entry distribution becomes smaller, indicating that the localization is lighter. At t = 25 the three colors are well separated, and the partition obtained by applying the k-means algorithm to these vectors successfully recovers 70% of the group memberships. Moreover, we can see that the range of the eigenvector entries shrinks to [−0.06, 0.06], giving a small inverse participation ratio.
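The first-order prediction $\hat\lambda_i = u_i^T \hat L u_i$ is easy to check numerically; a small self-contained verification (our construction, on a random symmetric test matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=(n, n))
A = (A + A.T) / 2.0                               # random symmetric test matrix
vals, vecs = np.linalg.eigh(A)
v = vecs[:, -1]                                   # stand-in for the identified vector
eta = 1e-3
L_hat = np.diag(-eta * v ** 2)                    # the diagonal perturbation of one step
exact = np.linalg.eigvalsh(A + L_hat)[-1] - vals[-1]
first_order = v @ L_hat @ v                       # u_i^T L_hat u_i = -eta * I(v) here
print(exact, first_order)                         # agree up to O(eta^2) corrections
```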
4 Numerical evaluations

In this section we validate our approach with experiments on several inference problems: community detection, clustering from sparse pairwise entries, rank estimation, and matrix completion from a few entries. We compare the performance of the X-Laplacian (using the mean-removed data matrix) with recently proposed state-of-the-art spectral methods in the sparse regime.

4.1 Community Detection

First we use synthetic networks generated by the stochastic block model [9] and its variant with noise [10]. The standard Stochastic Block Model (SBM), also called the planted partition model, is a popular model for generating ensembles of networks with community structure. There are q groups of nodes and a planted partition $\{t_i^*\}$ with $t_i^* \in \{1, \ldots, q\}$. Edges are generated independently according to a $q \times q$ matrix $\{p_{ab}\}$. Without loss of generality, we discuss here the commonly studied case where the q groups have equal size and where $\{p_{ab}\}$ has only two distinct entries, $p_{ab} = c_{in}/n$ if $a = b$ and $c_{out}/n$ if $a \neq b$. Given the average degree c of the graph, there is a so-called detectability transition $\epsilon^* = c_{out}/c_{in} = (\sqrt{c} - 1)/(\sqrt{c} - 1 + q)$ [7], beyond which it is not possible to obtain any information about the planted partition. It is also known that spectral algorithms based on the non-backtracking matrix succeed all the way down to the transition [15]; this transition was recently established rigorously in the case of q = 2 [20, 21]. Comparisons of spectral methods using different matrices are shown in Fig. 3 left. From the figure we see that the X-Laplacian works as well as the non-backtracking matrix, down to the detectability transition, while the direct use of the adjacency matrix, i.e. LX before learning, does not work well once $\epsilon$ exceeds about 0.1. In the right panel of Fig. 3, each network is generated by the stochastic block model with the same parameters as in the left panel, but with 10 extra cliques, each of which contains 10 randomly selected nodes. These cliques do not carry information about the planted partition, and hence act as noise in the system. In addition to the non-backtracking matrix, the X-Laplacian, and the adjacency matrix, we include in the comparison results obtained using other classic and newly proposed matrices, including the Bethe Hessian [24], the Normalized Laplacian (N. Laplacian) $L_{sym} = I - \tilde{A}$, and the regularized and normalized Laplacian (R.N. Laplacian) $L_A = \tilde{A} - \zeta \mathbf{1}\mathbf{1}^T$ with an optimized regularization $\zeta$ (we scanned the whole range of $\zeta$ and chose the value that gives the largest overlap, i.e. the fraction of correctly reconstructed labels, in most cases). From the figure we see that with the noise added, only the X-Laplacian works down to the original transition (of the SBM without cliques); all other matrices fail to detect the community structure for $\epsilon > 0.15$. We have tested other kinds of noisy models, including the noisy stochastic block model proposed in [10]. Our results show that the X-Laplacian works well (see SI text), while all the other spectral methods do not work at all on this dataset [10]. Moreover, in addition to the classic stochastic block model, we have extensively evaluated our method on networks generated by the degree-corrected stochastic block model [12] and by the stochastic block model with extensive triangles, and obtained results qualitatively similar to Fig. 3: the X-Laplacian works as well as the state-of-the-art spectral methods for these datasets. The figures and detailed results can be found in the SI text.
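For concreteness, the sketch below (our own; the parametrization $c_{in} = qc/(1 + (q-1)\epsilon)$ follows from fixing the average degree c and the ratio $\epsilon = c_{out}/c_{in}$) generates such an SBM instance and evaluates the detectability threshold $\epsilon^*$. The mean-removed adjacency matrix would then be passed to the regularization learning of Algorithm 1, followed by k-means on the leading eigenvectors.

```python
import numpy as np

def sbm(n, q, c, eps, rng):
    """Equal-size SBM with average degree c and ratio eps = c_out / c_in."""
    labels = rng.integers(q, size=n)
    c_in = q * c / (1.0 + (q - 1) * eps)   # from c = (c_in + (q-1) c_out) / q
    c_out = eps * c_in
    P = np.where(labels[:, None] == labels[None, :], c_in / n, c_out / n)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)                      # keep each pair once, no self-loops
    return A + A.T, labels

rng = np.random.default_rng(42)
q, c, eps = 2, 3.0, 0.2
eps_star = (np.sqrt(c) - 1) / (np.sqrt(c) - 1 + q)   # detectability transition
A, labels = sbm(2000, q, c, eps, rng)
print("eps* = %.3f, detectable: %s" % (eps_star, eps < eps_star))
```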
We have also tested real-world networks with an expert division, and found that although the expert division is usually easy to detect by directly using the adjacency matrix, the X-Laplacian significantly improves the accuracy of detection. For example, on the political blogs network [1], spectral clustering using the adjacency matrix gives 83 misclassified labels out of 1222 labels in total, while the X-Laplacian gives only 50 misclassified labels.

4.2 Clustering from sparse pairwise measurements

Consider the problem of grouping n items into clusters based on a similarity matrix $S \in \mathbb{R}^{n \times n}$, where $S_{ij}$ is the pairwise similarity between items i and j. Here we consider using not all pairwise similarities, but only O(n) random samples of them. In other words, the similarity graph that encodes the information about the global clustering structure is sparse rather than complete. There are many motivations for choosing such sparse observations; for example, in some cases all measurements are simply not available, or cannot even be stored. In this section we use the generative model recently proposed in [26], since it comes with a theoretical limit that can be used to evaluate algorithms. Without loss of generality, we consider the problem with only q = 2 clusters. The model in [26] first assigns to the items hidden cluster labels $\{t_i\} \in \{1, 2\}^n$, then generates similarities between randomly sampled pairs of items according to probability distributions $p_{in}$ and $p_{out}$, depending on the memberships of the two items. There is a theoretical limit $\hat{c}$ satisfying $\frac{1}{\hat{c}} = \frac{1}{q} \int ds \, \frac{(p_{in}(s) - p_{out}(s))^2}{p_{in}(s) + (q-1) p_{out}(s)}$, such that for $c < \hat{c}$ no algorithm can obtain any partial information about the planted clusters, while for $c > \hat{c}$ some algorithms, e.g. spectral clustering using the Bethe Hessian [26], achieve partial recovery of the planted clusters. As in community detection in sparse graphs, spectral algorithms that directly use the eigenvectors of the similarity matrix S do not work well, due to the localization of eigenvectors induced by the sparsity. To evaluate whether our method, the X-Laplacian, solves the localization problem, and how it compares with the Bethe Hessian, in Fig. 4 we plot the performance (in overlap, the fraction of correctly reconstructed group labels) of the three algorithms on the same set of similarity matrices. For all the datasets there are two groups, with distributions $p_{in}$ and $p_{out}$ being Gaussian with unit variance and means 0.75 and −0.75, respectively. In the left panel of Fig. 4, where the topology of the pairwise entries is an ER random graph, the Bethe Hessian works down to the theoretical limit, while direct use of the measurement matrix gives poor performance. We can also see that the X-Laplacian fixes the localization problem of the measurement matrix, and works almost as well as the Bethe Hessian. We note that the Bethe Hessian needs to know the model parameters (i.e. the parameters of the distributions $p_{in}$ and $p_{out}$), while the X-Laplacian does not use them at all. In the right panel of Fig. 4, on top of the ER random-graph topology we add noisy local structures by randomly selecting 20 nodes and connecting the neighbors of each selected node to each other. The weights of these local pairwise entries were set to 1, so the noisy structures contain no information about the underlying clustering. We can see that the Bethe Hessian is influenced by the noisy local structures and fails to work, while the X-Laplacian solves the localization problems induced by sparsity and is robust to the noise.
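A sketch of this generative model is given below (our own reading of [26]; in particular, we interpret c as the average number of measurements per item, so that about cn/2 pairs are sampled). The resulting sparse matrix S, after mean removal, is what the X-Laplacian is built on.

```python
import numpy as np

def sparse_similarity(n, c, mu=0.75, rng=None):
    """Sparse pairwise-measurement model with q = 2 hidden groups: sampled
    pairs get Gaussian similarities with unit variance and mean +mu if the
    two items are in the same group, -mu otherwise."""
    rng = rng or np.random.default_rng()
    t = rng.integers(2, size=n)            # hidden cluster labels
    m = int(c * n / 2)                     # ~c measurements per item
    i = rng.integers(n, size=m)
    j = rng.integers(n, size=m)
    s = rng.normal(np.where(t[i] == t[j], mu, -mu), 1.0)
    S = np.zeros((n, n))
    S[i, j] = s
    S[j, i] = s                            # keep the similarity matrix symmetric
    return S, t
```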
We have also tested other kinds of noise, obtained by adding cliques or hubs, and obtained similar results (see SI text).

4.3 Rank estimation and Matrix Completion

The last problem we consider in this paper for evaluating the X-Laplacian is the completion of a low-rank matrix from a few revealed entries. This problem has many applications, including the famous collaborative filtering. A closely related problem is rank estimation from the revealed entries; indeed, estimating the rank of the matrix is usually the first step before actually doing the matrix completion. The problem is defined as follows: let $A^{\mathrm{true}} = UV^T$, where $U \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{m \times r}$ are chosen uniformly at random and $r \ll \sqrt{nm}$ is the ground-truth rank. Only a few, say $c\sqrt{mn}$, entries of matrix $A^{\mathrm{true}}$ are revealed. That is, we are given a matrix $A \in \mathbb{R}^{n \times m}$ containing only a subset of the entries of $A^{\mathrm{true}}$, with all other elements being zero. Many algorithms have been proposed for matrix completion, including nuclear norm minimization [5] and methods based on the singular value decomposition [4]. Trimming, which sets to zero all rows and columns with a large number of revealed entries, is usually introduced to control the localization of singular vectors and to estimate the rank using the gap in the singular values [14]. Analogously to the community detection problem, trimming is not expected to work optimally when matrix A is sparse. Indeed, the authors of [25] reported that their approach based on the Bethe Hessian outperforms trimming+SVD when the topology of the revealed entries is a sparse random graph; they also showed that the number of negative eigenvalues of the Bethe Hessian gives a more accurate estimate of the rank of A than the estimate based on trimming+SVD. However, we find that if the topology is not locally tree-like but contains some noise, for example some additional cliques, both trimming of the data matrix and the Bethe Hessian perform much worse, reporting a wrong rank and giving a large reconstruction error, as illustrated in Fig. 5. In the left panel of the figure we plot the eigenvalues of the Bethe Hessian and the singular values of the trimmed matrix A with true rank $r^{\mathrm{true}} = 2$. We can see that both are continuously distributed: there is no clear gap in the singular values of the trimmed A, and the Bethe Hessian has many negative eigenvalues. In this case, since matrix A may be non-square, we define the X-Laplacian as $L_X = \begin{pmatrix} 0 & A \\ A^T & 0 \end{pmatrix} - X$. The eigenvalues of $L_X$ are also plotted in Fig. 5, where one can see clearly that there is a gap between the second-largest and the third-largest eigenvalue. The correct rank can thus be estimated from the gap between consecutive eigenvalues, as suggested in [14]. After estimating the rank, matrix completion is done using a local optimization algorithm [27], starting from initial matrices obtained using the first r singular vectors of trimming+SVD, the first r eigenvectors of the Bethe Hessian, and the first r eigenvectors of the X-Laplacian with estimated rank r, respectively. The results are shown in Fig. 5 right, where we plot the probability that the obtained root mean square error (RMSE) is smaller than $10^{-7}$ as a function of the average number of revealed entries per row, c, for the ER random-graph topology plus noise in the form of several cliques. We can see that the X-Laplacian outperforms the Bethe Hessian and trimming+SVD for c ≥ 13. Moreover, when c ≥ 18, only the X-Laplacian gives an accurate completion for all instances.
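The following sketch (our own illustration) shows the gap-based rank estimate on the symmetrized matrix. For brevity it omits the learned X, which in the noisy settings of Fig. 5 would be learned by Algorithm 1 and applied before reading off the gap.

```python
import numpy as np

def estimate_rank(A, q=10):
    """Estimate the rank from the largest gap among the q leading
    eigenvalues of the symmetrized (bipartite) matrix."""
    n, m = A.shape
    B = np.block([[np.zeros((n, n)), A],
                  [A.T, np.zeros((m, m))]])
    vals = np.linalg.eigvalsh(B)[::-1][:q]   # q largest eigenvalues
    gaps = vals[:-1] - vals[1:]
    return int(np.argmax(gaps)) + 1

rng = np.random.default_rng(3)
n, m, r, c = 400, 400, 2, 20
U = rng.normal(size=(n, r))
V = rng.normal(size=(m, r))
mask = rng.random((n, m)) < c / np.sqrt(n * m)   # ~c*sqrt(nm) revealed entries
A = (U @ V.T) * mask
print("estimated rank:", estimate_rank(A))       # expect r = 2
```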
5 Conclusion and discussion

We have presented the X-Laplacian, a general approach for detecting the latent global structure in a given data matrix. It is a completely data-driven approach that learns a different regularization for each dataset, in order to solve the problem of localization of eigenvectors or singular vectors. The mechanism by which eigenvectors delocalize during the learning of the regularization has been illustrated using matrix perturbation analysis. We have validated our method with extensive numerical experiments, and shown that it outperforms state-of-the-art algorithms on various inference problems in the sparse regime and in the presence of noise. In this paper we discussed the X-Laplacian built directly on the (mean-removed) data matrix A, but we note that the data matrix is not the only possible choice. We have also tested approaches using various variants of A, such as the normalized data matrix $\tilde{A}$, and found that they work as well. We also tried learning regularizations for the Bethe Hessian, and found that this succeeds in repairing the Bethe Hessian when it has localization problems. These results indicate that our scheme of regularization learning is a general spectral approach for hard inference problems. A (Matlab) demo of our method can be found at http://panzhang.net.
1. What is the main contribution of the paper regarding community detection and matrix completion?
2. What are the strengths of the proposed approach, particularly in dealing with sparse and noisy networks?
3. Do you have any concerns or questions regarding the technical aspects of the paper, such as the new quantity IPR and its relation to community detection performance?
4. How does the reviewer assess the effectiveness of X-Laplacians in recovering communities, and what proof or simulations support this claim?
5. What are the limitations of the paper regarding its claims and comparisons with other works in the literature?
6. How does the reviewer evaluate the clarity and quality of the paper's writing?
Review
Review The paper proposes a new type of regularized Laplacian for performing community detection, clustering, and matrix completion. The paper proposes a way to learn the regularization of the Laplacian based on the network data, such that community detection and similar tasks are possible for sparse networks. The new type of regularization has also been claimed to produce a more robust Laplacian, which uncovers cluster structure even under errors in network data. The paper proposes a way to learn the regularization of Laplacians for dealing with the localization of eigenvectors of sparse and noisy networks. The iterative method of learning the regularization that has been proposed in the paper seems promising. However, the technical portion of the paper is lacking a bit. The paper proposes a new quantity, the Inverse Participation Ratio (IPR), but does not link the IPR with community detection performance theoretically. The paper claims that the lower the IPR, the better the eigenvector is for sparse and noisy networks; however, no formal result is stated about the bounds on the IPR of the regularized Laplacians under which community detection will be possible. Also, a formal proof of the effectiveness of the X-Laplacian in recovering communities is not provided (which has recently been proved for regularized Laplacian and non-backtracking matrices). No formal definition of the robustness of the Laplacians, and of how X-Laplacians achieve it, has been stated and proved in the paper. The paper uses simulation to substantiate many of the claims. The paper builds a nice connection between the different forms of regularized Laplacian/adjacency matrices proposed in the literature for sparse networks. The way of learning the regularization is also nice. The X-Laplacian method, if properly substantiated, can be quite a good way of achieving community detection and clustering for sparse and noisy networks. The writing of the paper is okay. It has some typos (like, pg 2, line 68, by learn should be by learning; pg 3, line 116, X-Lapacian should be X-Laplacian). The notation epsilon and epsilon* has been used interchangeably on pages 5 and 6. Overall, the paper seems a bit premature. It contains a nice idea, but a more formal analysis of X-Laplacians should be done before it is ready for publication.
1. What is the main contribution of the paper regarding spectral detection methods?
2. What are the concerns regarding the proposed approach and its relation to other methods?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Are there any questions or suggestions for improvements in the paper?
Review
Review This paper proposes, in spectral detection of global structures in data matrix, a means to resolve the localization problem, which is known to cause several spectral detection methods to fail in sparse and/or noisy cases. The proposal is based on learning a proper regularizer on the basis of localized eigenvectors of the data matrix, where the algorithm is such that one iteratively updates the regularizer so that the regularized matrix has no significant eigenvalues, with the associated eigenvectors being localized in terms of the inverse participation ratio (IPR). Although the use of IPR in quantifying localization would work well in practice, as demonstrated in the section on numerical evaluations, it seems heuristic, since there would be several different quantities which would have qualitatively the same property as the IPR. Why it is a good quantifier compared with other possibilities in terms of detection of global structures should be discussed. The author claims that the approach of trimming throws away part of the information, but one can see in the algorithm proposed in [12] that while trimming is used in the first stage the whole dataset is used in the final stage. I was not convinced by the argument in page 3, which claims that spectral algorithms using the non-backtracking matrix can also be interpreted as an example of equation (1). In the interpretation the regularizer is dependent on the eigenvalue, which obscures the interpretation of these spectral algorithms as instances of the regularized spectral method (equation (1)). Specifically, I do not see any description of the claimed direct relation between the vector v appearing in Line 90 and the eigenvectors of the non-backtracking matrix. Minor points: - Line 32: Weigner's -> Wigner's - Line 35: the wigner's -> Wigner's - Line 37: which (is -> are) roughly - Line 68: by learn(ing) a proper - Line 75: that correlate(d) - Line 103: that (are) specific - Line 108: have (a large eigenvalue -> large eigenvalues) / which represent(s) - Line 110: one who reveal -> ones which reveal - Line 127: semi-cycle -> semi-circle (But in this case the bulk of the spectrum is not exactly a semi-circle.) - Line 129: The spectra(l) density - Line 130: the continu(es -> ous) part - Line 152: the inverse participat(e -> ion) ratio - Lines 179, 293: state-of-art -> state-of-the-art - Line 220: between (duplicate) - Line 227: assign items a hidden clusters -> assigns to items hidden clusters - Line 228: then generate(s) similarity between (a) randomly sampled pairs of items - Line 236: Here the term overlap is briefly explained, but it has already appeared several times before. The brief explanation should be given at its first appearance. - Line 246: each selected node(s) - Line 249: fail(s) to work - Line 255: that (is) closely related - Line 276: where (one) can see - Line 286: a(n) accurate completion
NIPS
Title Robust Spectral Detection of Global Structures in the Data by Learning a Regularization Abstract Spectral methods are popular in detecting global structures in the given data that can be represented as a matrix. However when the data matrix is sparse or noisy, classic spectral methods usually fail to work, due to localization of eigenvectors (or singular vectors) induced by the sparsity or noise. In this work, we propose a general method to solve the localization problem by learning a regularization matrix from the localized eigenvectors. Using matrix perturbation analysis, we demonstrate that the learned regularizations suppress down the eigenvalues associated with localized eigenvectors and enable us to recover the informative eigenvectors representing the global structure. We show applications of our method in several inference problems: community detection in networks, clustering from pairwise similarities, rank estimation and matrix completion problems. Using extensive experiments, we illustrate that our method solves the localization problem and works down to the theoretical detectability limits in different kinds of synthetic data. This is in contrast with existing spectral algorithms based on data matrix, non-backtracking matrix, Laplacians and those with rank-one regularizations, which perform poorly in the sparse case with noise. 1 Introduction In many statistical inference problems, the task is to detect, from given data, a global structure such as low-rank structure or clustering. The task is usually hard to solve since modern datasets usually have a large dimensionality. When the dataset can be represented as a matrix, spectral methods are popular as it gives a natural way to reduce the dimensionality of data using eigenvectors or singular vectors. In the point-of-view of inference, data can be seen as measurements to the underlying structure. Thus more data gives more precise information about the underlying structure. However in many situations when we do not have enough measurements, i.e. the data matrix is sparse, standard spectral methods usually have localization problems thus do not work well. One example is the community detection in sparse networks, where the task is to partition nodes into groups such that there are many edges connecting nodes within the same group and comparatively few edges connecting nodes in different groups. It is well known that when the graph has a large connectivity c, simply using the first few eigenvectors of the adjacency matrix A ∈ {0, 1}n×n (with Aij = 1 denoting an edge between node i and node j,and Aij = 0 otherwise) gives a good result. In this case, like that of a sufficiently dense Erdős-Rényi (ER) random graph with average degree c, the spectral density follows Wigner’s semicircle rule, P (λ) = √ 4c− λ2/2πc, and there is a gap between the edge of bulk of eigenvalues and the informative eigenvalue that represents the underlying community structure. However when the network is large and sparse, the spectral density of the adjacency matrix deviates from the semicircle, the informative eigenvalue is hidden in the bulk of eigenvalues, as displayed in Fig. 1 left. Its eigenvectors associated with largest eigenvalues (which are roughly proportional to log n/ log log n for ER random graphs) are localized on the large- 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. degree nodes, thus reveal only local structures about large degrees rather than the underlying global structure. 
Other standard matrices for spectral clustering [19, 22], e.g. Laplacian, random walk matrix, normalized Laplacian, all have localization problems but on different local structures such as dangling trees. Another example is the matrix completion problem which asks to infer missing entries of matrix A ∈ Rm×n with rank r √ mn from only few observed entries. A popular method for this problem is based on the singular value decomposition (SVD) of the data matrix. However it is well known that when the matrix is sparse, SVD-based method performs very poorly, because the singular vectors corresponding to the largest singular values are localized, i.e. highly concentrated on high-weight column or row indices. A simple way to ease the pain of localization induced by high degree or weight is trimming [6, 13] which sets to zero columns or rows with a large degree or weight. However trimming throws away part of the information, thus does not work all the way down to the theoretical limit in the community detection problem [6, 15]. It also performs worse than other methods in matrix completion problem [25]. In recent years, many methods have been proposed for the sparsity-problem. One kind of methods use new linear operators related to the belief propagation and Bethe free energy, such as the nonbacktracking matrix [15] and Bethe Hessian [24]. Another kind of methods add to the data matrix or its variance a rank-one regularization matrix [2, 11, 16–18, 23]. These methods are quite successful in some inference problems in the sparse regime. However in our understanding none of them works in a general way to solve the localization problem. For instance, the non-backtracking matrix and the Bethe Hessian work very well when the graph has a locally-tree-like structure, but they have again the localization problems when the system has short loops or sub-structures like triangles and cliques. Moreover its performance is sensitive to the noise in the data [10]. Rank-one regularizations have been used for a long time in practice, the most famous example is the “teleportation” term in the Google matrix. However there is no satisfactory way to determine the optimal amount of regularization in general. Moreover, analogous to the non-backtracking matrix and Bethe Hessian, the rank-one regularization approach is also sensitive to the noise, as we will show in the paper. The main contribution of this paper is to illustrate how to solve the localization problem of spectral methods for general inference problems in sparse regime and with noise, by learning a proper regularization that is specific for the given data matrix from its localized eigenvectors. In the following text we will first discuss in Sec. 2 that all three methods for community detection in sparse graphs can be put into the framework of regularization. Thus the drawbacks of existing methods can be seen as improper choices of regularizations. In Sec. 3 we investigate how to choose a good regularization that is dedicated for the given data, rather than taking a fixed-form regularization as in the existing approaches. We use matrix perturbation analysis to illustrate how the regularization works in penalizing the localized eigenvectors, and making the informative eigenvectors that correlate with the global structure float to the top positions in spectrum. In Sec. 
4 we use extensive numerical experiments to validate our approach on several well-studied inference problems, including the community detection in sparse graphs, clustering from sparse pairwise entries, rank estimation and matrix completion from few entries. 2 Regularization as a unified framework We see that the above three methods for the community detection problem in sparse graphs, i.e. trimming, non-backtracking/Bethe Hessian, and rank-one regularizations, can be understood as doing different ways of regularizations. In this framework, we consider a regularized matrix L = Â+ R̂. (1) Here matrix  is the data matrix or its (symmetric) variance, such as à = D−1/2AD−1/2 with D denoting the diagonal matrix of degrees, and matrix R̂ is a regularization matrix. The rank-one regularization approaches [2, 11, 16–18, 23] fall naturally into this framework as they set R to be a rank-one matrix, −ζ11T , with ζ being a tunable parameter controlling strength of regularizations. It is also easy to see that in the trimming,  is set to be the adjacency matrix and R̂ contains entries to remove columns or rows with high degrees from A. For spectral algorithms using the non-backtracking matrix, its relation to form Eq. (1) is not straightforward. However we can link them using the theory of graph zeta function [8] which says that an eigenvalue µ of the non-backtracking operator satisfies the following quadratic eigenvalue equation, det[µ2I − µA+ (D − I)] = 0, where I is the identity matrix. It indicates that a particular vector v that is related to the eigenvector of the non-backtracking matrix satisfies (A − D−Iµ )v = µv. Thus spectral clustering algorithm using the non-backtracking matrix is equivalent to the spectral clustering algorithm using matrix with form in Eq. (1), while  = A, R̂ = D−Iµ , and µ acting as a parameter. We note here that the parameter does not necessarily be an eigenevalue of the non-backtracking matrix. Actually a range of parameters work well in practice, like those estimated from the spin-glass transition of the system [24]. So we have related different approaches of resolving localizations of spectral algorithm in sparse graphs into the framework of regularization. Although this relation is in the context of community detection in networks, we think it is a general point-of-view, when the data matrix has a general form rather than a {0, 1} matrix. As we have argued in the introduction, above three ways of regularization work from case to case and have different problems, especially when system has noise. It means that in the framework of regularizations, the effective regularization matrix R̂ added by these methods do not work in a general way and is not robust. In our understanding, the problem arises from the fact that in all these methods, the form of regularization is fixed for all kinds of data, regardless of different reasons for the localization. Thus one way to solve the problem would be looking for the regularizations that are specific for the given data, as a feature. In the following section we will introduce our method explicitly addressing how to learn such regularizations from localized eigenvectors of the data matrix. 3 Learning regularizations from localized eigenvectors The reason that the informative eigenvectors are hidden in the bulk is that some random eigenvectors have large eigenvalues, due to the localization which represent the local structures of the system. 
3 Learning regularizations from localized eigenvectors

The reason the informative eigenvectors are hidden in the bulk is that some random eigenvectors have large eigenvalues, due to localization on the local structures of the system. Conversely, if these eigenvectors are not localized, they are expected to have smaller eigenvalues than the informative ones, which reveal the global structure of the graph. This is the main assumption our idea is based on. In this work we use the Inverse Participation Ratio (IPR), $I(v) = \sum_{i=1}^{n} v_i^4$, to quantify the amount of localization of a (normalized) eigenvector $v$. The IPR has been used frequently in physics, for example for distinguishing extended states from localized states of a wave function [3]. It is easy to check that $I(v)$ ranges from $1/n$ for the vector $\left(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}}\right)$ to $1$ for the vector $(0, \ldots, 0, 1, 0, \ldots, 0)$; that is, a larger $I(v)$ indicates more localization in $v$.

Our idea is to create a matrix $L_X$ with a structure similar to $A$, but with non-localized leading eigenvectors. We call the resulting matrix the X-Laplacian, define it as $L_X = A + X$, where $A$ is the data matrix (or a variant of it), and learn $X$ using the procedure detailed below:

Algorithm 1: Regularization Learning
Input: real symmetric matrix $A$, number of eigenvectors $q$, learning rate $\eta = O(1)$, threshold $\Delta$.
Output: X-Laplacian $L_X$, whose leading eigenvectors reveal the global structures in $A$.
1. Set $X$ to the all-zero matrix.
2. Find the set of eigenvectors $U = \{u_1, u_2, \ldots, u_q\}$ associated with the $q$ algebraically largest eigenvalues of $L_X$.
3. Identify the eigenvector $v$ with the largest inverse participation ratio among the $q$ eigenvectors in $U$, i.e. find $v = \arg\max_{u \in U} I(u)$.
4. If $I(v) < \Delta$, return $L_X = A + X$; otherwise set $X_{ii} \leftarrow X_{ii} - \eta v_i^2$ for all $i$, then go to step 2.

The regularization matrix $X$ is a diagonal matrix whose diagonal entries are learned gradually from the most localized vector among the first several eigenvectors. The effect of $X$ is to penalize the localized eigenvectors by suppressing the eigenvalues associated with them. The learning continues until all $q$ leading eigenvectors are delocalized, and thus can be expected to correlate with the global structure rather than with local structures.

As an example, we show the effect of $X$ on the spectrum in Fig. 1. In the left panel, we plot the spectrum of the adjacency matrix (i.e. before learning $X$) of a sparse network generated by the stochastic block model with $q = 2$ groups; the spectral density of the X-Laplacian (i.e. after learning $X$) is shown in the right panel. For the adjacency matrix, localized eigenvectors have large eigenvalues and contribute a tail to the semicircle that covers the informative eigenvalue, leaving only one eigenvalue out of the bulk, corresponding to the eigenvector that essentially sorts vertices according to their degree. For the X-Laplacian, the right corner of the continuous part of the spectral density seen in the spectrum of the adjacency matrix is missing: due to the effect of $X$, the eigenvalues associated with localized eigenvectors of the adjacency matrix are pushed into the bulk, maintaining a gap between the edge of the bulk and the informative eigenvalue (pointed at by the left red arrow in the figure).

The key step of the algorithm is the learning in step 4, which updates the diagonal of $X$ using the most localized eigenvector $v$. Throughout the paper, by default we use learning rate $\eta = 10$ and threshold $\Delta = 5/n$.
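As a concrete companion to Algorithm 1, here is a minimal dense-matrix sketch in numpy. The function names are ours, the defaults $\eta = 10$ and $\Delta = 5/n$ follow the paper, and the `max_iter` safeguard is an added assumption that is not part of the algorithm as stated.

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio I(v) = sum_i v_i^4 of a normalized vector."""
    return np.sum(v ** 4)

def x_laplacian(A, q, eta=10.0, delta=None, max_iter=1000):
    """Learn the X-Laplacian L_X = A + X of Algorithm 1 (dense sketch)."""
    n = A.shape[0]
    if delta is None:
        delta = 5.0 / n                      # the paper's default threshold
    x = np.zeros(n)                          # diagonal of the regularization X
    for _ in range(max_iter):                # safeguard; Algorithm 1 loops until I(v) < delta
        vals, vecs = np.linalg.eigh(A + np.diag(x))
        U = vecs[:, -q:]                     # q algebraically largest eigenvectors
        iprs = np.array([ipr(U[:, k]) for k in range(q)])
        j = int(np.argmax(iprs))             # the most localized one, v
        if iprs[j] < delta:                  # all leading eigenvectors delocalized
            break
        x -= eta * U[:, j] ** 2              # step 4: X_ii <- X_ii - eta * v_i^2
    return A + np.diag(x)
```

A dense eigendecomposition is used here for clarity; for large sparse data one would compute only the $q$ leading eigenpairs with an iterative solver.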
As $\eta = O(1)$ and $v_i^2 = O(1/n)$, we can treat the entries learned in each step, $\hat{L}$, as a perturbation of the matrix $L_X$. After applying this perturbation, an eigenvalue of $L_X$ changes from $\lambda_i$ to $\lambda_i + \hat{\lambda}_i$, and the corresponding eigenvector changes from $u_i$ to $u_i + \hat{u}_i$. If we assume that the matrix $L_X$ is not ill-conditioned, and that the first few eigenvalues we care about are distinct, then first-order perturbation theory gives
$$\hat{\lambda}_i = u_i^T \hat{L} u_i.$$
The derivation of this expression is straightforward, but for completeness we give it in the SI text. In our algorithm, $\hat{L}$ is a diagonal matrix with entries $\hat{L}_{ii} = -\eta v_i^2$, where $v$ denotes the identified eigenvector with the largest inverse participation ratio, so the last equation can be written as $\hat{\lambda}_i = -\eta \sum_k v_k^2 u_{ik}^2$. For the identified vector $v$ itself, we further have
$$\hat{\lambda}_v = -\eta \sum_i v_i^4 = -\eta I(v). \qquad (2)$$
That is, the eigenvalue of the identified eigenvector with inverse participation ratio $I(v)$ is decreased by the amount $\eta I(v)$: the more localized the eigenvector, the larger the penalty on its eigenvalue.

In addition to the penalty on the localized eigenvalues, we see that the leading eigenvectors delocalize during learning. We have analyzed the change of the eigenvectors under the perturbation given by the identified vector $v$, and obtained (see the SI for the derivation) the change $\hat{u}_i$ of an eigenvector as a function of all the other eigenvalues and eigenvectors,
$$\hat{u}_i = -\eta \sum_{j \neq i} \frac{\sum_k u_{jk} v_k^2 u_{ik}}{\lambda_i - \lambda_j}\, u_j.$$
The inverse participation ratio of the new vector $u_i + \hat{u}_i$ can then be written as
$$I(u_i + \hat{u}_i) = I(u_i) - 4\eta \sum_{l=1}^{n} \sum_{j \neq i} \frac{u_{jl}^2 v_l^2 u_{il}^4}{\lambda_i - \lambda_j} - 4\eta \sum_{l=1}^{n} \sum_{j \neq i} \sum_{k \neq l} \frac{u_{il}^3 v_k^2 u_{jk} u_{ik} u_{jl}}{\lambda_i - \lambda_j}. \qquad (3)$$
As the eigenvectors $u_i$ and $u_j$ are orthogonal to each other, the term $4\eta \sum_{l=1}^{n} \sum_{j \neq i} \frac{u_{jl}^2 v_l^2 u_{il}^4}{\lambda_i - \lambda_j}$ can be seen as a signal term, and the last term as a cross-talk noise with zero mean. The cross-talk noise has a small variance, and empirically its effect can be neglected. For the leading eigenvector, i.e. the one corresponding to the largest eigenvalue $\lambda_i = \lambda_1$, it is straightforward to see that the signal term is strictly positive. Thus, if the learning is slow enough, the perturbation always decreases the inverse participation ratio of the leading eigenvector; this is essentially a convergence argument for the algorithm. For the other top eigenvectors, i.e. the second and third eigenvectors and so on, $\lambda_i - \lambda_j$ is not strictly positive, but there are many more positive than negative terms in the sum, so the signal is positive with high probability. One can therefore conclude that the process of learning $X$ delocalizes the first few eigenvectors.

An example illustrating the learning process is shown in Fig. 2, where we plot the second eigenvector against the third eigenvector at several time steps during the learning, for a network generated by the stochastic block model with $q = 3$ groups. At $t = 0$, i.e. without learning, both eigenvectors are localized, with a large range of entry values. The color of the points encodes the group membership in the planted partition: at $t = 0$ the three colors are mixed together, indicating that the two eigenvectors are not correlated with the planted partition. At $t = 4$ the three colors begin to separate, and the range of the entries becomes smaller, indicating that the localization is weaker. At $t = 25$ the three colors are well separated, and the partition obtained by applying the k-means algorithm to these vectors successfully recovers 70% of the group memberships. Moreover, the range of the eigenvector entries shrinks to $[-0.06, 0.06]$, giving a small inverse participation ratio.
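Equation (2) is easy to check numerically. The following small experiment is our construction, with a deliberately small $\eta$ so that first-order perturbation theory applies; it compares the actual shift of the top eigenvalue after one learning step with the prediction $-\eta I(v)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta = 200, 0.05                               # small eta keeps the perturbation first-order
M = rng.normal(size=(n, n))
A = (M + M.T) / 2                                # a random symmetric matrix
vals, vecs = np.linalg.eigh(A)
v = vecs[:, -1]                                  # treat the top eigenvector as the identified v
L = A - eta * np.diag(v ** 2)                    # one learning step: L_hat = -eta * diag(v^2)
shift = np.linalg.eigvalsh(L)[-1] - vals[-1]     # actual change of the top eigenvalue
print(shift, -eta * np.sum(v ** 4))              # the two numbers should nearly agree, per Eq. (2)
```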
4 Numerical evaluations

In this section we validate our approach with experiments on several inference problems: community detection, clustering from sparse pairwise entries, and rank estimation and matrix completion from a few entries. We compare the performance of the X-Laplacian (using the mean-removed data matrix) with recently proposed state-of-the-art spectral methods in the sparse regime.

4.1 Community detection

First we use synthetic networks generated by the stochastic block model [9] and by its variant with noise [10]. The standard stochastic block model (SBM), also called the planted partition model, is a popular model for generating ensembles of networks with community structure. There are $q$ groups of nodes and a planted partition $\{t_i^*\}$ with $t_i^* \in \{1, \ldots, q\}$. Edges are generated independently according to a $q \times q$ matrix $\{p_{ab}\}$. Without loss of generality we discuss the commonly studied case where the $q$ groups have equal size and where $\{p_{ab}\}$ has only two distinct entries, $p_{ab} = c_\mathrm{in}/n$ if $a = b$ and $p_{ab} = c_\mathrm{out}/n$ if $a \neq b$. Given the average degree $c$ of the graph, there is a so-called detectability transition at $\epsilon^* = c_\mathrm{out}/c_\mathrm{in} = (\sqrt{c} - 1)/(\sqrt{c} - 1 + q)$ [7], beyond which it is not possible to obtain any information about the planted partition. It is also known that spectral algorithms based on the non-backtracking matrix succeed all the way down to the transition [15]. This transition was recently established rigorously in the case of $q = 2$ [20, 21].

Comparisons of spectral methods using different matrices are shown in Fig. 3, left. From the figure we see that the X-Laplacian works as well as the non-backtracking matrix, down to the detectability transition, whereas the direct use of the adjacency matrix, i.e. $L_X$ before learning, does not work well once $\epsilon$ exceeds about 0.1. In the right panel of Fig. 3, each network is generated by the stochastic block model with the same parameters as in the left panel, but with 10 extra cliques, each containing 10 randomly selected nodes. These cliques do not carry information about the planted partition and hence act as noise in the system. In addition to the non-backtracking matrix, the X-Laplacian, and the adjacency matrix, we include in the comparison the results obtained using other classic and recently proposed matrices: the Bethe Hessian [24], the normalized Laplacian (N. Laplacian) $L_\mathrm{sym} = I - \tilde{A}$, and the regularized and normalized Laplacian (R.N. Laplacian) $L_A = \tilde{A} - \zeta \mathbf{1}\mathbf{1}^T$ with an optimized regularization $\zeta$ (we scanned the whole range of $\zeta$ and chose the value giving the largest overlap, i.e. the fraction of correctly reconstructed labels, in most cases). From the figure we see that with the noise added, only the X-Laplacian works down to the original transition (of the SBM without cliques); all other matrices fail to detect the community structure once $\epsilon > 0.15$.

We have tested other kinds of noisy models, including the noisy stochastic block model proposed in [10]. Our results show that the X-Laplacian works well (see the SI text), while all other spectral methods do not work at all on this dataset [10]. Moreover, in addition to the classic stochastic block model, we have extensively evaluated our method on networks generated by the degree-corrected stochastic block model [12] and by the stochastic block model with extensive triangles, and obtained qualitatively the same results as in Fig. 3: the X-Laplacian works as well as the state-of-the-art spectral methods for each dataset. The figures and detailed results can be found in the SI text.
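For reference, here is a hedged numpy sketch of the equal-group SBM sampler used in these experiments. The function name and interface are ours; $c$ is the average degree and $\epsilon = c_\mathrm{out}/c_\mathrm{in}$, from which both connection rates follow.

```python
import numpy as np

def sample_sbm(n, q, c, eps, seed=0):
    """Sample an equal-group SBM adjacency matrix and its planted partition.

    c is the average degree and eps = c_out / c_in, so that
    c = (c_in + (q - 1) * c_out) / q fixes both connection rates.
    """
    rng = np.random.default_rng(seed)
    c_in = q * c / (1.0 + (q - 1) * eps)
    c_out = eps * c_in
    labels = rng.integers(q, size=n)             # planted partition t_i^*
    same = labels[:, None] == labels[None, :]
    P = np.where(same, c_in / n, c_out / n)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)                            # keep one triangle, no self-loops
    return A + A.T, labels
```

The overlap reported in Fig. 3 is then the fraction of planted labels recovered by clustering the leading eigenvectors of the chosen matrix, maximized over permutations of the group labels.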
We have also tested real-world networks with an expert division, and found that although the expert division is usually easy to detect by directly using the adjacency matrix, the X-Laplacian significantly improves the accuracy of detection. For example, on the political blogs network [1], spectral clustering using the adjacency matrix gives 83 mis-classified labels out of 1222 labels in total, while the X-Laplacian gives only 50 mis-classified labels.

4.2 Clustering from sparse pairwise measurements

Consider the problem of grouping $n$ items into clusters based on a similarity matrix $S \in \mathbb{R}^{n \times n}$, where $S_{ij}$ is the pairwise similarity between items $i$ and $j$. Here we consider using not all pairwise similarities, but only $O(n)$ random samples of them. In other words, the similarity graph encoding the information about the global clustering structure is sparse rather than complete. There are many motivations for such sparse observations; for example, in some cases not all measurements are available, or they cannot all be stored. In this section we use the generative model recently proposed in [26], since it comes with a theoretical limit against which algorithms can be evaluated. Without loss of generality, we consider the problem with only $q = 2$ clusters. The model of [26] first assigns items to hidden clusters $\{t_i\} \in \{1, 2\}^n$, then generates the similarity of each randomly sampled pair of items according to a probability distribution, $p_\mathrm{in}$ or $p_\mathrm{out}$, depending on the memberships of the two items. There is a theoretical limit $\hat{c}$ satisfying
$$\frac{1}{\hat{c}} = \frac{1}{q} \int ds\, \frac{\left(p_\mathrm{in}(s) - p_\mathrm{out}(s)\right)^2}{p_\mathrm{in}(s) + (q - 1)\, p_\mathrm{out}(s)},$$
such that with $c < \hat{c}$ no algorithm can obtain any partial information about the planted clusters, while with $c > \hat{c}$ some algorithms, e.g. spectral clustering using the Bethe Hessian [26], achieve partial recovery of the planted clusters.

As for community detection in sparse graphs, spectral algorithms directly using the eigenvectors of the similarity matrix $S$ do not work well, due to the localization of eigenvectors induced by the sparsity. To evaluate whether our method solves the localization problem, and how it compares with the Bethe Hessian, in Fig. 4 we plot the performance (in overlap, the fraction of correctly reconstructed group labels) of the three algorithms on the same set of similarity matrices. For all the datasets there are two groups, with $p_\mathrm{in}$ and $p_\mathrm{out}$ Gaussian with unit variance and means 0.75 and $-0.75$, respectively. In the left panel of Fig. 4 the topology of the sampled pairs is an Erdős-Rényi random graph; the Bethe Hessian works down to the theoretical limit, while the direct use of the measurement matrix gives poor performance. We can also see that the X-Laplacian fixes the localization problem of the measurement matrix and works almost as well as the Bethe Hessian. We note that the Bethe Hessian needs to know the model parameters (i.e. the parameters of the distributions $p_\mathrm{in}$ and $p_\mathrm{out}$), while the X-Laplacian does not use them at all. In the right panel of Fig. 4, on top of the ER random-graph topology, we add noisy local structures by randomly selecting 20 nodes and connecting the neighbors of each selected node to each other. The weights of these local pairwise entries were set to 1, so the noisy structures carry no information about the underlying clustering. We can see that the Bethe Hessian is influenced by the noisy local structures and fails to work, while the X-Laplacian solves the localization problem induced by sparsity and is robust to the noise. We have also tested other kinds of noise, obtained by adding cliques or hubs, with similar results (see the SI text).
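The following is our sketch of a sampler for this pairwise-measurement model, matching the setup described above (uniformly sampled pairs, Gaussian similarities of unit variance and means $\pm\mu$); the function name, seed handling, and the exact pair-sampling scheme are assumptions.

```python
import numpy as np

def sample_pairwise(n, c, mu=0.75, seed=0):
    """Sample the two-group sparse similarity model of [26] (our sketch):
    about c*n/2 random pairs, with Gaussian similarities of unit variance
    and mean +mu for same-group pairs and -mu otherwise."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(2, size=n)
    S = np.zeros((n, n))
    i = rng.integers(n, size=int(c * n / 2))
    j = rng.integers(n, size=int(c * n / 2))
    keep = i != j                                # drop self-pairs
    i, j = i[keep], j[keep]
    mean = np.where(labels[i] == labels[j], mu, -mu)
    S[i, j] = rng.normal(mean, 1.0)
    S[j, i] = S[i, j]                            # symmetric similarity matrix
    return S, labels
```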
4.3 Rank estimation and matrix completion

The last problem we consider for evaluating the X-Laplacian is the completion of a low-rank matrix from a few revealed entries. This problem has many applications, including the famous collaborative filtering. A closely related problem is estimating the rank from the revealed entries; indeed, estimating the rank of the matrix is usually the first step before actually doing the matrix completion. The problem is defined as follows: let $A_\mathrm{true} = UV^T$, where $U \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{m \times r}$ are chosen uniformly at random and $r \ll \sqrt{nm}$ is the ground-truth rank. Only a few entries of $A_\mathrm{true}$, say $c\sqrt{mn}$ of them, are revealed. That is, we are given a matrix $A \in \mathbb{R}^{n \times m}$ containing only a subset of the entries of $A_\mathrm{true}$, with the other entries set to zero.

Many algorithms have been proposed for matrix completion, including nuclear-norm minimization [5] and methods based on the singular value decomposition [4], among others. Trimming, which sets to zero all rows and columns with a large number of revealed entries, is usually introduced to control the localization of the singular vectors and to estimate the rank from the gaps between singular values [14]. Analogously to the community detection problem, trimming is not expected to work optimally when the matrix $A$ is sparse. Indeed, the authors of [25] reported that their approach based on the Bethe Hessian outperforms trimming+SVD when the topology of the revealed entries is a sparse random graph; they also showed that the number of negative eigenvalues of the Bethe Hessian gives a more accurate estimate of the rank of $A$ than trimming+SVD. However, if the topology is not locally tree-like but contains noise, for example some additional cliques, both trimming of the data matrix and the Bethe Hessian perform much worse, reporting a wrong rank and giving a large reconstruction error, as illustrated in Fig. 5. In the left panel of the figure we plot the eigenvalues of the Bethe Hessian and the singular values of the trimmed matrix $A$ with true rank $r_\mathrm{true} = 2$. Both are continuously distributed: there is no clear gap in the singular values of the trimmed $A$, and the Bethe Hessian has many negative eigenvalues. Since the matrix $A$ may be non-square in this case, we define the X-Laplacian as
$$L_X = \begin{pmatrix} 0 & A \\ A^T & 0 \end{pmatrix} - X.$$
The eigenvalues of $L_X$ are also plotted in Fig. 5, where one can clearly see a gap between the second and the third largest eigenvalues. Thus the correct rank can be estimated from the largest gap between consecutive eigenvalues, as suggested in [14].

After estimating the rank, matrix completion is done by running a local optimization algorithm [27] starting from initial matrices obtained from the first $r$ singular vectors of trimming+SVD, the first $r$ eigenvectors of the Bethe Hessian, and the first $r$ eigenvectors of the X-Laplacian, respectively, with the estimated rank $r$. The results are shown in Fig. 5, right, where we plot the probability that the obtained root mean square error (RMSE) is smaller than $10^{-7}$ as a function of the average number of revealed entries per row, $c$, for the ER random-graph topology plus noise in the form of several cliques. We can see that the X-Laplacian outperforms the Bethe Hessian and trimming+SVD for $c \geq 13$; moreover, for $c \geq 18$ only the X-Laplacian gives an accurate completion on every instance.
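A hedged sketch of the rank-estimation step, reusing the `x_laplacian` routine sketched in Sec. 3: learning $A + X$ with negative diagonal updates is the same as subtracting a positive $X$ as in the definition above. The function name, `q_max` cap, and gap criterion details are our assumptions.

```python
import numpy as np

def estimate_rank(A, q_max=20, eta=10.0):
    """Rank estimation with the bipartite X-Laplacian (our sketch).

    Embeds the n x m matrix A symmetrically as [[0, A], [A^T, 0]],
    learns the regularization with x_laplacian (defined in Sec. 3),
    and reads off the rank from the largest gap between consecutive
    leading eigenvalues.
    """
    n, m = A.shape
    B = np.zeros((n + m, n + m))
    B[:n, n:] = A
    B[n:, :n] = A.T                              # symmetric bipartite embedding
    L = x_laplacian(B, q_max, eta=eta)
    vals = np.linalg.eigvalsh(L)[::-1][:q_max]   # leading eigenvalues, descending
    gaps = vals[:-1] - vals[1:]
    return int(np.argmax(gaps)) + 1              # rank = position of the largest gap
```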
5 Conclusion and discussion

We have presented the X-Laplacian, a general approach for detecting the latent global structure in a given data matrix. It is a completely data-driven approach that learns a different regularization for each dataset, in order to solve the problem of localization of eigenvectors or singular vectors. The mechanism by which eigenvectors are delocalized during the learning of the regularization has been illustrated using matrix perturbation analysis. We have validated our method with extensive numerical experiments and shown that it outperforms state-of-the-art algorithms on various inference problems in the sparse regime and with noise.

In this paper we have discussed the X-Laplacian applied directly to the (mean-removed) data matrix $A$, but the data matrix is not the only possible choice. We have tested approaches using various variants of $A$, such as the normalized data matrix $\tilde{A}$, and found that they work as well. We also tried learning regularizations for the Bethe Hessian, and found that this succeeds in repairing the Bethe Hessian when it has localization problems. These results indicate that our scheme of regularization learning is a general spectral approach to hard inference problems. A (Matlab) demo of our method can be found at http://panzhang.net.
1. What is the main contribution of the paper on spectral clustering?
2. What are the strengths of the proposed approach, particularly in its adaptive regularization method?
3. What are the weaknesses of the paper regarding its language, grammar, and readability?
4. How does the reviewer assess the effectiveness and robustness of the proposed method in dealing with small structures in graphs?
5. What are some potential future research directions related to the proposed method?
Review
This paper proposes a self-adaptive regularization method for spectral clustering. Various versions of spectral clustering from sparse data (sparse networks, or a sparse set of similarities) all share the undesirable property that there are small structures in the graphs (either occurring naturally, e.g. high-degree hubs or hanging trees, or placed by an adversary) that create large eigenvalues in the spectrum, with eigenvectors localized on the structure. Several methods (discussed in the paper) were previously suggested to deal with this problem, but none of them seems to be very universal. The current paper proposes an adaptive way to learn a regularization of the Laplacian, called the X-Laplacian, and illustrates that this strategy makes spectral clustering robust to a wide range of perturbations. I find this paper very exciting. It is possibly a generic way to make spectral methods robust to small adversarial changes in the input, in a similar manner as e.g. semi-definite programming is known to be. The proposed method is clear and relatively simple to understand and implement. The present paper provides a very good empirical assessment of the method. Not so much on the theoretical side, but I think this will definitely trigger research in this direction (making the implementation more efficient, providing guarantees, etc.). However, the language and grammar are not very good and the authors should surely make an additional effort in this direction (among many examples, Wigner is not "Weigner" nor "wigner", and methods are popular as *they give*, ...). The words "detectability transition" are unreadable in Fig. 3, and barely readable in Fig. 4.
1. What is the focus and contribution of the paper on detecting global structures in data?
2. What are the motivations and previous approaches discussed in the paper?
3. What is the proposed adaptive regularizer and how does it penalize localized eigenvectors?
4. Are there any concerns or confusion regarding the illustration in Figure 1?
Review
A spectral algorithm for the detection of global structures in data is presented. Motivations and previous approaches are discussed. The main contribution is an adaptive regularizer which penalizes localized eigenvectors. The proposed method is a bit ad hoc but seems to have some positive effects. For me it wasn't quite clear what Figure 1 is supposed to illustrate.
NIPS
Title Robust Spectral Detection of Global Structures in the Data by Learning a Regularization Abstract Spectral methods are popular in detecting global structures in the given data that can be represented as a matrix. However when the data matrix is sparse or noisy, classic spectral methods usually fail to work, due to localization of eigenvectors (or singular vectors) induced by the sparsity or noise. In this work, we propose a general method to solve the localization problem by learning a regularization matrix from the localized eigenvectors. Using matrix perturbation analysis, we demonstrate that the learned regularizations suppress down the eigenvalues associated with localized eigenvectors and enable us to recover the informative eigenvectors representing the global structure. We show applications of our method in several inference problems: community detection in networks, clustering from pairwise similarities, rank estimation and matrix completion problems. Using extensive experiments, we illustrate that our method solves the localization problem and works down to the theoretical detectability limits in different kinds of synthetic data. This is in contrast with existing spectral algorithms based on data matrix, non-backtracking matrix, Laplacians and those with rank-one regularizations, which perform poorly in the sparse case with noise. 1 Introduction In many statistical inference problems, the task is to detect, from given data, a global structure such as low-rank structure or clustering. The task is usually hard to solve since modern datasets usually have a large dimensionality. When the dataset can be represented as a matrix, spectral methods are popular as it gives a natural way to reduce the dimensionality of data using eigenvectors or singular vectors. In the point-of-view of inference, data can be seen as measurements to the underlying structure. Thus more data gives more precise information about the underlying structure. However in many situations when we do not have enough measurements, i.e. the data matrix is sparse, standard spectral methods usually have localization problems thus do not work well. One example is the community detection in sparse networks, where the task is to partition nodes into groups such that there are many edges connecting nodes within the same group and comparatively few edges connecting nodes in different groups. It is well known that when the graph has a large connectivity c, simply using the first few eigenvectors of the adjacency matrix A ∈ {0, 1}n×n (with Aij = 1 denoting an edge between node i and node j,and Aij = 0 otherwise) gives a good result. In this case, like that of a sufficiently dense Erdős-Rényi (ER) random graph with average degree c, the spectral density follows Wigner’s semicircle rule, P (λ) = √ 4c− λ2/2πc, and there is a gap between the edge of bulk of eigenvalues and the informative eigenvalue that represents the underlying community structure. However when the network is large and sparse, the spectral density of the adjacency matrix deviates from the semicircle, the informative eigenvalue is hidden in the bulk of eigenvalues, as displayed in Fig. 1 left. Its eigenvectors associated with largest eigenvalues (which are roughly proportional to log n/ log log n for ER random graphs) are localized on the large- 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. degree nodes, thus reveal only local structures about large degrees rather than the underlying global structure. 
Other standard matrices for spectral clustering [19, 22], e.g. Laplacian, random walk matrix, normalized Laplacian, all have localization problems but on different local structures such as dangling trees. Another example is the matrix completion problem which asks to infer missing entries of matrix A ∈ Rm×n with rank r √ mn from only few observed entries. A popular method for this problem is based on the singular value decomposition (SVD) of the data matrix. However it is well known that when the matrix is sparse, SVD-based method performs very poorly, because the singular vectors corresponding to the largest singular values are localized, i.e. highly concentrated on high-weight column or row indices. A simple way to ease the pain of localization induced by high degree or weight is trimming [6, 13] which sets to zero columns or rows with a large degree or weight. However trimming throws away part of the information, thus does not work all the way down to the theoretical limit in the community detection problem [6, 15]. It also performs worse than other methods in matrix completion problem [25]. In recent years, many methods have been proposed for the sparsity-problem. One kind of methods use new linear operators related to the belief propagation and Bethe free energy, such as the nonbacktracking matrix [15] and Bethe Hessian [24]. Another kind of methods add to the data matrix or its variance a rank-one regularization matrix [2, 11, 16–18, 23]. These methods are quite successful in some inference problems in the sparse regime. However in our understanding none of them works in a general way to solve the localization problem. For instance, the non-backtracking matrix and the Bethe Hessian work very well when the graph has a locally-tree-like structure, but they have again the localization problems when the system has short loops or sub-structures like triangles and cliques. Moreover its performance is sensitive to the noise in the data [10]. Rank-one regularizations have been used for a long time in practice, the most famous example is the “teleportation” term in the Google matrix. However there is no satisfactory way to determine the optimal amount of regularization in general. Moreover, analogous to the non-backtracking matrix and Bethe Hessian, the rank-one regularization approach is also sensitive to the noise, as we will show in the paper. The main contribution of this paper is to illustrate how to solve the localization problem of spectral methods for general inference problems in sparse regime and with noise, by learning a proper regularization that is specific for the given data matrix from its localized eigenvectors. In the following text we will first discuss in Sec. 2 that all three methods for community detection in sparse graphs can be put into the framework of regularization. Thus the drawbacks of existing methods can be seen as improper choices of regularizations. In Sec. 3 we investigate how to choose a good regularization that is dedicated for the given data, rather than taking a fixed-form regularization as in the existing approaches. We use matrix perturbation analysis to illustrate how the regularization works in penalizing the localized eigenvectors, and making the informative eigenvectors that correlate with the global structure float to the top positions in spectrum. In Sec. 
In Sec. 4 we use extensive numerical experiments to validate our approach on several well-studied inference problems, including community detection in sparse graphs, clustering from sparse pairwise entries, rank estimation, and matrix completion from few entries.

2 Regularization as a unified framework We see that the above three methods for the community detection problem in sparse graphs, i.e. trimming, the non-backtracking matrix/Bethe Hessian, and rank-one regularizations, can be understood as different ways of doing regularization. In this framework, we consider a regularized matrix

$L = \hat{A} + \hat{R}.$ (1)

Here matrix $\hat{A}$ is the data matrix or its (symmetric) variant, such as $\tilde{A} = D^{-1/2} A D^{-1/2}$ with D denoting the diagonal matrix of degrees, and matrix $\hat{R}$ is a regularization matrix. The rank-one regularization approaches [2, 11, 16–18, 23] fall naturally into this framework, as they set $\hat{R}$ to be a rank-one matrix, $-\zeta \mathbf{1}\mathbf{1}^T$, with $\zeta$ a tunable parameter controlling the strength of the regularization. It is also easy to see that in trimming, $\hat{A}$ is set to be the adjacency matrix and $\hat{R}$ contains entries that remove the columns or rows with high degrees from A. For spectral algorithms using the non-backtracking matrix, the relation to the form of Eq. (1) is not straightforward. However, we can link them using the theory of the graph zeta function [8], which says that an eigenvalue $\mu$ of the non-backtracking operator satisfies the quadratic eigenvalue equation $\det[\mu^2 I - \mu A + (D - I)] = 0$, where I is the identity matrix. It indicates that a particular vector v, related to the eigenvector of the non-backtracking matrix, satisfies $(A - \frac{D-I}{\mu})v = \mu v$. Thus the spectral clustering algorithm using the non-backtracking matrix is equivalent to spectral clustering using a matrix of the form of Eq. (1), with $\hat{A} = A$, $\hat{R} = -\frac{D-I}{\mu}$, and $\mu$ acting as a parameter. We note here that the parameter does not necessarily have to be an eigenvalue of the non-backtracking matrix; actually a range of parameters works well in practice, like those estimated from the spin-glass transition of the system [24].

So we have related the different approaches for resolving localization of spectral algorithms in sparse graphs to the framework of regularization. Although this relation is drawn in the context of community detection in networks, we think it is a general point of view that applies when the data matrix has a general form rather than being a {0, 1} matrix. As we argued in the introduction, the above three ways of regularization work only case by case and have different problems, especially when the system has noise. It means that, in the framework of regularization, the effective regularization matrix $\hat{R}$ added by these methods does not work in a general way and is not robust. In our understanding, the problem arises from the fact that in all these methods the form of the regularization is fixed for all kinds of data, regardless of the different reasons for the localization. Thus one way to solve the problem is to look for regularizations that are specific to the given data, as a feature. In the following section we introduce our method, explicitly addressing how to learn such regularizations from the localized eigenvectors of the data matrix.
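As a concrete illustration of this unified framework, the sketch below (our own code, not from the paper) builds $L = \hat{A} + \hat{R}$ for the three choices discussed above; the trimming threshold and the $1/n$ scaling of the rank-one term are our assumptions:

```python
# Illustrative sketch of the unified framework L = A_hat + R_hat (Eq. (1)).
# The 99th-percentile trimming threshold and the 1/n scaling of the rank-one
# term are our assumptions, not values from the paper.
import numpy as np

def regularized_matrix(A, kind, zeta=1.0, mu=2.0):
    n = A.shape[0]
    deg = A.sum(axis=1)
    if kind == "rank_one":
        d = np.where(deg > 0, deg, 1.0)
        A_tilde = A / np.sqrt(np.outer(d, d))       # A_tilde = D^-1/2 A D^-1/2
        return A_tilde - zeta * np.ones((n, n)) / n  # R_hat = -zeta * 1 1^T / n
    if kind == "trimming":
        R = np.zeros_like(A)
        heavy = deg > np.percentile(deg, 99)         # high-degree rows/columns
        R[heavy, :], R[:, heavy] = -A[heavy, :], -A[:, heavy]
        return A + R
    if kind == "non_backtracking":
        D = np.diag(deg)
        return A - (D - np.eye(n)) / mu              # R_hat = -(D - I) / mu
    raise ValueError(f"unknown regularization: {kind}")
```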
3 Learning regularizations from localized eigenvectors The reason that the informative eigenvectors are hidden in the bulk is that some random eigenvectors have large eigenvalues, due to localization, and these represent the local structures of the system. On the complementary side, if these eigenvectors are not localized, they are supposed to have smaller eigenvalues than the informative ones, which reveal the global structures of the graph. This is the main assumption our idea is based on.

In this work we use the Inverse Participation Ratio (IPR), $I(v) = \sum_{i=1}^{n} v_i^4$, to quantify the amount of localization of a (normalized) eigenvector v. The IPR has been used frequently in physics, for example for distinguishing the extended state from the localized state when applied to the wave function [3]. It is easy to check that I(v) ranges from $\frac{1}{n}$ for the vector $\{\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}}\}$ to 1 for the vector $\{0, \ldots, 0, 1, 0, \ldots, 0\}$. That is, a larger I(v) indicates more localization in vector v. Our idea is to create a matrix $L_X$ with structures similar to A, but with non-localized leading eigenvectors. We call the resulting matrix the X-Laplacian, and define it as $L_X = A + X$, where matrix A is the data matrix (or its variant), and X is learned using the procedure detailed below:

Algorithm 1: Regularization Learning
Input: real symmetric matrix A, number of eigenvectors q, learning rate $\eta = O(1)$, threshold $\Delta$.
Output: X-Laplacian $L_X$, whose leading eigenvectors reveal the global structures in A.
1. Set X to be the all-zero matrix.
2. Find the set of eigenvectors $U = \{u_1, u_2, \ldots, u_q\}$ associated with the q largest eigenvalues (in algebraic value) of $L_X$.
3. Identify the eigenvector v that has the largest inverse participation ratio among the q eigenvectors in U, i.e. find $v = \arg\max_{u\in U} I(u)$.
4. If $I(v) < \Delta$, return $L_X = A + X$; otherwise, set $X_{ii} \leftarrow X_{ii} - \eta v_i^2$ for all i, then go to step 2.

We can see that the regularization matrix X is a diagonal matrix whose diagonal entries are learned gradually from the most localized vector among the first several eigenvectors. The effect of X is to penalize the localized eigenvectors by suppressing the eigenvalues associated with them. The learning continues until all q leading eigenvectors are delocalized, and thus are supposed to correlate with the global structure rather than the local structures. As an example, we show the effect of X on the spectrum in Fig. 1. In the left panel, we plot the spectrum of the adjacency matrix (i.e. before learning X) and of the X-Laplacian (i.e. after learning X) of a sparse network generated by the stochastic block model with q = 2 groups. For the adjacency matrix in the left panel, localized eigenvectors have large eigenvalues and contribute a tail to the semicircle, covering the informative eigenvalue and leaving only one eigenvalue, which corresponds to the eigenvector that essentially sorts vertices according to their degree, out of the bulk. The spectral density of the X-Laplacian is shown in the right panel of Fig. 1. We can see that the right corner of the continuous part of the spectral density that appears in the spectrum of the adjacency matrix is missing here. This is because, due to the effect of X, the eigenvalues associated with localized eigenvectors of the adjacency matrix are pushed into the bulk, maintaining a gap between the edge of the bulk and the informative eigenvalue (pointed to by the left red arrow in the figure).
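Before turning to the perturbation analysis, here is a compact NumPy sketch of Algorithm 1 (our reading of the pseudocode above; the defaults follow the values $\eta = 10$ and $\Delta = 5/n$ stated in the next paragraph, and the eigensolver choice and iteration cap are ours):

```python
# A direct NumPy sketch of Algorithm 1; variable names follow the paper,
# while the eigensolver calls and the iteration cap are our choices.
import numpy as np

def ipr(v):
    """Inverse participation ratio I(v) = sum_i v_i^4 of a normalized vector."""
    return float(np.sum(v ** 4))

def learn_x_laplacian(A, q, eta=10.0, delta=None, max_iter=10000):
    n = A.shape[0]
    delta = 5.0 / n if delta is None else delta   # paper's default threshold
    x = np.zeros(n)                               # diagonal of X
    for _ in range(max_iter):
        vals, vecs = np.linalg.eigh(A + np.diag(x))
        U = vecs[:, -q:]                          # q algebraically largest
        iprs = [ipr(U[:, k]) for k in range(q)]
        k = int(np.argmax(iprs))
        if iprs[k] < delta:                       # all leading vectors delocalized
            break
        x -= eta * U[:, k] ** 2                   # step 4: X_ii <- X_ii - eta v_i^2
    return A + np.diag(x)
```

The leading eigenvectors of the returned matrix can then be fed to k-means, as is done for the partitions shown in Fig. 2.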
The key procedure of the algorithm is the learning part in step 4, which updates the diagonal terms of matrix X using the most localized eigenvector v. Throughout the paper, by default we use learning rate $\eta = 10$ and threshold $\Delta = 5/n$. As $\eta = O(1)$ and $v_i^2 = O(1/n)$, we can treat the entries learned in each step, $\hat{L}$, as a perturbation to matrix $L_X$. After applying this perturbation, we anticipate that an eigenvalue of $L_X$ changes from $\lambda_i$ to $\lambda_i + \hat{\lambda}_i$, and an eigenvector changes from $u_i$ to $u_i + \hat{u}_i$. If we assume that matrix $L_X$ is not ill-conditioned, and that the first few eigenvalues we care about are distinct, then we have $\hat{\lambda}_i = u_i^T \hat{L} u_i$. The derivation of this expression is straightforward, but for completeness we give it in the SI text. In our algorithm, $\hat{L}$ is a diagonal matrix with entries $\hat{L}_{ii} = -\eta v_i^2$, with v denoting the identified eigenvector that has the largest inverse participation ratio, so the last equation can be written as $\hat{\lambda}_i = -\eta \sum_k v_k^2 u_{ik}^2$. For the identified vector v itself, we further have

$\hat{\lambda}_v = -\eta \sum_i v_i^4 = -\eta I(v).$ (2)

It means that the eigenvalue of the identified eigenvector, with inverse participation ratio I(v), is decreased by the amount $\eta I(v)$. That is, the more localized the eigenvector is, the larger the penalty on its eigenvalue.

In addition to the penalty on the localized eigenvalues, we see that the leading eigenvectors delocalize during learning. We have analyzed the change of the eigenvectors after the perturbation given by the identified vector v, and obtained (see SI for the derivation) the change of an eigenvector $\hat{u}_i$ as a function of all the other eigenvalues and eigenvectors, $\hat{u}_i = -\eta \sum_{j\neq i} \frac{\sum_k u_{jk} v_k^2 u_{ik}}{\lambda_i - \lambda_j} u_j$. Then the inverse participation ratio of the new vector $u_i + \hat{u}_i$ can be written as

$I(u_i + \hat{u}_i) = I(u_i) - 4\eta \sum_{l=1}^{n} \sum_{j\neq i} \frac{u_{jl}^2 v_l^2 u_{il}^4}{\lambda_i - \lambda_j} - 4\eta \sum_{l=1}^{n} \sum_{j\neq i} \sum_{k\neq l} \frac{u_{il}^3 v_k^2 u_{jk} u_{ik} u_{jl}}{\lambda_i - \lambda_j}.$ (3)

Since the eigenvectors $u_i$ and $u_j$ are orthogonal to each other, the term $4\eta \sum_{l=1}^{n} \sum_{j\neq i} \frac{u_{jl}^2 v_l^2 u_{il}^4}{\lambda_i - \lambda_j}$ can be seen as a signal term and the last term as cross-talk noise with zero mean. We find that the cross-talk noise has a small variance, and empirically its effect can be neglected. For the leading eigenvector, corresponding to the largest eigenvalue $\lambda_i = \lambda_1$, it is straightforward to see that the signal term is strictly positive. Thus, if the learning is slow enough, the perturbation always decreases the inverse participation ratio of the leading eigenvector. This is essentially an argument for the convergence of the algorithm. For the other top eigenvectors, i.e. the second and third eigenvectors and so on, $\lambda_i - \lambda_j$ is not strictly positive, but there are many more positive terms than negative terms in the sum, so the signal should be positive with high probability. Thus one can conclude that the process of learning X delocalizes the first few eigenvectors.

An example illustrating the learning process is shown in Fig. 2, where we plot the second eigenvector vs. the third eigenvector at several time steps during learning, for a network generated by the stochastic block model with q = 3 groups. We see that at t = 0, i.e. without learning, both eigenvectors are localized, with entries distributed over a large range. The color of the eigenvector entries encodes the group membership in the planted partition. At t = 0 the three colors are mixed together, indicating that the two eigenvectors are not correlated with the planted partition. At t = 4 the three colors begin to separate, and the range of the entry distribution becomes smaller, indicating that the localization is lighter. At t = 25 the three colors are well separated, and the partition obtained by applying the k-means algorithm to these vectors successfully recovers 70% of the group memberships. Moreover, we can see that the range of the eigenvector entries shrinks to [−0.06, 0.06], giving a small inverse participation ratio.
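The first-order formulas above are easy to check numerically. The following sketch (ours; a small $\eta$ is used so that first-order theory holds cleanly on a random test matrix) compares the predicted shifts $\hat{\lambda}_i = u_i^T \hat{L} u_i$ against an exact re-diagonalization:

```python
# Numerical sanity check (ours) of the first-order eigenvalue shift
# lambda_hat_i = u_i^T L_hat u_i for one learning step of Algorithm 1.
import numpy as np

rng = np.random.default_rng(1)
n, eta = 300, 1.0                        # small eta keeps first order accurate
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                        # random symmetric test matrix
vals, vecs = np.linalg.eigh(A)

v = vecs[:, -1]                          # stand-in for the identified vector
L_hat = np.diag(-eta * v ** 2)           # the step-4 perturbation

pred = np.einsum("ki,kl,li->i", vecs, L_hat, vecs)   # u_i^T L_hat u_i for all i
exact = np.linalg.eigvalsh(A + L_hat) - vals
print("max first-order error:", np.max(np.abs(exact - pred)))
print("shift of identified vector:", pred[-1],
      "vs -eta*I(v):", -eta * np.sum(v ** 4))        # cf. Eq. (2)
```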
4 Numerical evaluations In this section we validate our approach with experiments on several inference problems: community detection, clustering from sparse pairwise entries, rank estimation, and matrix completion from a few entries. We compare the performance of the X-Laplacian (using the mean-removed data matrix) with recently proposed state-of-the-art spectral methods in the sparse regime.

4.1 Community Detection First we use synthetic networks generated by the stochastic block model [9] and its variant with noise [10]. The standard Stochastic Block Model (SBM), also called the planted partition model, is a popular model for generating ensembles of networks with community structure. There are q groups of nodes and a planted partition $\{t_i^*\} \in \{1, \ldots, q\}$. Edges are generated independently according to a $q \times q$ matrix $\{p_{ab}\}$. Without loss of generality, here we discuss the commonly studied case where the q groups have equal size and where $\{p_{ab}\}$ has only two distinct entries, $p_{ab} = c_{in}/n$ if $a = b$ and $c_{out}/n$ if $a \neq b$. Given the average degree c of the graph, there is a so-called detectability transition $\epsilon^* = c_{out}/c_{in} = (\sqrt{c} - 1)/(\sqrt{c} - 1 + q)$ [7], beyond which it is not possible to obtain any information about the planted partition. It is also known that spectral algorithms based on the non-backtracking matrix succeed all the way down to the transition [15]. This transition was recently established rigorously in the case of q = 2 [20, 21].

Comparisons of spectral methods using different matrices are shown in Fig. 3 left. From the figure we see that the X-Laplacian works as well as the non-backtracking matrix, down to the detectability transition, while the direct use of the adjacency matrix, i.e. $L_X$ before learning, does not work well once $\epsilon$ exceeds about 0.1. In the right panel of Fig. 3, each network is generated by the stochastic block model with the same parameters as in the left panel, but with 10 extra cliques, each of which contains 10 randomly selected nodes. These cliques do not carry information about the planted partition and hence act as noise in the system. In addition to the non-backtracking matrix, the X-Laplacian, and the adjacency matrix, we include in the comparison results obtained using other classic and newly proposed matrices, including the Bethe Hessian [24], the normalized Laplacian (N. Laplacian) $L_{sym} = I - \tilde{A}$, and the regularized and normalized Laplacian (R.N. Laplacian) $L_A = \tilde{A} - \zeta \mathbf{1}\mathbf{1}^T$ with an optimized regularization $\zeta$ (we scanned the whole range of $\zeta$ and chose an optimal value that gives the largest overlap, i.e. the fraction of correctly reconstructed labels, in most cases). From the figure we see that, with the noise added, only the X-Laplacian works down to the original transition (of the SBM without cliques). All other matrices fail to detect the community structure for $\epsilon > 0.15$.

We have tested other kinds of noisy models, including the noisy stochastic block model proposed in [10]. Our results show that the X-Laplacian works well (see SI text), while all other spectral methods do not work at all on this dataset [10]. Moreover, in addition to the classic stochastic block model, we have extensively evaluated our method on networks generated by the degree-corrected stochastic block model [12] and the stochastic block model with extensive triangles. We obtained qualitatively similar results to those in Fig. 3: the X-Laplacian works as well as the state-of-the-art spectral methods for each dataset. The figures and detailed results can be found in the SI text.
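The synthetic setup of this subsection can be reproduced with a short sketch (ours, not the paper's code). The generator below follows the SBM parameterization given above, with group sizes equal only in expectation, which is our simplification; the overlap is the raw fraction of correctly recovered labels maximized over label permutations (papers often additionally rescale this so that chance level gives 0):

```python
# Assumed experimental scaffolding: SBM generation at ratio eps = c_out/c_in
# and average degree c, plus the overlap of a partition against the truth.
import numpy as np
from itertools import permutations

def sbm(n, q, c, eps, rng):
    """SBM with average degree c; groups are equal-sized only in expectation."""
    labels = rng.integers(q, size=n)
    c_in = q * c / (1.0 + (q - 1) * eps)          # solves c = (c_in + (q-1)c_out)/q
    c_out = eps * c_in
    p = np.where(np.equal.outer(labels, labels), c_in / n, c_out / n)
    a = np.triu(rng.random((n, n)) < p, 1).astype(float)
    return a + a.T, labels

def overlap(pred, truth, q):
    """Fraction of correctly reconstructed labels, maximized over permutations."""
    return max(np.mean(np.array(perm)[pred] == truth)
               for perm in permutations(range(q)))
```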
We have also tested real-world networks with an expert division, and found that although the expert division is usually easy to detect by directly using the adjacency matrix, the X-Laplacian significantly improves the accuracy of detection. For example, on the political blogs network [1], spectral clustering using the adjacency matrix gives 83 mis-classified labels out of 1222 labels in total, while the X-Laplacian gives only 50 mis-classified labels.

4.2 Clustering from sparse pairwise measurements Consider the problem of grouping n items into clusters based on a similarity matrix $S \in \mathbb{R}^{n\times n}$, where $S_{ij}$ is the pairwise similarity between items i and j. Here we consider using not all pairwise similarities, but only O(n) random samples of them. In other words, the similarity graph that encodes the information about the global clustering structure is sparse rather than complete. There are many motivations for choosing such sparse observations; for example, in some cases not all measurements are available, or they cannot all be stored. In this section we use the generative model recently proposed in [26], since it comes with a theoretical limit that can be used to evaluate algorithms. Without loss of generality, we consider the problem with only q = 2 clusters. The model in [26] first assigns items hidden clusters $\{t_i\} \in \{1, 2\}^n$, then generates similarities for randomly sampled pairs of items according to probability distributions $p_{in}$ and $p_{out}$, associated with the memberships of the two items. There is a theoretical limit $\hat{c}$ satisfying

$\frac{1}{\hat{c}} = \frac{1}{q}\int \mathrm{d}s\, \frac{(p_{in}(s) - p_{out}(s))^2}{p_{in}(s) + (q-1)\,p_{out}(s)},$

such that for $c < \hat{c}$ no algorithm can obtain any partial information about the planted clusters, while for $c > \hat{c}$ some algorithms, e.g. spectral clustering using the Bethe Hessian [26], achieve partial recovery of the planted clusters.

Similar to community detection in sparse graphs, spectral algorithms that directly use the eigenvectors of the similarity matrix S do not work well, due to the localization of eigenvectors induced by the sparsity. To evaluate whether our method, the X-Laplacian, solves the localization problem, and how it compares with the Bethe Hessian, in Fig. 4 we plot the performance (in overlap, the fraction of correctly reconstructed group labels) of the three algorithms on the same set of similarity matrices. For all the datasets there are two groups, with distributions $p_{in}$ and $p_{out}$ being Gaussian with unit variance and means 0.75 and −0.75 respectively. In the left panel of Fig. 4 the topology of the pairwise entries is a random graph; the Bethe Hessian works down to the theoretical limit, while direct use of the measurement matrix gives poor performance. We can also see that the X-Laplacian fixes the localization problem of the measurement matrix, and works almost as well as the Bethe Hessian. We note that the Bethe Hessian needs to know the parameters of the model (i.e. the parameters of the distributions $p_{in}$ and $p_{out}$), while the X-Laplacian does not use them at all. In the right panel of Fig. 4, on top of the ER random graph topology, we add noisy local structures by randomly selecting 20 nodes and connecting the neighbors of each selected node to each other. The weights of these local pairwise entries were set to 1, so that the noisy structures do not contain information about the underlying clustering. We can see that the Bethe Hessian is influenced by the noisy local structures and fails to work, while the X-Laplacian solves the localization problems induced by sparsity, and is robust to the noise.
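For completeness, here is a sketch (ours) of the sparse pairwise-measurement generator described above; reading "O(n) random samples" as cn/2 sampled pairs is our assumption:

```python
# Sketch of the generative model of [26] as described in the text: Gaussian
# p_in / p_out with means +mu and -mu and unit variance, on a sparse set of
# randomly sampled pairs.
import numpy as np

def sparse_similarities(n, c, rng, mu=0.75):
    t = rng.integers(2, size=n)                  # hidden cluster labels
    m = int(c * n / 2)                           # number of sampled pairs
    i = rng.integers(n, size=m)
    j = rng.integers(n, size=m)
    keep = i != j                                # drop self-pairs
    i, j = i[keep], j[keep]
    mean = np.where(t[i] == t[j], mu, -mu)       # p_in vs. p_out means
    s = rng.normal(mean, 1.0)                    # unit-variance Gaussians
    S = np.zeros((n, n))
    np.add.at(S, (i, j), s)                      # accumulate duplicate pairs
    np.add.at(S, (j, i), s)                      # symmetrize
    return S, t
```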
We have also tested other kinds of noise, adding cliques or hubs, and obtained similar results (see SI text).

4.3 Rank estimation and matrix completion The last problem we consider for evaluating the X-Laplacian is the completion of a low-rank matrix from few entries. This problem has many applications, including the famous collaborative filtering. A closely related problem is rank estimation from the revealed entries; indeed, estimating the rank of the matrix is usually the first step before actually doing the matrix completion. The problem is defined as follows: let $A_{true} = UV^T$, where $U \in \mathbb{R}^{n\times r}$ and $V \in \mathbb{R}^{m\times r}$ are chosen uniformly at random and $r \ll \sqrt{nm}$ is the ground-truth rank. Only a few, say $c\sqrt{mn}$, entries of matrix $A_{true}$ are revealed. That is, we are given a matrix $A \in \mathbb{R}^{n\times m}$ that contains only a subset of the entries of $A_{true}$, with the other elements being zero.

Many algorithms have been proposed for matrix completion, including nuclear-norm minimization [5] and methods based on the singular value decomposition [4], etc. Trimming, which sets to zero all rows and columns with a large number of revealed entries, is usually introduced to control the localization of singular vectors and to estimate the rank using the gap in singular values [14]. Analogous to the community detection problem, trimming is not expected to work optimally when matrix A is sparse. Indeed, in [25] the authors reported that their approach based on the Bethe Hessian outperforms trimming+SVD when the topology of revealed entries is a sparse random graph. Moreover, the authors of [25] show that the number of negative eigenvalues of the Bethe Hessian gives a more accurate estimate of the rank of A than the estimate based on trimming+SVD. However, we find that if the topology is not locally tree-like but contains some noise, for example some additional cliques, both trimming of the data matrix and the Bethe Hessian perform much worse, reporting a wrong rank and giving a large reconstruction error, as illustrated in Fig. 5. In the left panel of the figure we plot the eigenvalues of the Bethe Hessian and the singular values of the trimmed matrix A with true rank $r_{true} = 2$. We can see that both of them are continuously distributed: there is no clear gap in the singular values of the trimmed A, and the Bethe Hessian has lots of negative eigenvalues. In this case, since matrix A can be non-square, we need to define the X-Laplacian as

$L_X = \begin{pmatrix} 0 & A \\ A^T & 0 \end{pmatrix} + X.$

The eigenvalues of $L_X$ are also plotted in Fig. 5, where one can see clearly that there is a gap between the second-largest eigenvalue and the third one. Thus the correct rank can be estimated from the gap between consecutive eigenvalues, as suggested in [14]. After estimating the rank of the matrix, matrix completion is done by using a local optimization algorithm [27] starting from initial matrices obtained using the first r singular vectors of trimming+SVD and the first r eigenvectors of the Bethe Hessian and of the X-Laplacian, with estimated rank r, respectively. The results are shown in Fig. 5 right, where we plot the probability that the obtained root mean square error (RMSE) is smaller than $10^{-7}$, as a function of the average number of revealed entries per row c, for the ER random-graph topology plus noise represented by several cliques. We can see that the X-Laplacian outperforms the Bethe Hessian and trimming+SVD for $c \geq 13$. Moreover, when $c \geq 18$, only the X-Laplacian gives an accurate completion for all instances.
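The bipartite embedding and the gap-based rank estimate can be sketched as follows (our illustration; the search window r_max is an assumption):

```python
# Sketch of the rank-estimation step: embed a rectangular A into the
# symmetric bipartite form given in the text and read the rank off the
# largest gap between consecutive eigenvalues.
import numpy as np

def bipartite_x_laplacian(A, x_diag=None):
    n, m = A.shape
    L = np.block([[np.zeros((n, n)), A],
                  [A.T, np.zeros((m, m))]])
    if x_diag is not None:                       # learned diagonal regularization
        L += np.diag(x_diag)
    return L

def estimate_rank(L, r_max=20):
    vals = np.sort(np.linalg.eigvalsh(L))[::-1]  # descending eigenvalues
    gaps = vals[:r_max] - vals[1:r_max + 1]      # consecutive eigenvalue gaps
    return int(np.argmax(gaps)) + 1              # largest gap marks the rank
```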
5 Conclusion and discussion We have presented the X-Laplacian, a general approach for detecting latent global structure in a given data matrix. It is a completely data-driven approach that learns a different regularization for different data, to solve the problem of localization of eigenvectors or singular vectors. The mechanism by which eigenvectors delocalize during the learning of the regularization has been illustrated using matrix perturbation analysis. We have validated our method with extensive numerical experiments, and shown that it outperforms state-of-the-art algorithms on various inference problems in the sparse regime and with noise.

In this paper we discuss the X-Laplacian using the (mean-removed) data matrix A directly, but we note that the data matrix is not the only choice for the X-Laplacian. We have actually tested approaches using various variants of A, such as the normalized data matrix $\tilde{A}$, and found that they work as well. We also tried learning regularizations for the Bethe Hessian, and found that this succeeds in repairing the Bethe Hessian when it has localization problems. These results indicate that our scheme of regularization learning is a general spectral approach for hard inference problems. A (Matlab) demo of our method can be found at http://panzhang.net.
1. What is the main contribution of the paper on sparse matrix spectral analysis? 2. What are the concerns regarding the vagueness of the paper's goal? 3. How can the algorithm handle the issue of singular eigenvectors in sparse matrices? 4. What kind of theoretical analysis or real-world applications would make the reviewer more convinced about the proposed method?
Review
Review This paper presents a method to regularize sparse-matrix spectral analysis problems. The authors claim to solve the problem of localized eigenvectors. A heuristic algorithm is proposed to learn the regularization, which basically suppresses the eigenvalues associated with localized eigenvectors. This paper proposes an algorithm to learn a regularization that solves the problem of localized eigenvectors in sparse matrices. However, the goal of this paper is vague: the authors state that the goal is to create a matrix with non-localized leading eigenvectors that is close to the original sparse matrix. Two things are unclear: 1. What does "close to" mean exactly? Is there a distance metric defined? Does the algorithm find the closest such matrix or a sub-optimal one? What about the perturbation error? 2. What does a "matrix with non-localized leading eigenvectors" mean? Do you assume full rank? What if the sparse matrix is of high dimension, but the rank is actually low? I believe that the idea behind the algorithm is neat, but I would be more convinced if the authors provided more precise theoretical analysis or real-world applications.
NIPS
1. What is the main contribution of the paper regarding regularization and community detection? 2. What are the strengths of the proposed algorithm, particularly in its ability to work in sparse and noisy regimes? 3. What are the weaknesses of the paper, specifically regarding the lack of convergence and consistency analysis? 4. How does the reviewer assess the novelty and potential impact of the proposed method? 5. Are there any minor issues or typos in the paper that the reviewer has identified?
Review
Review This paper discusses regularization as a unified framework for the community detection problem in sparse graphs, and proposes an algorithm to learn the regularization from localized eigenvectors, through the proposed inverse participation ratio, which quantifies the amount of localization of a (normalized) eigenvector. The algorithm works in the sparse and noisy regime. The authors conducted experiments on community detection, clustering from sparse pairwise measurements, rank estimation, and matrix completion. The idea of this paper is novel and the analysis of how the authors arrive at the algorithm is neat: they propose an algorithm to learn regularizations from localized eigenvectors, and it works in the sparse and noisy regime, where spectral algorithms usually do not perform well. Plenty of experiments have been conducted to show the performance, and I see this kind of method as promising. However, the results seem preliminary, as there is no proven guarantee for the algorithm. Some major problems are: (1) The analysis in this paper only explains how to arrive at the algorithm; the authors did a great job of explaining how to incorporate the proposed ratio into the algorithm using perturbation theory, but the paper lacks a convergence analysis (i.e., how to choose \Delta to guarantee a successful recovery) and a consistency analysis (i.e., how large the recovery error is). (2) Also, it is not clear how they come up with the ratio, as it is a sum of 4th-order elements; it seems to come from the design of the algorithm, but an analysis of whether this can serve as a generalized metric would be more helpful. Some minor problems/typos: (1) The authors use 'overlap' as the accuracy of prediction, which is confusing, as there is also the overlapping community detection problem. (2) Equation 3 should begin with \approx, not equality, as shown in the SI. (3) Figure 1 would be better if it plotted the density as a curve rather than a bar graph. (4) 'I.e.' in line 118 etc. is strange; conventionally 'i.e.' is used. (5) Line 220, 'the pairwise similarity between between items' should be 'the pairwise similarity between items'. (6) Line 226, 'Without loose of generality' should be 'Without loss of generality'. (7) Line 227, 'assign items a hidden clusters' should be 'assign items hidden clusters'. (8) Line 279, 'After the estimating the rank of the matrix' should be 'After estimating the rank of the matrix'. (9) In Figure 1 of the SI, we have no idea what I2, I3, I4, D2, D3, D4 represent.
NIPS
Title Random Normalization Aggregation for Adversarial Defense Abstract The vulnerability of deep neural networks has been widely found in various models as well as tasks, where slight perturbations of the inputs can lead to incorrect predictions. These perturbed inputs are known as adversarial examples, and one of their intriguing properties is Adversarial Transferability, i.e. the capability of adversarial examples to fool other models. Traditionally, this transferability is regarded as a critical threat to the defense against adversarial attacks; however, we argue that network robustness can be significantly boosted by utilizing adversarial transferability from a new perspective. In this work, we first discuss the influence of different popular normalization layers on adversarial transferability, and then provide both empirical evidence and theoretical analysis to shed light on the relationship between normalization types and transferability. Based on our theoretical analysis, we propose a simple yet effective module named Random Normalization Aggregation (RNA), which replaces the batch normalization layers in the network and aggregates different selected normalization types to form a huge random space. Specifically, a random path is sampled during each inference procedure, so that the network itself can be treated as an ensemble of a wide range of different models. Since the entire random space is designed with low adversarial transferability, it is difficult to perform effective attacks even when the network parameters are accessible. We conduct extensive experiments on various models and datasets, and demonstrate the strong superiority of the proposed algorithm. The PyTorch code is available at https://github.com/UniSerj/Random-Norm-Aggregation and the MindSpore code is available at https://gitee.com/mindspore/models/tree/master/research/cv/RNA.
1 Introduction Deep Neural Networks (DNNs) have achieved impressive performance in various tasks [1, 2, 3]. However, it is well known that DNNs are susceptible to maliciously generated adversarial examples [4, 5]: through imperceptible perturbations of the model inputs at inference time, the model can be misled into wrong predictions at a high rate. A wide range of attack techniques have since been proposed under different threat models and show strong attack capability. For example, attackers may have full access to the model architecture and parameters (white-box attacks [5, 6]) or only limited query access to the model (black-box attacks [7, 8]). Since the high success rates of these attacks reveal the risk DNNs face, defenses against adversarial examples have received increasing attention, and adversarial robustness has become one of the key evaluation criteria. To mitigate this risk, adversarial training has been proposed to yield robust models by training on generated adversarial examples [5, 6]. Beyond the training procedure, regularization and image preprocessing techniques have also been introduced to improve adversarial robustness [9, 10]. Recent works note that architecture and module design can play important roles in robustness [11, 12]. Hence, we pay more attention to basic network modules that are seldom considered for improving robustness, such as normalization layers. Existing works have discussed the influence of Batch Normalization (BN) and empirically shown that BN increases adversarial vulnerability and decreases adversarial transferability [13, 14]. However, the theoretical analysis of this observation is insufficient, and how to tackle this pitfall, or even exploit this property to defend against attacks, remains unexplored. In this work, we take numerous normalizations into consideration, including Layer Normalization (LN), Group Normalization (GN), Instance Normalization (IN), Batch Normalization (BN), and Batch Group Normalization (BGN) [15, 16, 17, 18, 19], as shown in Figure 1 (a) and (b). To evaluate the influence of different normalizations on robustness, we first run the PGD-7 attack [6] on both naturally and adversarially trained networks with different normalizations on CIFAR-10, as shown on the diagonals of Figure 1 (c) and (d). Not surprisingly, BN obtains the best robustness compared to the other variants. However, we make an intriguing observation from the transferability evaluations among different normalizations. As illustrated in the heatmaps, the adversarial accuracies in most scenarios are around 70% when networks are fed transferred adversarial examples, while those under white-box attack are around 50%. This large gap mainly comes from the limited adversarial transferability among normalizations. Motivated by this observation, we first explore the relationship between adversarial transferability and normalizations, and show that gradient similarity and loss smoothness are the key factors behind the discrepancy in transferability among different normalizations. Based on this theoretical evidence, we propose to aggregate different types of normalizations to form a random space in the inference phase in which the adversarial transferability is significantly reduced.
With the designed random space, the inference phase naturally forms a black-box setting even for attackers who have access to the model parameters, owing to the colossal size of the random space. Together with the proposed black-box adversarial training framework, adversarial robustness is substantially improved with a smaller reduction in natural accuracy. For example, under the same adversarial training setting, the proposed algorithm improves natural accuracy by 2.45% and adversarial accuracy by 8.53% with ResNet-18 on CIFAR-10 under the PGD20 attack. Our contributions can be summarized as follows: 1) We provide both empirical and theoretical evidence that the upper bound of adversarial transferability is influenced by the types and parameters of the normalization layers. 2) We propose a novel Random Normalization Aggregation (RNA) module which replaces the normalization layers to create a huge random space with weak adversarial transferability; together with a naturally black-box adversarial training scheme, RNA boosts the defense ability. 3) We conduct extensive experiments to demonstrate the superiority of RNA on different benchmark datasets and networks, and study its different variants and components.

2 Related Work DNNs are vulnerable to adversarial examples, which has aroused great research interest in attack and defense techniques [4, 5]. Expectation Over Transformation (EOT) was introduced to generate adversarial examples by computing the gradient over the expected transformation of the input [20]. Rice et al. [21] explore the overfitting issue in adversarial training and propose to improve robustness via early stopping. Recently, the influence of basic DNN components on adversarial robustness has received more attention, including activation functions [22], operations [23], and neural architectures [11, 24, 25]. In terms of BN, Xie et al. [26] explore robustness at different network scales and introduce a mixture of two BN layers which handle clean and adversarial examples separately to improve the trade-off between clean and adversarial accuracy. A mixture of BNs can also improve the generalization of networks under adversarial training [27]. Benz et al. [13] provide empirical evidence that BN increases adversarial vulnerability. In this paper, we focus on the normalization layers and explore the connections between adversarial robustness and the aggregation of normalization layers to improve defense performance.

3 Adversarial Transferability with Different Normalizations In this section, we reveal the connections between adversarial transferability and normalization layers. We first consider a network identified with a hypothesis h from a space $\mathcal{H}$. The network h is optimized with the loss function $\mathcal{L}$ on inputs $\mathcal{X}$ and labels $\mathcal{Y}$. The objective is formulated as

$$h^* = \operatorname*{argmin}_{h \in \mathcal{H}} \; \mathbb{E}_{x, y \sim \mathcal{X}, \mathcal{Y}} \left[ \mathcal{L}(h(x), y) \right]. \tag{1}$$

Given a target network h and an input pair {x, y}, an adversarial example is defined as the perturbed input $\tilde{x} = x + \delta$ that makes the network h misclassify by maximizing the classification loss,

$$\tilde{x} = \operatorname*{argmax}_{\tilde{x} : \|\tilde{x} - x\|_p \leq \epsilon} \mathcal{L}(h(\tilde{x}), y), \tag{2}$$

where the perturbation $\delta$ is constrained in its $l_p$-norm. Adversarial transferability denotes the inherent property of $\tilde{\mathcal{X}}$ that these adversarial examples also increase the classification loss $\mathcal{L}(h'(\tilde{x}), y)$ of other networks $h' \in \mathcal{H}$. We empirically demonstrate that transferability is influenced by the normalization layers in the network h, as shown in Figure 1 (d). For example, taking BN and IN as source models to generate adversarial examples, the adversarial accuracies of LN are 71.29% and 74.34%, respectively. We now provide a more theoretical analysis of this relationship.
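To make Eq. 2 concrete, the following is a minimal PyTorch sketch of an $l_\infty$-bounded PGD attack that approximately solves the inner maximization; the function name and hyperparameter defaults (chosen to match the PGD-20 CIFAR setting used later) are ours, not part of the released code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(h, x, y, eps=8/255, step_size=2/255, steps=20):
    """Approximate the inner maximization of Eq. 2 under an l_inf constraint."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(h(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend the loss, then project back onto the eps-ball and the valid pixel range.
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)
    return (x + delta).detach()
```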
3.1 Definition of Normalization Layers Batch Normalization is an important basic module in DNNs that improves network performance, and a wide variety of variants have been introduced in which the activations are normalized along different dimensions and with different group sizes. To cover most types of normalization, we divide them into two categories; an illustration is shown in Figure 1 (a) and (b). LN, GN, and IN compute the mean and variance for each example, with different group sizes, during inference, whereas BN and BGN adopt pre-calculated mini-batch statistics computed by moving averages during the training phase. Note that LN and IN are special cases of GN that take the minimum and maximum group number, respectively; likewise, BN is a special case of BGN. For simplicity, we use GN and BGN to cover all these normalizations. Considering activations $y \in \mathbb{R}^{d \times N}$, where N denotes the batch size and d the number of features, the normalized outputs of BGN with group number $s_{\mathrm{BGN}}$ and of GN with group number $s_{\mathrm{GN}}$ during the inference stage are formulated as

$$(\hat{y}^{(k)}_{\mathrm{BGN}})_i = \frac{(y^{(k)})_i - (\mu_{\mathrm{BGN}})_i}{(\sigma_{\mathrm{BGN}})_i}, \qquad (z^{(k)}_{\mathrm{BGN}})_i = \gamma_{\mathrm{BGN}} \cdot (\hat{y}^{(k)}_{\mathrm{BGN}})_i + \beta_{\mathrm{BGN}}, \qquad \text{for } 1 \leq k \leq N,$$
$$(\hat{y}^{(k)}_{\mathrm{GN}})_i = \frac{(y^{(k)})_i - (\mu^{(k)}_{\mathrm{GN}})_i}{(\sigma^{(k)}_{\mathrm{GN}})_i}, \qquad (z^{(k)}_{\mathrm{GN}})_i = \gamma_{\mathrm{GN}} \cdot (\hat{y}^{(k)}_{\mathrm{GN}})_i + \beta_{\mathrm{GN}}, \qquad \text{for } 1 \leq k \leq N,$$
$$\text{where } (\mu^{(k)}_{\mathrm{GN}})_i = \frac{1}{G} \sum_{j=1}^{G} (y^{(k)})_{G \lfloor i/G \rfloor + j}, \qquad (\sigma^{(k)}_{\mathrm{GN}})_i = \sqrt{\frac{1}{G} \sum_{j=1}^{G} \left( (y^{(k)})_{G \lfloor i/G \rfloor + j} - (\mu^{(k)}_{\mathrm{GN}})_i \right)^2}, \tag{3}$$

where $G = \lfloor d / s_{\mathrm{GN}} \rfloor$ denotes the group size of GN, and $(\mu_{\mathrm{BGN}})_i$ and $(\sigma_{\mathrm{BGN}})_i$ denote the tracked mean and standard deviation of group $\lfloor i \cdot s_{\mathrm{BGN}} / d \rfloor$.
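As a minimal sketch of the two categories in Eq. 3, the following PyTorch functions contrast GN-style inference (per-example statistics) with BGN-style inference (tracked statistics). The 1-D (N, d) layout, the function names, and the small eps added for numerical stability are our illustrative choices.

```python
import torch

def gn_forward(y, num_groups, gamma, beta, eps=1e-5):
    """GN-style inference (Eq. 3): statistics computed per example, per group.
    y: (N, d) activations with d divisible by num_groups; gamma, beta: (d,)."""
    N, d = y.shape
    g = y.view(N, num_groups, d // num_groups)
    mu = g.mean(dim=2, keepdim=True)
    sigma = g.var(dim=2, unbiased=False, keepdim=True).add(eps).sqrt()
    y_hat = ((g - mu) / sigma).view(N, d)
    return gamma * y_hat + beta

def bgn_forward(y, num_groups, gamma, beta, mu_run, var_run, eps=1e-5):
    """BGN-style inference: tracked (moving-average) statistics per group.
    mu_run, var_run: (num_groups,) statistics accumulated during training."""
    N, d = y.shape
    g = y.view(N, num_groups, d // num_groups)
    y_hat = (g - mu_run.view(1, -1, 1)) / (var_run.view(1, -1, 1) + eps).sqrt()
    return gamma * y_hat.view(N, d) + beta
```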
3.2 Variation of Loss Function Smoothness Existing work on adversarial transferability reveals that transferability is mainly influenced by the dimensionality of the space of adversarial examples, since the adversarial subspaces of two networks are more likely to intersect as this dimensionality grows [5, 28]. The size of the space of adversarial examples can be estimated by the maximum number of orthogonal vectors $r_i$ aligned with the gradient $g = \nabla_{\mathcal{X}} \mathcal{L}(h(\mathcal{X}), \mathcal{Y})$. In [28], a tight bound is derived as $g^\top r_i \geq \epsilon \|g\|_2 / \sqrt{k}$, where k denotes the maximum number of such vectors, which implies that the smoothness of the loss function is inversely related to adversarial transferability. Thus, we analyze the influence of the different normalization layers, namely GN and BGN, on the smoothness of the loss function. For simplicity, we omit k in the following equations, since the computation of both GN and BGN during inference is independent of the batch size. We denote the loss with GN as $\hat{\mathcal{L}}_{gn}$ and the loss with BGN as $\hat{\mathcal{L}}_{bgn}$. Since the mean and variance are computed over the current group for both GN and BGN, we take the partial derivative of the loss w.r.t. a group $Y_j = y_{[G \lfloor i/G \rfloor : G \lfloor i/G \rfloor + G]}$ rather than a single $y_i$; similarly, $Z_j$ denotes the activations of a group after the normalization layer. Based on Eq. 3, the partial derivatives of $\hat{\mathcal{L}}_{gn}$ and $\hat{\mathcal{L}}_{bgn}$ w.r.t. $Y_j$ are

$$\frac{\partial \hat{\mathcal{L}}_{gn}}{\partial Y_j} = \frac{\gamma_{gn}}{G \cdot \sigma^{gn}_j} \left( G \, \frac{\partial \hat{\mathcal{L}}_{gn}}{\partial Z_j} - \mathbf{1} \left\langle \mathbf{1}, \frac{\partial \hat{\mathcal{L}}_{gn}}{\partial Z_j} \right\rangle - \hat{Y}_j \left\langle \frac{\partial \hat{\mathcal{L}}_{gn}}{\partial Z_j}, \hat{Y}_j \right\rangle \right), \qquad \frac{\partial \hat{\mathcal{L}}_{bgn}}{\partial Y_j} = \frac{\gamma_{bgn}}{\sigma^{bgn}_j} \frac{\partial \hat{\mathcal{L}}_{bgn}}{\partial Z_j}, \tag{4}$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product, $\sigma^{gn}_j$ denotes the standard deviation of $Y_j$, and $\sigma^{bgn}_j$ denotes its tracked standard deviation. For brevity, we write $\hat{g} = \partial \hat{\mathcal{L}} / \partial Y_j$ and $g = \partial \hat{\mathcal{L}} / \partial Z_j$. Taking advantage of the fact that the mean of $\hat{Y}_j$ is zero and its norm is $\sqrt{G}$, the squared norms of the partial derivatives for GN and BGN can be derived as

$$\|\hat{g}_{gn}\|^2 = \frac{\gamma^2_{gn}}{(\sigma^{gn}_j)^2} \left( \|g_{gn}\|^2 - \frac{1}{G} \langle \mathbf{1}, g_{gn} \rangle^2 - \frac{1}{G} \langle g_{gn}, \hat{Y}_j \rangle^2 \right), \qquad \|\hat{g}_{bgn}\|^2 = \frac{\gamma^2_{bgn}}{(\sigma^{bgn}_j)^2} \|g_{bgn}\|^2. \tag{5}$$

Besides the smoothness of the loss itself, we further consider the smoothness of its gradients for GN and BGN. Following [29], we compute the "effective" $\beta$-smoothness through the quadratic form of the Hessian of the loss w.r.t. the group activations, taken in the normalized gradient direction, which measures how much the gradient changes under perturbations along the gradient direction. We denote the Hessian w.r.t. the layer input as $\hat{H} = \partial^2 \hat{\mathcal{L}} / \partial Y_j \partial Y_j$, the Hessian w.r.t. the normalization output as $H = \partial^2 \hat{\mathcal{L}} / \partial Z_j \partial Z_j$, and the normalized gradients as $\hat{g}' = \hat{g} / \|\hat{g}\|$ and $g' = g / \|g\|$. For GN and BGN, we have

$$\hat{g}'^{\top}_{gn} \hat{H}_{gn} \hat{g}'_{gn} \leq \frac{\gamma^2_{gn}}{(\sigma^{gn}_j)^2} \left[ g'^{\top}_{gn} H_{gn} g'_{gn} - \frac{1}{G \cdot \gamma_{gn}} \langle g_{gn}, \hat{Y}_j \rangle \right], \qquad \hat{g}'^{\top}_{bgn} \hat{H}_{bgn} \hat{g}'_{bgn} \leq \frac{\gamma^2_{bgn}}{(\sigma^{bgn}_j)^2} \left[ g'^{\top}_{bgn} H_{bgn} g'_{bgn} \right]. \tag{6}$$

3.3 Normalization Layers and Adversarial Transferability Sufficient conditions for, and bounds on, adversarial transferability between two networks have been established in [30]; we extend this result to networks with different normalization layers. Since we focus on the influence of the normalization layers, we assume that the networks share the same loss function and weight parameters W, which makes $\partial \hat{\mathcal{L}}_{gn} / \partial Z_j = \partial \hat{\mathcal{L}}_{bgn} / \partial Z_j$ and $\partial^2 \hat{\mathcal{L}}_{gn} / \partial Z_j \partial Z_j = \partial^2 \hat{\mathcal{L}}_{bgn} / \partial Z_j \partial Z_j$. Meanwhile, Eq. 5 and Eq. 6 generalize directly to the input x, since $\partial \hat{\mathcal{L}} / \partial x = (\partial \hat{\mathcal{L}} / \partial Y) W$. Under this assumption, the connection between normalization layers and adversarial transferability can be established via the bounded gradient norm and $\beta$-smoothness of Eq. 5 and Eq. 6:

Theorem 3.1. Given two networks $h_a$ and $h_b$ with different normalization layers, let $\delta$ be the adversarial perturbation of x under a white-box attack with target label $y_A$ and true label $y_T$. Assume $h_a$ and $h_b$ are "effectively" $\beta_a$- and $\beta_b$-smooth, respectively. Then the level of adversarial transferability T between $h_a$ and $h_b$ within the perturbation ball $\|\delta\|_2 \leq \epsilon$ is upper bounded by

$$T \leq \frac{\mathcal{R}_a + \mathcal{R}_b}{\min(\mathcal{L}(x, y_A)) - \max(\|\nabla_x \mathcal{L}\|) \, \epsilon \left( \sqrt{\tfrac{1 + \bar{S}}{2}} + 1 \right) - \max(\beta_a, \beta_b) \, \epsilon^2}, \tag{7}$$

where T denotes the attack success rate, $\mathcal{R}_a$ and $\mathcal{R}_b$ denote the empirical risks of $h_a$ and $h_b$, $\bar{S}$ denotes the upper loss gradient similarity, $\min(\mathcal{L}(x, y_A)) = \min_{x \sim \mathcal{X}} (\mathcal{L}_a(x, y_A), \mathcal{L}_b(x, y_A))$, and $\max(\|\nabla_x \mathcal{L}\|) = \max_{x \sim \mathcal{X}, \, y \sim \{y_T, y_A\}} (\|\nabla_x \mathcal{L}_a(x, y)\|, \|\nabla_x \mathcal{L}_b(x, y)\|)$.

Since the networks share the same loss function and weight parameters, we denote the influence of the weight parameters by a constant $C_g$ on the gradient norm and $C_H$ on the gradient smoothness. The partial derivative and Hessian of the loss w.r.t. the normalization output are the same for the different normalizations and are denoted g and H, respectively. The gradient norm and the smoothness constants $\beta_a$, $\beta_b$ in Eq. 7 can then be bounded as

$$\|\nabla_x \mathcal{L}\| \leq C_g \cdot \max \left( \frac{|\gamma_{gn}|}{\sigma^{gn}_j} \sqrt{\|g\|^2 - \frac{1}{G} \langle \mathbf{1}, g \rangle^2 - \frac{1}{G} \langle g, \hat{Y}_j \rangle^2}, \; \frac{|\gamma_{bgn}|}{\sigma^{bgn}_j} \|g\| \right),$$
$$\beta_{a,b} \leq C_H \cdot \max \left( \frac{\gamma^2_{gn}}{(\sigma^{gn}_j)^2} \left[ g'^{\top} H g' - \frac{1}{G \cdot \gamma_{gn}} \langle g, \hat{Y}_j \rangle \right], \; \frac{\gamma^2_{bgn}}{(\sigma^{bgn}_j)^2} \left[ g'^{\top} H g' \right] \right). \tag{8}$$

Combining Eq. 7 and Eq. 8, we observe that the upper bound of adversarial transferability is controlled by the gradient magnitude and gradient smoothness, which are in turn bounded according to the type and parameters of the normalization layers. Specifically, given the same $\gamma$ and $\sigma$ for GN and BGN, GN achieves a smaller gradient norm and better gradient smoothness than BGN, which decreases the upper bound of adversarial transferability. Furthermore, the group size G of GN plays an important role in smoothness: with smaller G, the smoothness of GN improves, and thus the upper bound of adversarial transferability decreases.
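The "effective" smoothness bounded in Eq. 6 can also be probed numerically. Below is a finite-difference sketch that measures how much the input gradient changes after a small step along the normalized gradient direction; the helper name and the step size are our assumptions, not the paper's measurement code.

```python
import torch
import torch.nn.functional as F

def effective_smoothness(model, x, y, step=1e-2):
    """Per-example finite-difference estimate of 'effective' beta-smoothness
    w.r.t. the input: gradient change along the normalized gradient direction."""
    x1 = x.clone().detach().requires_grad_(True)
    g1, = torch.autograd.grad(F.cross_entropy(model(x1), y), x1)
    direction = g1 / g1.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
    x2 = (x1 + step * direction).detach().requires_grad_(True)
    g2, = torch.autograd.grad(F.cross_entropy(model(x2), y), x2)
    return (g2 - g1).flatten(1).norm(dim=1) / step
```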
Similar observations can be found empirically. As shown in Figure 2, the loss landscapes w.r.t. the input are visualized for different normalization layers, demonstrating that different normalization layers exhibit different degrees of smoothness. Furthermore, IN achieves the best smoothness, which corresponds to the observation in Eq. 8, since IN has the minimum group size. Accordingly, the attack success rate is relatively low when the source model uses IN, as shown in Figure 1 (c) and (d), which matches the implication of Theorem 3.1 that adversarial transferability decreases when the network is smoother.

4 Random Normalization Aggregation Since adversarial transferability is strongly correlated with the type of normalization layer, we ask a simple question: can we utilize the bounded adversarial transferability among normalization layers to defend against white-box attacks? In this work, we propose a Random Normalization Aggregation (RNA) module, which replaces the BN layers in the network. As shown in Figure 4 (a), each normalization layer becomes a collection of different normalizations sampled from the GN and BGN families, where the underline denotes the group number. Specifically, the network maintains several normalization layers, but only one is randomly selected per layer during each forward pass, as shown in Figure 4 (b). By incorporating randomization into the normalization layers, a network with RNA modules can be treated as a "supernet" with multiple paths. Returning to the white-box defense setting, we assume that attackers have access to the network parameters. Adversarial examples are generated by backpropagating through one randomly sampled path and are then fed to another randomly sampled path at inference, so the RNA module effectively turns the white-box attack into a "black-box" one, as illustrated in Figure 4 (c). Together with the adversarial transferability study in Section 3, it is therefore natural to create a network with a random space over normalization layers in which adversarial transferability is significantly constrained. To achieve a strong defense against adversarial attacks, three concerns remain: (1) the number of paths must be extremely large, to reduce the probability of sampling the same path twice under the random sampling strategy; (2) RNA must work together with traditional adversarial training; (3) the normalization types must be carefully selected to enforce low adversarial transferability. We discuss these concerns below.

Path Increment in Random Space The adversarial transferability among different normalizations has been discussed in Figure 1 (c) and (d). However, the size of the random space also matters for an effective defense: if attackers can sample the same path during the attack and inference phases with high probability, the adversarial accuracy decreases tremendously. To tackle this issue, we introduce layer-wise randomization in the RNA module, which randomly samples the normalization for each layer in the network. As shown in Figure 4 (b), different normalization types are sampled for different layers, which exponentially increases the number of paths. Given n normalization types in RNA and L layers in the network, the size of the random space becomes $n^L$, which reduces the probability of sampling the same path during the attack and inference phases to $\frac{1}{n^L} \ll \frac{1}{n}$.
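A minimal PyTorch sketch of an RNA layer as described above; the class name and the BN/IN candidate pair (matching the CIFAR configuration reported later) are our illustrative choices.

```python
import random
import torch.nn as nn

class RNA2d(nn.Module):
    """Sketch of an RNA layer: keep n candidate normalizations and pick one at
    random on every forward pass. Setting `active` pins the layer to a fixed
    candidate, which is how a whole path can be held fixed during an attack."""
    def __init__(self, channels):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.BatchNorm2d(channels),                  # BGN family: tracked batch statistics
            nn.InstanceNorm2d(channels, affine=True),  # GN family: per-example statistics
        ])
        self.active = None  # None -> resample a candidate on every forward pass

    def forward(self, x):
        idx = self.active if self.active is not None else random.randrange(len(self.candidates))
        return self.candidates[idx](x)
```

With n = 2 candidates and L = 20 such layers, the random space already contains 2^20 (about a million) paths, so two independent samples rarely coincide.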
Black-box Adversarial Training It is natural to incorporate the RNA module into adversarial training. Consistent with the inference phase, we randomly sample a path $p_a$ and conduct a white-box attack on it to generate adversarial examples $\tilde{X}_{p_a}$. Unlike traditional adversarial training, which would optimize $p_a$ by feeding it $\tilde{X}_{p_a}$, we feed $\tilde{X}_{p_a}$ to another randomly sampled path p, which forms a "black-box" adversarial training, as illustrated in Figure 4 (c). Eq. 1 and Eq. 2 can be reformulated as

$$h^* = \operatorname*{argmin}_{h \in \mathcal{H}} \; \mathbb{E}_{x, y \sim \mathcal{X}, \mathcal{Y}; \, p \sim \mathcal{P}} \left[ \mathcal{L}(h(\tilde{x}_{p_a}; p), y) \right], \quad \text{where } \tilde{x}_{p_a} = \operatorname*{argmax}_{\tilde{x}_{p_a} : \|\tilde{x}_{p_a} - x\|_p \leq \epsilon} \mathcal{L}(h(\tilde{x}_{p_a}; p_a), y), \tag{9}$$

where $\mathcal{P}$ denotes the space of paths. The training procedure is shown in Algorithm 1.

Algorithm 1 Random Normalization Aggregation with Black-box Adversarial Training
Input: training tuple {X, Y}; path set P; attack step size η; attack iterations t; perturbation size ε; network h with parameters W.
Replace the BN layers with RNA modules and initialize the network.
while not converged do
    Sample a batch of data {x, y} from {X, Y};
    Randomly sample a path p_a from P;
    Initialize the adversarial perturbation δ;
    for i ← 1 to t do
        δ = clip_ε[δ + η · sign(∇_x L(h(x + δ; p_a), y))];
    end for
    Randomly sample a path p from P;
    W = W − ∇_W L(h(x + δ; p), y);
end while
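Continuing the RNA2d sketch above, one training step of Algorithm 1 might look as follows. The helper names are ours; we fix the attack path p_a simply by pinning each layer's `active` index, and for brevity we omit the projection of x + δ back onto the valid pixel range.

```python
import random
import torch
import torch.nn.functional as F

def sample_path(model):
    """One candidate index per RNA2d layer (the RNA2d sketch above)."""
    return [random.randrange(len(m.candidates))
            for m in model.modules() if isinstance(m, RNA2d)]

def set_path(model, path):
    rna_layers = [m for m in model.modules() if isinstance(m, RNA2d)]
    for layer, idx in zip(rna_layers, path):
        layer.active = idx

def black_box_adv_train_step(model, opt, x, y, eps=8/255, eta=2/255, t=10):
    """One optimization step of Algorithm 1 (sketch)."""
    set_path(model, sample_path(model))      # p_a: held fixed while crafting delta
    delta = torch.zeros_like(x).requires_grad_(True)
    for _ in range(t):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + eta * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    set_path(model, sample_path(model))      # p: a fresh path for the weight update
    opt.zero_grad()
    F.cross_entropy(model(x + delta.detach()), y).backward()
    opt.step()
```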
Normalization Type Selection The RNA module maintains multiple normalization types to form the random space. According to Theorem 3.1, adversarial transferability is bounded by several components: gradient similarity, empirical risks, gradient magnitude, and gradient smoothness. We first provide empirical evidence that normalization layers from the same category defined in Section 3.1 tend to have higher gradient similarity. As shown in Figure 3, we visualize histograms of the cosine similarity between the gradients of two networks with different normalization layers. For example, BN and BGN belong to the same category, and their gradient similarity is much higher than that between BN and IN (compare Figure 3 (a) and (b)). Since the gradient similarity is proportional to the upper bound of adversarial transferability in Eq. 7, we propose to select normalizations from different categories. Thus, in our experiments the RNA module samples normalization types from both the GN and BGN families with small group sizes, while evaluations of other combinations are also included.

5 Experiments In this section, we provide a thorough evaluation of the RNA module on various models and datasets.

5.1 Evaluation Setup CIFAR-10/100 We first conduct experiments on the CIFAR-10/100 [31] datasets, which contain 50K training images and 10K test images of size 32×32 from 10/100 categories. The networks we use are ResNet-18 [31] and WideResNet-32 (WRN) [32]. The SGD optimizer with a momentum of 0.9 is used, and the weight decay is set to $5 \times 10^{-4}$. The initial learning rate is set to 0.1 with a piecewise decay learning rate scheduler. All baselines are trained for 200 epochs with a batch size of 128. PGD-10 with $\epsilon = 8/255$ and a step size of 2/255 is adopted in the adversarial training setting. For the RNA module, we utilize BN and IN to form the random space in the normalization layers. The experiments are performed on one V100 GPU using PyTorch [33] and MindSpore [34].

ImageNet The effectiveness of the proposed RNA is also evaluated on ImageNet [35], which contains 1.2M training images and 50K test images of size 224×224 from 1000 categories. The network we use is ResNet-50 [31]. The SGD optimizer with a momentum of 0.9 is used, and the weight decay is set to $1 \times 10^{-4}$. The initial learning rate is set to 0.02 with a cosine learning rate scheduler. We load a pretrained ResNet-50 and then adversarially train the network for 60 epochs with a batch size of 512. PGD-2 with $\epsilon = 4/255$ is adopted in the adversarial training setting. For the RNA module, we utilize BGNs and GNs with group sizes of 1 and 2 to form the random space. The experiments are performed on eight V100 GPUs.

Baselines and Attacks Our proposed RNA modules replace the normalization layers in the network, so various normalization layers are included for comparison: BN, IN, LN, GN, and BGN [15, 16, 17, 18, 19]. On CIFAR-10/100, we evaluate the robustness of all baselines under several strong attacks from TorchAttacks [36]. For the Fast Gradient Sign Method (FGSM) [4], the perturbation size $\epsilon$ is set to 8/255. For Projected Gradient Descent (PGD) [6], $\epsilon$ is set to 8/255 with a step size of 2/255 and 20 steps. For the CW attack [37], the number of steps is set to 1000 with a learning rate of 0.01. For the momentum variant of the iterative FGSM (MIFGSM) [38], $\epsilon$ is set to 8/255 with a step size of 2/255, 5 steps, and a decay of 1.0. For DeepFool [39], the number of steps is set to 50 with an overshoot of 0.02. For AutoAttack [40], $\epsilon$ is set to 8/255. On ImageNet, we evaluate robustness under PGD attacks with $\epsilon$ of 4/255 and 50 steps.
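For concreteness, the CIFAR attack suite described above could be assembled with TorchAttacks roughly as follows. The constructor arguments mirror the settings we list, but the exact API is an assumption based on our reading of the TorchAttacks documentation.

```python
import torchattacks

def build_attack_suite(model):
    """Sketch of the CIFAR evaluation suite (constructor arguments assumed)."""
    return {
        "FGSM":     torchattacks.FGSM(model, eps=8/255),
        "PGD20":    torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=20),
        "MIFGSM":   torchattacks.MIFGSM(model, eps=8/255, alpha=2/255, steps=5, decay=1.0),
        "DeepFool": torchattacks.DeepFool(model, steps=50, overshoot=0.02),
        "AA":       torchattacks.AutoAttack(model, eps=8/255),
    }

# Usage: x_adv = build_attack_suite(model)["PGD20"](x, y)
```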
5.2 Results for Robustness

Main Results We first evaluate the performance of RNA on CIFAR-10 and CIFAR-100 under different types of attacks; the detailed results are shown in Tables 1 and 2. The popular normalization layers show similar robustness to one another. However, with a random space over different normalization layers, robustness is significantly improved: RNA consistently achieves the best performance under all attacks. For example, RNA with ResNet-18 achieves 65.61% under AutoAttack on CIFAR-10, which is 17.92% higher than BN, and RNA with WRN achieves 55.16% under DeepFool on CIFAR-100, which is 53.33% higher than IN. The boosted adversarial accuracy provides strong empirical evidence that the constrained adversarial transferability in the random space yields satisfactory defense capability. Furthermore, with the proposed black-box adversarial training, RNA also achieves better natural accuracy; for example, RNA with WRN achieves 86.46% on CIFAR-10, improving over BN by 1.19%. We mainly attribute this improvement to the fact that the generated adversarial examples act as "weaker" attacks on the other paths during optimization, which naturally yields a better trade-off between natural and adversarial accuracy.

Stronger PGD Attacks We further evaluate the defense capability of RNA under stronger PGD attacks with more iterations and larger perturbation sizes. The comparison with the other baselines is shown in Figure 5 (a) and (b). RNA achieves stable robustness across different attack iterations, such as 60.70% under PGD10 and 59.94% under PGD100, and its PGD accuracy is much higher than that of all other baselines; for example, RNA improves over the BN baseline by a margin of 7.93%. For larger perturbation sizes, RNA achieves the best robustness in all scenarios and the smallest degradation among all methods: RNA achieves 80.06% with $\epsilon$ of 2/255 and 24.90% with $\epsilon$ of 20/255, a gap of 55.16%, whereas the corresponding gap is 64.68% for BN and 58.40% for GN.

Table 3: Comparison with defense algorithms (adversarial accuracy, %).

Method           | CIFAR-10 PGD20 | CIFAR-10 AA | ImageNet PGD50
RobustWRN [41]   | 59.13          | 52.48       | 31.14
AWP [42]         | 58.14          | 54.04       | -
RobNet [11]      | 52.74          | -           | 37.15
RPI+RPT [43]     | 53.96          | 53.30       | 42.72
SAT [22]         | 56.01          | 51.83       | 42.30
RNA (Ours)       | 63.34          | 67.88       | 54.61

Table 4: Robustness of random spaces built from different normalization combinations under different attacks (adversarial accuracy, %).

Normalization | PGD20 | DeepFool | AutoAttack
BN            | 52.16 | 0.35     | 47.69
GN+BGN        | 55.40 | 70.26    | 58.90
GN+LN         | 46.67 | 62.85    | 47.96
LN+BN         | 55.67 | 68.14    | 58.73
IN+BN         | 60.69 | 76.73    | 65.61

Comparison with SOTA Defense Methods To demonstrate the superiority of RNA, we include several state-of-the-art defense algorithms for comparison. RobustWRN [41] explores the importance of network width and depth for robustness. AWP [42] proposes to regularize the flatness of the weight loss landscape to achieve robustness. RobNet [11] introduces a NAS framework for robustness. RPI+RPT [43] utilizes randomized precision for adversarial defense. SAT [44] proposes to replace ReLU with smooth approximations, which improves robustness. The results are shown in Table 3. We use WRN on CIFAR-10 and ResNet-50 on ImageNet. All baselines are evaluated under PGD20 and AutoAttack (AA) on CIFAR-10 and under PGD50 on ImageNet. Our proposed RNA module achieves the best performance in all scenarios. On CIFAR-10, RNA achieves 67.88% under AutoAttack, a 13.84% improvement over AWP. On ImageNet, RNA achieves 54.61% under PGD50, a 12.31% improvement over SAT. Note that RNA replaces the normalization layer and is therefore orthogonal to other defense techniques; similarly, the adversarial training in our setting can be replaced by other advanced training strategies for potentially better performance.

Adversarial Transferability in Random Space To better illustrate the adversarial transferability in the random space built by RNA, we conduct a transferability study of ResNet-18 with RNA on CIFAR-10. We first define the path difference as the number of layers in which the attack and inference paths select different normalizations; for example, a path difference of 7 means that the two paths select different normalization layers in 7 layers during forwarding. For each path difference, we randomly sample 10 path pairs for the transferability evaluation. We also include different normalization combinations in RNA for comparison. The results are shown in Figure 5 (c), which plots the PGD accuracy under transferred attacks between path pairs as the path difference increases; the filled areas denote the maximum and minimum PGD accuracy. The combination of BN and IN clearly achieves the best performance, which again corresponds to the analysis in Theorem 3.1. Under the random sampling strategy, the path difference concentrates around 10, since this network contains 20 normalization layers. For example, the combination of BN and IN achieves an average PGD accuracy of 61.09% when the path difference is 10. Compared with the BN baseline, which achieves 52.16% PGD accuracy, the lower adversarial transferability in our random space brings a strong defense capability against adversarial attacks.
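The path-difference protocol above can be sketched in a few lines; the helper name and defaults are ours.

```python
import random

def sample_path_pair(num_layers=20, n_types=2, diff=7):
    """Sample an (attack, inference) path pair that differs in exactly `diff`
    of the num_layers normalization layers, as in the study above."""
    attack_path = [random.randrange(n_types) for _ in range(num_layers)]
    infer_path = list(attack_path)
    for i in random.sample(range(num_layers), diff):
        infer_path[i] = random.choice([c for c in range(n_types) if c != attack_path[i]])
    return attack_path, infer_path
```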
5.3 Ablation Study

Different Normalization Combinations We first provide more quantitative results for different combinations of normalization layers in the RNA module, using ResNet-18 on CIFAR-10. As shown in Table 4, we evaluate the performance under PGD20, DeepFool, and AutoAttack; the combination of IN and BN achieves the best robustness in all scenarios. Consistent with the observation in Theorem 3.1, the combination of LN and BN performs slightly worse than that of IN and BN, since IN is a smoother normalization than LN. Similarly, the combination of GN and LN achieves the worst performance: LN is a special case of GN, so the two have high gradient similarity in our empirical observations, as discussed in Figure 3. Thus, by utilizing normalization layers with small group sizes from both the GN and BGN families, the RNA module can form a random space with low adversarial transferability and thereby better defense ability.

Effectiveness of Different Components We next demonstrate the effectiveness of each component introduced with the RNA module. Detailed results are shown in Table 5, where [.] denotes the range of group sizes. Beyond the normalization combinations, we include a broader discussion of the random space design. To form a random space in the normalization layers, it is natural to consider a combination of BN, IN, GN, and LN; however, this combination is difficult to optimize, as shown in the first row of Table 5. With normalization layers selected only from GNs, the optimization becomes stable, but the robustness is not competitive, as shown in the second row. Introducing black-box adversarial training significantly improves the robustness, as shown in the third row. Expanding the random space with BGNs slightly improves the defense capability, owing to the doubled number of paths; however, the size of the random space is still limited. After removing the layer-based constraint, the number of paths increases exponentially, and the size of the per-layer random space can be reduced for better trade-offs, as shown in the last four rows. Comparing BGN+GN[1-1] with GN[1-64], a better random space design combined with an appropriate adversarial training strategy achieves an improvement of 15.04% under AutoAttack, which demonstrates the necessity of these components.

6 Conclusions In this paper, we explore the importance of normalization layers for adversarial robustness, showing that the limited transferability among different normalization layers can be utilized to boost defense capability. A Random Normalization Aggregation (RNA) module is proposed to form a random space with low adversarial transferability for defending against adversarial attacks. We provide extensive empirical evidence and theoretical analysis revealing the connections between adversarial transferability and normalization types, which guides the design of the random space. With the black-box adversarial training strategy and the relaxation of the layer-based constraint, the robustness provided by the RNA module is significantly strengthened. We demonstrate the superiority of the RNA module via comprehensive experiments across different network architectures, attack settings, and benchmark datasets. Our work provides valuable insights into network module design for robustness.

Acknowledgments This work was supported in part by the Australian Research Council under Project DP210101859 and the University of Sydney Research Accelerator (SOAR) Prize. The authors acknowledge the use of the National Computational Infrastructure (NCI), which is supported by the Australian Government and accessed through the NCI Adapter Scheme and the Sydney Informatics Hub HPC Allocation Scheme.
We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks), and the Ascend AI processor used for this research.
NIPS
Title Random Normalization Aggregation for Adversarial Defense Abstract The vulnerability of deep neural networks has been widely found in various models as well as tasks where slight perturbations on the inputs could lead to incorrect predictions. These perturbed inputs are known as adversarial examples and one of the intriguing properties of them is Adversarial Transfersability, i.e. the capability of adversarial examples to fool other models. Traditionally, this transferability is always regarded as a critical threat to the defense against adversarial attacks, however, we argue that the network robustness can be significantly boosted by utilizing adversarial transferability from a new perspective. In this work, we first discuss the influence of different popular normalization layers on the adversarial transferability, and then provide both empirical evidence and theoretical analysis to shed light on the relationship between normalization types and transferability. Based on our theoretical analysis, we propose a simple yet effective module named Random Normalization Aggregation (RNA) which replaces the batch normalization layers in the networks and aggregates different selected normalization types to form a huge random space. Specifically, a random path is sampled during each inference procedure so that the network itself can be treated as an ensemble of a wide range of different models. Since the entire random space is designed with low adversarial transferability, it is difficult to perform effective attacks even when the network parameters are accessible. We conduct extensive experiments on various models and datasets, and demonstrate the strong superiority of proposed algorithm. The PyTorch code is available at https://github.com/UniSerj/ Random-Norm-Aggregation and the MindSpore code is available at https: //gitee.com/mindspore/models/tree/master/research/cv/RNA. N/A The vulnerability of deep neural networks has been widely found in various models as well as tasks where slight perturbations on the inputs could lead to incorrect predictions. These perturbed inputs are known as adversarial examples and one of the intriguing properties of them is Adversarial Transfersability, i.e. the capability of adversarial examples to fool other models. Traditionally, this transferability is always regarded as a critical threat to the defense against adversarial attacks, however, we argue that the network robustness can be significantly boosted by utilizing adversarial transferability from a new perspective. In this work, we first discuss the influence of different popular normalization layers on the adversarial transferability, and then provide both empirical evidence and theoretical analysis to shed light on the relationship between normalization types and transferability. Based on our theoretical analysis, we propose a simple yet effective module named Random Normalization Aggregation (RNA) which replaces the batch normalization layers in the networks and aggregates different selected normalization types to form a huge random space. Specifically, a random path is sampled during each inference procedure so that the network itself can be treated as an ensemble of a wide range of different models. Since the entire random space is designed with low adversarial transferability, it is difficult to perform effective attacks even when the network parameters are accessible. We conduct extensive experiments on various models and datasets, and demonstrate the strong superiority of proposed algorithm. 
The PyTorch code is available at https://github.com/UniSerj/ Random-Norm-Aggregation and the MindSpore code is available at https: //gitee.com/mindspore/models/tree/master/research/cv/RNA. 1 Introduction Deep Neural Networks (DNNs) have achieved impressive performance in various tasks [1, 2, 3]. However, it is well known that DNNs are susceptible to maliciously generated adversarial examples [4, 5]. Through imperceptible perturbations on the model inputs during inference stage, the model is misled to wrong predictions at a high rate. Since then, a wide range of attack techniques have been proposed under different settings and show strong attack capability. For example, attackers have full access to the model architecture and parameters, which forms white-box attacks [5, 6], and attackers have limited query access to the model, which forms black-box attacks [7, 8]. Since the high attack success rates of these techniques reveal the high risk of DNNs, defenses against adversarial examples have received increasing attention and adversarial robustness becomes one of the key criteria. ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). To mitigate this risk, adversarial training is proposed to yield robust models through training on generated adversarial examples [5, 6]. Besides training procedure, some regularization and image preprocessing techniques are introduced to improve adversarial robustness [9, 10]. Recent work note that the architecture and module designs could play important roles in the robustness [11, 12]. Hence, we pay more attention to the basic modules in the network which are seldom considered for improving robustness, such as normalization layers. Existing works have discussed the influence of Batch Normalization (BN) and empirically shown that BN increases adversarial vulnerability and decreases adversarial transferability [13, 14]. However, the theoretical analysis of this observation is insufficient and how to tackle this pitfall or even utilize this property to defend attacks is unexplored. In this work, we take numerous normalizations into consideration, including Layer Normalization (LN), Group Normalization (GN), Instance Normalization (IN), Batch Normalization (BN), and Batch Group Normalization (BGN) [15, 16, 17, 18, 19], as shown in Figure 1 (a) and (b). To evaluate the influence of different normalizations on the robustness, we first conduct PGD-7 attack [6] to both natural and adversarial trained networks with different normalizations on CIFAR-10, as shown in the diagonals of Figure 1 (c) and (d). Not surprisingly, BN obtains the best robustness compared to other variants. However, we have an intriguing observation after the transferability evaluations among different normalizations. As illustrated in the heatmaps, the adversarial accuracies in most scenarios are around 70% when fed with transferred adversarial examples, while those under whitebox attack are around 50%. This huge gap mainly comes from the adversarial transferability among normalizations. Motivated by this observation, we first explore the relationship between adversarial transferability and normalizations, and show that the gradient similarity and loss smoothness are the key factors of the discrepancy in transferability among different normalizations. Based on the theoretical evidence, we propose to aggregate different types of normalizations to form random space in the inference phase where the adversarial transferability can be significantly reduced. 
With designed random space, the inference phase naturally forms the black-box setting for those attackers who have access to model parameters due to the colossal random space. Together with the proposed black-box adversarial training framework, the adversarial robustness is substantially improved with less reduction of natural accuracy. For example, with the same adversarial training setting, the proposed algorithm improves the natural accuracy by 2.45% and the adversarial accuracy by 8.53% with ResNet-18 on CIFAR-10 under PGD20 attack. Our contributions can be summarized as: 1) We provide both empirical and theoretical evidence that the upper bound of adversarial transferability is influenced by the types and parameters of normalization layers. 2) We propose a novel Random Normalization Aggregation (RNA) module which replaces the normalization layers to create huge random space with weak adversarial transferability. Together with a natural black-box adversarial training, RNA boosts the defense ability. 3) We conduct extensive experiments to demonstrate the superiority of RNA on different benchmark datasets and networks. Different variants and components are also studied. 2 Related Work DNNs are vulnerable to adversarial examples and arouse lots of research interests in the attack and defense techniques [4, 5]. Expectation Over Transformation (EOT) is introduced to generating adversarial examples by computing the gradient over the expected transformation to the input [20]. Rice et al. [21] explore the overfitting issue in adversarial training and propose to improve the robustness via early-stopping. Recently, the influence of DNN basic component on adversarial robustness has been paid more attention, such as activation function [22], operation [23], and neural architecture [11, 24, 25]. In terms of BN, Xie et al. [26] explore the robustness at different network scales and introduce a mixture of two BN layers which take care of clean and adversarial examples separately to improve the trade-offs between clean and adversarial accuracy. The mixture of BN can also improve the generalization of network with adversarial training [27]. Benz et al. [13] provide empirical evidence that BN increase the adversarial vulnerability. In this paper, we lay emphasis on the normalization layers and explore the connections between adversarial robustness and the aggregation of normalization layers to improve defense performance. 3 Adversarial Transferability with Different Normalization In this section, we reveal the connections between adversarial transferability and normalization layers. We first consider a network which is identified with an hypothesis h from a spaceH. The network h is optimized with the loss function L on input X and labels Y . The objective is formulated as h∗ = argmin h∈H E x,y∼X ,Y [L(h(x), y)]. (1) Given a target network h and inputs {x, y}, the adversarial examples are defined as perturbed input x̃ = x+ δ, which makes the network h misclassify through maximizing the classification loss as x̃ = argmax x̃:∥x̃−x∥p⩽ϵ L(h(x̃), y), (2) where the perturbation δ is constrained by its lp-norm. Adversarial transferability denotes an inherent property of X̃ that these adversarial examples can also boost the classification loss L(h′(x̃), y) of other networks as well, where h′ ∈ H. We empirically demonstrate that transferability is influenced by the normalization layers in the network h, as shown in Figure 1 (d). 
For example, taking BN and IN as source models to generate adversarial examples, the adversarial accuracies of LN are 71.29% and 74.34% respectively. We further provide more theoretical analysis of their relationships. 3.1 Definition of Normalization Layers Batch Normalization is known as important basic module in DNNs, which improves the network performance, and a wide variety of variants are introduced where the activations are normalized with different dimensions as well as sizes. To cover most types of normalization, we divide them into two categories. An illustration is shown in Figure 1 (a) and (b), where LN, GN, and IN compute the mean and variance for each example with different group sizes during inference, while BN and BGN adopt the pre-calculated mini batch statistics which are computed by moving average in the training phase. Note that LN and IN are special cases of GN, which takes the minimum or maximum group number. Likewise, BN is a special case of BGN. For simplicity, we use GN and BGN to cover all these normalizations. Considering the activations y ∈ Rd×N where N denotes the batch size and d denotes the number of features, the normalized outputs after BGN with group number of s(BGN) and those of GN with group number of s(GN) during inference stage are formulated as (ŷ (k) (BGN))i = (y(k))i − (µ(BGN))i (σ(BGN))i , (z (k) (BGN))i = γ(BGN) ∗ (ŷ (k) (BGN))i + β(BGN), for 1 ≤ k ≤ N, (ŷ (k) (GN))i = γ(GN) (y(k))i − (µ(k)(GN))i (σ (k) (GN))i + β(GN), (z (k) (GN))i = γ(GN) ∗ (ŷ (k) (GN))i + β(GN), for 1 ≤ k ≤ N, where (µ(k)(GN))i = 1 G G∑ j=1 (y(k))G⌊ iG ⌋+j , (σ (k) (GN))i = √√√√ 1 G G∑ j=1 ((y(k))G⌊ iG ⌋+j − (µ (k) (GN))i) 2, (3) where G = ⌊ ds(GN) ⌋ denotes the group size of GN, (µ(BGN))i and (σ(BGN))i denote the tracked mean and standard deviation of group ⌊ i·s(BGN)d ⌋. 3.2 Variation of Loss Function Smoothness Existing work on adversarial transferability reveals that the adversarial transferability is mainly influenced by the dimensionality of the space of adversarial examples, since the adversarial subspaces of two networks are more likely to intersect with the growth of this dimensionality. [5, 28]. The size of space of adversarial examples can be estimated by the maximum number of orthogonal vectors ri which are aligned with the gradient g = ∇XL(h(X ),Y). In [28], a tight bound is derived as g⊤ri ≥ ϵ∥g∥2√k where k denotes the maximum number of ri, which implies that the smoothness of loss function is inversely proportional to the adversarial transferability. Thus, we analyze the influence of different normalization layers on the smoothness of loss function, including GN and BGN. For simplicity, we dismiss the usage of k in the following equations since the computation of both GN and BGN during inference is independent of batch size. We denote the loss with GN as L̂gn and the loss with BGN as L̂bgn. Since the mean and variance are computed based on current group for both GN and BGN, we compute the partial derivative of loss w.r.t. a group Yj instead of yi where Yj = y[G⌊ iG ⌋:G⌊ i G ⌋+G] . Similarly, Zj denotes the activations of a group after normalization layers. Based on Eq. 3, the partial derivative of L̂gn and L̂bgn w.r.t. Yj can be given as ∂L̂gn ∂Yj = γgn G · σgnj (G · ∂L̂gn ∂Zj − 1⟨1, ∂L̂gn ∂Zj ⟩ − Ŷj⟨ ∂L̂gn ∂Zj , Ŷj⟩), ∂L̂bgn ∂Yj = γbgn σbgnj ∂L̂bgn ∂Zj , (4) where ⟨, ⟩ denotes the inner product, σgnj denotes the standard deviation of Yj , and σ bgn j denotes the tracked standard deviation of Yj . For simplicity, we denote ĝ = ∂L̂∂Yj and g = ∂L̂ ∂Zj . 
Taking the advantage of the fact that the mean of Yj is zero and its norm is √ G, the squared norm of the partial derivative of GN and BGN can be derived as ∥ĝgn∥2 = γ2gn (σgnj ) 2 (∥ggn∥2 − 1 G ⟨1, ggn⟩2 − 1 G ⟨ggn, Ŷj⟩2), ∥ĝbgn∥2 = γ2bgn (σbgnj ) 2 ∥gbgn∥2. (5) Besides the smoothness of the loss, we further consider the smoothness of the gradients of the loss for GN and BGN. Following [29], we compute the “effective” β-smoothness through the quadratic form of Hessian of the loss w.r.t. the group activations in the normalized gradient direction, which measures the change of gradients with perturbations in the gradient direction. For simplicity, we denote the hessian w.r.t. the layer output as Ĥ = ∂L̂∂Yj∂Yj , the hessian w.r.t. the normalization output as H = ∂L̂∂Zj∂Zj , the normalized gradient as ĝ ′ = ĝ∥ĝ∥ and g ′ = g∥g∥ . For GN and BGN, we have ĝ′⊤gnĤgnĝ ′ gn ≤ γ2gn (σgnj ) 2 [ g′⊤gnHgng ′ gn − 1 G · γgn ⟨ggn, Ŷj⟩ ] , ĝ′⊤bgnĤbgnĝ ′ bgn ≤ γ2bgn (σbgnj ) 2 [ g′⊤bgnHbgng ′ bgn ] . (6) 3.3 Normalization Layers and Adversarial Transferability The sufficient conditions and the bounds of adversarial transferability between two networks have been discussed in [30]. We extend this result to the networks with different normalization layers. Since we focus on the influence of different normalization layers, we assume that these networks share the same loss function and weight parameters W , which makes ∂L̂gn∂Zj = ∂L̂bgn ∂Zj and ∂L̂gn∂Zj∂Zj = ∂L̂bgn ∂Zj∂Zj . Meanwhile, Eq. 5 and Eq. 6 can be easily generalized to the input x since ∂L̂∂x = ∂L̂ ∂Y W . With this assumption, the connections between normalization layers and adversarial transferability can be established via bounded gradient norm and β-smoothness in Eq. 5 and Eq. 6 as Theorem 3.1. Given two networks ha and hb with different normalization layers, the adversarial perturbation under white-box attack is δ on x with attack target label yA and true label yT . Assume ha and hb are “effective” βa and βb-smooth respectively, the level of adversarial transferability T between networks ha and hb within the perturbation ball ∥δ∥2 ≤ ϵ can be upper bounded by T ≤ Ra +Rb min(L(x, yA))−max(∥∇xL∥)ϵ( √ 1+S̄ 2 + 1)−max(βa, βb)ϵ2 , (7) where T denotes the attack successful rate,Ra andRb denotes the empirical risks of network ha and hb, S̄ denotes the upper loss gradient similarity, min(L(x, yA)) = minx∼X (La(x, yA),Lb(x, y′)), and max(∥∇xL∥) = maxx∼X ,y∼{yT ,yA}(∥∇xLa(x, y)∥, ∥∇xLb(x, y)∥). Since the networks share the same loss function and weight parameters, we denote the influence of weight parameters as some constant Cg on gradient norm and CH on gradient smoothness. The partial derivative and Hessian of loss w.r.t. the normalization output are the same for different normalization, denoted as g and H respectively. The gradient norm, βa, and βb in Eq. 7 can be bounded as ∥∇xL∥ ≤ Cg ·max ( |γgn| σgnj √ ∥g∥2 − 1 G ⟨1, g⟩2 − 1 G ⟨g, Ŷj⟩2, |γbgn| σbgnj ∥g∥ ) , βa,b ≤ CH ·max ( γ2gn (σgnj ) 2 [ g′⊤Hg′ − 1 G · γgn ⟨g, Ŷj⟩ ] , γ2bgn (σbgnj ) 2 [ g′⊤Hg′ ]) . (8) Combining Eq. 7 and 8, we observe that the upper bound of adversarial transferability is controlled by the gradient magnitude and gradient smoothness, which is further bounded according to the type and parameters of normalization layers. Specifically, given the same γ and σ for GN and BGN, GN achieves a smaller gradient norm and better gradient smoothness than BGN, which decreases the upper bound of adversarial transferability. Furthermore, the group size G in GN plays an important role in smoothness. 
With smaller G, the smoothness of GN increases, and thus the upper bound of adversarial transferability decreases. Similar observations can be found in empirical evidence. As shown in Figure 2, the loss landscapes of different normalization layers w.r.t. input are visualized, which demonstrates that different normalization layers have different smoothness. Furthermore, IN achieves the best performance in smoothness, which corresponds to the observation in Eq. 8, since IN has the minimum group size. The attack success rate is relatively low when the source model is IN, as shown in Figure 1 (c) and (d), which corresponds to the observation in Theorem 3.1 that the adversarial transferability decreases when the network is smoother. 4 Random Normalization Aggregation Since the adversarial transferability is strongly correlated with the type of normalization layers, we ask a simple question: Can we utilize the bounded adversarial transferability among normalization layers to defense against white-box attacks? In this work, we propose a Random Normalization Aggregation (RNA) module, which replaces the BN layer in the network. As shown in Figure 4 (a), the normalization layers becomes a combination of different normalization sampled from GNs and BGNs, where the underline denotes the group number. Specifically, the network maintains different normalization layers while only one normalization is randomly selected for each layer during forwarding, as shown in Figure 4 (b). Through incorporating randomization in normalization layers, the network with RNA module can be treated as a “supernet” with multiple paths. Back to white-box defense setting, we assume that the attackers have access to the network parameters. The adversarial examples are generated through backward on a randomly sampled path, and then fed to another randomly sampled path due to RNA module, which makes the entire white-box attack become a “black-box” attack, as illustrated in Figure 4 (c). Thus, together with the adversarial transferability study in Section 3, it is natural to create a network with random space in normalization layers where the adversarial transferability is significantly constrained. To achieve a strong defense against adversarial attacks, some concerns still remain: (1). The number of paths are required to be extremely large to reduce the probability of sampling the same path with random sampling strategy; (2). The collaboration with traditional adversarial training; (3). The normalization types need to be carefully selected to enforce low adversarial transferability. We discussion these concerns as follows. Path Increment in Random Space The adversarial transferability among different normalization has been discussed in Figure 1 (c) and (d). However, the size of random space also matters for effective defense against attacks. If the attackers can sample the same path during attack and inference phases with a high probability, the adversarial accuracy will decrease tremendously. To tackle this issue, we introduce layer-wise randomization of RNA module, which randomly samples the normalization for each layer in the network. As shown in Figure 4 (b), different normalization types are sampled for different layers, which exponentially increases the number of paths. Given the n normalization types in RNA and L layers in the network, the size of random space becomes nL, which reduces the probability of sampling the same path during attack and inference phase to 1 nL << 1n . 
Black-box Adversarial Training It is natural to incorporate RNA module into adversarial training. Consistent with inference phase, we randomly sample a path pa and conduct white-box attack to generate adversarial examples X̃pa . Different from traditional adversarial training which optimizes pa through feeding X̃pa , we feed X̃pa to another randomly sampled path p, which forms a “black-box” adversarial training, as illustrated in Figure 4 (c). Eq. 1 and Eq. 2 can be reformulated as h∗ = argmin h∈H E x,y∼X ,Y;p∼P [L(h(x̃pa ; p), y)], where x̃pa = argmax x̃pa :∥x̃pa−x∥p⩽ϵ L(h(x̃pa ; pa), y), (9) where P denotes the space of paths. The training procedure is shown in Algorithm 1. Normalization Types Selection In RNA module, multiple normalization types are maintained to form random space. According to Theorem 3.1, the adversarial transferability is bounded by different components, including gradient similarity, empirical risks, gradient magnitude, and gradient smoothness. We first provide empirical evidence that normalization layers from the same category defined in Section 3.1 tend to have higher gradient similarity. As shown in Figure 3, we visualize Algorithm 1 Random Normalization Aggregation with Black-box Adversarial Training Input: The training tuple {X , Y}; Path set P; Attack step size η; Attack iterations t; Perturbation size ϵ; Network h with parameters W ; Replace BN layers with RNA modules, and initialize the network. while not converge do Sample a batch of data {x, y} from {X , Y}; Randomly sample a path pa from P; Initialize adversarial perturbation δ; for i← 1 to t do δ = clipϵ[δ + η · sign(∇xL(h(x; pa), y)]; end for Randomly sample a path p from P; W = W −∇WL(h(x+ δ; p), y); end while the histograms over the cosine similarity of two networks with different normalization layers. For example, BN and BGN belong to the same category, and their gradient similarity is much higher than that between BN and IN, comparing Figure 3 (a) and (b). Since the gradient similarity is proportional to the upper bounds of adversarial transferability in Eq. 7, we propose to select normalization from different categories. Thus, RNA module samples the normalization types from both GN and BGN with small group sizes in our experiments, while the evaluation of other combinations is also included. 5 Experiments In this section, we provide sufficient evaluation of RNA module on various models and datasets. 5.1 Evaluation Setup CIFAR-10/100 We first conduct experiments on CIFAR-10/100 [31] datasets, which contain 50K training images and 10K testing images with size of 32×32 from 10/100 categories. The networks we use are ResNet-18 [31] and WideResNet-32 (WRN) [32]. The SGD optimizer with a momentum of 0.9 is used. The weight decay is set to 5 × 10−4. The initial learning rate is set to 0.1 with a piecewise decay learning rate scheduler. All the baselines are trained with 200 epochs with a batch size of 128. The PGD-10 with ϵ = 8/255 and step size of 2/255 is adopted in the adversarial training setting. For the RNA module, we utilize BN and IN to form the random space in normalization layers. The experiments are performed on one V100 GPU using Pytorch [33] and Mindspore [34]. ImageNet The effectiveness of proposed RNA is also evaluated on ImageNet [35], which contains 1.2M training images and 50K testing images with size of 224 × 224 from 1000 categories. The networks we use are ResNet-50 [31]. The SGD optimizer with a momentum of 0.9 is used. The weight decay is set to 1 × 10−4. 
5 Experiments

In this section, we provide an extensive evaluation of the RNA module on various models and datasets.

5.1 Evaluation Setup

CIFAR-10/100  We first conduct experiments on the CIFAR-10/100 [31] datasets, which contain 50K training images and 10K testing images of size 32×32 from 10/100 categories. The networks we use are ResNet-18 [31] and WideResNet-32 (WRN) [32]. The SGD optimizer with a momentum of 0.9 is used. The weight decay is set to 5 × 10−4. The initial learning rate is set to 0.1 with a piecewise-decay learning rate scheduler. All baselines are trained for 200 epochs with a batch size of 128. PGD-10 with ϵ = 8/255 and a step size of 2/255 is adopted in the adversarial training setting. For the RNA module, we utilize BN and IN to form the random space in the normalization layers. The experiments are performed on one V100 GPU using PyTorch [33] and MindSpore [34].

ImageNet  The effectiveness of the proposed RNA is also evaluated on ImageNet [35], which contains 1.2M training images and 50K testing images of size 224 × 224 from 1000 categories. The network we use is ResNet-50 [31]. The SGD optimizer with a momentum of 0.9 is used. The weight decay is set to 1 × 10−4. The initial learning rate is set to 0.02 with a cosine learning rate scheduler. We load a pretrained ResNet-50 and then adversarially train the network for 60 epochs with a batch size of 512. PGD-2 with ϵ = 4/255 is adopted in the adversarial training setting. For the RNA module, we utilize BGNs and GNs with group sizes of 1 and 2 to form the random space. The experiments are performed on eight V100 GPUs.

Baselines and Attacks  Our proposed RNA module replaces the normalization layers in the network. Thus, various normalization layers are involved for comparison, including BN, IN, LN, GN, and BGN [15, 16, 17, 18, 19]. On CIFAR-10/100, we evaluate the robustness of all baselines under different strong attacks from TorchAttacks [36]. For the Fast Gradient Sign Method (FGSM) [4], the perturbation size ϵ is set to 8/255. For Projected Gradient Descent (PGD) [6], ϵ is set to 8/255 with a step size of 2/255, and the number of steps is set to 20. For the CW attack [37], the number of steps is set to 1000 with a learning rate of 0.01. For the Momentum variant of the Iterative Fast Gradient Sign Method (MIFGSM) [38], ϵ is set to 8/255 with a step size of 2/255, and the number of steps is set to 5 with a decay of 1.0. For DeepFool [39], the number of steps is set to 50 with an overshoot of 0.02. For AutoAttack [40], ϵ is set to 8/255. On ImageNet, we evaluate the robustness under PGD attacks with ϵ of 4/255 and 50 steps.

5.2 Results for Robustness

Main Results  We first evaluate the performance of RNA on CIFAR-10 and CIFAR-100 under different types of attacks. The detailed results are shown in Tables 1 and 2. Popular normalization layers show similar robustness. However, with a random space of different normalization layers, the robustness is significantly improved. Compared with the other baselines, RNA consistently achieves the best performance under all attacks and shows strong superiority. For example, RNA with ResNet-18 achieves 65.61% under AutoAttack on CIFAR-10, which is 17.92% higher than BN. Similarly, RNA with WRN achieves 55.16% under DeepFool attacks on CIFAR-100, which is 53.33% higher than IN. The boosted adversarial accuracy provides strong empirical evidence that the constrained adversarial transferability in the random space yields satisfactory defense capability. Furthermore, with the proposed black-box adversarial training, RNA achieves better natural accuracy. For example, RNA with WRN achieves 86.46% on CIFAR-10, which improves over BN by 1.19%. We mainly attribute this improvement to the fact that the generated adversarial examples act as "weaker" attack examples for other paths during optimization, which naturally achieves a better trade-off between natural and adversarial accuracy.

Stronger PGD Attacks  We further evaluate the defense capability of RNA under stronger PGD attacks, which increase the number of iterations and enlarge the perturbation size. The comparison with other baselines is shown in Figure 5 (a) and (b). RNA achieves stable robustness under different numbers of attack iterations, such as 60.70% on PGD10 and 59.94% on PGD100. Meanwhile, the PGD accuracy of RNA is much higher than that of all other baselines. For example, RNA improves over the BN baseline by a margin of 7.93%. For larger perturbation sizes, RNA achieves the best robustness in all scenarios and the smallest decrement among all methods. Specifically, RNA achieves 80.06% with ϵ of 2/255 and 24.90% with ϵ of 20/255, a gap of 55.16%. For comparison, the gap of BN is 64.68% and that of GN is 58.40%.
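For reference, robustness numbers of this kind can be reproduced with a short TorchAttacks loop. The sketch below assumes a trained RNA model and a CIFAR-10 test_loader are already defined, and uses the PGD-20 setting from above; the torchattacks.PGD constructor shown here follows the library's common interface, but versions may differ.

import torchattacks

model.eval()
attack = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=20)

correct, total = 0, 0
for x, y in test_loader:
    adv = attack(x, y)               # gradients flow through a randomly sampled path
    pred = model(adv).argmax(dim=1)  # inference re-samples a (likely different) path
    correct += (pred == y).sum().item()
    total += y.numel()
print(f"PGD-20 adversarial accuracy: {100 * correct / total:.2f}%")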
Table 3: Comparison with defense algorithms.

                    CIFAR-10            ImageNet
Method            PGD20     AA          PGD50
RobustWRN [41]    59.13     52.48       31.14
AWP [42]          58.14     54.04       -
RobNet [11]       52.74     -           37.15
RPI+RPT [43]      53.96     53.30       42.72
SAT [22]          56.01     51.83       42.30
RNA (Ours)        63.34     67.88       54.61

Table 4: Robustness evaluation of random spaces built from different normalization combinations under different attacks.

Normalization    PGD20    DeepFool    AutoAttack
BN               52.16    0.35        47.69
GN+BGN           55.40    70.26       58.90
GN+LN            46.67    62.85       47.96
LN+BN            55.67    68.14       58.73
IN+BN            60.69    76.73       65.61

Comparison with SOTA Defense Methods  To demonstrate the superiority of RNA, we include several state-of-the-art defense algorithms for comparison. RobustWRN [41] explores the importance of network width and depth for robustness. AWP [42] proposes to regularize the flatness of the weight loss landscape to achieve robustness. RobNet [11] introduces a NAS framework for robustness. RPI+RPT [43] utilizes randomized precision for adversarial defense. SAT [44] proposes to replace ReLU with smooth approximations, which exhibits robustness. The results are shown in Table 3. We use WRN on CIFAR-10 and ResNet-50 on ImageNet. All baselines are evaluated under PGD20 and AutoAttack (AA) on CIFAR-10 and under PGD50 on ImageNet. Our proposed RNA module achieves the best performance in all scenarios. On CIFAR-10, RNA achieves 67.88% under AutoAttack, a 13.84% improvement over AWP. On ImageNet, RNA achieves 54.61% under PGD50, a 12.31% improvement over SAT. Note that RNA replaces the normalization layer, which is orthogonal to other defense techniques. Similarly, the adversarial training in our setting can be replaced by other advanced training strategies to achieve potentially better performance.

Adversarial Transferability in Random Space  For a better illustration of the adversarial transferability in the random space built by RNA, we conduct an adversarial transferability study of ResNet-18 with RNA on CIFAR-10. We first define the path difference as the number of layers in which the attack and inference paths select different normalizations. For example, a path difference of 7 means that the two paths select different normalization layers in 7 layers during forwarding. For each path difference, we randomly sample 10 path pairs for the transferability evaluation. We also include different normalization combinations in RNA for comparison. The results are shown in Figure 5 (c), which illustrates the PGD accuracy under transferred attacks between path pairs as the path difference increases. The filled areas denote the maximum and minimum PGD accuracy. The combination of BN and IN clearly achieves the best performance, which also corresponds to the analysis in Theorem 3.1. With the random sampling strategy, the path difference is almost always around 10, since there are 20 normalization layers in this network. For example, the combination of BN and IN achieves an average PGD accuracy of 61.09% when the path difference reaches 10. Compared with the BN baseline, which achieves 52.16% PGD accuracy, the lower adversarial transferability in our random space brings a strong defense capability against adversarial attacks.
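The path-difference experiment above can be set up with a small utility that draws an attack/inference path pair at a prescribed difference. This helper is hypothetical and assumes 20 RNA layers with two normalization types, as in the ResNet-18 setting.

import random

def path_pair(num_layers=20, n_types=2, diff=10):
    # Sample two paths that disagree in exactly `diff` layers (their Hamming distance).
    attack_path = [random.randrange(n_types) for _ in range(num_layers)]
    infer_path = list(attack_path)
    for i in random.sample(range(num_layers), diff):
        # re-pick a strictly different normalization index at this layer
        infer_path[i] = (attack_path[i] + 1 + random.randrange(n_types - 1)) % n_types
    return attack_path, infer_path

Assigning the two paths to the RNA layers (e.g., via the `active` attribute from the earlier sketch), attacking on the first and evaluating on the second recovers curves like those in Figure 5 (c).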
5.3 Ablation Study

Different Normalization Combinations  We first provide more quantitative results for different combinations of normalization layers in the RNA module, using ResNet-18 on CIFAR-10. As shown in Table 4, we evaluate the performance under PGD20, DeepFool, and AutoAttack, and the combination of IN and BN achieves the best robustness in all scenarios. Consistent with the observation in Theorem 3.1, the combination of LN and BN performs slightly worse than that of IN and BN, since IN is a smoother normalization than LN. Similarly, the combination of GN and LN achieves the worst performance: LN is a special case of GN, so the two have high gradient similarity in our empirical observation, as discussed in Figure 3. Thus, by utilizing normalization layers with small group sizes from both GN and BGN, the RNA module can form a random space with low adversarial transferability and thereby better improve the defense ability.

Effectiveness of Different Components  We next demonstrate the effectiveness of each component introduced in the RNA module. The detailed results are shown in Table 5, where [·] denotes the range of group sizes. Besides the normalization combinations, we include more discussion of the random space design. To form a random space in the normalization layers, it is natural to consider a combination of BN, IN, GN, and LN; however, this is difficult to optimize, as shown in the first row of Table 5. With normalization layers selected from GNs, the optimization becomes stable, but the robustness is not competitive, as shown in the second row. Introducing black-box adversarial training significantly improves the robustness, as shown in the third row. Expanding the random space with BGNs slightly improves the defense capability due to the doubled number of paths; however, the size of the random space is still limited. After removing the layer-based constraint, the number of paths increases exponentially, and the size of the random space for each layer can be reduced for better trade-offs, as shown in the last four rows. Comparing BGN+GN[1-1] with GN[1-64], a better random space design with an appropriate adversarial training strategy achieves an improvement of 15.04% under AutoAttack, which demonstrates the necessity of these components.

6 Conclusions

In this paper, we explore the importance of normalization layers for adversarial robustness, where the limited transferability among different normalization layers can be utilized to boost the defense capability. A Random Normalization Aggregation (RNA) module is proposed to form a random space with low adversarial transferability for defense against adversarial attacks. We provide extensive empirical evidence and theoretical analysis to reveal the connections between adversarial transferability and normalization types, which guides the random space design. With the black-box adversarial training strategy and the relaxation of the layer-based constraint, the robustness provided by the RNA module is significantly strengthened. We demonstrate the superiority of the RNA module via comprehensive experiments across different network architectures, attack settings, and benchmark datasets. Our work can provide valuable insights into network module design for robustness.

Acknowledgments
This work was supported in part by the Australian Research Council under Project DP210101859 and the University of Sydney Research Accelerator (SOAR) Prize. The authors acknowledge the use of the National Computational Infrastructure (NCI), which is supported by the Australian Government, and accessed through the NCI Adapter Scheme and Sydney Informatics Hub HPC Allocation Scheme.
We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research.
1. What is the focus and contribution of the paper regarding normalization layers in DNNs?
2. What are the strengths and weaknesses of the proposed method for improving adversarial robustness?
3. Do you have any questions or suggestions regarding the notation and presentation of the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential negative societal impacts that the author(s) should address?
Summary Of The Paper
The paper discusses the relation between normalization layers used in DNNs and adversarial transferability. The author(s) show(s) that the choice of normalization layer highly influences the success rate of transferred adversarial examples. In detail, it is shown that adversarial examples transfer worse (i.e., have a lower success rate of fooling the network) if a different normalization layer is used in the attacked network. This fact is used to motivate a novel technique to robustify existing neural network architectures: the key idea is to randomly select the used normalization layer during inference. Experiments on CIFAR-10, CIFAR-100 and ImageNet show that this approach is effective for the commonly used ResNet-18 and WideResNet32.

Strengths And Weaknesses
STRENGTHS:
- The paper proposes an effective, clear and easy-to-implement method that is well-motivated by experimental and theoretical results.
- The extensive experimental evaluation on ResNet-18 and WideResNet32 architectures using a plethora of attacks (FGSM, PGD-20, CW, MIFGSM, DeepFool, AutoAttack) is convincing.
- An ablation study is conducted to show the effectiveness of the different components and the importance of different combinations of normalization layers.
- The paper presentation is well done and contains only a few errors (see minor remarks).
- Used computational resources are mentioned.

WEAKNESSES:
The used notation can be improved in some places:
- Equation (1): x ∼ X, y ∼ Y suggests that x and y are drawn independently from the dataset. Here, x, y ∼ X, Y would be correct. Also, in Algorithm 1, {X, Y} has to be a tuple instead of a set.
- In Equation (3): the notation ŷ^(k)_(bgn),i suggests that ŷ is a function in terms of (bgn), which is not the case. I would suggest using (ŷ^(k)_(BGN))_i instead.
- Different symbols are used to denote multiplication (Equation (3) vs. Equation (8)).

The stated Theorem 2.1 is hard to understand. Although I see some value in the theorem, I think the author(s) should spend some time reformulating it:
- It states that "the gradient norm and β in Equation (7) can be bounded as [...]"; however, there is no β in Equation (7) (only β_a and β_b).
- The variable T is introduced multiple times in a single sentence: "the level of adversarial transferability" and "attack success rate".

MINOR REMARKS:
- Typo "Adversarial Transfersability" in the Abstract; "hessian" is sometimes not capitalized.
- The subscripts and superscripts "gn" and "bgn" should not be typeset in math mode.
- Line 96: extra period in front of references [5, 20].

Questions
What is meant by β in Equation (7)?

Limitations
Limitations and potential negative societal impact have not been addressed.
1. What is the focus and contribution of the paper regarding adversarial defense?
2. What are the strengths of the proposed approach, particularly in terms of its originality and theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the random sampling strategy and the lack of discussion on weight parameters and architectures?
4. Do you have any concerns regarding the claim on the smoothness of different normalization layers controlling the adversarial transferability?
5. Are there any potential negative societal impacts of this work?
Summary Of The Paper
Inspired by the limited adversarial transferability across different normalizations, the authors propose to introduce randomness into the types of normalization layers via an RNA module that reduces the adversarial transferability in a pre-defined random space, which improves the defense against adversarial examples. To evaluate the effectiveness of their algorithm, the authors provide experimental results on CIFAR-10/100 and ImageNet.

Strengths And Weaknesses
The authors studied a simple yet effective randomized mechanism with normalization layers. In general, the paper reads well, and the presentation is clear. The paper is original in that it studies the connections between normalization layers and adversarial defense. The theoretical analysis explains the principle of the proposed algorithm well. The paper includes a clear experimental setup and a meticulous comparison with other variant baselines as well as state-of-the-art algorithms on popular benchmarks. The results seem convincing.

Despite its contributions, I have several concerns:
- The authors mention that they adopt a random sampling strategy to utilize the adversarial transferability in the random space. However, there is a chance that similar paths are sampled during both the attack and inference stages, and there is no discussion of this scenario or of its probability.
- Besides normalization layers, many components of a network can involve randomness, such as weight parameters and architectures. Although the transferability evaluation in Fig 1 shows poor transferability of adversarial examples among different types of normalization layers, it seems to me that this could also hold for weight parameters and architectures. There is no discussion of these variant baselines.
- The authors claim that the smoothness of different normalization layers directly controls the adversarial transferability. However, it is hard for me to find the corresponding evidence in the experiment section.

Questions
As mentioned above, several questions are listed below:
- What is the risk of the random sampling strategy? What is the probability of sampling similar paths?
- How about introducing randomness into weight parameters and architectures? What is the advantage of the proposed RNA compared with these baselines?
- Is there empirical evidence that the smoothness of different normalization layers controls the defense performance?

Limitations
The limitations have been addressed and there is no potential negative societal impact of this work.
Title Random Normalization Aggregation for Adversarial Defense Abstract The vulnerability of deep neural networks has been widely found in various models as well as tasks where slight perturbations on the inputs could lead to incorrect predictions. These perturbed inputs are known as adversarial examples and one of the intriguing properties of them is Adversarial Transfersability, i.e. the capability of adversarial examples to fool other models. Traditionally, this transferability is always regarded as a critical threat to the defense against adversarial attacks, however, we argue that the network robustness can be significantly boosted by utilizing adversarial transferability from a new perspective. In this work, we first discuss the influence of different popular normalization layers on the adversarial transferability, and then provide both empirical evidence and theoretical analysis to shed light on the relationship between normalization types and transferability. Based on our theoretical analysis, we propose a simple yet effective module named Random Normalization Aggregation (RNA) which replaces the batch normalization layers in the networks and aggregates different selected normalization types to form a huge random space. Specifically, a random path is sampled during each inference procedure so that the network itself can be treated as an ensemble of a wide range of different models. Since the entire random space is designed with low adversarial transferability, it is difficult to perform effective attacks even when the network parameters are accessible. We conduct extensive experiments on various models and datasets, and demonstrate the strong superiority of proposed algorithm. The PyTorch code is available at https://github.com/UniSerj/ Random-Norm-Aggregation and the MindSpore code is available at https: //gitee.com/mindspore/models/tree/master/research/cv/RNA. N/A The vulnerability of deep neural networks has been widely found in various models as well as tasks where slight perturbations on the inputs could lead to incorrect predictions. These perturbed inputs are known as adversarial examples and one of the intriguing properties of them is Adversarial Transfersability, i.e. the capability of adversarial examples to fool other models. Traditionally, this transferability is always regarded as a critical threat to the defense against adversarial attacks, however, we argue that the network robustness can be significantly boosted by utilizing adversarial transferability from a new perspective. In this work, we first discuss the influence of different popular normalization layers on the adversarial transferability, and then provide both empirical evidence and theoretical analysis to shed light on the relationship between normalization types and transferability. Based on our theoretical analysis, we propose a simple yet effective module named Random Normalization Aggregation (RNA) which replaces the batch normalization layers in the networks and aggregates different selected normalization types to form a huge random space. Specifically, a random path is sampled during each inference procedure so that the network itself can be treated as an ensemble of a wide range of different models. Since the entire random space is designed with low adversarial transferability, it is difficult to perform effective attacks even when the network parameters are accessible. We conduct extensive experiments on various models and datasets, and demonstrate the strong superiority of proposed algorithm. 
The PyTorch code is available at https://github.com/UniSerj/ Random-Norm-Aggregation and the MindSpore code is available at https: //gitee.com/mindspore/models/tree/master/research/cv/RNA. 1 Introduction Deep Neural Networks (DNNs) have achieved impressive performance in various tasks [1, 2, 3]. However, it is well known that DNNs are susceptible to maliciously generated adversarial examples [4, 5]. Through imperceptible perturbations on the model inputs during inference stage, the model is misled to wrong predictions at a high rate. Since then, a wide range of attack techniques have been proposed under different settings and show strong attack capability. For example, attackers have full access to the model architecture and parameters, which forms white-box attacks [5, 6], and attackers have limited query access to the model, which forms black-box attacks [7, 8]. Since the high attack success rates of these techniques reveal the high risk of DNNs, defenses against adversarial examples have received increasing attention and adversarial robustness becomes one of the key criteria. ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). To mitigate this risk, adversarial training is proposed to yield robust models through training on generated adversarial examples [5, 6]. Besides training procedure, some regularization and image preprocessing techniques are introduced to improve adversarial robustness [9, 10]. Recent work note that the architecture and module designs could play important roles in the robustness [11, 12]. Hence, we pay more attention to the basic modules in the network which are seldom considered for improving robustness, such as normalization layers. Existing works have discussed the influence of Batch Normalization (BN) and empirically shown that BN increases adversarial vulnerability and decreases adversarial transferability [13, 14]. However, the theoretical analysis of this observation is insufficient and how to tackle this pitfall or even utilize this property to defend attacks is unexplored. In this work, we take numerous normalizations into consideration, including Layer Normalization (LN), Group Normalization (GN), Instance Normalization (IN), Batch Normalization (BN), and Batch Group Normalization (BGN) [15, 16, 17, 18, 19], as shown in Figure 1 (a) and (b). To evaluate the influence of different normalizations on the robustness, we first conduct PGD-7 attack [6] to both natural and adversarial trained networks with different normalizations on CIFAR-10, as shown in the diagonals of Figure 1 (c) and (d). Not surprisingly, BN obtains the best robustness compared to other variants. However, we have an intriguing observation after the transferability evaluations among different normalizations. As illustrated in the heatmaps, the adversarial accuracies in most scenarios are around 70% when fed with transferred adversarial examples, while those under whitebox attack are around 50%. This huge gap mainly comes from the adversarial transferability among normalizations. Motivated by this observation, we first explore the relationship between adversarial transferability and normalizations, and show that the gradient similarity and loss smoothness are the key factors of the discrepancy in transferability among different normalizations. Based on the theoretical evidence, we propose to aggregate different types of normalizations to form random space in the inference phase where the adversarial transferability can be significantly reduced. 
With the designed random space, inference naturally places even attackers who have full access to the model parameters in a black-box setting, because the space of paths is colossal. Together with the proposed black-box adversarial training framework, adversarial robustness is substantially improved with less reduction in natural accuracy. For example, under the same adversarial training setting, the proposed algorithm improves natural accuracy by 2.45% and adversarial accuracy by 8.53% with ResNet-18 on CIFAR-10 under PGD20 attack. Our contributions can be summarized as follows: 1) We provide both empirical and theoretical evidence that the upper bound of adversarial transferability is influenced by the types and parameters of normalization layers. 2) We propose a novel Random Normalization Aggregation (RNA) module which replaces the normalization layers to create a huge random space with weak adversarial transferability. Together with a naturally black-box adversarial training procedure, RNA boosts the defense ability. 3) We conduct extensive experiments to demonstrate the superiority of RNA on different benchmark datasets and networks. Different variants and components are also studied.

2 Related Work

DNNs are vulnerable to adversarial examples, which has aroused substantial research interest in attack and defense techniques [4, 5]. Expectation Over Transformation (EOT) was introduced to generate adversarial examples by computing the gradient over the expected transformation of the input [20]. Rice et al. [21] explore the overfitting issue in adversarial training and propose to improve robustness via early stopping. Recently, the influence of basic DNN components on adversarial robustness has received more attention, including activation functions [22], operations [23], and neural architectures [11, 24, 25]. In terms of BN, Xie et al. [26] explore robustness at different network scales and introduce a mixture of two BN layers which handle clean and adversarial examples separately to improve the trade-off between clean and adversarial accuracy. A mixture of BNs can also improve the generalization of networks under adversarial training [27]. Benz et al. [13] provide empirical evidence that BN increases adversarial vulnerability. In this paper, we focus on normalization layers and explore the connection between adversarial robustness and the aggregation of normalization layers to improve defense performance.

3 Adversarial Transferability with Different Normalization

In this section, we reveal the connections between adversarial transferability and normalization layers. We first consider a network identified with a hypothesis h from a space \mathcal{H}. The network h is optimized with the loss function \mathcal{L} on inputs \mathcal{X} and labels \mathcal{Y}. The objective is formulated as

h^* = \arg\min_{h \in \mathcal{H}} \mathbb{E}_{x, y \sim \mathcal{X}, \mathcal{Y}} \left[ \mathcal{L}(h(x), y) \right].  (1)

Given a target network h and inputs {x, y}, an adversarial example is a perturbed input x̃ = x + δ which makes the network h misclassify by maximizing the classification loss,

\tilde{x} = \arg\max_{\tilde{x} : \|\tilde{x} - x\|_p \leq \epsilon} \mathcal{L}(h(\tilde{x}), y),  (2)

where the perturbation δ is constrained in its ℓ_p-norm. Adversarial transferability denotes the inherent property of X̃ that these adversarial examples also boost the classification loss \mathcal{L}(h'(\tilde{x}), y) of other networks h' ∈ \mathcal{H}. We empirically demonstrate that transferability is influenced by the normalization layers in the network h, as shown in Figure 1 (d).
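As a concrete reading of Eq. (2), the following PyTorch sketch generates ℓ∞ PGD adversarial examples on a source network and measures their effect on a target network. It is a hedged illustration under our own naming (pgd_attack, transfer_accuracy), not the paper's evaluation code.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=7):
    # Approximate Eq. (2): ascend the loss within the l_inf ball of radius eps.
    delta = torch.zeros_like(x)
    for _ in range(iters):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)

def transfer_accuracy(source, target, x, y):
    # Adversarial accuracy of the target under examples crafted on the source;
    # high values indicate low transferability between the two networks.
    x_adv = pgd_attack(source, x, y)
    return (target(x_adv).argmax(dim=1) == y).float().mean().item()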
For example, taking BN and IN as source models to generate adversarial examples, the adversarial accuracies of LN are 71.29% and 74.34%, respectively. We further provide theoretical analysis of these relationships.

3.1 Definition of Normalization Layers

Batch Normalization is an important basic module in DNNs which improves network performance, and a wide variety of variants have been introduced in which the activations are normalized over different dimensions and with different group sizes. To cover most types of normalization, we divide them into two categories. An illustration is shown in Figure 1 (a) and (b): LN, GN, and IN compute the mean and variance for each example with different group sizes during inference, while BN and BGN adopt pre-calculated mini-batch statistics computed by moving average in the training phase. Note that LN and IN are special cases of GN, taking the minimum or maximum group number, respectively. Likewise, BN is a special case of BGN. For simplicity, we use GN and BGN to cover all these normalizations. Considering activations y ∈ R^{d×N}, where N denotes the batch size and d the number of features, the normalized outputs of BGN with group number s_BGN and of GN with group number s_GN during the inference stage are

(\hat{y}^{(k)}_{BGN})_i = \frac{(y^{(k)})_i - (\mu_{BGN})_i}{(\sigma_{BGN})_i}, \quad (z^{(k)}_{BGN})_i = \gamma_{BGN} \cdot (\hat{y}^{(k)}_{BGN})_i + \beta_{BGN}, \quad \text{for } 1 \leq k \leq N,

(\hat{y}^{(k)}_{GN})_i = \frac{(y^{(k)})_i - (\mu^{(k)}_{GN})_i}{(\sigma^{(k)}_{GN})_i}, \quad (z^{(k)}_{GN})_i = \gamma_{GN} \cdot (\hat{y}^{(k)}_{GN})_i + \beta_{GN}, \quad \text{for } 1 \leq k \leq N,

\text{where } (\mu^{(k)}_{GN})_i = \frac{1}{G} \sum_{j=1}^{G} (y^{(k)})_{G\lfloor i/G \rfloor + j}, \quad (\sigma^{(k)}_{GN})_i = \sqrt{\frac{1}{G} \sum_{j=1}^{G} \left( (y^{(k)})_{G\lfloor i/G \rfloor + j} - (\mu^{(k)}_{GN})_i \right)^2},  (3)

G = ⌊d / s_GN⌋ denotes the group size of GN, and (μ_BGN)_i and (σ_BGN)_i denote the tracked mean and standard deviation of group ⌊i · s_BGN / d⌋.

3.2 Variation of Loss Function Smoothness

Existing work on adversarial transferability reveals that it is mainly influenced by the dimensionality of the space of adversarial examples, since the adversarial subspaces of two networks are more likely to intersect as this dimensionality grows [5, 28]. The size of the space of adversarial examples can be estimated by the maximum number of orthogonal vectors r_i aligned with the gradient g = ∇_X \mathcal{L}(h(\mathcal{X}), \mathcal{Y}). In [28], a tight bound g^\top r_i \geq \epsilon \|g\|_2 / \sqrt{k} is derived, where k denotes the maximum number of such r_i, which implies that the smoothness of the loss function is inversely proportional to adversarial transferability. Thus, we analyze the influence of different normalization layers, namely GN and BGN, on the smoothness of the loss function. For simplicity, we drop the sample index k in the following equations, since the inference-time computation of both GN and BGN is independent of the batch size. We denote the loss with GN as L̂_gn and the loss with BGN as L̂_bgn. Since the mean and variance are computed per group for both GN and BGN, we compute the partial derivative of the loss w.r.t. a group Y_j = y_{[G⌊i/G⌋ : G⌊i/G⌋+G]} instead of a single activation y_i. Similarly, Z_j denotes the activations of a group after the normalization layer. Based on Eq. 3, the partial derivatives of L̂_gn and L̂_bgn w.r.t. Y_j are given as

\frac{\partial \hat{L}_{gn}}{\partial Y_j} = \frac{\gamma_{gn}}{G \cdot \sigma^{gn}_j} \left( G \frac{\partial \hat{L}_{gn}}{\partial Z_j} - \mathbf{1} \left\langle \mathbf{1}, \frac{\partial \hat{L}_{gn}}{\partial Z_j} \right\rangle - \hat{Y}_j \left\langle \frac{\partial \hat{L}_{gn}}{\partial Z_j}, \hat{Y}_j \right\rangle \right), \quad \frac{\partial \hat{L}_{bgn}}{\partial Y_j} = \frac{\gamma_{bgn}}{\sigma^{bgn}_j} \frac{\partial \hat{L}_{bgn}}{\partial Z_j},  (4)

where ⟨·,·⟩ denotes the inner product, σ^gn_j denotes the standard deviation of Y_j, and σ^bgn_j denotes the tracked standard deviation of Y_j. For simplicity, we denote ĝ = ∂L̂/∂Y_j and g = ∂L̂/∂Z_j.
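The two categories in Eq. (3) can be sketched directly over flattened activations of shape (N, d). The running statistics passed to the BGN branch are assumed to have been tracked during training, and both function names are ours, not the paper's implementation.

import torch

def group_norm(y, num_groups, gamma, beta, eps=1e-5):
    # GN branch of Eq. (3): per-example statistics over groups of size G = d // num_groups.
    N, d = y.shape
    g = y.view(N, num_groups, d // num_groups)
    mu = g.mean(dim=2, keepdim=True)
    sigma = g.var(dim=2, unbiased=False, keepdim=True).add(eps).sqrt()
    return gamma * ((g - mu) / sigma).view(N, d) + beta

def batch_group_norm(y, running_mu, running_sigma, num_groups, gamma, beta):
    # BGN branch of Eq. (3): statistics tracked by moving average during training,
    # one (mu, sigma) pair per group, broadcast back to all features at inference.
    N, d = y.shape
    G = d // num_groups
    mu = running_mu.repeat_interleave(G)        # shape (d,)
    sigma = running_sigma.repeat_interleave(G)  # shape (d,)
    return gamma * ((y - mu) / sigma) + beta

In this flattened view, num_groups = 1 in the GN branch recovers LN, while taking one group per feature in the BGN branch recovers BN, matching the special cases noted above.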
Using the fact that Ŷ_j has zero mean and norm √G, the squared norms of the partial derivatives for GN and BGN can be derived as

\|\hat{g}_{gn}\|^2 = \frac{\gamma^2_{gn}}{(\sigma^{gn}_j)^2} \left( \|g_{gn}\|^2 - \frac{1}{G} \langle \mathbf{1}, g_{gn} \rangle^2 - \frac{1}{G} \langle g_{gn}, \hat{Y}_j \rangle^2 \right), \quad \|\hat{g}_{bgn}\|^2 = \frac{\gamma^2_{bgn}}{(\sigma^{bgn}_j)^2} \|g_{bgn}\|^2.  (5)

Besides the smoothness of the loss, we further consider the smoothness of its gradients for GN and BGN. Following [29], we compute the "effective" β-smoothness through the quadratic form of the Hessian of the loss w.r.t. the group activations in the normalized gradient direction, which measures the change of gradients under perturbations in the gradient direction. For simplicity, we denote the Hessian w.r.t. the layer output as Ĥ = ∂²L̂/∂Y_j∂Y_j, the Hessian w.r.t. the normalization output as H = ∂²L̂/∂Z_j∂Z_j, and the normalized gradients as ĝ' = ĝ/‖ĝ‖ and g' = g/‖g‖. For GN and BGN, we have

\hat{g}'^\top_{gn} \hat{H}_{gn} \hat{g}'_{gn} \leq \frac{\gamma^2_{gn}}{(\sigma^{gn}_j)^2} \left[ g'^\top_{gn} H_{gn} g'_{gn} - \frac{1}{G \cdot \gamma_{gn}} \langle g_{gn}, \hat{Y}_j \rangle \right], \quad \hat{g}'^\top_{bgn} \hat{H}_{bgn} \hat{g}'_{bgn} \leq \frac{\gamma^2_{bgn}}{(\sigma^{bgn}_j)^2} \left[ g'^\top_{bgn} H_{bgn} g'_{bgn} \right].  (6)

3.3 Normalization Layers and Adversarial Transferability

The sufficient conditions and bounds on adversarial transferability between two networks have been discussed in [30]. We extend this result to networks with different normalization layers. Since we focus on the influence of the normalization layers, we assume that the networks share the same loss function and weight parameters W, so that ∂L̂_gn/∂Z_j = ∂L̂_bgn/∂Z_j and ∂²L̂_gn/∂Z_j∂Z_j = ∂²L̂_bgn/∂Z_j∂Z_j. Meanwhile, Eq. 5 and Eq. 6 generalize directly to the input x, since ∂L̂/∂x = (∂L̂/∂Y) W. Under this assumption, the connection between normalization layers and adversarial transferability can be established via the bounded gradient norm and β-smoothness of Eq. 5 and Eq. 6 as follows.

Theorem 3.1. Given two networks h_a and h_b with different normalization layers, let δ be the adversarial perturbation on x under white-box attack, with attack target label y_A and true label y_T. Assume h_a and h_b are "effectively" β_a- and β_b-smooth, respectively. The level of adversarial transferability T between h_a and h_b within the perturbation ball ‖δ‖_2 ≤ ε can be upper bounded by

T \leq \frac{\mathcal{R}_a + \mathcal{R}_b}{\min(\mathcal{L}(x, y_A)) - \max(\|\nabla_x \mathcal{L}\|)\,\epsilon \left( \sqrt{\tfrac{1+\bar{S}}{2}} + 1 \right) - \max(\beta_a, \beta_b)\,\epsilon^2},  (7)

where T denotes the attack success rate, \mathcal{R}_a and \mathcal{R}_b denote the empirical risks of networks h_a and h_b, \bar{S} denotes the upper loss-gradient similarity, \min(\mathcal{L}(x, y_A)) = \min_{x \sim \mathcal{X}} (\mathcal{L}_a(x, y_A), \mathcal{L}_b(x, y_A)), and \max(\|\nabla_x \mathcal{L}\|) = \max_{x \sim \mathcal{X}, y \sim \{y_T, y_A\}} (\|\nabla_x \mathcal{L}_a(x, y)\|, \|\nabla_x \mathcal{L}_b(x, y)\|).

Since the networks share the same loss function and weight parameters, we denote the influence of the weight parameters by constants C_g on the gradient norm and C_H on the gradient smoothness. The partial derivative and Hessian of the loss w.r.t. the normalization output are the same across normalizations, denoted g and H respectively. The gradient norm, β_a, and β_b in Eq. 7 can then be bounded as

\|\nabla_x \mathcal{L}\| \leq C_g \cdot \max\left( \frac{|\gamma_{gn}|}{\sigma^{gn}_j} \sqrt{\|g\|^2 - \frac{1}{G}\langle \mathbf{1}, g \rangle^2 - \frac{1}{G}\langle g, \hat{Y}_j \rangle^2}, \; \frac{|\gamma_{bgn}|}{\sigma^{bgn}_j} \|g\| \right),

\beta_{a,b} \leq C_H \cdot \max\left( \frac{\gamma^2_{gn}}{(\sigma^{gn}_j)^2} \left[ g'^\top H g' - \frac{1}{G \cdot \gamma_{gn}} \langle g, \hat{Y}_j \rangle \right], \; \frac{\gamma^2_{bgn}}{(\sigma^{bgn}_j)^2} \left[ g'^\top H g' \right] \right).  (8)

Combining Eq. 7 and Eq. 8, we observe that the upper bound of adversarial transferability is controlled by the gradient magnitude and gradient smoothness, which are in turn bounded according to the type and parameters of the normalization layers. Specifically, given the same γ and σ for GN and BGN, GN achieves a smaller gradient norm and better gradient smoothness than BGN, which decreases the upper bound of adversarial transferability. Furthermore, the group size G in GN plays an important role in smoothness.
With smaller G, the smoothness of GN increases, and thus the upper bound of adversarial transferability decreases. Similar observations hold empirically. As shown in Figure 2, the loss landscapes of different normalization layers w.r.t. the input are visualized, demonstrating that different normalization layers have different smoothness. Furthermore, IN achieves the best smoothness, which matches the observation in Eq. 8, since IN has the minimum group size. The attack success rate is relatively low when the source model uses IN, as shown in Figure 1 (c) and (d), which matches the observation in Theorem 3.1 that adversarial transferability decreases when the network is smoother.

4 Random Normalization Aggregation

Since adversarial transferability is strongly correlated with the type of normalization layer, we ask a simple question: can we utilize the bounded adversarial transferability among normalization layers to defend against white-box attacks? In this work, we propose a Random Normalization Aggregation (RNA) module, which replaces the BN layers in the network. As shown in Figure 4 (a), each normalization layer becomes a collection of different normalizations sampled from GNs and BGNs, where the underline denotes the group number. Specifically, the network maintains different normalization layers, but only one normalization is randomly selected per layer during each forward pass, as shown in Figure 4 (b). By incorporating randomization into the normalization layers, a network with RNA modules can be treated as a "supernet" with multiple paths.

Returning to the white-box defense setting, we assume that attackers have access to the network parameters. Adversarial examples are generated by backpropagating through one randomly sampled path, and are then fed to another randomly sampled path due to the RNA module, which turns the nominal white-box attack into a "black-box" attack, as illustrated in Figure 4 (c). Together with the adversarial transferability study in Section 3, it is therefore natural to create a network whose random space of normalization layers exhibits significantly constrained adversarial transferability. To achieve a strong defense against adversarial attacks, several concerns remain: (1) the number of paths must be extremely large, to reduce the probability of sampling the same path twice under the random sampling strategy; (2) RNA must be combined with traditional adversarial training; (3) the normalization types need to be carefully selected to enforce low adversarial transferability. We discuss these concerns below.

Path Increment in Random Space The adversarial transferability among different normalizations has been discussed in Figure 1 (c) and (d). However, the size of the random space also matters for an effective defense. If attackers can sample the same path during the attack and inference phases with high probability, the adversarial accuracy decreases tremendously. To tackle this issue, we introduce layer-wise randomization in the RNA module, which randomly samples the normalization for each layer of the network. As shown in Figure 4 (b), different normalization types are sampled for different layers, which exponentially increases the number of paths. Given n normalization types in RNA and L layers in the network, the size of the random space becomes n^L, which reduces the probability of sampling the same path during the attack and inference phases to 1/n^L ≪ 1/n.
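A minimal sketch of the layer-wise randomization described above, assuming BN and IN as the two candidates; the class name and candidate set are illustrative, not the released implementation.

import random
import torch.nn as nn

class RNA(nn.Module):
    # Aggregates candidate normalization layers and samples one per forward pass.
    def __init__(self, channels):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.BatchNorm2d(channels),                  # BGN-category candidate
            nn.InstanceNorm2d(channels, affine=True),  # GN-category candidate
        ])

    def forward(self, x):
        return random.choice(self.paths)(x)

Because each RNA layer samples independently, a network with L such layers and n candidates per layer exposes n^L distinct paths; with 20 normalization layers, as in the ResNet-18 used later, two candidates per layer already give 2^20, roughly 10^6, paths.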
Black-box Adversarial Training It is natural to incorporate the RNA module into adversarial training. Consistent with the inference phase, we randomly sample a path p_a and conduct a white-box attack along it to generate adversarial examples X̃_{p_a}. Different from traditional adversarial training, which would optimize p_a by feeding it X̃_{p_a}, we feed X̃_{p_a} to another randomly sampled path p, which forms a "black-box" adversarial training, as illustrated in Figure 4 (c). Eq. 1 and Eq. 2 can be reformulated as

h^* = \arg\min_{h \in \mathcal{H}} \mathbb{E}_{x, y \sim \mathcal{X}, \mathcal{Y};\, p \sim \mathcal{P}} \left[ \mathcal{L}(h(\tilde{x}_{p_a}; p), y) \right], \quad \text{where } \tilde{x}_{p_a} = \arg\max_{\tilde{x}_{p_a} : \|\tilde{x}_{p_a} - x\|_p \leq \epsilon} \mathcal{L}(h(\tilde{x}_{p_a}; p_a), y),  (9)

where \mathcal{P} denotes the space of paths. The training procedure is shown in Algorithm 1.

Algorithm 1 Random Normalization Aggregation with Black-box Adversarial Training
Input: training tuples {X, Y}; path set P; attack step size η; attack iterations t; perturbation size ε; network h with parameters W.
Replace the BN layers with RNA modules and initialize the network.
while not converged do
    Sample a batch of data {x, y} from {X, Y};
    Randomly sample a path p_a from P;
    Initialize the adversarial perturbation δ;
    for i ← 1 to t do
        δ ← clip_ε[δ + η · sign(∇_x L(h(x + δ; p_a), y))];
    end for
    Randomly sample a path p from P;
    W ← W − ∇_W L(h(x + δ; p), y);
end while

Normalization Types Selection The RNA module maintains multiple normalization types to form the random space. According to Theorem 3.1, adversarial transferability is bounded by several components, including gradient similarity, empirical risks, gradient magnitude, and gradient smoothness. We first provide empirical evidence that normalization layers from the same category defined in Section 3.1 tend to have higher gradient similarity. In Figure 3, we visualize histograms of the cosine similarity of gradients between two networks with different normalization layers. For example, BN and BGN belong to the same category, and their gradient similarity is much higher than that between BN and IN (compare Figure 3 (a) and (b)). Since gradient similarity is proportional to the upper bound of adversarial transferability in Eq. 7, we propose to select normalizations from different categories. Thus, the RNA module samples normalization types from both GN and BGN with small group sizes in our experiments, while other combinations are also evaluated.

5 Experiments

In this section, we provide a thorough evaluation of the RNA module on various models and datasets.

5.1 Evaluation Setup

CIFAR-10/100 We first conduct experiments on the CIFAR-10/100 [31] datasets, which contain 50K training images and 10K testing images of size 32×32 from 10/100 categories. The networks we use are ResNet-18 [31] and WideResNet-32 (WRN) [32]. The SGD optimizer with a momentum of 0.9 is used, the weight decay is set to 5 × 10⁻⁴, and the initial learning rate is set to 0.1 with a piecewise-decay learning rate scheduler. All baselines are trained for 200 epochs with a batch size of 128. PGD-10 with ε = 8/255 and step size 2/255 is adopted in the adversarial training setting. For the RNA module, we utilize BN and IN to form the random space in the normalization layers. The experiments are performed on one V100 GPU using PyTorch [33] and MindSpore [34].

ImageNet The effectiveness of the proposed RNA is also evaluated on ImageNet [35], which contains 1.2M training images and 50K testing images of size 224×224 from 1000 categories. The network we use is ResNet-50 [31]. The SGD optimizer with a momentum of 0.9 is used, and the weight decay is set to 1 × 10⁻⁴.
The initial learning rate is set to 0.02 with a cosine learning rate scheduler. We load a pretrained ResNet-50 and then adversarially train the network for 60 epochs with a batch size of 512. PGD-2 with ε = 4/255 is adopted in the adversarial training setting. For the RNA module, we utilize BGNs and GNs with group sizes of 1 and 2 to form the random space. The experiments are performed on eight V100 GPUs.

Baselines and Attacks Our proposed RNA modules replace the normalization layers in the network, so various normalization layers are involved for comparison, including BN, IN, LN, GN, and BGN [15, 16, 17, 18, 19]. On CIFAR-10/100, we evaluate the robustness of all baselines under several strong attacks from TorchAttacks [36]. For the Fast Gradient Sign Method (FGSM) [4], the perturbation size ε is set to 8/255. For Projected Gradient Descent (PGD) [6], ε is set to 8/255 with a step size of 2/255 and 20 steps. For the CW attack [37], the number of steps is 1000 with a learning rate of 0.01. For the Momentum variant of the Iterative Fast Gradient Sign Method (MIFGSM) [38], ε is set to 8/255 with a step size of 2/255, 5 steps, and a decay of 1.0. For DeepFool [39], the number of steps is 50 with an overshoot of 0.02. For AutoAttack [40], ε is set to 8/255. On ImageNet, we evaluate robustness under PGD attacks with ε = 4/255 and 50 steps.

5.2 Results for Robustness

Main Results We first evaluate the performance of RNA on CIFAR-10 and CIFAR-100 under different types of attacks; detailed results are shown in Tables 1 and 2. The popular normalization layers show similar robustness on their own. However, with a random space built from different normalization layers, robustness is significantly improved. RNA consistently achieves the best performance under all attacks, showing strong superiority over the other baselines. For example, RNA with ResNet-18 achieves 65.61% under AutoAttack on CIFAR-10, which is 17.92% higher than BN. Similarly, RNA with WRN achieves 55.16% under DeepFool attacks on CIFAR-100, which is 53.33% higher than IN. The boosted adversarial accuracy provides strong empirical evidence that the constrained adversarial transferability in the random space yields a satisfactory defense capability. Furthermore, with the proposed black-box adversarial training, RNA also achieves better natural accuracy. For example, RNA with WRN achieves 86.46% on CIFAR-10, improving over BN by 1.19%. We mainly attribute this improvement to the fact that the generated adversarial examples act as "weaker" attack examples for the other paths during optimization, which naturally yields a better trade-off between natural and adversarial accuracy.

Stronger PGD Attacks We further evaluate the defense capability of RNA under stronger PGD attacks with more iterations and larger perturbation sizes. Comparisons with the other baselines are shown in Figure 5 (a) and (b). RNA achieves stable robustness across attack iterations, e.g., 60.70% under PGD10 and 59.94% under PGD100, and its PGD accuracy is much higher than that of all other baselines; for example, RNA improves over the BN baseline by a margin of 7.93%. For larger perturbation sizes, RNA achieves the best robustness in all scenarios, with the smallest degradation among all methods. Specifically, RNA achieves 80.06% at ε = 2/255 and 24.90% at ε = 20/255, a gap of 55.16%; for comparison, the gap is 64.68% for BN and 58.40% for GN.
Table 3: Comparison with defense algorithms.

Method           CIFAR-10 PGD20   CIFAR-10 AA   ImageNet PGD50
RobustWRN [41]   59.13            52.48         31.14
AWP [42]         58.14            54.04         -
RobNet [11]      52.74            -             37.15
RPI+RPT [43]     53.96            53.30         42.72
SAT [22]         56.01            51.83         42.30
RNA (Ours)       63.34            67.88         54.61

Table 4: Robustness evaluation of random spaces built from different normalization combinations under different attacks.

Normalization   PGD20   DeepFool   AutoAttack
BN              52.16    0.35      47.69
GN+BGN          55.40   70.26      58.90
GN+LN           46.67   62.85      47.96
LN+BN           55.67   68.14      58.73
IN+BN           60.69   76.73      65.61

Comparison with SOTA Defense Methods To demonstrate the superiority of RNA, we include several state-of-the-art defense algorithms for comparison. RobustWRN [41] explores the importance of network width and depth for robustness. AWP [42] regularizes the flatness of the weight loss landscape to achieve robustness. RobNet [11] introduces a NAS framework for robustness. RPI+RPT [43] utilizes randomized precision for adversarial defense. SAT [44] replaces ReLU with smooth approximations, which improves robustness. The results are shown in Table 3. We use WRN on CIFAR-10 and ResNet-50 on ImageNet. All baselines are evaluated under PGD20 and AutoAttack (AA) on CIFAR-10 and under PGD50 on ImageNet. Our proposed RNA module achieves the best performance in all scenarios. On CIFAR-10, RNA achieves 67.88% under AutoAttack, a 13.84% improvement over AWP. On ImageNet, RNA achieves 54.61% under PGD50, a 12.31% improvement over SAT. Note that RNA replaces the normalization layers and is therefore orthogonal to other defense techniques; likewise, the adversarial training in our setting can be replaced by other advanced training strategies for potentially better performance.

Adversarial Transferability in Random Space To better illustrate the adversarial transferability in the random space built by RNA, we conduct a transferability study of ResNet-18 with RNA on CIFAR-10. We first define the path difference as the number of layers in which the attack and inference paths select different normalizations; for example, a path difference of 7 denotes two paths that select different normalization layers in 7 layers during forwarding. For each path difference, we randomly sample 10 path pairs for transferability evaluation. We also include different normalization combinations in RNA for comparison. The results are shown in Figure 5 (c), which plots the PGD accuracy under transferred attacks between path pairs as the path difference increases; the filled areas denote the maximum and minimum PGD accuracy. The combination of BN and IN clearly achieves the best performance, which also matches the analysis in Theorem 3.1. Under the random sampling strategy, the path difference is almost always around 10, since this network has 20 normalization layers. For example, the combination of BN and IN achieves an average PGD accuracy of 61.09% at a path difference of 10. Compared with the BN baseline, which achieves 52.16% PGD accuracy, this lower adversarial transferability in our random space brings a strong defense capability against adversarial attacks.

5.3 Ablation Study

Different Normalization Combinations We first provide quantitative results for different combinations of normalization layers in the RNA module, comparing them with ResNet-18 on CIFAR-10.
As shown in Table 4, we evaluate the performance under PGD20, DeepFool, and AutoAttack; the combination of IN and BN achieves the best robustness in all scenarios. Consistent with Theorem 3.1, the combination of LN and BN performs slightly worse than that of IN and BN, since IN is a smoother normalization than LN. Similarly, the combination of GN and LN achieves the worst performance: LN is a special case of GN, so the two have high gradient similarity in our empirical observations, as discussed in Figure 3. Thus, by utilizing normalization layers with smaller group sizes drawn from both GN and BGN, the RNA module can form a random space with low adversarial transferability and thereby better defense ability.

Effectiveness of Different Components We next demonstrate the effectiveness of each component of the RNA module. Detailed results are shown in Table 5, where [.] denotes the range of group sizes. Beyond the normalization combinations, we include further discussion of the random space design. To form a random space in the normalization layers, it is natural to consider a combination of BN, IN, GN, and LN; however, such a space is difficult to optimize, as shown in the first row of Table 5. With normalization layers selected from GNs only, optimization becomes stable, but the robustness is not competitive, as shown in the second row. Introducing black-box adversarial training significantly improves robustness, as shown in the third row. Expanding the random space with BGNs slightly improves the defense capability by doubling the number of paths, but the size of the random space is still limited. After removing the layer-based constraint, the number of paths increases exponentially, and the per-layer random space can be reduced for a better trade-off, as shown in the last 4 rows. Comparing BGN+GN[1-1] with GN[1-64], a better random space design paired with an appropriate adversarial training strategy achieves a 15.04% improvement under AutoAttack, which demonstrates the necessity of these components.

6 Conclusions

In this paper, we explore the importance of normalization layers for adversarial robustness, showing that the limited transferability among different normalization layers can be utilized to boost defense capability. We propose a Random Normalization Aggregation (RNA) module that forms a random space with low adversarial transferability for defense against adversarial attacks. We provide extensive empirical evidence and theoretical analysis revealing the connections between adversarial transferability and normalization types, which guides the design of the random space. With a black-box adversarial training strategy and the relaxation of the layer-based constraint, the robustness provided by the RNA module is significantly strengthened. We demonstrate the superiority of the RNA module via comprehensive experiments across different network architectures, attack settings, and benchmark datasets. Our work provides valuable insights into network module design for robustness.

Acknowledgments

This work was supported in part by the Australian Research Council under Project DP210101859 and the University of Sydney Research Accelerator (SOAR) Prize. The authors acknowledge the use of the National Computational Infrastructure (NCI), which is supported by the Australian Government and accessed through the NCI Adapter Scheme and the Sydney Informatics Hub HPC Allocation Scheme.
We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks), and the Ascend AI Processor used for this research.
1. What is the main contribution of the paper regarding adversarial transferability and normalization layers?
2. What are the strengths and weaknesses of the proposed Random Normalization Aggregation (RNA) module?
3. Do you have any concerns or questions about the experimental results presented in the paper?
4. How does the RNA method compare to other approaches that explore the same field of normalization layers and robustness?
5. Are there any limitations or potential drawbacks of the RNA method that the authors should discuss?
Summary Of The Paper, Strengths And Weaknesses, Questions, Limitations
Summary Of The Paper
Researchers explore how different normalization layers affect adversarial transferability. They provide a theoretical upper bound on the adversarial transferability between normalization layers. They then propose a module named Random Normalization Aggregation (RNA). RNA replaces the normalization layers in the network and samples normalization layers randomly at each forward pass. This generates an exponential number of possible paths, which makes the network harder for an attacker to exploit.

Strengths And Weaknesses
Strengths:
- The paper is clear and easy to follow.
- The theoretical part is sound.
- The idea of generating random normalization paths is new to the literature.

Weaknesses:
- The experiment section presents some suspicious results: the results with ResNet vs. WRN are counterintuitive, and the results on AutoAttack vs. PGD also do not make sense. This can be a sign of obfuscated gradients. Additionally, some of the reported results are not in line with known results. See the Questions section.
- No code/models are provided for evaluation of the results. Can the authors provide the code/models/GitHub link for additional verification?
- There is no related work section. Specifically, the authors did not address some related literature that explores the same field (see the Questions section).

Questions
Experiment questions/concerns:
1. What if the attacker randomizes a different path (p_a) at each iteration of PGD? Wouldn't that make the attack more diverse, and therefore generalize better? Can the authors present such experiments?
2. Can the authors explain why the results on ResNet-18 are better than those on WRN-32? For example, on CIFAR-10, the AA results on ResNet-18 are better by almost 4% compared to WRN-32.
3. Also, for CIFAR-10, the FGSM and PGD-20 accuracies are lower than the AutoAttack accuracy, which doesn't make sense. Can the authors explain these results?
4. WRN-32 with BN and standard AT is essentially the standard AT method by Madry et al. [1]. It was shown that this method reaches an accuracy of ~52-53% against AA when combined with early stopping. However, in your paper, you report 46.44% against AA. Did the authors use early stopping for all methods? If not, I think they should present results with early stopping [2] for a fair comparison; otherwise, the results can be linked to adversarial overfitting [2].
5. It would be interesting to see how the method works when combined with other AT methods.
6. The authors did not refer or compare to related work in the field of normalization layers and robustness [3, 4]. Can the authors compare their method to [3, 4] and present the results?

[1] Towards Deep Learning Models Resistant to Adversarial Attacks, https://arxiv.org/pdf/1706.06083.pdf
[2] Overfitting in adversarially robust deep learning, http://proceedings.mlr.press/v119/rice20a
[3] Intriguing Properties of Adversarial Training at Scale, https://arxiv.org/pdf/1906.03787.pdf
[4] Adversarial Examples Improve Image Recognition, https://openaccess.thecvf.com/content_CVPR_2020/html/Xie_Adversarial_Examples_Improve_Image_Recognition_CVPR_2020_paper.html

Miscellaneous:
- Figure 4 has (b) twice instead of (c).
- Line 153: I suggest rephrasing to "... to defend against white-box attacks ...".

Limitations
No. The authors should discuss the limitations of adversarial training in general and of their method in particular.
NIPS
Title Training Deep Models Faster with Robust, Approximate Importance Sampling

Abstract In theory, importance sampling speeds up stochastic gradient algorithms for supervised learning by prioritizing training examples. In practice, the cost of computing importances greatly limits the impact of importance sampling. We propose a robust, approximate importance sampling procedure (RAIS) for stochastic gradient descent. By approximating the ideal sampling distribution using robust optimization, RAIS provides much of the benefit of exact importance sampling with drastically reduced overhead. Empirically, we find RAIS-SGD and standard SGD follow similar learning curves, but RAIS moves faster through these paths, achieving speed-ups of at least 20% and sometimes much more.

1 Introduction

Deep learning models perform excellently on many tasks. Training such models is resource-intensive, however, as stochastic gradient descent algorithms can require days or weeks to train effectively. After a short period of training, models usually perform well on some, or even most, training examples. As training continues, frequently reconsidering such "easy" examples slows further improvement.

Importance sampling prioritizes training examples for SGD in a principled way. The technique suggests sampling example i with probability proportional to the norm of loss term i's gradient. This distribution both prioritizes challenging examples and minimizes the stochastic gradient's variance. SGD with optimal importance sampling is impractical, however, since computing the sampling distribution requires excessive time. [1] and [2] analyze importance sampling for SGD and convex problems; practical versions of these algorithms sample proportional to fixed constants. For deep models, other algorithms attempt closer approximations of gradient norms [3, 4, 5]. But these algorithms are not inherently robust: without carefully chosen hyperparameters or additional forward passes, they do not converge, let alone speed up training.

We propose RAIS, an importance sampling procedure for SGD with several appealing qualities. First, RAIS determines each sampling distribution by solving a robust optimization problem. As a result, each sampling distribution is minimax optimal with respect to an uncertainty set. Since RAIS trains this uncertainty set in an adaptive manner, RAIS is not sensitive to hyperparameters. In addition, RAIS maximizes the benefit of importance sampling by adaptively increasing SGD's learning rate, an effective yet, to our knowledge, novel idea. This improvement invites the idea that one RAIS-SGD iteration equates to more than one iteration of conventional SGD. Interestingly, when plotted in terms of "epochs equivalent," the learning curves of the two algorithms align closely.

RAIS applies to any model that is trainable with SGD. RAIS also combines nicely with standard "tricks," including data augmentation, dropout, and batch normalization. We show this empirically in §6, where we also demonstrate that RAIS consistently improves training times. To provide context for the paper, we include qualitative results from these experiments in Figure 1.

2 Problem formulation

Given loss functions f_1, f_2, ..., f_n and a tuning parameter λ ∈ R_{≥0}, our task is to efficiently solve

\text{minimize}_{w \in \mathbb{R}^d} \; F(w), \quad \text{where } F(w) = \tfrac{1}{n} \sum_{i=1}^{n} f_i(w) + \tfrac{\lambda}{2} \|w\|^2.  (P)

A standard algorithm for solving (P) is stochastic gradient descent.
Let w^(t) denote the optimization variables when iteration t begins. SGD updates these weights via

w^{(t+1)} \leftarrow w^{(t)} - \eta^{(t)} g^{(t)}.  (1)

Above, η^(t) ∈ R_{>0} is a learning rate, specified by a schedule: η^(t) = lr_sched(t). The vector g^(t) is an unbiased stochastic approximation of the gradient ∇F(w^(t)). SGD computes g^(t) by sampling a minibatch of |M| indices from {1, 2, ..., n} uniformly at random (or approximately so). Denoting this minibatch by M^(t), SGD defines the stochastic gradient as

g^{(t)} = \tfrac{1}{|M|} \sum_{i \in M^{(t)}} \nabla f_i(w^{(t)}) + \lambda w^{(t)}.  (2)

In this work, we assume an objective function, learning rate schedule, and minibatch size, and we propose a modified algorithm called RAIS-SGD. RAIS prioritizes examples by sampling minibatches non-uniformly, allowing us to train models using fewer iterations and less time.

3 SGD with oracle importance sampling

We now introduce an SGD algorithm with "oracle" importance sampling, which prioritizes examples using exact knowledge of importance values. RAIS-SGD is an approximation of this algorithm. Given w^(t), let us define the expected training progress attributable to iteration t as

E^{(t)} = \|w^{(t)} - w^\star\|^2 - \mathbb{E}\left[\|w^{(t+1)} - w^\star\|^2\right] = 2\eta^{(t)} \langle \nabla F(w^{(t)}), w^{(t)} - w^\star \rangle - [\eta^{(t)}]^2 \, \mathbb{E}\left[\|g^{(t)}\|^2\right].  (3)

Here w^⋆ denotes the solution to (P), and the expectation is with respect to minibatch M^(t). The equality follows from plugging in (1) and applying the fact that g^(t) is unbiased.

We refer to our oracle algorithm as O-SGD, and we refer to SGD with uniform sampling as U-SGD. At a high level, O-SGD makes two changes to U-SGD in order to increase E^(t). First, O-SGD samples training examples non-uniformly in a way that minimizes the variance of the stochastic gradient. This first change is not new; see [1], for example. Second, to compensate for the first improvement, O-SGD adaptively increases the learning rate. This second change, which is novel to our knowledge, can be essential for obtaining large speed-ups.

3.1 Maximizing progress with oracle importance sampling

By sampling minibatches non-uniformly, O-SGD prioritizes training examples in order to decrease E[‖g_O^(t)‖²]. During iteration t, O-SGD defines a discrete distribution p^(t) ∈ R^n_{≥0}, where Σ_i p_i^(t) = 1. O-SGD constructs minibatch M^(t) by independently sampling |M| examples according to p^(t). Instead of (2), the resulting stochastic gradient is

g_O^{(t)} = \tfrac{1}{|M|} \sum_{i \in M^{(t)}} \tfrac{1}{n p_i^{(t)}} \nabla f_i(w^{(t)}) + \lambda w^{(t)}.  (4)

Scaling the ∇f_i terms by (n p_i^(t))^{-1} ensures g_O^(t) remains an unbiased approximation of ∇F(w^(t)). O-SGD defines p^(t) as the sampling distribution that maximizes (3):

Proposition 3.1 (Oracle sampling distribution). In order to minimize E[‖g_O^(t)‖²], O-SGD samples each example i with probability proportional to the ith "gradient norm." That is,

p_i^{(t)} = \frac{\|\nabla f_i(w^{(t)})\|}{\sum_{j=1}^{n} \|\nabla f_j(w^{(t)})\|}.

Proof sketch. Defining f̄(w) = (1/n) Σ_{i=1}^n f_i(w), we write this second moment as

\mathbb{E}\left[\|g_O^{(t)}\|^2\right] = \tfrac{1}{n^2 |M|} \sum_{i=1}^{n} \tfrac{1}{p_i^{(t)}} \|\nabla f_i(w^{(t)})\|^2 - \tfrac{1}{|M|} \|\nabla \bar{f}(w^{(t)})\|^2 + \|\nabla F(w^{(t)})\|^2.  (5)

Finding the distribution p^(t) that minimizes (5) is a problem with a closed-form solution. The solution is the distribution defined by Proposition 3.1, which we show in Appendix A.

The oracle sampling distribution is quite intuitive. Training examples with the largest gradient norms are most important for further decreasing F, and these examples receive priority. Examples that the model handles correctly have smaller gradient norms, and O-SGD deprioritizes these examples.
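A small NumPy sketch of Proposition 3.1 and the reweighted gradient of Eq. (4); it assumes the per-example gradients are available as rows of a matrix, which is exactly the expensive requirement that motivates RAIS, and the function names are ours.

import numpy as np

def oracle_distribution(grad_norms):
    # Proposition 3.1: sample example i proportional to its gradient norm.
    return grad_norms / grad_norms.sum()

def oracle_gradient(per_example_grads, p, batch_size, lam, w, rng):
    # Eq. (4): the 1/(n p_i) factors keep the minibatch estimate unbiased for
    # the gradient of F, whatever sampling distribution p is used.
    n = p.shape[0]
    idx = rng.choice(n, size=batch_size, p=p)
    weights = 1.0 / (n * p[idx])
    return (weights @ per_example_grads[idx]) / batch_size + lam * w

Here rng is a NumPy generator, e.g. np.random.default_rng(0).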
3.2 Adapting the learning rate

Because importance sampling reduces the stochastic gradient's variance, possibly by a large amount, we find it important to adaptively increase O-SGD's learning rate compared to U-SGD. For O-SGD, we propose a learning rate that depends on the "gain ratio" r_O^(t) ∈ R_{≥1}:

r_O^{(t)} = \mathbb{E}\left[\|g_U^{(t)}\|^2\right] / \, \mathbb{E}\left[\|g_O^{(t)}\|^2\right].  (6)

Above, g_U^(t) is the stochastic gradient defined by uniform sampling. O-SGD adapts the learning rate so that, according to (3), one O-SGD iteration results in as much progress as r_O^(t) U-SGD iterations. Defining the edge case r_O^(0) = 1, this learning rate depends on the "effective iteration number"

\hat{t}_O^{(t)} = \sum_{t'=1}^{t} r_O^{(t'-1)}.

Since the gain ratio exceeds 1, we have t̂_O^(t) ≥ t for all t. O-SGD defines the learning rate as η_O^(t) = r_O^(t) lr_sched(t̂_O^(t)). We justify this choice of learning rate schedule with the following proposition:

Proposition 3.2 (Equivalence of gain ratio and expected speed-up). Given w^(t), define E_U^(t) as the expected progress from iteration t of U-SGD with learning rate η_U^(t) = lr_sched(t). For comparison, define E_O^(t) as the expected progress from iteration t of O-SGD with learning rate η_O^(t) = r_O^(t) η_U^(t). Then E_O^(t) = r_O^(t) E_U^(t): relative to U-SGD, O-SGD multiplies the expected progress by r_O^(t).

Proof. Using (3), we have

E_U^{(t)} = 2\eta_U^{(t)} \langle \nabla F(w^{(t)}), w^{(t)} - w^\star \rangle - [\eta_U^{(t)}]^2 \, \mathbb{E}\left[\|g_U^{(t)}\|^2\right].

For O-SGD, we expect progress

E_O^{(t)} = 2\eta_O^{(t)} \langle \nabla F(w^{(t)}), w^{(t)} - w^\star \rangle - [\eta_O^{(t)}]^2 \, \mathbb{E}\left[\|g_O^{(t)}\|^2\right] = 2 r_O^{(t)} \eta_U^{(t)} \langle \nabla F(w^{(t)}), w^{(t)} - w^\star \rangle - r_O^{(t)} [\eta_U^{(t)}]^2 \, \mathbb{E}\left[\|g_U^{(t)}\|^2\right] = r_O^{(t)} E_U^{(t)}.

We remark that the purpose of this learning rate adjustment is not necessarily to speed up training; whether the adjustment results in speed-up depends greatly on the original learning rate schedule. Instead, the purpose of this rescaling is to make O-SGD (and hence RAIS-SGD) suitable as a drop-in replacement for U-SGD. We show empirically that this is the case in §6.

4 Robust approximate importance sampling (RAIS)

Determining p^(t) and r_O^(t) in O-SGD depends on knowledge of many gradient norms (‖∇f_i(w^(t))‖ for all examples, ‖∇f̄(w^(t))‖, and ‖∇F(w^(t))‖). Computing these norms requires a time-consuming pass over the data. To make importance sampling practical, we propose RAIS-SGD.

4.1 Determining a robust sampling distribution

Like O-SGD, RAIS selects the tth minibatch by sampling indices from a discrete distribution p^(t). We denote the stochastic gradient by g_R^(t), which takes the same form as g_O^(t) in (4). Let v_i^⋆ = ‖∇f_i(w^(t))‖ and v^⋆ = [v_1^⋆, v_2^⋆, ..., v_n^⋆]^T. RAIS defines p^(t) by approximating v^⋆.

Naïve algorithms approximate v^⋆ using a point estimate v̂; the sampling distribution becomes a multiple of v̂. [3], [4], and [6] propose algorithms based on similar point estimation strategies. The drawback of the point estimation approach is extreme sensitivity to differences between v̂ and v^⋆. For this reason, [3, 4, 6] incorporate additive smoothing: they introduce a smoothing hyperparameter, which we denote here by δ, and sample example i with probability proportional to v̂_i + δ. This approach to robustness is unconvincing, however, since performance becomes critically dependent on a hyperparameter. Too small a value risks divergence, while too large a value greatly limits the benefit of importance sampling.

Instead of a point estimate, RAIS approximates v^⋆ with an uncertainty set U^(t) ⊂ R^n_{≥0}, which we expect contains (or nearly contains) v^⋆.
Given U^(t), RAIS defines p^(t) by minimizing the worst-case value of E[‖g_R^(t)‖²] over all gradient-norm possibilities in U^(t). Noting E[‖g_R^(t)‖²] ∝ Σ_i (1/p_i^(t)) (v_i^⋆)² + c for some c ∈ R (according to (5)), RAIS defines p^(t) as the solution to the following problem:

p^{(t)} = \arg\inf \left\{ \max\left\{ \sum_{i=1}^{n} \tfrac{1}{p_i} v_i^2 \,\middle|\, v \in \mathcal{U}^{(t)} \right\} \,\middle|\, p \in \mathbb{R}^n_{>0},\; \sum_{i=1}^{n} p_i = 1 \right\}.  (PRC)

Such robust optimization problems are common for making decisions under data uncertainty [7]. It turns out (PRC) is straightforward to solve because the minimax theorem applies to (PRC) (we prove this in Appendix D.1, assuming our definition of U^(t) in §4.2). We first minimize over p by defining p_i = v_i (Σ_{j=1}^n v_j)^{-1}. Plugging this into (PRC)'s objective leads to the simplified problem

v^{(t)} = \arg\max \left\{ \left( \sum_{i=1}^{n} v_i \right)^2 \,\middle|\, v \in \mathcal{U}^{(t)} \right\}.  (PRC')

During each iteration t, RAIS solves (PRC'). After doing so, RAIS recovers the minimax optimal sampling distribution by defining p_i^(t) ∝ v_i^(t) for all training examples.

4.2 Modeling the uncertainty set

To define U^(t), RAIS uses features of SGD's state that are predictive of the true gradient norms. For each example i, we define a feature vector s_i^(t) ∈ R^{d_R}_{≥0}. A useful feature for s_i^(t) is the gradient norm ‖∇f_i(w^(t'))‖, where t' is the most recent iteration for which i ∈ M^(t'). Since RAIS-SGD computes ∇f_i(w^(t')) during iteration t', constructing this feature during iteration t should add little overhead.

Given s_i^(t) for all examples, RAIS defines the uncertainty set as an axis-aligned ellipsoid. Since v^⋆ ≥ 0, RAIS also intersects this ellipsoid with the positive orthant. RAIS parameterizes this uncertainty set with two vectors, c ∈ R^{d_R}_{≥0} and d ∈ R^{d_R}_{≥0}, which map the features s_{1:n}^(t) to parameters of the ellipsoid. Specifically, RAIS defines the uncertainty set as

\mathcal{U}^{(t)}_{cd} = \left\{ v \in \mathbb{R}^n_{\geq 0} \,\middle|\, \tfrac{1}{n} \sum_{i=1}^{n} Q_{cd}(s_i^{(t)}, v_i) \leq 1 \right\}, \quad \text{where } Q_{cd}(s, v) = \frac{(\langle c, s \rangle - v)^2}{\langle d, s \rangle}.

Here we denote the uncertainty set by U^(t)_cd to emphasize its dependence on c and d. With this definition of U^(t)_cd, (PRC') has a simple closed-form solution (proven in Appendix B):

Proposition 4.1 (Solution to robust counterpart). For all i, the solution to (PRC') satisfies

v_i^{(t)} = \langle c, s_i^{(t)} \rangle + k \langle d, s_i^{(t)} \rangle, \quad \text{where } k = \sqrt{ n \Big/ \sum_{j=1}^{n} \langle d, s_j^{(t)} \rangle }.

If we consider ⟨c, s_i^(t)⟩ an estimate of v_i^⋆ and ⟨d, s_i^(t)⟩ a measure of uncertainty in this estimate, then Proposition 4.1 is quite interpretable: RAIS samples example i with probability proportional to ⟨c, s_i^(t)⟩ + k⟨d, s_i^(t)⟩. The first term is the v_i^⋆ estimate, and the second term adds robustness to error.

4.3 Learning the uncertainty set

The uncertainty set parameters c and d greatly influence the performance of RAIS. If U^(t)_cd is a small region near v^⋆, then RAIS's sampling distribution is similar to O-SGD's. If U^(t)_cd is less representative of v^⋆, the variance of the stochastic gradient can become much larger. In order to make E[‖g_R^(t)‖²] small while still ensuring v^⋆ likely lies in U^(t)_cd, RAIS defines c and d adaptively. To do so, RAIS minimizes the size of U^(t)_cd subject to a constraint that encourages v^⋆ ∈ U^(t)_cd:

c, d = \arg\inf \left\{ \sum_{i=1}^{n} \langle d, s_i^{(t)} \rangle \,\middle|\, c, d \in \mathbb{R}^{d_R}_{\geq 0},\; \tfrac{1}{|D|} \sum_{i=1}^{|D|} \tilde{w}_i Q_{cd}(\tilde{s}_i, \tilde{v}_i) \leq 1 \right\}.  (PT)

Here we have defined U^(t)_cd's "size" as the sum of the ⟨d, s_i^(t)⟩ values. The constraint that encourages v^⋆ ∈ U^(t)_cd assumes weighted training data (w̃_i, s̃_i, ṽ_i)_{i=1}^{|D|}. RAIS must define this training set so that

\tfrac{1}{|D|} \sum_{i=1}^{|D|} \tilde{w}_i Q_{cd}(\tilde{s}_i, \tilde{v}_i) \approx \tfrac{1}{n} \sum_{i=1}^{n} Q_{cd}(s_i^{(t)}, \|\nabla f_i(w^{(t)})\|).
That is, for any c and d, the mean of Q_cd(s̃_i, ṽ_i) over the weighted training set should approximately equal the mean of Q_cd(s_i^(t), v_i^⋆), which depends on the current (unknown) gradient norms. To achieve this, RAIS uses gradients from recent minibatches. For entry j of the RAIS training set, RAIS considers an i and t' for which i ∈ M^(t') and t' < t. RAIS defines s̃_j = s_i^(t'), ṽ_j = ‖∇f_i(w^(t'))‖, and w̃_j = (n p_i^(t'))^{-1}. The justification for this choice is that the mean of Q_cd(s_i^(t), ‖∇f_i(w^(t))‖) over training examples tends to change gradually with t. Thus, the weighted mean over the RAIS training set approximates the mean of the current Q_cd(s_i^(t), ‖∇f_i(w^(t))‖) values.

4.4 Approximating the gain ratio

In addition to the sampling distribution, RAIS must approximate the gain ratio in O-SGD. Define g_{R1}^(t) as a stochastic gradient of the form (4) with minibatch size 1 and RAIS sampling. Define g_{U1}^(t) in the same way but with uniform sampling. From (5), we can work out that the gain ratio satisfies

\mathbb{E}\left[\|g_U^{(t)}\|^2\right] / \, \mathbb{E}\left[\|g_R^{(t)}\|^2\right] = 1 + \tfrac{1}{|M|} \left( \mathbb{E}\left[\|g_{U1}^{(t)}\|^2\right] - \mathbb{E}\left[\|g_{R1}^{(t)}\|^2\right] \right) \Big/ \, \mathbb{E}\left[\|g_R^{(t)}\|^2\right].  (7)

To approximate the gain ratio, RAIS estimates the three moments on the right side of this equation. RAIS estimates E[‖g_R^(t)‖²] using an exponential moving average of ‖g_R^(t)‖² from recent iterations:

\mathbb{E}\left[\|g_R^{(t)}\|^2\right] \approx \alpha \left[ \|g_R^{(t)}\|^2 + (1-\alpha)\|g_R^{(t-1)}\|^2 + (1-\alpha)^2\|g_R^{(t-2)}\|^2 + \ldots \right].

RAIS approximates E[‖g_{R1}^(t)‖²] and E[‖g_{U1}^(t)‖²] in a similar way. After computing gradients for minibatch t, RAIS estimates E[‖g_{R1}^(t)‖²] and E[‖g_{U1}^(t)‖²] using appropriately weighted averages of ‖∇f_i(w^(t))‖² for each i ∈ M^(t) (for E[‖g_{R1}^(t)‖²], RAIS weights terms by (n p_i^(t))^{-2}; for E[‖g_{U1}^(t)‖²], RAIS weights terms by (n p_i^(t))^{-1}). Using the same exponential averaging parameter α, RAIS averages these estimates from minibatch t with estimates from prior iterations. RAIS approximates the gain ratio by plugging these moment estimates into (7); we denote the result by r̂^(t). Analogous to O-SGD, RAIS uses learning rate η^(t) = r̂^(t) lr_sched(t̂^(t)), where t̂^(t) = Σ_{t'=1}^{t} r̂^(t'−1) is the effective iteration number, with the edge case r̂^(0) = 1.

Algorithm 4.1 RAIS-SGD
input objective function F, minibatch size |M|, learning rate schedule lr_sched(·)
input RAIS training set size |D|, exponential smoothing parameter α for gain estimate
initialize w^(1) ∈ R^d; c, d ∈ R^{d_R}_{≥0}; t̂^(1) ← 1; r_estimator ← GainEstimator(α)
for t = 1, 2, ..., T do
    v^(t) ← argmax{ (Σ_{i=1}^n v_i)² | v ∈ U^(t)_cd }  # see Proposition 4.1 for closed-form solution
    p^(t) ← v^(t) / ‖v^(t)‖₁
    M^(t) ← sample_indices_from_distribution(p^(t), size = |M|)
    g_R^(t) ← (1/|M|) Σ_{i∈M^(t)} (1/(n p_i^(t))) ∇f_i(w^(t)) + λ w^(t)
    r_estimator.record_gradient_norms(‖g_R^(t)‖, (‖∇f_i(w^(t))‖, p_i^(t))_{i∈M^(t)})
    r̂^(t) ← r_estimator.estimate_gain_ratio()  # see §4.4
    η^(t) ← r̂^(t) · lr_sched(t̂^(t))
    w^(t+1) ← w^(t) − η^(t) g_R^(t)
    t̂^(t+1) ← t̂^(t) + r̂^(t)
    if mod(t, ⌈|D|/|M|⌉) = 0 and t ≥ (n + |D|)/|M| then
        c, d ← train_uncertainty_model()  # see §4.2
return w^(T+1)

4.5 Practical considerations

Algorithm 4.1 summarizes our RAIS-SGD algorithm. We next discuss important practical details.

Solving (PT) While computing p^(t) requires a small number of length-n operations (see Proposition 4.1), learning the uncertainty set parameters requires more computation. For this reason, RAIS should not solve (PT) during every iteration. Our implementation solves (PT) asynchronously after every ⌈|D|/|M|⌉ minibatches, with updates to w^(t) continuing during the process. We describe our algorithm for solving (PT) in Appendix D.2.
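A NumPy sketch of the two cheap per-iteration computations in Algorithm 4.1, under our own naming: the closed-form robust distribution of Proposition 4.1 given a feature matrix S with rows s_i, and the moment estimates of §4.4 via exponential moving averages.

import numpy as np

def rais_distribution(S, c, d):
    # Proposition 4.1: v_i = <c, s_i> + k <d, s_i>, with k placing v on the
    # boundary of the uncertainty set; normalize v to obtain p.
    estimates = S @ c        # <c, s_i>: point estimates of the gradient norms
    uncertainty = S @ d      # <d, s_i>: uncertainty in those estimates
    k = np.sqrt(len(S) / uncertainty.sum())
    v = estimates + k * uncertainty
    return v / v.sum()

class GainEstimator:
    # Tracks the three second moments of Eq. (7) with exponential moving averages.
    def __init__(self, alpha, batch_size, n):
        self.alpha, self.m, self.n = alpha, batch_size, n
        self.r = self.r1 = self.u1 = None

    def _ema(self, old, new):
        return new if old is None else self.alpha * new + (1 - self.alpha) * old

    def record(self, g_norm, batch_grad_norms, batch_probs):
        scale = self.n * batch_probs
        self.r = self._ema(self.r, g_norm ** 2)
        self.r1 = self._ema(self.r1, np.mean(batch_grad_norms ** 2 / scale ** 2))
        self.u1 = self._ema(self.u1, np.mean(batch_grad_norms ** 2 / scale))

    def estimate_gain_ratio(self):
        # Eq. (7): gain = 1 + (E||g_U1||^2 - E||g_R1||^2) / (|M| E||g_R||^2).
        return 1.0 + (self.u1 - self.r1) / (self.m * self.r)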
Since our features s_{1:n}^(t) depend on past minibatch updates, we do not use RAIS for the first epoch of training; instead, we sample examples sequentially.

Compatibility with common tricks RAIS combines nicely with standard training tricks for deep learning. With no change, we find RAIS works well with momentum [8, 9]. Incorporating data augmentation, dropout [10], or batch normalization [11] adds variance to the model's outputs and gradient norms; RAIS elegantly compensates for such inconsistency by learning a larger uncertainty set. Since the importance sampling distribution changes over time, we find it important to compute weighted batch statistics when using RAIS with batch normalization. That is, when computing normalization statistics during training, we weight contributions from each example by (n p_i^(t))^{-1}.

Protecting against outliers In some cases, typically when the gain ratio is very large, we find Q_cd(s_i^(t), v_i^⋆) can be quite small for most examples yet large for a small set of outliers. Typically we find RAIS does not require special treatment of such outliers. Even so, it is reasonable to protect against outliers, so that an example with extremely large Q_cd(s_i^(t), v_i^⋆) cannot greatly increase the stochastic gradient's variance. To achieve this, we use gradient clipping, and RAIS provides a natural way of doing so. We define an "outlier" as any example for which Q_cd(s_i^(t), v_i^⋆) exceeds a threshold τ. For each outlier i, we temporarily scale f_i during iteration t until Q_cd(s_i^(t), ‖∇f_i(w^(t))‖) = τ. In practice, we use τ = 100; the fraction of outliers is often zero and rarely exceeds 0.1%.

Approximating per-example gradient norms To train the uncertainty set, RAIS computes ‖∇f_i(w^(t))‖ for each example in each minibatch. Unfortunately, existing software tools do not provide efficient access to per-example gradient norms; instead, libraries are optimized for aggregating gradients over minibatches. Thus, to make RAIS practical, we must approximate the gradient norms. We do so by replacing ‖∇f_i(w^(t))‖ with the norm of only the loss layer's gradient (with respect to this layer's inputs). These values correlate strongly, since the loss layer begins the backpropagation chain for computing ∇f_i(w^(t)). We show this empirically in Figure 2 (left), and we include additional plots in Appendix E.1. We note this approximation may not work well for all models.

5 Relation to prior work

Prior strategies also consider importance sampling for speeding up deep learning. [3] proposes distributing the computation of sampling probabilities. In parallel with regular training, [4] trains a miniature neural network to predict importance values. [5] approximates importance values using additional forward passes. [12] and [13] apply importance sampling to deep reinforcement learning. With the exception of [5] (which requires considerable time to compute importance values), these prior algorithms are sensitive to errors in importance value estimates. For this reason, all require critical smoothing hyperparameters to converge. In contrast, RAIS elegantly compensates for approximation error by choosing a sampling distribution that is minimax optimal with respect to an uncertainty set. Since RAIS adaptively trains this uncertainty set, RAIS does not require hyperparameter tuning.

Researchers have also considered other ways to prioritize training examples for deep learning. [14] considers examples in order of increasing difficulty.
Other researchers prioritize challenging training examples [15, 16], and yet others prioritize examples closest to the model's decision boundary [17]. Unlike RAIS, the primary goal of these approaches is improved model performance, not optimization efficiency; importance sampling may work well in conjunction with these strategies.

There also exist ideas for sampling minibatches non-uniformly outside the context of deep learning. [18, 19] consider sampling diverse minibatches via repulsive point processes. Another strategy uses side information, such as class labels, for approximate importance sampling [6]. By choosing appropriate features for the uncertainty set, RAIS can use side information in the same way. In the convex setting, there are several importance sampling strategies for SGD with theoretical guarantees. This includes [1] and [2], which sample training examples proportional to Lipschitz constants. Leverage score sampling uses a closely related concept for matrix approximation algorithms [20, 21]. For more general convex problems, some adaptive sampling strategies include [22] and [23].

6 Empirical comparisons

In this section, we demonstrate how RAIS performs in practice. We consider the very popular task of training a convolutional neural network to classify images. We first train a LeNet-5 model [24] on the MNIST digits dataset; the model's small size makes it possible to compare with O-SGD. We use learning rate η^(t) = 3.4/√(100 + t), L2 penalty λ = 2.5 × 10⁻⁴, and batch size 32; these parameters are chosen so that SGD performs well. We do not use momentum or data augmentation. Figure 2 (middle) includes the results of this experiment. Oracle sampling significantly outperforms RAIS, and RAIS significantly outperforms uniform sampling.

For our remaining comparisons, we consider the street view house numbers [25], rotated MNIST [26], and CIFAR tiny image [27] datasets. For rot-MNIST, we train a 7-layer CNN with 20 channels per layer, a strong baseline from [28]. Otherwise, we train an 18-layer ResNet preactivation model [29]. CIFAR-100 contains 100 classes, while the other problems contain 10. The number of training examples is 6.0 × 10⁵ for SVHN, 1.2 × 10⁴ for rot-MNIST, and 5.0 × 10⁴ for the CIFAR problems.

We follow standard training procedures to attain good generalization performance. We use batch normalization and standard momentum of 0.9. For rot-MNIST, we follow [28], augmenting data with random rotations and training with dropout. For the CIFAR problems, we augment the training set with random horizontal reflections and random crops (pad to 40x40 pixels; crop to 32x32). We train the SVHN model with batch size 64 and the remaining models with |M| = 128. For each problem, we approximately optimize λ and the learning rate schedule in order to achieve good validation performance with SGD at the end of training. The learning rate schedule decreases by a fixed fraction after each epoch (n/|M| iterations). This fraction is 0.8 for SVHN, 0.972 for rot-MNIST, 0.96 for CIFAR-10, and 0.96 for CIFAR-100. The initial learning rates are 0.15, 0.09, 0.08, and 0.1, respectively. We use λ = 3 × 10⁻³ for rot-MNIST and λ = 5 × 10⁻⁴ otherwise.

For RAIS-SGD, we use |D| = 2 × 10⁴ training examples to learn c and d, and α = 0.01 to estimate r̂^(t). The performance of RAIS varies little with these parameters, since they only determine the number of minibatches to consider when training the uncertainty set and estimating the gain ratio.
For the uncertainty set features, we use simple moving averages of the most recently computed gradient norms for each example. We use moving averages of different lengths: 1, 2, 4, 8, and 16. For lengths of at least four, we also include the variance and standard deviation of these prior gradient norm values. We additionally incorporate a bias feature as well as the magnitude of the random crop offset.

We compare training curves of RAIS-SGD and SGD in Figure 3. Notice that RAIS-SGD consistently outperforms SGD. The relative speed-up ranges from approximately 20% for the CIFAR-100 problem to more than 2x for the SVHN problem. Due to varying machine loads, we plot results in terms of epochs (not wall time), but RAIS introduces very little time overhead. For example, Figure 2 (right) includes time overhead results for the rot-MNIST comparison, which we ran on an isolated machine.

Figure 4 provides additional details of these results. In the figure's first row, we see the speed-up in terms of the gain ratio (the blue curve averages the value (r̂^(t) − 1) · 100% over consecutive epochs). The gain ratio tends to increase as training progresses, implying RAIS is most useful during later stages of training. We also plot the relative wall time overhead for RAIS, which again is very small. In the second row of Figure 4, we compare RAIS-SGD and SGD in terms of epochs equivalent, i.e., the number of epochs measured in effective iterations. Interestingly, the curves align closely. This alignment confirms that our learning rate adjustment is reasonable, as it results in a suitable drop-in replacement for SGD. This result contrasts starkly with [3], for example, in which generalization performance differs significantly between the importance sampling and standard algorithms. Table 1 concludes these comparisons with a summary of results.

7 Discussion

We proposed a relatively simple and very practical importance sampling procedure for speeding up the training of deep models. By using robust optimization to define the sampling distribution, RAIS depends minimally on user-specified parameters. Additionally, RAIS introduces little computational overhead and combines nicely with standard training strategies. All together, RAIS is a promising approach with minimal downside and potential for large improvements in training speed.

Acknowledgements We thank Marco Tulio Ribeiro, Tianqi Chen, Maryam Fazel, Sham Kakade, and Ali Shojaie for helpful discussion and feedback. This work was supported by PECASE N00014-13-1-0023.
1. What is the focus of the paper, and what are the authors' contributions to Importance Sampling SGD?
2. What are the issues with vanilla Importance Sampling SGD, and how does the proposed method address them?
3. How does the proposed method estimate the quantities involved in Importance Sampling SGD?
4. Why does the author question the claim that using the gradient norm of the last layer as an approximation to the gradient norm over the whole model is reasonable?
5. What is the significance of Figure 1, and what does it represent?
6. What does Figure 3 suggest about the performance of RAIS-SGD, and how does it compare to other methods?
7. What are some potential limitations or drawbacks of the proposed method, and how might they be addressed?
Review
The authors present a method to make Importance Sampling SGD more robust. There are a few difficulties with the vanilla algorithm, and one of them is the instability of the importance weights. The authors propose to address this by introducing a method that estimates the quantities involved.

The authors briefly refer to the earlier work by Alain et al. (2016) in their introduction, but make no effort to describe that work, which is similar to theirs. The authors present their "Oracle SGD" algorithm using the same kind of language as would be used to introduce a novel idea.

The whole paper takes great pains to devise a novel method that makes Importance Sampling SGD more robust, but at the same time the authors casually claim that it's quite reasonable to use the gradient norm of the last layer as an approximation to the gradient norm over the whole model. This is stated entirely without reference, and I suspect that it is not even true as a general fact (while it might be true for certain models at a certain time of training on certain data). Otherwise, the "exploding gradient" problem would not be a thing. They could have at least provided a sanity check on their own data and model just to make sure that they weren't completely wrong about this. This can be done with batch size 1, at an increased cost for sure, but it doesn't need to be done more than a few times. If it is indeed true, it does make it easier to apply their method to any model, because it's not so hard to compute the gradient norms on the final layers.

Figure 1 is especially nice and intuitive.

Figure 3 seems to suggest that epochs run faster, but I believe that they are comparing to evaluating importance samples on the whole training set. This is an interesting quantity, but it seems like something that advocates of Importance Sampling SGD would not actually recommend, simply because it would scale badly with the size of the training set. It would be interesting to compare RAIS-SGD's speed with other reasonable methods that, for example, run Importance Sampling SGD on only a fifth of the training set (corresponding to a 500% speed increase but possibly not an increase in training quality).
NIPS
Title Training Deep Models Faster with Robust, Approximate Importance Sampling

Abstract In theory, importance sampling speeds up stochastic gradient algorithms for supervised learning by prioritizing training examples. In practice, the cost of computing importances greatly limits the impact of importance sampling. We propose a robust, approximate importance sampling procedure (RAIS) for stochastic gradient descent. By approximating the ideal sampling distribution using robust optimization, RAIS provides much of the benefit of exact importance sampling with drastically reduced overhead. Empirically, we find RAIS-SGD and standard SGD follow similar learning curves, but RAIS moves faster through these paths, achieving speed-ups of at least 20% and sometimes much more.

1 Introduction

Deep learning models perform excellently on many tasks. Training such models is resource-intensive, however, as stochastic gradient descent algorithms can require days or weeks to train effectively. After a short period of training, models usually perform well on some—or even most—training examples. As training continues, frequently reconsidering such "easy" examples slows further improvement.

Importance sampling prioritizes training examples for SGD in a principled way. The technique suggests sampling example $i$ with probability proportional to the norm of loss term $i$'s gradient. This distribution both prioritizes challenging examples and minimizes the stochastic gradient's variance. SGD with optimal importance sampling is impractical, however, since computing the sampling distribution requires excessive time. [1] and [2] analyze importance sampling for SGD and convex problems; practical versions of these algorithms sample proportional to fixed constants. For deep models, other algorithms attempt closer approximations of gradient norms [3, 4, 5]. But these algorithms are not inherently robust. Without carefully chosen hyperparameters or additional forward passes, these algorithms do not converge, let alone speed up training.

We propose RAIS, an importance sampling procedure for SGD with several appealing qualities. First, RAIS determines each sampling distribution by solving a robust optimization problem. As a result, each sampling distribution is minimax optimal with respect to an uncertainty set. Since RAIS trains this uncertainty set in an adaptive manner, RAIS is not sensitive to hyperparameters. In addition, RAIS maximizes the benefit of importance sampling by adaptively increasing SGD's learning rate—an effective yet novel idea to our knowledge. This improvement invites the idea that one RAIS-SGD iteration equates to more than one iteration of conventional SGD. Interestingly, when plotted in terms of "epochs equivalent," the learning curves of the algorithms align closely.

RAIS applies to any model that is trainable with SGD. RAIS also combines nicely with standard "tricks," including data augmentation, dropout, and batch normalization. We show this empirically in §6. In this section, we also demonstrate that RAIS consistently improves training times. To provide context for the paper, we include qualitative results from these experiments in Figure 1.

32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.

2 Problem formulation

Given loss functions $f_1, f_2, \ldots, f_n$ and a tuning parameter $\lambda \in \mathbb{R}_{\geq 0}$, our task is to efficiently solve

$\min_{w \in \mathbb{R}^d} F(w)\,, \quad \text{where } F(w) = \tfrac{1}{n} \sum_{i=1}^{n} f_i(w) + \tfrac{\lambda}{2} \|w\|^2 .$ (P)

A standard algorithm for solving (P) is stochastic gradient descent.
Let $w^{(t)}$ denote the optimization variables when iteration $t$ begins. SGD updates these weights via

$w^{(t+1)} \leftarrow w^{(t)} - \eta^{(t)} g^{(t)} .$ (1)

Above, $\eta^{(t)} \in \mathbb{R}_{>0}$ is a learning rate, specified by a schedule: $\eta^{(t)} = \mathrm{lr\_sched}(t)$. The vector $g^{(t)}$ is an unbiased stochastic approximation of the gradient $\nabla F(w^{(t)})$. SGD computes $g^{(t)}$ by sampling a minibatch of $|\mathcal{M}|$ indices from $\{1, 2, \ldots, n\}$ uniformly at random (or approximately so). Denoting this minibatch by $\mathcal{M}^{(t)}$, SGD defines the stochastic gradient as

$g^{(t)} = \tfrac{1}{|\mathcal{M}|} \sum_{i \in \mathcal{M}^{(t)}} \nabla f_i(w^{(t)}) + \lambda w^{(t)} .$ (2)

In this work, we assume an objective function, learning rate schedule, and minibatch size, and we propose a modified algorithm called RAIS-SGD. RAIS prioritizes examples by sampling minibatches non-uniformly, allowing us to train models using fewer iterations and less time.

3 SGD with oracle importance sampling

We now introduce an SGD algorithm with "oracle" importance sampling, which prioritizes examples using exact knowledge of importance values. RAIS-SGD is an approximation of this algorithm. Given $w^{(t)}$, let us define the expected training progress attributable to iteration $t$ as

$\mathcal{E}^{(t)} = \|w^{(t)} - w^\star\|^2 - \mathbb{E}\big[\|w^{(t+1)} - w^\star\|^2\big] = 2\eta^{(t)} \langle \nabla F(w^{(t)}),\, w^{(t)} - w^\star \rangle - [\eta^{(t)}]^2\, \mathbb{E}\big[\|g^{(t)}\|^2\big] .$ (3)

Here $w^\star$ denotes the solution to (P), and the expectation is with respect to minibatch $\mathcal{M}^{(t)}$. The equality follows from plugging in (1) and applying the fact that $g^{(t)}$ is unbiased. We refer to our oracle algorithm as O-SGD, and we refer to SGD with uniform sampling as U-SGD. At a high level, O-SGD makes two changes to U-SGD in order to increase $\mathcal{E}^{(t)}$. First, O-SGD samples training examples non-uniformly in a way that minimizes the variance of the stochastic gradient. This first change is not new—see [1], for example. Second, to compensate for the first improvement, O-SGD adaptively increases the learning rate. This second change, which is novel to our knowledge, can be essential for obtaining large speed-ups.

3.1 Maximizing progress with oracle importance sampling

By sampling minibatches non-uniformly, O-SGD prioritizes training examples in order to decrease $\mathbb{E}[\|g^{(t)}_O\|^2]$. During iteration $t$, O-SGD defines a discrete distribution $p^{(t)} \in \mathbb{R}^n_{\geq 0}$, where $\sum_i p^{(t)}_i = 1$. O-SGD constructs minibatch $\mathcal{M}^{(t)}$ by sampling independently $|\mathcal{M}|$ examples according to $p^{(t)}$. Instead of (2), the resulting stochastic gradient is

$g^{(t)}_O = \tfrac{1}{|\mathcal{M}|} \sum_{i \in \mathcal{M}^{(t)}} \tfrac{1}{n p^{(t)}_i} \nabla f_i(w^{(t)}) + \lambda w^{(t)} .$ (4)

Scaling the $\nabla f_i$ terms by $(n p^{(t)}_i)^{-1}$ ensures $g^{(t)}_O$ remains an unbiased approximation of $\nabla F(w^{(t)})$. O-SGD defines $p^{(t)}$ as the sampling distribution that maximizes (3):

Proposition 3.1 (Oracle sampling distribution). In order to minimize $\mathbb{E}[\|g^{(t)}_O\|^2]$, O-SGD samples each example $i$ with probability proportional to the $i$th "gradient norm." That is,

$p^{(t)}_i = \|\nabla f_i(w^{(t)})\| \Big/ \sum_{j=1}^{n} \|\nabla f_j(w^{(t)})\| .$

Proof sketch. Defining $\bar f(w) = \tfrac{1}{n} \sum_{i=1}^{n} f_i(w)$, we write this second moment as

$\mathbb{E}\big[\|g^{(t)}_O\|^2\big] = \tfrac{1}{n^2 |\mathcal{M}|} \sum_{i=1}^{n} \tfrac{1}{p^{(t)}_i} \|\nabla f_i(w^{(t)})\|^2 - \tfrac{1}{|\mathcal{M}|} \|\nabla \bar f(w^{(t)})\|^2 + \|\nabla F(w^{(t)})\|^2 .$ (5)

Finding the distribution $p^{(t)}$ that minimizes (5) is a problem with a closed-form solution. The solution is the distribution defined by Proposition 3.1, which we show in Appendix A.

The oracle sampling distribution is quite intuitive. Training examples with largest gradient norm are most important for further decreasing $F$, and these examples receive priority. Examples that the model handles correctly have smaller gradient norm, and O-SGD deprioritizes these examples.
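As an illustration of Proposition 3.1, the sketch below draws an oracle minibatch and forms the unbiased reweighted gradient of equation (4) with NumPy. It is a sketch only: the array `per_example_grads` is a hypothetical oracle input that is far too expensive to materialize in practice, and the regularization term $\lambda w^{(t)}$ is omitted.

```python
import numpy as np

def oracle_step_gradient(per_example_grads, batch_size, rng=np.random.default_rng()):
    """One O-SGD stochastic gradient (Proposition 3.1 plus equation (4)).

    per_example_grads: (n, d) array holding every grad f_i(w) -- an oracle
    quantity; forming it is exactly the expense RAIS avoids.
    """
    n = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads, axis=1)
    p = norms / norms.sum()                    # oracle distribution p_i
    idx = rng.choice(n, size=batch_size, p=p)  # sample with replacement
    weights = 1.0 / (n * p[idx])               # reweight by (n p_i)^-1
    return (weights[:, None] * per_example_grads[idx]).mean(axis=0)
```

Reweighting by $(n p_i)^{-1}$ keeps the estimate unbiased: its expectation is $\tfrac{1}{n} \sum_i \nabla f_i(w)$ regardless of the sampling distribution.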
3.2 Adapting the learning rate

Because importance sampling reduces the stochastic gradient's variance—possibly by a large amount—we find it important to adaptively increase O-SGD's learning rate compared to U-SGD. For O-SGD, we propose a learning rate that depends on the "gain ratio" $r^{(t)}_O \in \mathbb{R}_{\geq 1}$:

$r^{(t)}_O = \mathbb{E}\big[\|g^{(t)}_U\|^2\big] \Big/ \mathbb{E}\big[\|g^{(t)}_O\|^2\big] .$ (6)

Above, $g^{(t)}_U$ is the stochastic gradient defined by uniform sampling. O-SGD adapts the learning rate so that according to (3), one O-SGD iteration results in as much progress as $r^{(t)}_O$ U-SGD iterations. Defining the edge case $r^{(0)}_O = 1$, this learning rate depends on the "effective iteration number" $\hat t^{(t)}_O = \sum_{t'=1}^{t} r^{(t'-1)}_O$. Since the gain ratio exceeds 1, we have $\hat t^{(t)}_O \geq t$ for all $t$. O-SGD defines the learning rate as $\eta^{(t)}_O = r^{(t)}_O \, \mathrm{lr\_sched}(\hat t^{(t)}_O)$. We justify this choice of learning rate schedule with the following proposition:

Proposition 3.2 (Equivalence of gain ratio and expected speed-up). Given $w^{(t)}$, define $\mathcal{E}^{(t)}_U$ as the expected progress from iteration $t$ of U-SGD with learning rate $\eta^{(t)}_U = \mathrm{lr\_sched}(t)$. For comparison, define $\mathcal{E}^{(t)}_O$ as the expected progress from iteration $t$ of O-SGD with learning rate $\eta^{(t)}_O = r^{(t)}_O \eta^{(t)}_U$. Then $\mathcal{E}^{(t)}_O = r^{(t)}_O \mathcal{E}^{(t)}_U$. Relative to U-SGD, O-SGD multiplies the expected progress by $r^{(t)}_O$.

Proof. Using (3), we have $\mathcal{E}^{(t)}_U = 2\eta^{(t)}_U \langle \nabla F(w^{(t)}), w^{(t)} - w^\star \rangle - [\eta^{(t)}_U]^2 \mathbb{E}[\|g^{(t)}_U\|^2]$. For O-SGD, we expect progress

$\mathcal{E}^{(t)}_O = 2\eta^{(t)}_O \langle \nabla F(w^{(t)}), w^{(t)} - w^\star \rangle - [\eta^{(t)}_O]^2 \mathbb{E}[\|g^{(t)}_O\|^2] = 2 r^{(t)}_O \eta^{(t)}_U \langle \nabla F(w^{(t)}), w^{(t)} - w^\star \rangle - r^{(t)}_O [\eta^{(t)}_U]^2 \mathbb{E}[\|g^{(t)}_U\|^2] = r^{(t)}_O \mathcal{E}^{(t)}_U .$

We remark that the purpose of this learning rate adjustment is not necessarily to speed up training—whether the adjustment results in speed-up depends greatly on the original learning rate schedule. Instead, the purpose of this rescaling is to make O-SGD (and hence RAIS-SGD) suitable as a drop-in replacement for U-SGD. We show empirically that this is the case in §6.

4 Robust approximate importance sampling (RAIS)

Determining $p^{(t)}$ and $r^{(t)}_O$ in O-SGD depends on knowledge of many gradient norms ($\|\nabla f_i(w^{(t)})\|$ for all examples, $\|\nabla \bar f(w^{(t)})\|$, and $\|\nabla F(w^{(t)})\|$). Computing these norms requires a time-consuming pass over the data. To make importance sampling practical, we propose RAIS-SGD.

4.1 Determining a robust sampling distribution

Like O-SGD, RAIS selects the $t$th minibatch by sampling indices from a discrete distribution $p^{(t)}$. We denote the stochastic gradient by $g^{(t)}_R$, which takes the same form as $g^{(t)}_O$ in (4). Let $v^\star_i = \|\nabla f_i(w^{(t)})\|$ and $v^\star = [v^\star_1, v^\star_2, \ldots, v^\star_n]^\top$. RAIS defines $p^{(t)}$ by approximating $v^\star$.

Naïve algorithms approximate $v^\star$ using a point estimate $\hat v$. The sampling distribution becomes a multiple of $\hat v$. [3], [4], and [6] propose algorithms based on similar point estimation strategies. The drawback of the point estimation approach is extreme sensitivity to differences between $\hat v$ and $v^\star$. For this reason, [3, 4, 6] incorporate additive smoothing. They introduce a smoothing hyperparameter and sample example $i$ with probability proportional to $\hat v_i$ plus this constant. This approach to robustness is unconvincing, however, since performance becomes critically dependent on a hyperparameter. Too small a value risks divergence, while too large a value greatly limits the benefit of importance sampling. Instead of a point estimate, RAIS approximates $v^\star$ with an uncertainty set $\mathcal{U}^{(t)} \subset \mathbb{R}^n_{\geq 0}$, which we expect contains (or nearly contains) $v^\star$.
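Before developing the robust counterpart, note that the learning-rate adaptation of Section 3.2 is simple to state in code. A minimal sketch under illustrative names, with `lr_sched` as in Section 2:

```python
def adapted_learning_rate(past_gain_ratios, r_current, lr_sched):
    """O-SGD's adapted rate at iteration t (Section 3.2).

    past_gain_ratios holds r^(0), ..., r^(t-1), with the edge case
    r^(0) = 1.  Their sum is the effective iteration number t_hat, and
    by Proposition 3.2 the rate r^(t) * lr_sched(t_hat) makes one step
    worth r^(t) uniform-sampling steps of expected progress.
    """
    t_hat = sum(past_gain_ratios)
    return r_current * lr_sched(t_hat)
```

For a constant schedule this simply scales the rate by the gain ratio; for a decaying schedule it also advances the decay faster, which is what keeps the "epochs equivalent" curves aligned in §6.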
Given $\mathcal{U}^{(t)}$, RAIS defines $p^{(t)}$ by minimizing the worst-case value of $\mathbb{E}[\|g^{(t)}_R\|^2]$ over all gradient norm possibilities in $\mathcal{U}^{(t)}$. Noting $\mathbb{E}[\|g^{(t)}_R\|^2] \propto \sum_i \tfrac{1}{p^{(t)}_i} (v^\star_i)^2 + c$ for some $c \in \mathbb{R}$ (according to (5)), RAIS defines $p^{(t)}$ as the solution to the following problem:

$p^{(t)} = \arg\inf_p \Big\{ \max\big\{ \textstyle\sum_{i=1}^{n} \tfrac{1}{p_i} v_i^2 \;\big|\; v \in \mathcal{U}^{(t)} \big\} \;\Big|\; p \in \mathbb{R}^n_{>0},\ \textstyle\sum_{i=1}^{n} p_i = 1 \Big\} .$ (PRC)

Such robust optimization problems are common for making decisions with data uncertainty [7]. It turns out (PRC) is straightforward to solve because the minimax theorem applies to (PRC) (we prove this in Appendix D.1, assuming our definition of $\mathcal{U}^{(t)}$ in §4.2). We first minimize over $p$ by defining $p_i = v_i \big(\sum_{j=1}^{n} v_j\big)^{-1}$. Plugging this into (PRC)'s objective leads to the simplified problem

$v^{(t)} = \arg\max \big\{ \big(\textstyle\sum_{i=1}^{n} v_i\big)^2 \;\big|\; v \in \mathcal{U}^{(t)} \big\} .$ (PRC')

During each iteration $t$, RAIS solves (PRC'). After doing so, RAIS recovers the minimax optimal sampling distribution by defining $p^{(t)}_i \propto v^{(t)}_i$ for all training examples.

4.2 Modeling the uncertainty set

To define $\mathcal{U}^{(t)}$, RAIS uses features of SGD's state that are predictive of the true gradient norms. For each example $i$, we define a feature vector $s^{(t)}_i \in \mathbb{R}^{d_R}_{\geq 0}$. A useful feature for $s^{(t)}_i$ is the gradient norm $\|\nabla f_i(w^{(t')})\|$, where $t'$ is the most recent iteration for which $i \in \mathcal{M}^{(t')}$. Since RAIS-SGD computes $\nabla f_i(w^{(t')})$ during iteration $t'$, constructing this feature during iteration $t$ should add little overhead.

Given $s^{(t)}_i$ for all examples, RAIS defines the uncertainty set as an axis-aligned ellipsoid. Since $v^\star \geq 0$, RAIS also intersects this ellipsoid with the positive orthant. RAIS parameterizes this uncertainty set with two vectors, $c \in \mathbb{R}^{d_R}_{\geq 0}$ and $d \in \mathbb{R}^{d_R}_{\geq 0}$. These vectors map features $s^{(t)}_{1:n}$ to parameters of the ellipsoid. Specifically, RAIS defines the uncertainty set as

$\mathcal{U}^{(t)}_{cd} = \Big\{ v \in \mathbb{R}^n_{\geq 0} \;\Big|\; \tfrac{1}{n} \textstyle\sum_{i=1}^{n} Q_{cd}(s^{(t)}_i, v_i) \leq 1 \Big\} , \quad \text{where } Q_{cd}(s, v) = \frac{(\langle c, s \rangle - v)^2}{\langle d, s \rangle} .$

Here we denote the uncertainty set by $\mathcal{U}^{(t)}_{cd}$ to emphasize the dependence of $\mathcal{U}^{(t)}$ on $c$ and $d$. With this definition of $\mathcal{U}^{(t)}_{cd}$, (PRC') has a simple closed-form solution (proven in Appendix B):

Proposition 4.1 (Solution to robust counterpart). For all $i$, the solution to (PRC') satisfies

$v^{(t)}_i = \langle c, s^{(t)}_i \rangle + k \langle d, s^{(t)}_i \rangle , \quad \text{where } k = \sqrt{n \Big/ \textstyle\sum_{j=1}^{n} \langle d, s^{(t)}_j \rangle} .$

If we consider $\langle c, s^{(t)}_i \rangle$ an estimate of $v^\star_i$ and $\langle d, s^{(t)}_i \rangle$ a measure of uncertainty in this estimate, then Proposition 4.1 is quite interpretable. RAIS samples example $i$ with probability proportional to $\langle c, s^{(t)}_i \rangle + k \langle d, s^{(t)}_i \rangle$. The first term is the $v^\star_i$ estimate, and the second term adds robustness to error.

4.3 Learning the uncertainty set

The uncertainty set parameters, $c$ and $d$, greatly influence the performance of RAIS. If $\mathcal{U}^{(t)}_{cd}$ is a small region near $v^\star$, then RAIS's sampling distribution is similar to O-SGD's sampling distribution. If $\mathcal{U}^{(t)}_{cd}$ is less representative of $v^\star$, the variance of the stochastic gradient could become much larger. In order to make $\mathbb{E}[\|g^{(t)}_R\|^2]$ small but still ensure $v^\star$ likely lies in $\mathcal{U}^{(t)}_{cd}$, RAIS adaptively defines $c$ and $d$. To do so, RAIS minimizes the size of $\mathcal{U}^{(t)}_{cd}$ subject to a constraint that encourages $v^\star \in \mathcal{U}^{(t)}_{cd}$:

$c, d = \arg\inf_{c,d} \Big\{ \textstyle\sum_{i=1}^{n} \langle d, s^{(t)}_i \rangle \;\Big|\; c, d \in \mathbb{R}^{d_R}_{\geq 0},\ \tfrac{1}{|\mathcal{D}|} \textstyle\sum_{i=1}^{|\mathcal{D}|} \tilde w_i Q_{cd}(\tilde s_i, \tilde v_i) \leq 1 \Big\} .$ (PT)

Here we have defined $\mathcal{U}^{(t)}_{cd}$'s "size" as the sum of $\langle d, s^{(t)}_i \rangle$ values. The constraint that encourages $v^\star \in \mathcal{U}^{(t)}$ assumes weighted training data, $(\tilde w_i, \tilde s_i, \tilde v_i)_{i=1}^{|\mathcal{D}|}$. RAIS must define this training set so that

$\tfrac{1}{|\mathcal{D}|} \textstyle\sum_{i=1}^{|\mathcal{D}|} \tilde w_i Q_{cd}(\tilde s_i, \tilde v_i) \;\approx\; \tfrac{1}{n} \textstyle\sum_{i=1}^{n} Q_{cd}\big(s^{(t)}_i, \|\nabla f_i(w^{(t)})\|\big) .$
That is, for any $c$ and $d$, the mean of $Q_{cd}(\tilde s_i, \tilde v_i)$ over the weighted training set should approximately equal the mean of $Q_{cd}(s^{(t)}_i, v^\star_i)$, which depends on current (unknown) gradient norms. To achieve this, RAIS uses gradients from recent minibatches. For entry $j$ of the RAIS training set, RAIS considers an $i$ and $t'$ for which $i \in \mathcal{M}^{(t')}$ and $t' < t$. RAIS defines $\tilde s_j = s^{(t')}_i$, $\tilde v_j = \|\nabla f_i(w^{(t')})\|$, and $\tilde w_j = (n p^{(t')}_i)^{-1}$. The justification for this choice is that the mean of $Q_{cd}(s^{(t)}_i, \|\nabla f_i(w^{(t)})\|)$ over training examples tends to change gradually with $t$. Thus, the weighted mean over the RAIS training set approximates the mean of current $Q_{cd}(s^{(t)}_i, \|\nabla f_i(w^{(t)})\|)$ values.

4.4 Approximating the gain ratio

In addition to the sampling distribution, RAIS must approximate the gain ratio in O-SGD. Define $g^{(t)}_{R1}$ as a stochastic gradient of the form (4) using minibatch size 1 and RAIS sampling. Define $g^{(t)}_{U1}$ in the same way but with uniform sampling. From (5), we can work out that the gain ratio satisfies

$\mathbb{E}\big[\|g^{(t)}_U\|^2\big] \Big/ \mathbb{E}\big[\|g^{(t)}_R\|^2\big] = 1 + \tfrac{1}{|\mathcal{M}|} \Big( \mathbb{E}\big[\|g^{(t)}_{U1}\|^2\big] - \mathbb{E}\big[\|g^{(t)}_{R1}\|^2\big] \Big) \Big/ \mathbb{E}\big[\|g^{(t)}_R\|^2\big] .$ (7)

To approximate the gain ratio, RAIS estimates the three moments on the right side of this equation. RAIS estimates $\mathbb{E}[\|g^{(t)}_R\|^2]$ using an exponential moving average of $\|g^{(t)}_R\|^2$ from recent iterations:

$\mathbb{E}\big[\|g^{(t)}_R\|^2\big] \approx \alpha \Big[ \|g^{(t)}_R\|^2 + (1 - \alpha)\|g^{(t-1)}_R\|^2 + (1 - \alpha)^2 \|g^{(t-2)}_R\|^2 + \ldots \Big] .$

Algorithm 4.1 RAIS-SGD
input: objective function $F$, minibatch size $|\mathcal{M}|$, learning rate schedule $\mathrm{lr\_sched}(\cdot)$
input: RAIS training set size $|\mathcal{D}|$, exponential smoothing parameter $\alpha$ for gain estimate
initialize $w^{(1)} \in \mathbb{R}^d$; $c, d \in \mathbb{R}^{d_R}_{\geq 0}$; $\hat t^{(1)} \leftarrow 1$; r_estimator $\leftarrow$ GainEstimator($\alpha$)
for $t = 1, 2, \ldots, T$ do
  $v^{(t)} \leftarrow \arg\max\{ (\sum_{i=1}^{n} v_i)^2 \mid v \in \mathcal{U}^{(t)}_{cd} \}$  # see Proposition 4.1 for closed-form solution
  $p^{(t)} \leftarrow v^{(t)} / \|v^{(t)}\|_1$
  $\mathcal{M}^{(t)} \leftarrow$ sample_indices_from_distribution($p^{(t)}$, size $= |\mathcal{M}|$)
  $g^{(t)}_R \leftarrow \tfrac{1}{|\mathcal{M}|} \sum_{i \in \mathcal{M}^{(t)}} \tfrac{1}{n p^{(t)}_i} \nabla f_i(w^{(t)}) + \lambda w^{(t)}$
  r_estimator.record_gradient_norms($\|g^{(t)}_R\|$, $(\|\nabla f_i(w^{(t)})\|, p^{(t)}_i)_{i \in \mathcal{M}^{(t)}}$)
  $\hat r^{(t)} \leftarrow$ r_estimator.estimate_gain_ratio()  # see §4.4
  $\eta^{(t)} \leftarrow \hat r^{(t)} \cdot \mathrm{lr\_sched}(\hat t^{(t)})$
  $w^{(t+1)} \leftarrow w^{(t)} - \eta^{(t)} g^{(t)}_R$
  $\hat t^{(t+1)} \leftarrow \hat t^{(t)} + \hat r^{(t)}$
  if $\mathrm{mod}(t, \lceil |\mathcal{D}|/|\mathcal{M}| \rceil) = 0$ and $t \geq (n + |\mathcal{D}|)/|\mathcal{M}|$ then
    $c, d \leftarrow$ train_uncertainty_model()  # see §4.3
return $w^{(T+1)}$

RAIS approximates $\mathbb{E}[\|g^{(t)}_{R1}\|^2]$ and $\mathbb{E}[\|g^{(t)}_{U1}\|^2]$ in a similar way. After computing gradients for minibatch $t$, RAIS estimates $\mathbb{E}[\|g^{(t)}_{R1}\|^2]$ and $\mathbb{E}[\|g^{(t)}_{U1}\|^2]$ using appropriately weighted averages of $\|\nabla f_i(w^{(t)})\|^2$ for each $i \in \mathcal{M}^{(t)}$ (for $\mathbb{E}[\|g^{(t)}_{R1}\|^2]$, RAIS weights terms by $(n p^{(t)}_i)^{-2}$; for $\mathbb{E}[\|g^{(t)}_{U1}\|^2]$, RAIS weights terms by $(n p^{(t)}_i)^{-1}$). Using the same exponential averaging parameter $\alpha$, RAIS averages these estimates from minibatch $t$ with estimates from prior iterations. RAIS approximates the gain ratio by plugging these moment estimates into (7). We denote the result by $\hat r^{(t)}$. Analogous to O-SGD, RAIS uses learning rate $\eta^{(t)} = \hat r^{(t)} \cdot \mathrm{lr\_sched}(\hat t^{(t)})$, where $\hat t^{(t)}$ is the effective iteration number: $\hat t^{(t)} = \sum_{t'=1}^{t} \hat r^{(t'-1)}$. Here we also define the edge case $\hat r^{(0)} = 1$.

4.5 Practical considerations

Algorithm 4.1 summarizes our RAIS-SGD algorithm. We next discuss important practical details.

Solving (PT) While computing $p^{(t)}$ requires a small number of length-$n$ operations (see Proposition 4.1), learning the uncertainty set parameters requires more computation. For this reason, RAIS should not solve (PT) during every iteration. Our implementation solves (PT) asynchronously after every $\lceil |\mathcal{D}|/|\mathcal{M}| \rceil$ minibatches, with updates to $w^{(t)}$ continuing during the process. We describe our algorithm for solving (PT) in Appendix D.2.
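The first two steps of Algorithm 4.1's loop—solving (PRC') via Proposition 4.1 and normalizing—reduce to a few array operations. A minimal NumPy sketch (names are illustrative; `S` stacks the nonnegative feature vectors $s^{(t)}_i$ row-wise):

```python
import numpy as np

def robust_sampling_distribution(S, c, d):
    """Solve (PRC') in closed form (Proposition 4.1) and normalize to p^(t).

    S: (n, d_R) feature matrix; <c, s_i> estimates the gradient norm v*_i,
    and <d, s_i> scales the per-example uncertainty.
    """
    estimate = S @ c                         # <c, s_i>
    uncertainty = S @ d                      # <d, s_i>
    k = np.sqrt(S.shape[0] / uncertainty.sum())
    v = estimate + k * uncertainty           # minimax-optimal v^(t)
    return v / v.sum()                       # p_i proportional to v_i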
Since our features $s^{(t)}_{1:n}$ depend on past minibatch updates, we do not use RAIS for the first epoch of training—instead we sample examples sequentially.

Compatibility with common tricks RAIS combines nicely with standard training tricks for deep learning. With no change, we find RAIS works well with momentum [8, 9]. Incorporating data augmentation, dropout [10], or batch normalization [11] adds variance to the model's outputs and gradient norms. RAIS elegantly compensates for such inconsistency by learning a larger uncertainty set. Since the importance sampling distribution changes over time, we find it important to compute weighted batch statistics when using RAIS with batch normalization. That is, when computing normalization statistics during training, we weight contributions from each example by $(n p^{(t)}_i)^{-1}$.

Protecting against outliers In some cases—typically when the gain ratio is very large—we find $Q_{cd}(s^{(t)}_i, v^\star_i)$ can be quite small for most examples yet large for a small set of outliers. Typically we find RAIS does not require special treatment of such outliers. Even so, it is reasonable to protect against outliers, so that an example with extremely large $Q_{cd}(s^{(t)}_i, v^\star_i)$ cannot greatly increase the stochastic gradient's variance. To achieve this, we use gradient clipping, and RAIS provides a natural way of doing so. We define an "outlier" as any example for which $Q_{cd}(s^{(t)}_i, v^\star_i)$ exceeds a threshold $\tau$. For each outlier $i$, we temporarily scale $f_i$ during iteration $t$ until $Q_{cd}(s^{(t)}_i, \|\nabla f_i(w^{(t)})\|) = \tau$. In practice, we use $\tau = 100$; the fraction of outliers is often zero and rarely exceeds 0.1%.

Approximating per-example gradient norms To train the uncertainty set, RAIS computes $\|\nabla f_i(w^{(t)})\|$ for each example in each minibatch. Unfortunately, existing software tools do not provide efficient access to per-example gradient norms. Instead, libraries are optimized for aggregating gradients over minibatches. Thus, to make RAIS practical, we must approximate the gradient norms. We do so by replacing $\|\nabla f_i(w^{(t)})\|$ with the norm of only the loss layer's gradient (with respect to this layer's inputs). These values correlate strongly, since the loss layer begins the backpropagation chain for computing $\nabla f_i(w^{(t)})$. We show this empirically in Figure 2(left), and we include additional plots in Appendix E.1. We note this approximation may not work well for all models.

5 Relation to prior work

Prior strategies also consider importance sampling for speeding up deep learning. [3] proposes distributing the computation of sampling probabilities. In parallel with regular training, [4] trains a miniature neural network to predict importance values. [5] approximates importance values using additional forward passes. [12] and [13] apply importance sampling to deep reinforcement learning. With the exception of [5] (which requires considerable time to compute importance values), these prior algorithms are sensitive to errors in importance value estimates. For this reason, all require critical smoothing hyperparameters to converge. In contrast, RAIS elegantly compensates for approximation error by choosing a sampling distribution that is minimax optimal with respect to an uncertainty set. Since RAIS adaptively trains this uncertainty set, RAIS does not require hyperparameter tuning.

Researchers have also considered other ways to prioritize training examples for deep learning. [14] considers examples in order of increasing difficulty.
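Returning briefly to the gradient-norm approximation of §4.5: for a softmax cross-entropy loss, the gradient with respect to the loss layer's inputs has the closed form softmax(z) − one_hot(y), so the per-example norms cost essentially nothing on top of the forward pass. A minimal PyTorch sketch of this special case (an illustration of the general idea, not the authors' code):

```python
import torch
import torch.nn.functional as F

def loss_layer_grad_norms(logits, labels):
    """Per-example norm of the loss gradient w.r.t. the logits, used as a
    cheap stand-in for the full gradient norm ||grad f_i(w)||."""
    with torch.no_grad():
        grad = F.softmax(logits, dim=1)                  # d loss / d logits ...
        grad[torch.arange(len(labels)), labels] -= 1.0   # ... = softmax(z) - one_hot(y)
        return grad.norm(dim=1)
```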
1. What are the similarities between the proposed method and existing works in the field?
2. How does the reviewer assess the clarity and presentation of the paper?
3. What are the recent references of importance sampling that are missing in the paper?
4. Is the paper ready for publication considering its current state?
Review
At least in its purposes and goals, this work seems similar to Ö. D. Akyıldız, I. P. Marino, J. Miguez, "Adaptive noisy importance sampling for stochastic optimization," IEEE CAMSAP 2017. However, the point is that, in my opinion, the paper is very confusing. The presentation must be improved. It is even difficult to see where the importance sampling enters in Section 3.2. Secondly, several recent references on importance sampling are missing. Therefore, this paper is not ready for publication.
NIPS
1. What is the main contribution of the paper regarding optimal importance sampling weights? 2. What are the strengths and weaknesses of the proposed approach compared to existing methods? 3. Do you have any questions or suggestions regarding the experiments and their presentation? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor issues or typos that can be improved in the paper?
Review
Review The paper proposes a robust way to approximate optimal importance sampling weights to reduce the variance of stochastic gradients when training machine learning models. I think the idea (and implementation) is definitely valuable for the community and deserves a publication, but improving the experiments could make the paper much more useful and insightful. In particular, I believe that 1) the exact O-SGD baseline (sampling without approximations) should be included in the plots, at least for the small-scale experiments; 2) there should be not only epoch plots but also wall-time plots, to directly show the overhead of the sampling scheme; 3) there should be ablation studies for the factors you think make your method particularly effective (the ones listed in sec 6.2: large dataset size, lack of random data augmentation, etc.). Also, you mention that you use only the loss layer of the networks to compute the gradient norms, which I see as a major approximation. Consider using the method from [1] at least as a baseline if the overhead is too large to use it in practice. Again, the paper is really good, but I would love to see it getting even better before publication. Minor points: It would be nice to have a pointer to the appendix wherever a proof is deferred there. E.g., right now it looks like you consider the proof of (4) too trivial to write down, and I spent some time rederiving it only to find later that it is actually proven in the appendix. The introduction of the dataset D on line 109 is confusing because it appears out of nowhere and only gets explained slowly. Also, I read this part many times and am still not sure how you choose the objects for D. I don't get equation (6): how does it follow from (4), and what are g_{U1} and g_{R1}? Figure 6.2, referenced on line 228, doesn't seem to exist. Since you mention generalization as something you pay attention to (line 237), it would be nice to have test errors as well (in contrast to only having validation errors, since you used the validation set for hyperparameter tuning). BTW, did you tune the hyperparameters for SGD and for your method independently? [1] Goodfellow, Ian. "Efficient per-example gradient computations." arXiv preprint arXiv:1510.01799 (2015). UPD: thanks for addressing my concerns, I'm now even more willing to accept the paper.
NIPS
Title A Variational Edge Partition Model for Supervised Graph Representation Learning Abstract Graph neural networks (GNNs), which propagate the node features through the edges and learn how to transform the aggregated features under label supervision, have achieved great success in supervised feature extraction for both node-level and graph-level classification tasks. However, GNNs typically treat the graph structure as given and ignore how the edges are formed. This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities, each of which contributes to the edges via a logical OR mechanism. Based on this generative model, we partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs. A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN-based predictor that combines community-specific GNNs for the end classification task. Extensive evaluations on real-world graph datasets have verified the effectiveness of the proposed method in learning discriminative representations for both node-level and graph-level classification tasks. ∗Equal contribution. †Corresponding authors. Code available at https://github.com/YH-UtMSB/VEPM. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1 Introduction Many real-world entities are bonded by relations, e.g., the users in a social network service are connected by online friendships, and the atoms in molecules are held together by chemical bonds. Representing the entities by nodes and the relations by edges, a set of interconnected entities naturally forms a graph. Reasoning about these entities and their relations can be conducted with graph neural networks (GNNs) [1–4]. Instead of isolating the feature transformation for each individual entity, GNNs allow information to be exchanged between related entities. Models employing this strategy have achieved great success in a wide range of applications involving graph-structured data, such as classification [5, 6], link prediction [7], recommendation [8, 9], (overlapping) community detection [10–13], and drug discovery [14]. The essence of many graph-analytic tasks can be summarized as supervised graph representation learning, where the information on the graph is usually characterized by the node features, adjacency matrix, and node or graph labels. A supervised machine learning pipeline is often introduced to embed the nodes or graph under label supervision for a specific classification task. For most GNN-based methods, the information on nodes and edges is composited through neighborhood aggregation, i.e., updating the features of each node by combining the features of the surrounding nodes. During this process, the graph structure mostly serves as a given source to indicate the nodes' neighborship [15] or provide the weights for aggregation [5]. Treating the graph structure as given overlooks the latent structures that control the formation of the graph, which, however, could provide valuable information for graph representation learning. In other words, failing to consider the latent graph structures may limit the ultimate potential of this line of work. One type of latent structure at the hinge of the graph structure and node information is node communities.
Their connection to the graph can be explained under a latent-community-based graph generation process. For example, both the mixed-membership stochastic blockmodel (MMSB) [10] and the edge partition model (EPM) [11] explain the formation of edges by node interactions over overlapping latent communities. Consider person u in social network G: some of her social connections may be established through interactions with colleagues as a machine learning researcher, some may originate from her interactions with co-members of the same hobby club, and different types of interactions could overlap to strengthen certain social connections between the nodes. The node community structure also sets the aspects of node information. In the example of social network G, where person u is affiliated with both a research group and a few hobby groups, due to the diverse nature of these different communities, it is likely that she exhibits different characteristics when interacting with co-members of different communities. Namely, the information on u for the research group may be related to her research expertise, and the information on u for the hobby clubs may be related to her hobbies. Therefore, an ideal solution is to learn a different aspect of node properties for each community that a node is affiliated with, and to represent the overall node information as an aggregation of all community-specific node properties. Based on this insight, we develop a variational edge partition model (VEPM), which is a generative graph representation learning framework. VEPM models how the edges and labels are generated from overlapping latent communities. Instead of hard-assigning a node to a single community, VEPM encodes each node into a K-dimensional vector of scores measuring the strength with which the node is affiliated with each of the K communities. From these scores, we compute the intensities of pairwise interactions within each community. Given that information, under the Bernoulli–Poisson link, the edges can be modeled by the logical OR of independently generated binary latent edges [11]. The generation of labels includes community-specific node representation learning and aggregation. A key premise of the first part is to detect the hidden structure specified for each community. In our model, this is achieved through edge partition, i.e., decomposing each edge into a summation of link strengths according to the intensities of community-specific interactions. The edge partition step isolates the link strengths accumulated through each community. With the partitioned edges, we learn K separate GNNs to produce community-specific node representations, then compose them into the overall node embeddings to generate the labels. Evaluation shows that the proposed framework achieves significant performance enhancements on various node and graph classification benchmarks. We summarize our main contributions as follows: • We introduce VEPM, which utilizes the idea of edge partitioning to extract overlapping latent community structures, which are used not only to enrich node attributes with node-community affiliation scores, but also to define community-specific node feature aggregations. • We formalize the training of VEPM in a variational inference framework, which is powered by GNNs in its latent community inference, representation generation, and label prediction. • We analyze how VEPM works and evaluate it over various real-world network datasets.
Empirical results show that the proposed VEPM outperforms many previous methods in various node- and graph-level classification tasks, especially with limited labels. 2 Preliminaries and Related Work Embedding nodes and graphs with GNNs. Given a node-attributed graph $\mathcal{G}$ with $N$ nodes, its information is generally expressed by a design matrix $X \in \mathbb{R}^{N \times F}$, whose rows represent node features of dimension $F$, and an adjacency matrix $A$ of shape $N \times N$. The nonzero values of $A$ indicate the weights on the corresponding edges. GNNs [1–5, 15, 16] are multi-layer parametric models that map $(X, A)$ to the embedding space. The update of the embedding of node $u$ at the $t$th GNN layer can be summarized as $h_u^{(t)} = \mathrm{AGG}\big(f_\theta(h_u^{(t-1)}), \{f_\theta(h_v^{(t-1)})\}_{v \in N(u)}, A_{u,:}\big)$ with $h_u^{(0)} = x_u$. Here $\mathrm{AGG}$ denotes the neighborhood aggregation function, $N(u)$ denotes the neighbors of $u$, and $f_\theta(\cdot)$ is a transformation function parameterized by $\theta$. In VEPM, we adopt the aggregation function introduced by Kipf and Welling [5], which can be expressed as $h_u^{(t)} = \sum_{v \in \{u\} \cup N(u)} \tilde{A}_{u,v} f_\theta(h_v^{(t-1)})$, where $\tilde{A}$ is the normalized adjacency matrix of $\mathcal{G}$ augmented with self-loops. When the task requires a graph-level representation, all node embeddings are summarized into a single embedding vector. This process is called graph pooling [17–19]. In this work, we focus on improving the overall performance from the GNN's perspective and adopt a simple graph pooling method proposed by Xu et al. [6]. In the sequel, we unify the expression of models consisting of GNN layers as $\mathrm{GNN}_\theta(X, A)$. Community-regularized representation learning. Communities are latent groups of nodes in a graph [20]. The simplest way to use community information to help a supervised learning task is to regularize it with a community-detection-related task. For instance, Hasanzadeh et al. [21] and Wang et al. [22] jointly train a deep stochastic blockmodel and a classification model; a similar approach can be found in Liu et al. [23], where the graph reconstruction regularizer is replaced by a modularity metric. Either way, the node embeddings are expected to reflect patterns of community structures and remain informative to the task at the same time. These methods generally treat node-community affiliation as node embeddings and expect them to contain sufficient information for the downstream task. However, the affiliation strengths of nodes to a community may oversimplify the information that such a community can provide for the downstream task. Unlike these methods, VEPM embeds node information on each community with a specific GNN, where the community structures are reflected by the partitioned adjacency matrices. Aggregating node features with these weighted adjacency matrices naturally incorporates community information into task-learning. Multi-relational data analysis. Leveraging heterogeneous relations has been extensively studied in the literature on mining Knowledge Graphs [24–26]. The data of these models is usually organized as a multigraph, which is related to our perspective of the latent node interactions that generate the observed graph, because the interactions in different communities are potentially heterogeneous and are permitted to coexist between a pair of nodes. This leads to a shared high-level idea: break down the original heterogeneous graph into homogeneous factors and analyze each factor with a customized model.
However, unlike those multigraphs where the edge types are explicitly annotated, the overlapping heterogeneous node interactions are not observable from the data that VEPM aims to handle, escalating the difficulty of our optimization problem. To deal with the latent variables, we formalize VEPM as a generative model and train it via variational inference. Graph factorization-based models. The existence of multiple hidden structures in graphs has also been noticed by some previous work [27, 28]. The architecture of these models starts with graph factorization, followed by a set of modules, each processing the information from one of the graph factors. In these methods, the only ground-truth information that trains the graph factorization comes from label supervision, making them rely on excessive label annotations, which might be hard to satisfy in practice and thus potentially hinders their application. On account of the potential heterogeneity of graph information across different communities, VEPM adopts a similar pipeline to these methods, but the factorization of the original graph is driven not only by the supervised task but also by a graph generative model, which greatly reduces our dependency on the amount of labeled data. 3 Variational Edge Partition Model Many analytical tasks on node-attributed graphs boil down to (semi-)supervised classification problems, i.e., given $(X, A)$ and observed node or graph labels $y_o$, predicting the unobserved labels $y_u$. In VEPM, we formalize the classification task as modeling the predictive distribution $p(y_u \,|\, X, A, y_o)$, and we construct our variational supervised learning pipeline with GNNs. 3.1 Generative task-dependent graph representation learning with latent communities To model $p(y_u \,|\, X, A, y_o)$, we introduce $Z \in \mathbb{R}_+^{N \times K}$, a latent non-negative node-community affiliation matrix, whose entry at the $n$th row and $k$th column is interpreted as the strength with which the $n$th node is affiliated with the $k$th community. Given $Z$, we specify a generative network as $A \sim p_\theta(A \,|\, Z), \quad y_o \sim p_\theta(y \,|\, X, A, Z) \quad (1)$ to describe how the observed edges $A$ and labels $y_o$ are generated. Due to the complexity brought by the deep architecture of the generative network, analytically inferring the posterior of $Z$ is impractical. We hence approximate the posterior $p_\theta(Z \,|\, A, y_o)$ with a variational distribution, $q_\phi(Z \,|\, A, X)$, modeled by a separate GNN-based inference network. The subscripts $\theta$ and $\phi$ denote the parameters of the generative network and inference network, respectively. We illustrate the architecture of VEPM in Figure 1. We approximate the posterior predictive distribution using the Monte Carlo method as $\hat{p}_\theta(y_u \,|\, X, A, y_o) \approx \frac{1}{S} \sum_{s=1}^{S} p_\theta(y_u \,|\, X, A, Z^{(s)}), \quad (2)$ where $Z^{(s)} \overset{iid}{\sim} q_\phi(Z \,|\, A, X)$ for $s = 1, \ldots, S$. In what follows, we outline the data generation process and the corresponding modules in the generative network, introduce the module in the inference network, and describe how to train both networks. 3.2 Edge generation and label prediction To model the distribution of the edges given $Z$, we adopt the generative process developed in the EPM [11], which explains the generation of the edges under the Bernoulli–Poisson link as $A_{i,j} = \mathbf{1}_{M_{i,j} \geq 1}, \; M_{i,j} \sim \mathrm{Poisson}\big(\sum_{k=1}^{K} \gamma_k Z_{i,k} Z_{j,k}\big)$. Here $\gamma_k$ is a positive community activation level indicator, which measures the member interaction frequency via community $k$, and in our practice is treated as a trainable parameter shared by both the inference network and the generative network. We interpret $\gamma_k Z_{i,k} Z_{j,k}$ as the interaction rate between nodes $i$ and $j$ via community $k$.
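To make this generative process concrete, the following is a minimal numpy sketch of the Bernoulli–Poisson link described above. All variable names are ours, and gamma and Z are drawn from fixed priors here purely for illustration; in VEPM they are learned and inferred, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 3                              # nodes and communities
Z = rng.gamma(1.0, 1.0, size=(N, K))     # node-community affiliations, Z_ik ~ Gamma
gamma = rng.gamma(1.0, 1.0, size=K)      # community activation levels

rate = (Z * gamma) @ Z.T                 # rate[i, j] = sum_k gamma_k * Z_ik * Z_jk
M = rng.poisson(rate)                    # latent interaction counts M_ij
A = (M >= 1).astype(int)                 # A_ij = 1 iff at least one interaction occurs
np.fill_diagonal(A, 0)                   # drop self-loops (an assumption of this sketch);
                                         # for an undirected graph one would also symmetrize

# Marginalizing M gives the closed-form edge probability used in the text:
# P(A_ij = 1 | Z) = 1 - exp(-sum_k gamma_k * Z_ik * Z_jk)
p_edge = 1.0 - np.exp(-rate)
```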
The EPM has an alternative representation that partitions each edge into the logical disjunction (i.e., logical OR) of $K$ latent binary edges [29], expressed as $A_{i,j} = \vee_{k=1}^{K} A_{i,j,k}, \; A_{i,j,k} \sim \mathrm{Bernoulli}(p_{ijk})$, where $p_{ijk} := 1 - e^{-\gamma_k Z_{i,k} Z_{j,k}}$. Thus nodes $i$ and $j$ are connected as long as their interaction forms an edge in at least one community. In other words, they are disconnected only if their interactions in all communities fail to generate any edges. Under the EPM, we have the conditional probability $p_\theta(A_{i,j} = 1 \,|\, Z) = 1 - e^{-\sum_{k=1}^{K} \gamma_k Z_{i,k} Z_{j,k}}$. To complete the model, we set the prior distribution of the node-community affiliation matrix as $Z \sim \prod_{i=1}^{N} \prod_{k=1}^{K} \mathrm{Gamma}(Z_{i,k}; \alpha, \beta)$. Given $A$ and $Z$, we now describe the predictive process of the labels. The node-community affiliation impacts this process from two aspects: for each node, the corresponding row of $Z$ serves as side information that can enrich the node attributes; for each pair of connected nodes, it can be used to derive the node interaction rates via each community, i.e., $\{\gamma_k Z_{i,k} Z_{j,k}\}_{k=1}^{K}$, which are further used to partition the edges into $K$ communities, carried out by the edge partitioner: Edge partitioner. The edge partitioner takes the edges $A$ and the node-community affiliation matrix $Z$ as inputs and returns $K$ positive-weighted edge sets $\{A^{(1)}, A^{(2)}, \cdots, A^{(K)}\}$, s.t. $\sum_{k=1}^{K} A^{(k)} = A$. The edge partition function is $A^{(k)}_{i,j} = A_{i,j} \cdot \frac{e^{(\gamma_k Z_{i,k} Z_{j,k})/\tau}}{\sum_{k'} e^{(\gamma_{k'} Z_{i,k'} Z_{j,k'})/\tau}}, \; k \in \{1, \ldots, K\}, \; i, j \in \{1, \ldots, N\}, \quad (3)$ where $\tau$ is a "temperature" that controls the sharpness of the partition. The effect of the edge partitioner can be considered a soft assignment of each edge to the different communities. Setting the temperature $\tau$ to be low drives the soft assignment towards a hard assignment, and consequently, when aggregating node features with $\{A^{(1)}, A^{(2)}, \cdots, A^{(K)}\}$, the edge partitioner concentrates the information exchange between any two connected nodes in one community. In other words, the mutual influence between two nodes, measured by the edge weight, is relatively high in the community that contributes most of the interactions between the pair, while being relatively weak in the other communities (a minimal code sketch of this partition step is given below). In our implementation, we further generalize Equation (3) to a metacommunity-based edge partition, specifically, by replacing $\gamma_k Z_{i,k} Z_{j,k}$ with $Z_{i,k}\,\mathrm{diag}(\gamma_k)\,Z_{j,k}'$. In the new expression, $Z_{i,k}$ denotes the $k$th segment of node $i$'s community-affiliation encoding, and $\gamma_k$ is a vector of the activation levels for the communities in the $k$th metacommunity. This generalization allows the total number of communities to be greater than $K$, enhancing implementation flexibility at a minor interpretability cost. The edge partitioner provides a soft separation of the neighborhood aggregation routes for each community, which is passed to the community-GNN bank: Community-GNN bank. This module takes $X^* := X \,\|\, Z$ as input, where the operator $\|$ denotes concatenation, and learns community-specific node embeddings with $K$ separate GNNs, i.e., the outputs are $g_\theta^{(1)}(X^*), g_\theta^{(2)}(X^*), \cdots, g_\theta^{(K)}(X^*)$, where $g_\theta^{(k)}(\cdot) := \mathrm{GNN}_\theta(\cdot, A^{(k)})$, $k \in \{1, \ldots, K\}$. The intention of the edge partitioner and community-GNN bank is to capture the node information specific to each community.
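As promised above, here is a minimal numpy sketch of the edge partitioner in Equation (3); the function name and tensor layout are ours, not from the paper, and the metacommunity generalization is omitted for brevity.

```python
import numpy as np

def partition_edges(A, Z, gamma, tau=1.0):
    """Softly split adjacency A into K community-specific weighted
    adjacencies A^(k) that sum back to A, following Equation (3)."""
    # scores[i, j, k] = gamma_k * Z_ik * Z_jk, the community-k interaction rate
    scores = np.einsum("k,ik,jk->ijk", gamma, Z, Z)
    # temperature-controlled softmax along the community axis
    # (subtracting the max is for numerical stability; softmax is unchanged)
    w = np.exp((scores - scores.max(axis=-1, keepdims=True)) / tau)
    w /= w.sum(axis=-1, keepdims=True)
    return A[..., None] * w              # shape (N, N, K); A^(k) = result[..., k]

# A low tau sharpens the partition toward a hard assignment of edges to communities:
# parts = partition_edges(A, Z, gamma, tau=0.1)
# np.allclose(parts.sum(axis=-1), A)    # the K partitioned graphs sum back to A
```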
For instance, in a social network, such community-specific information could be the different social roles people take on when affiliated with different social groups, e.g., one may be a former computer science major in an alumni group, a research scientist in a work group, and an amateur chess player in a hobby club group. As shown in Section 4.3, the node embeddings produced by the community-GNN bank are separable by communities. We finally introduce the representation composer, which embeds a combination of the information provided by each community into a global representation via a GNN: Representation composer. Let $H^{(k)} := g_\theta^{(k)}(X^*)$ denote the node representations learned from the $k$th community, where $k = 1, \ldots, K$, and let $f(\cdot)$ denote the representation composer, whose functionality is to project a composite of community-specific node representations onto one representation matrix, i.e., $H_V = f(H^{(1)}, H^{(2)}, \cdots, H^{(K)}) := \mathrm{GNN}_\theta(\|_{k=1}^{K} H^{(k)}, A)$. For graph-level tasks, we further pool the node embeddings into a single vector representation $h_\mathcal{G}$, as in Xu et al. [6]. Taking a softmax over the feature dimension of $H_V$ or $h_\mathcal{G}$ gives the predicted label probabilities, from which we are able to classify the unlabeled objects. Cascading the edge partitioner and community-GNN bank with the representation composer yields the probability $p_\theta(y \,|\, X, A, Z)$. 3.3 Variational latent community inference We use a community encoder as the module in the inference network to approximate the posterior distribution $p_\theta(Z \,|\, A, y)$ and provide the generative network with the critical latent variable $Z$: Community encoder. The community encoder models the variational posterior $q_\phi(Z \,|\, A, X)$ by a Weibull distribution with shape $\mathbf{K}$ and scale $\mathbf{\Lambda}$, whose parameters are learned by a GNN as $\mathbf{K} \,\|\, \mathbf{\Lambda} = \mathrm{GNN}_\phi(X, A)$. A Weibull random sample from $q_\phi(Z \,|\, A, X)$ can be created through the inverse-CDF transformation of a uniform random variable, given as follows: $Z = \mathbf{\Lambda} \odot \big(-\log(1 - U)\big)^{(1 \oslash \mathbf{K})}, \; U_{i,k} \overset{iid}{\sim} \mathcal{U}(0, 1), \; \forall (i, k) \in \{1, \ldots, N\} \times \{1, \ldots, K\}, \quad (4)$ where $\odot$ and $\oslash$ denote element-wise multiplication and division. 3.4 The overall training algorithm and complexity analysis We train VEPM by optimizing the evidence lower bound (ELBO), decomposed into three terms, as $\mathcal{L} = \mathcal{L}_{\mathrm{task}} + \mathcal{L}_{\mathrm{egen}} + \mathcal{L}_{\mathrm{KL}}, \quad (5)$ where $\mathcal{L}_{\mathrm{task}} = \mathbb{E}_{q_\phi(Z \,|\, A, X)} \log p_\theta(y_o \,|\, A, Z, X)$, $\mathcal{L}_{\mathrm{egen}} = \mathbb{E}_{q_\phi(Z \,|\, A, X)} \log p_\theta(A \,|\, Z)$, and $\mathcal{L}_{\mathrm{KL}} = -D_{\mathrm{KL}}\big(q_\phi(Z \,|\, A, X) \,\|\, p(Z)\big)$. These three terms correspond to the classification task, edge generation, and KL-regularization, respectively. Note that our specifications of $Z$'s prior and variational posterior yield an analytical expression for $\mathcal{L}_{\mathrm{KL}}$, as described in detail in Appendix B. Recall that $N$, $M$, $K$ denote the numbers of nodes, edges, and communities (or metacommunities) in the graph; $F$ denotes the feature dimension; and $L_1$, $L_2$, $L_3$ denote the numbers of layers in the community encoder, the community-GNN bank, and the representation composer, respectively. In VEPM, we limit the hidden dimension of each community-GNN to $1/K$ of what is commonly used among the baselines, hence the time complexity of training VEPM is $O((L_1 + L_2 + L_3)MF + (L_1 + L_2/K + L_3)NF^2 + N^2F)$. (For simplicity of expression, the scale of $F$ is treated as constant, which is ubiquitous in practice.) For a sparse graph where $N^2 \gg M$, the computational overhead can be reduced to $O(M)$ if graph reconstruction is accelerated via subsampling the nodes to $O(\sqrt{M})$, as in Salha et al. [30]. Effects and implications of adopting this type of acceleration algorithm are discussed in Appendix A.
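Before turning to space complexity, here is a minimal numpy sketch of the reparameterized Weibull draw in Equation (4); in VEPM the shape and scale matrices come from the encoder GNN, whereas here they are random placeholders of our own.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 3
shape = rng.uniform(0.5, 2.0, size=(N, K))   # Weibull shapes (encoder output in VEPM)
scale = rng.uniform(0.5, 2.0, size=(N, K))   # Weibull scales (encoder output in VEPM)

# Inverse-CDF transform of Equation (4): Z = scale * (-log(1 - U))^(1 / shape).
# log1p(-U) computes log(1 - U) accurately for U close to 0.
U = rng.uniform(0.0, 1.0, size=(N, K))
Z = scale * (-np.log1p(-U)) ** (1.0 / shape)

# Being a deterministic map of U, this draw keeps gradients flowing to the
# encoder parameters, which is what makes the ELBO in Equation (5) trainable.
```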
The space complexity of VEPM is $O((L_1 + L_2 + L_3)NF + KM + (L_1 + L_2/K + L_3)F^2)$, among which $O((L_1 + L_2/K + L_3)F^2)$ is contributed by the model parameters. It is worth pointing out that the memory cost of graph reconstruction is manageable by computing the dense matrix multiplication block-wise with a fixed maximum block size. In all aspects of complexity, VEPM (with acceleration) is comparable to GATs [15, 31] and models involving graph factorization [27, 28]. 4 Empirical Evaluation 4.1 Node & graph classification Datasets & experimental settings. For node classification, we consider three citation networks (Cora, Citeseer, and Pubmed) and a Wikipedia-based online article network (WikiCS) [32], which provide either bag-of-words document representations or average word embeddings as node features, and (undirected) citations or hyperlinks as edges. For graph classification, we consider four bioinformatics datasets (MUTAG, PTC, NCI1, PROTEINS) and four social network datasets (IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-MULTI). The input node features are crafted in the same way as in Xu et al. [6]. Most of the baselines are compared following the 10-fold cross-validation-based evaluation protocol proposed by Xu et al. [6]. For the graph classification task, we also evaluate our model following Zhang and Chen [33], who conduct a more rigorous train-validation-test protocol. More details about the experiments are elaborated in Appendix C.2. Node classification. We use classification accuracy as the evaluation metric for node classification. Table 1 reports the average performance of VEPM (± standard error) against related baselines that are categorized into three groups. The first group consists of GCN [5] and its variant GCN-64, which expands the hidden dimension from 16 to 64. VEPM outperforms the first group by a significant margin. The explanation is twofold: (i) VEPM augments the node attributes with community information carried by the inferred node-community affiliations, and (ii) VEPM learns community-specific node embeddings. The second group includes SIG-VAE [21] and WGCAE [22], both of which learn node embeddings from graph generative models jointly optimized with a supervised loss. The performance gain that VEPM obtains could be attributed to our unique label predictive process, which not only appends extra community information to the node attributes but also injects structural patterns of communities into neighborhood aggregation. The third group [15, 31, 27] focuses on learning node embeddings leveraging heterogeneous hidden relations. When fitting the hidden relations via the attention mechanism, these methods only use information from the node features through label supervision, whereas our approach also takes the observed graph structure into account via a graph generative model. The additional information from the graph utilized by VEPM could explain the improvement VEPM achieves over the third group. Graph classification. For the first protocol, we compare VEPM with classical graph classification baselines [34–36], generic-GNN-based models [15, 31, 6], and GNNs with relation-based or task-driven graph factorization [25, 26, 28]. Results in Table 2 show that VEPM achieves the best graph classification performance on 5 out of 8 benchmarks, and the second-best performance on the other 3 benchmarks, including NCI1, where VEPM outperforms all the other GNN-based models.
For the second protocol, aside from a traditional method, WL [34], and a generic-GNN-based method, DGCNN [37], we compare VEPM with two GNNs [38, 33] that adapt capsule neural networks [39] to graph-structured data, aiming to learn different aspects of graph properties via dynamic routing [40]. The results in Table 3 show that VEPM outperforms these GNN-based methods on most of the benchmarks. This indicates that the aspects of graph information, as defined with communities and learned via aggregating node features with the partitioned graphs, are more pertinent to the end tasks. Classification with reduced labels. The gain of incorporating the graph generation process into VEPM is that the observed graph structure also provides information for hidden community detection and edge partition, which is beneficial when the amount of labeled data is not sufficient for both decomposing the graph into hidden factors and semi-supervised task-learning. To illustrate this point, based on the evaluation protocol of Kipf and Welling [5] for node classification and Xu et al. [6] for graph classification, we re-evaluate the performance of VEPM under reduced training labels on Cora (for node classification) and MUTAG (for graph classification), along with generic-GNN-based methods [5, 15, 6] and decomposition-based methods [27, 28]. The results are shown in Figure 2. The performance of all models is negatively impacted by the reduction in training labels; DisenGCN [27] and FactorGCN [28], which decompose graphs solely based on label supervision, suffer the strongest performance decline. Although VEPM also performs graph factorization with limited labels, it consistently outperforms the other methods under all settings, and its superiority becomes even more evident as the amount of annotated data further decreases. We consider this enhanced robustness under sparsely labeled data a crucial feature for real-world graphs. 4.2 Ablation Studies Different edge partition schemes. In this part, we study how much a meaningful edge partition benefits the task. Besides the partition obtained from training the regular VEPM, we test the performance of VEPM under two other edge partition schemes: even partition and random partition. Even partition means all edge weights are $1/K$. For random partition, we sample the unnormalized edge weights from $\mathcal{U}(0, 100)$, then normalize them along the community dimension with a softmax; the random edge weights are kept fixed during training to simulate an undesirable convergence state of the edge partition (a sketch of this construction appears below). The results are recorded in Table 4. Generally speaking, the performance of VEPM with even edge partition is comparable to that of GIN, our base model; when we substitute the random partition for the even partition, the model performs worse than the base model. These ablation results suggest that VEPM benefits from enhancing the quality of the edge partitions. Different representation composer. Next, we use the partition obtained from a regular VEPM to represent partitions that are meaningful for the task, and a random partition to represent "meaningless" partitions. For each partition scheme, we contrast the performance of VEPM between two designs of the representation composer: (i) a GNN-based design, represented by the current design, and (ii) a parsimonious design, represented by a fully-connected (FC) layer. The results are in Table 5.
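Both of these ablations rely on the fixed random partition described above; here is a minimal numpy sketch of how such a baseline partition can be constructed (the function name and seeding are ours).

```python
import numpy as np

def random_edge_partition(A, K, seed=0):
    """Fixed random edge partition used as an ablation baseline: unnormalized
    weights drawn from U(0, 100), softmax-normalized over the K communities,
    then frozen for the rest of training."""
    rng = np.random.default_rng(seed)
    raw = rng.uniform(0.0, 100.0, size=A.shape + (K,))
    w = np.exp(raw - raw.max(axis=-1, keepdims=True))   # numerically stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return A[..., None] * w                             # still sums to A over k
```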
When the edge partition is meaningful, i.e., relevant to the task, the overall performance with a simpler representation composer is generally slightly worse than with the current design, but still better than most of the baselines. However, if the partition is irrelevant to the task, the GNN-based representation composer is what keeps the task from failing, since the graph structure used for its neighborhood aggregation is still informative to the task. Thus the current design slightly benefits VEPM in the general case and protects the model's performance in the worst-case scenario. Different edge partitioning temperatures. We obtain meaningful and random partitions in the same way as in the previous study. Here we focus on the effect of different selections of $\tau$, the softmax temperature parameter. The value of $\tau$ controls the sharpness of the partitioned edge weights: a small $\tau$ drives the partitioned weights for the same edge towards a one-hot vector, whereas a large $\tau$ eliminates the divergence among the edge weights and makes the edge partition no different from an even partition. The results of this experiment are recorded in Table 6, the upper half of which is obtained from a meaningful edge partition and the lower half from a random edge partition. We can infer from these results that when the basis for performing the edge partition, namely the inferred latent node interactions, is relevant to the task, a sharp edge partition that highlights the differences among the communities may be helpful for the task. Otherwise, a smooth edge partition might be more favorable. 4.3 Qualitative analysis Visualizing community structures. To show that, through partitioning the edges, VEPM identifies each latent community in the original graph, in Figure 3 we plot the adjacency matrices of a 200-node subgraph of Cora, before and after edge partition. The subgraph is created via breadth-first-search [41] node selection to ensure connectivity. We sort the node sample $S$ to present a clearer view of the community structures. Specifically, we prepare $K$ buckets; for node $u$, we compute the total interaction it engages in under metacommunity $k$ by $\mu_{u,k} := \sum_{v \in S} Z_{u,k}\,\mathrm{diag}(\gamma_k)\,Z_{v,k}'$, and assign it to bucket $k'$, where $k' = \arg\max_k \mu_{u,k}$. We first sort the buckets in descending order of their counts of assigned nodes, then sort the nodes within bucket $k$, $k \in \{1, \ldots, K\}$, in descending order of $\mu_{u,k}$. The $Z$ used for sorting is sampled at the end of training. The community structures that VEPM detects can be found in Figure 3a as the blocks on the main diagonal. The fact that most of the bright spots (i.e., edges) are located in these on-diagonal blocks reflects dense node connections internal to each community. Sparsely distributed off-diagonal spots provide evidence of overlapping membership between the communities, which is also taken into account by our graph generative model. Figures 3b to 3i show that the edge partitioner has done a sound job of separating the latent communities. Due to the limited size of the subgraph, some small metacommunities may not contain any large-weighted edges between the nodes in this subgraph, which accounts for the apparent emptiness of Figures 3h and 3i. This artifact disappears when we replace the subgraph with the full graph of Cora, as visualized in Figures 6 to 14 in Appendix G. Visualizing latent representations.
The previous visualization experiment verifies the following propositions: (i) the identified hidden structures are latent communities, and (ii) the partitioned graphs are different from each other. We now study how they benefit representation learning. Cora and MUTAG are selected as the representatives of the node and graph classification benchmarks. For the MUTAG dataset, we remove graphs that contain node categories with fewer than 5 instances (fewer than 10 out of a total of 188) and randomly sample 10 graphs for the visualization experiments. We first visualize the $Z$ obtained at the end of the unsupervised pretraining stage (Figures 4a and 4b), where the node-community affiliation vectors are projected onto 2-D space via t-SNE [42]. Proposition (i) ensures that the node information carried by $Z$ is about communities. We color-code the scatter points by node labels; both Figures 4a and 4b exhibit a strong correlation between the spatial clusters and the colors, which indicates that even without label supervision, the node information provided by $Z$ has discriminative power for classifying nodes. We then visualize the community-specific node embeddings obtained at the end of the supervised finetuning stage (Figures 4c and 4d). Similarly, t-SNE is adopted to reduce the dimensionality of the obtained node embeddings. This time we color-code the scatter points by the latent metacommunity they correspond to. Both Figures 4c and 4d exhibit clear boundaries between the colored clusters, showing that the model is able to extract different information from multiple communities, which enhances the overall expressiveness of the learned node or graph representations and potentially leads to better model performance on the downstream tasks. 5 Exploratory Studies Task-relevant communities. We assess the relevance of the learned communities to the task with an experiment on the Cora dataset. To see the statistical dependency between communities and the task, we first hard-assign each node to the community with the highest affiliation strength, then compute the normalized mutual information (NMI) between the hard-partitioned communities and the node labels. NMI is a score between 0 and 1, with higher values indicating stronger statistical dependency between two random variables. In this experiment, we obtain an NMI of 0.316 when the communities are learned solely by unsupervised pretraining; this value increases to 0.322 after supervised finetuning. These results show that the initial communities obtained from unsupervised pretraining are meaningful to the task, and that the relevance between the inferred communities and the task is enhanced by supervised finetuning. Diverse community-specific information. To see the effect of the information provided by different communities, we first train VEPM until convergence, then use the node representations learned by each community GNN to train an SVM, and visualize the averaged 10-fold cross-validation results as confusion matrices. From the results shown in Figure 5, we can conclude that most of the communities provide task-related information, as the diagonal cells in most of the confusion matrices have darker shades than the off-diagonal cells. Beyond that, the information extracted from different communities is complementary.
For example, in the second column of Figure 5, information from the upper community separates classes 3 and 4 well but mixes classes 4 and 5; information from the lower community, on the other hand, separates classes 4 and 5 but is worse at separating classes 3 and 4. The working mechanism of VEPM. In summary, the communities learned by VEPM have the following properties: (i) the communities are relevant to the task; (ii) the information provided by different communities is complementary, so the overall amount of information for the task accumulates over the communities. Both properties are intuitively helpful for the downstream task. 6 Conclusion Moving beyond treating the graph adjacency matrix as given, we develop variational edge partition models (VEPMs) to extract overlapping node communities and perform community-specific node feature aggregations. Specifically, we first utilize a GNN-based inference network to obtain node-community affiliation strengths, with which we augment the node attributes and partition the edges according to the intensities of node interactions with respect to each community. We learn GNN-based node embeddings for each community by aggregating node features with the corresponding partitioned graph, and aggregate all community-specific node embeddings for the downstream tasks. Extensive qualitative and quantitative experiments on both node-level and graph-level classification tasks illustrate the working mechanism and demonstrate the efficacy of VEPMs in supervised graph representation learning. Acknowledgments Y. He and M. Zhou acknowledge the support of NSF IIS 1812699 and 2212418, and the Texas Advanced Computing Center (TACC) for providing HPC resources that have contributed to the research results reported in this paper. This work was also supported in part by the National Natural Science Foundation of China under Grant U21B2006; in part by the Shaanxi Youth Innovation Team Project; in part by the 111 Project under Grant B18039; and in part by the Fundamental Research Funds for the Central Universities QTZX22160.
1. What is the focus and contribution of the paper on generative graph learning? 2. What are the strengths of the proposed approach, particularly in its originality and view of edges in GNNs? 3. Do you have any concerns or suggestions regarding the paper's organization, such as the placement of the "Related Work" section? 4. Are there any limitations or areas that could be improved in the paper, such as including more theoretical analysis or releasing code for reproducibility?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper introduces a method called the "variational edge partition model" (VEPM), a generative graph learning framework. VEPM views edges and labels as arising from overlapping community structures. The authors formalize the training of VEPM and perform experiments evaluating its performance. Strengths And Weaknesses Originality: The work has high originality, as it views the edges of GNNs very differently from the existing K-hop GNN idea. Quality: The work is of good quality. Clarity: The paper is clearly written. I have pointed out a few minor points below, though. Significance: The contribution seems significant, as indicated above in my point about the work's originality. Minor points: Line 58, put a comma after "From these scores". Line 101, end the sentence at "impractical". Line 171, typo on "community". Questions Q1. The location of the "Related Work" section is a bit strange. I would actually move it before "Variational Edge Partition Model". Is there a particular reason why you did it this way? Q2. A bit of theoretical analysis of why the current method works would make the paper more complete. Any plans to include some analysis? Q3. Why not release the code to make your work reproducible and your claims stronger? Limitations N/A
NIPS
Title A Variational Edge Partition Model for Supervised Graph Representation Learning Abstract Graph neural networks (GNNs), which propagate the node features through the edges and learn how to transform the aggregated features under label supervision, have achieved great success in supervised feature extraction for both node-level and graph-level classification tasks. However, GNNs typically treat the graph structure as given and ignore how the edges are formed. This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities, each of which contributes to the edges via a logical OR mechanism. Based on this generative model, we partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs. A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN-based predictor that combines community-specific GNNs for the end classification task. Extensive evaluations on real-world graph datasets have verified the effectiveness of the proposed method in learning discriminative representations for both node-level and graph-level classification tasks. 1 Introduction Many real-world entities are bonded by relations, e.g., the users in a social network service are connected by online friendships, and the atoms in molecules are held together by chemical bonds. Representing the entities by nodes and relations by edges, a set of interconnected entities naturally forms a graph. Reasoning about these entities and their relations could be conducted under graph neural networks (GNNs) [1–4]. Instead of isolating the feature transformation for each individual entity, GNNs allow the information to be exchanged between related entities. Models employing this strategy have achieved great success in a wide range of applications involving graph-structured data, such as classification [5, 6], link prediction [7], recommendation [8, 9], (overlapping) community detection [10–13], and drug discovery [14]. The essence of many graph-analytic tasks could be summarized as supervised graph representation learning, where the information on the graph is usually characterized by the node features, adjacency matrix, and node or graph labels. A supervised machine learning pipeline is often introduced to embed the nodes or graph under label supervision for a specific classification task. For most of the GNN-based methods, the information on nodes and edges is composited through neighborhood aggregation, i.e., updating the features of each node by combining the features of the surrounding nodes. During this process, the graph structure mostly serves as a given source to indicate the nodes’ neighborship [15] or provide the weights for aggregation [5]. Treating the graph structure as given ∗Equal contribution. † Corresponding authors. Code available at https://github.com/YH-UtMSB/VEPM 36th Conference on Neural Information Processing Systems (NeurIPS 2022). overlooks the latent structures that control the formation of the graph, which, however, could provide valuable information for graph representation learning. In other words, failing to consider the latent graph structures may limit the ultimate potential of this line of work. One type of latent structure at the hinge of the graph structure and node information is node communities. 
Their connection to the graph can be explained under a latent-community-based graph generation process. For example, both the mixed-membership stochastic blockmodel (MMSB) [10] and edge partition model (EPM) [11] explain the formation of edges by node interactions over overlapping latent communities. Let us consider person u in social network G, some of her social connections may be established through interactions with colleagues as a machine learning researcher, some may originate from her interactions with co-members of the same hobby club, and different types of interactions could overlap to strengthen certain social connections between the nodes. The node community structure also sets the aspects of node information. In the example of social network G where person u is affiliated with both a research group and a few hobby groups, due to the diverse nature of these different communities, it is likely that she exhibits different characteristics when interacting with co-members of different communities. Namely, the information on u for the research group may be related to her research expertise, and the information on u for the hobby clubs may be related to her hobbies. Therefore, an ideal solution is to learn a different aspect of node properties for each community that it is affiliated with, and represent the overall node information as an aggregation of all community-specific node properties. Based on this insight, we develop a variational edge partition model (VEPM), which is a generative graph representation learning framework. VEPM models how the edges and labels are generated from overlapping latent communities. Instead of hard assigning a node to a single community, VEPM encodes each node into a K-dimensional vector of scores measuring the strength that the node is affiliated with each of the K communities. From these scores, we compute the intensities of pairwise interactions within each community. Given that information, under the Bernoulli-Poisson link, the edges could be modeled by the logical OR of independently generated binary latent edges [11]. The generation of labels includes community-specific node representation learning and aggregation. A key premise of the first part is to detect the hidden structure specified for each community. In our model, it is achieved through edge partition, i.e., decomposing each edge into a summation of link strengths according to the intensities of community-specific interactions. The edge partition step isolates the link strengths accumulated through each community. With the partitioned edges, we learn K separate GNNs to produce community-specific node representations, then compose them into the overall node embeddings to generate the labels. Evaluation shows that the proposed framework can achieve significant performance enhancement on various node and graph classification benchmarks. We summarize our main contributions as follows: • We introduce VEPM that utilizes the idea of edge partitioning to extract overlapping latent community structures, which are used to not only enrich node attributes with nodecommunity affiliation scores, but also define community-specific node feature aggregations. • We formalize the training of VEPM in a variational inference framework, which is powered by GNNs in its latent community inference, representation generation, and label prediction. • We analyze how VEPM works and evaluate it over various real-world network datasets. 
Empirical results show that the proposed VEPM outperforms many previous methods in various node- and graph-level classification tasks, especially with limited labels. 2 Preliminaries and Related Work Embedding nodes and graphs with GNNs. Given a node-attributed graph G with N nodes, its information is generally expressed by a design matrix X ∈ RN×F , whose rows represent node features of dimension F , and an adjacency matrix A of shape N × N . The nonzero values of A indicate the weights on the corresponding edges. GNNs [1–5, 15, 16] are multi-layer parametric models that maps (X,A) to the embedding space. The update of the embedding of node u at the tth GNN layer could be summarized as h(t)u = AGG(fθ(h (t−1) u ), {fθ(h(t−1)v )}v∈N(u),Au,:) and h(0)u = xu. Here AGG denotes the neighborhood aggregation function, N(u) denotes the neighbors of u, and fθ(·) is a transformation function parameterized with θ. In VEPM, we adopt the aggregation function introduced by Kipf and Welling [5], which can be expressed as h(t)u =∑ v∈{u}∪N(u) Ãu,vfθ(h (t−1) v ), where à is the normalized adjacency matrix of G augmented with self-loops. When the task requires a graph-level representation, all node embeddings would be summarized into a single embedding vector. This process is called graph pooling [17–19]. In this work, we focus on improving the overall performance from GNN’s perspective and adopt a simple graph pooling method proposed by Xu et al. [6]. In the sequel, we unify the expression of models consisting of GNN layers as GNNθ ( X,A ) . Community-regularized representation learning. Communities are latent groups of nodes in a graph [20]. The simplest way to use community information to help a supervised learning task is to regularize it with a community-detection related task. For instance, Hasanzadeh et al. [21] and Wang et al. [22] jointly train a deep stochastic blockmodel and a classification model; a similar approach could be found in Liu et al. [23], where the graph reconstruction regularizer is replaced by a modularity metric. Either way, the node embeddings are expected to reflect patterns of community structures and maintain informative to the task at the same time. These methods generally treat node-community affiliation as node embeddings and expect them to contain sufficient information for the downstream task. However, the affiliation strengths of nodes to a community may oversimplify the information that such a community can provide for the downstream task. Unlike these methods, VEPM embeds node information on each community with a specific GNN, where the community structures are reflected by the partitioned adjacency matrices. Aggregating node features with these weighted adjacency matrices naturally incorporates community information into task-learning. Multi-relational data analysis. Leveraging heterogeneous relations has been extensively studied in the literature of mining Knowledge Graphs [24–26]. The data of these models is usually organized by a multigraph, which is related to our perspective of the latent node interactions that generate the observed graph because the interactions in different communities are potentially heterogeneous, and they are permitted to coexist between a pair of nodes. This leads to a shared high-level idea to break down the original heterogeneous graph into homogeneous factors and analyze each factor with a customized model. 
However, unlike those multigraphs where the edge types are explicitly annotated, the overlapping heterogeneous node interactions are not observable from the data that VEPM aims to handle, escalating the difficulty of our optimization problem. To deal with the latent variables, we formalize VEPM into a generative model and train it via variational inference. Graph factorization-based models. The existence of multiple hidden structures in graphs is also noticed by some previous work [27, 28]. The architecture of these models starts with graph factorization, followed by a set of modules with each one processing the information from one of the graph factors. In these methods, the only ground-truth information that trains graph factorization is from label supervision, making them rely on excessive label annotations, which might be hard to suffice in practice and thus potentially hinder their application. On account of the potential heterogeneity of graph information on different communities, VEPM adopts a similar pipeline to these methods, but the factorization of the original graph is driven by not only the supervised task but also a graph generative model, which greatly reduces our dependency on the amount of labeled data. 3 Variational Edge Partition Model Many analytical tasks on node-attributed graphs could boil down to (semi-) supervised classification problems, i.e., given (X,A) and observed node or graph labels yo, predicting the unobserved labels yu. In VEPM, we formalize the classification task as modeling the predictive distribution as p(yu |X,A,yo), and construct our variational supervised learning pipeline with GNNs. 3.1 Generative task-dependent graph representation learning with latent communities To model p(yu |X,A,yo), we introduce Z ∈ RN×K+ , a latent non-negative node-community affiliation matrix, whose entry at the nth row and kth column is interpreted as the strength that the nth node is affiliated with the kth community. Given Z, we specify a generative network as A ∼ pθ(A |Z), yo ∼ pθ(y |X,A,Z) (1) to describe how the observed edges A and labels yo are generated. Due to the complexity brought by the deep architecture of the generative network, analytically inferring the posterior of Z is impractical. We hence approximate the posterior pθ(Z |A,yo) with a variational distribution, qϕ(Z |A,X), modeled by a separate GNN-based inference network. The subscripts θ and ϕ denote the parameters in the generative network and inference network, respectively. We illustrate the architecture of VEPM in Figure 1. We approximate the posterior predictive distribution using the Monte Carlo method as p̂θ(yu |X,A,yo) ≈ 1S ∑S s=1 pθ(yu |X,A,Z(s)), (2) where Z(s) iid∼ qϕ(Z |A,X) for s = 1, . . . , S. In what follows, we outline the data generation process and corresponding modules in the generative network, introduce the module in the inference network, and describe how to train both networks. 3.2 Edge generation and label prediction To model the distribution of the edges given Z, we adopt the generative process developed in EPM [11], which explains the generation of the edges under the Bernoulli-Poisson link as Ai,j = 1Mi,j≥1, Mi,j ∼ Poisson (∑K k=1 γkZi,kZj,k ) . Here γk is a positive community activation level indicator, which measures the member interaction frequency via community k, and in our practice is treated as a trainable parameter shared by both the inference network and generative network. We interpret γkZi,kZj,k as the interaction rate between nodes i and j via community k. 
The EPM has an alternative representation that partitions each edge into the logical disjunction (i.e., logical OR) of K latent binary edges [29], expressed as Ai,j = ∨Kk=1Ai,j,k, Ai,j,k ∼ Bernoulli(pijk), where pijk := 1− e−γkZi,kZj,k . Thus nodes i and j would be connected as long as their interaction forms an edge in at least one community. In other words, they are disconnected only if their interactions in all communities fail to generate any edges. Under the EPM, we have the conditional probability as pθ(Ai,j = 1 |Z) = 1− e− ∑K k=1 γkZi,kZj,k . To complete the model, we set the prior distribution of the node-community affiliation matrix as Z ∼ ∏N i=1 ∏K k=1 Gamma(Zi,k;α, β). Given A and Z, we now describe the predictive process of the labels. The node-community affiliation impacts this process from two aspects: for each node, the corresponding row of Z serves as side information that could enrich node attributes; for each pair of connected nodes, it could be used to derive the node interactions rate via each community, i.e., {γkZi,kZj,k}k=1,K , which are further used to partition the edges into K communities, carried out by the edge partitioner: Edge partitioner. The edge partitioner takes edges A and node-community affiliation matrix Z as inputs and returns K positive-weighted edges: {A(1),A(2), · · · ,A(K)}, s.t. ∑K k=1 A (k) = A. The edge partition function is A (k) i,j = Ai,j · e (γkZi,kZj,k)/τ∑ k′ e (γk′Zi,k′Zj,k′ )/τ , k ∈ {1, . . . ,K}, i, j ∈ {1, . . . , N}, (3) where τ is a “temperature” that controls the sharpness of partition. The effect of the edge partitioner could be considered as a soft assignment of each edge into different communities. Setting temperature τ to be low could drive the soft assignment towards a hard assignment, and consequently, when aggregating node features with {A(1),A(2), · · · ,A(K)}, the edge partitioner would concentrate the information exchange between any two connected nodes at one community. In other words, the mutual influence between two nodes, measured by the edge weight, is relatively high in the community that contributes the most of the interactions between the pair, while being relatively weak in the other communities. In our implementation, we further generalize Equation (3) to a metacommunity-based edge partition, specifically, by replacing γkZi,kZj,k with Zi,kdiag(γk)Z′j,k. In the new expression, Zi,k denotes the kth segment in node i’s community-affiliation encoding, γk is a vector of the activation levels for communities in the kth metacommunity. This generalization allows the total number of communities to be greater than K, enhancing the model implementation flexibility at a minor interpretability cost. The edge partitioner provides a soft separation of the neighborhood aggregation routes for each community, which is passed to the community-GNN bank: Community-GNN bank. The module takes X∗ := X ∥ Z as input, where the operator ∥ denotes “concatenation,” and learns community-specific node embeddings with K separate GNNs, i.e., the outputs are g(1)θ (X ∗), g (2) θ (X ∗), · · · , g(K)θ (X∗), where g (k) θ (·) := GNNθ ( ·,A(k) ) , k ∈ {1, . . . ,K}. The intention of the edge partitioner and community-GNN bank is to capture the node information specific to each community. 
The edge partitioner provides a soft separation of the neighborhood aggregation routes for each community, which is passed to the community-GNN bank:

Community-GNN bank. The module takes $\mathbf{X}^* := \mathbf{X} \,\|\, \mathbf{Z}$ as input, where the operator $\|$ denotes "concatenation," and learns community-specific node embeddings with $K$ separate GNNs, i.e., the outputs are $g^{(1)}_\theta(\mathbf{X}^*), g^{(2)}_\theta(\mathbf{X}^*), \cdots, g^{(K)}_\theta(\mathbf{X}^*)$, where $g^{(k)}_\theta(\cdot) := \mathrm{GNN}_\theta(\cdot, \mathbf{A}^{(k)})$, $k \in \{1, \ldots, K\}$. The intention of the edge partitioner and the community-GNN bank is to capture the node information specific to each community. For instance, in a social network, such information could be the different social roles that people play in the different social groups they are affiliated with, e.g., one may be a former computer science major in an alumni group, a research scientist in a work group, and an amateur chess player in a hobby club group. As shown in Section 4.3, the node embeddings produced by the community-GNN bank are separable by communities. We finally introduce the representation composer, which embeds a combination of the information provided by each community into a global representation via a GNN:

Representation composer. Let $\mathbf{H}^{(k)} := g^{(k)}_\theta(\mathbf{X}^*)$ denote the node representations learned from the $k$th community, where $k = 1, \ldots, K$, and let $f(\cdot)$ denote the representation composer, whose functionality is to project a composite of the community-specific node representations to one representation matrix, i.e.,
$$\mathbf{H}_{\mathcal{V}} = f\big(\mathbf{H}^{(1)}, \mathbf{H}^{(2)}, \cdots, \mathbf{H}^{(K)}\big) := \mathrm{GNN}_\theta\big(\|_{k=1}^{K} \mathbf{H}^{(k)}, \mathbf{A}\big).$$
For graph-level tasks, we further pool the node embeddings into a single vector representation $\mathbf{h}_{\mathcal{G}}$, as in Xu et al. [6]. Taking the softmax over the feature dimension of $\mathbf{H}_{\mathcal{V}}$ or $\mathbf{h}_{\mathcal{G}}$ gives the predicted label probabilities, from which we are able to classify the unlabeled objects. Cascading the edge partitioner and the community-GNN bank with the representation composer yields the probability $p_\theta(\mathbf{y} \,|\, \mathbf{X}, \mathbf{A}, \mathbf{Z})$.

3.3 Variational latent community inference

We use a community encoder as the module in the inference network to approximate the posterior distribution $p_\theta(\mathbf{Z} \,|\, \mathbf{A}, \mathbf{y})$ and provide the generative network with the critical latent variable $\mathbf{Z}$:

Community encoder. The community encoder models the variational posterior $q_\phi(\mathbf{Z} \,|\, \mathbf{A}, \mathbf{X})$ by a Weibull distribution with shape $\mathbf{K}$ and scale $\boldsymbol{\Lambda}$, whose parameters are learned by a GNN as $\mathbf{K} \,\|\, \boldsymbol{\Lambda} = \mathrm{GNN}_\phi(\mathbf{X}, \mathbf{A})$. A Weibull random sample from $q_\phi(\mathbf{Z} \,|\, \mathbf{A}, \mathbf{X})$ can be created through the inverse-CDF transformation of a uniform random variable, given as follows:
$$\mathbf{Z} = \boldsymbol{\Lambda} \odot \big(-\log(1 - \mathbf{U})\big)^{(1 \oslash \mathbf{K})}, \quad U_{i,k} \stackrel{iid}{\sim} \mathrm{Uniform}(0, 1), \;\; \forall (i, k) \in \{1, \ldots, N\} \times \{1, \ldots, K\}, \quad (4)$$
where $\odot$ and $\oslash$ denote element-wise multiplication and division.
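Equation (4) is a standard reparameterization trick for the Weibull distribution. The sketch below mirrors it in NumPy; the encoder outputs shape_K and scale_L are random stand-ins here, whereas in VEPM they come from GNN_φ(X, A).

```python
import numpy as np

def sample_weibull(shape_K, scale_L, rng):
    """Reparameterized Weibull sample via the inverse-CDF transform (Equation 4).

    shape_K, scale_L: (N, K) positive arrays produced by the inference GNN.
    All randomness lives in the uniform noise U, so gradients can flow
    through shape_K and scale_L in an autodiff framework.
    """
    U = rng.uniform(size=shape_K.shape)
    return scale_L * (-np.log1p(-U)) ** (1.0 / shape_K)

rng = np.random.default_rng(0)
N, K = 6, 3
# Hypothetical encoder outputs standing in for GNN_phi(X, A).
shape_K = rng.uniform(0.5, 2.0, size=(N, K))
scale_L = rng.uniform(0.5, 2.0, size=(N, K))
Z = sample_weibull(shape_K, scale_L, rng)  # one posterior sample of Z
```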
3.4 The overall training algorithm and complexity analysis

We train VEPM by optimizing the evidence lower bound (ELBO), decomposed into three terms, as
$$\mathcal{L} = \mathcal{L}_{\text{task}} + \mathcal{L}_{\text{egen}} + \mathcal{L}_{\text{KL}}, \quad (5)$$
where $\mathcal{L}_{\text{task}} = \mathbb{E}_{q_\phi(\mathbf{Z} \,|\, \mathbf{A}, \mathbf{X})} \log p_\theta(\mathbf{y}_o \,|\, \mathbf{A}, \mathbf{Z}, \mathbf{X})$, $\mathcal{L}_{\text{egen}} = \mathbb{E}_{q_\phi(\mathbf{Z} \,|\, \mathbf{A}, \mathbf{X})} \log p_\theta(\mathbf{A} \,|\, \mathbf{Z})$, and $\mathcal{L}_{\text{KL}} = -D_{\text{KL}}\big(q_\phi(\mathbf{Z} \,|\, \mathbf{A}, \mathbf{X}) \,\|\, p(\mathbf{Z})\big)$. These three terms correspond to the classification task, edge generation, and KL regularization, respectively. Note that our specifications of $\mathbf{Z}$'s prior and variational posterior yield an analytical expression for $\mathcal{L}_{\text{KL}}$, as described in detail in Appendix B.

Recall that $N$, $M$, and $K$ denote the numbers of nodes, edges, and communities (or metacommunities) in the graph; $F$ denotes the feature dimension; and $L_1$, $L_2$, $L_3$ denote the numbers of layers in the community encoder, in the community-GNN bank, and in the representation composer. In VEPM, we limit the hidden dimension of each community-GNN to $1/K$ of what is commonly used among the baselines; hence the time complexity of training VEPM is $\mathcal{O}((L_1 + L_2 + L_3)MF + (L_1 + L_2/K + L_3)NF^2 + N^2F)$. For a sparse graph where $N^2 \gg M$, the computational overhead can be reduced to $\mathcal{O}(M)$ (for simplicity of expression, the scale of $F$ is treated as constant, which is ubiquitous in practice) if graph reconstruction is accelerated by subsampling the nodes to $\mathcal{O}(\sqrt{M})$, as in Salha et al. [30]. The effects and implications of adopting this type of acceleration algorithm are discussed in Appendix A. The space complexity of VEPM is $\mathcal{O}((L_1 + L_2 + L_3)NF + KM + (L_1 + L_2/K + L_3)F^2)$, among which $\mathcal{O}((L_1 + L_2/K + L_3)F^2)$ is contributed by the model parameters. It is worth noting that the memory cost of graph reconstruction is manageable by computing the dense matrix multiplication block-wise with a fixed maximum block size. In all aspects of complexity, VEPM (with acceleration) is comparable to GATs [15, 31] and models involving graph factorization [27, 28].

4 Empirical Evaluation

4.1 Node & graph classification

Datasets & experimental settings. For node classification, we consider three citation networks (Cora, Citeseer, and Pubmed) and a Wikipedia-based online article network (WikiCS) [32], which provide either bag-of-words document representations or average word embeddings as node features, and (undirected) citations or hyperlinks as edges. For graph classification, we consider four bioinformatics datasets (MUTAG, PTC, NCI1, PROTEINS) and four social network datasets (IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-MULTI). The input node features are crafted in the same way as in Xu et al. [6]. Most of the baselines are compared following the 10-fold cross-validation-based evaluation protocol proposed by Xu et al. [6]. For the graph classification task, we also evaluate our model following Zhang and Chen [33], which conducts a more rigorous train-validation-test protocol. More details about the experiments are elaborated in Appendix C.2.

Node classification. We use classification accuracy as the evaluation metric for node classification. Table 1 reports the average performance of VEPM (± standard error) against related baselines that are categorized into three groups. The first group consists of GCN [5] and its variant GCN-64, which expands the hidden dimension from 16 to 64. VEPM outperforms the first group by a significant margin. The explanation is twofold: (i) VEPM augments the node attributes with community information carried by the inferred node-community affiliations, and (ii) VEPM learns community-specific node embeddings. The second group includes SIG-VAE [21] and WGCAE [22], both of which learn node embeddings from graph generative models jointly optimized with a supervised loss. The performance gain that VEPM obtains could be attributed to our unique label predictive process, which not only appends extra community information to the node attributes but also injects structural patterns of the communities into neighborhood aggregation. The third group [15, 31, 27] is focused on learning node embeddings leveraging heterogeneous hidden relations. When fitting the hidden relations via the attention mechanism, these methods only use information from the node features through label supervision, whereas our approach also takes the observed graph structure into account via a graph generative model. The additional information from the graph utilized by VEPM could explain the enhancement achieved by VEPM over the third group.

Graph classification. For the first protocol, we compare VEPM with classical graph classification baselines [34–36], generic-GNN-based models [15, 31, 6], and GNNs with relation-based or task-driven graph factorization [25, 26, 28]. The results in Table 2 show that VEPM achieves the best graph classification performance on 5 out of 8 benchmarks, and the second-best performance on the other 3 benchmarks, including NCI1, where VEPM outperforms all the other GNN-based models.
For the second protocol, aside from a traditional method, WL [34], and a generic-GNN-based method, DGCNN [37], we compare VEPM with two GNNs [38, 33] that adapt capsule neural networks [39] to graph-structured data and aim to learn different aspects of graph properties via dynamic routing [40]. The results in Table 3 show that VEPM outperforms these GNN-based methods on most of the benchmarks. This indicates that the aspects of graph information, as defined with communities and learned by aggregating node features with the partitioned graphs, are more pertinent to the end tasks.

Classification with reduced labels. A gain of incorporating the graph generation process into VEPM is that the observed graph structure also provides information for hidden community detection and edge partition, which is beneficial when the amount of labeled data is not sufficient for both decomposing the graph into hidden factors and semi-supervised task-learning. To illustrate this point, based on the evaluation protocol of Kipf and Welling [5] for node classification and of Xu et al. [6] for graph classification, we re-evaluate the performance of VEPM under reduced training labels on Cora (for node classification) and MUTAG (for graph classification), along with generic-GNN-based methods [5, 15, 6] and decomposition-based methods [27, 28]. The results are shown in Figure 2: the performance of all models is negatively impacted by the reduction in training labels. The models suffering the strongest performance decline are DisenGCN [27] and FactorGCN [28], which decompose graphs solely based on label supervision. While VEPM also performs graph factorization with limited labels, it consistently outperforms the other methods under all settings, and its superiority becomes even more evident as the amount of annotated data further decreases. We consider this enhanced robustness under sparsely labeled data a crucial feature for real-world graphs.

4.2 Ablation Studies

Different edge partition schemes. In this part, we study how much a meaningful edge partition benefits the task. In addition to the partition obtained from training the regular VEPM, we test the performance of VEPM under two other edge partition schemes: even partition and random partition. Even partition means all edge weights are $1/K$. For random partition, we sample the unnormalized edge weights from $\mathrm{Uniform}(0, 100)$ and then normalize them along the community dimension with softmax. The random edge weights are fixed during training to simulate an undesirable convergence state of the edge partition. The results are recorded in Table 4. Generally speaking, the performance of VEPM with even edge partition is comparable to that of GIN, our base model; when we substitute random partition for even partition, the model performance becomes worse than the base model. These ablation results suggest that VEPM can benefit from enhancing the quality of the edge partitions.

Different representation composers. Next, we use the partition obtained from a regular VEPM to represent partitions that are meaningful for the task, and a random partition to represent "meaningless partitions." For each partition scheme, we contrast the performance of VEPM between two designs of the representation composer: (i) a GNN-based design, represented by the current design; and (ii) a parsimonious design, represented by a fully-connected (FC) layer. The results are in Table 5.
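The two designs being contrasted can be sketched schematically as follows; single linear maps and a toy ring graph keep the sketch minimal, so this illustrates the design choice rather than the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, d, C = 6, 3, 8, 4     # hypothetical sizes: nodes, communities, dim, classes
H_cat = np.concatenate([rng.normal(size=(N, d)) for _ in range(K)], axis=1)

# Symmetrically normalized adjacency with self-loops for a toy ring graph.
A = np.zeros((N, N)); idx = np.arange(N)
A[idx, (idx + 1) % N] = A[(idx + 1) % N, idx] = 1
A_hat = A + np.eye(N)
d_is = 1.0 / np.sqrt(A_hat.sum(1))
A_tilde = A_hat * d_is[:, None] * d_is[None, :]

# (i) GNN-based composer: one more round of neighborhood aggregation over A.
logits_gnn = A_tilde @ (H_cat @ rng.normal(size=(K * d, C)))

# (ii) Parsimonious composer: a single fully-connected layer, no aggregation.
logits_fc = H_cat @ rng.normal(size=(K * d, C))
```

The only structural difference is the extra multiplication by the normalized adjacency, which is exactly what lets the GNN-based composer fall back on the observed graph when the partitioned inputs are uninformative.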
When the edge partition is meaningful, i.e., relevant to the task, the overall performance with the simpler representation composer is generally slightly worse than with the current design, but still better than most of the baselines. However, if the partition is irrelevant to the task, the GNN-based representation composer is the last thing standing between the model and a failed task, as the graph structure it uses for neighborhood aggregation is still informative to the task. So the current design slightly benefits VEPM in the general case and protects the performance of the model in the worst-case scenario.

Different edge partitioning temperatures. We obtain meaningful and random partitions in the same way as in the previous study. In this study, we focus on the effect of different selections of $\tau$, the temperature parameter of the softmax. The value of $\tau$ controls the sharpness of the partitioned edge weights, i.e., a small $\tau$ drives the partitioned weights of the same edge towards a one-hot vector, whereas a large $\tau$ eliminates the divergence among the edge weights and makes the edge partition no different from an even partition. The results of this experiment are recorded in Table 6, the upper half of which is obtained from a meaningful edge partition and the lower half from a random edge partition. We can infer from these results that when the basis for performing the edge partition, namely the inferred latent node interactions, is relevant to the task, a sharp edge partition that highlights the differences among the communities may be helpful for the task. Otherwise, a smooth edge partition might be more favorable.

4.3 Qualitative analysis

Visualizing community structures. To show that, by partitioning the edges, VEPM identifies each latent community from the original graph, in Figure 3 we plot the adjacency matrices of a 200-node subgraph of Cora, before and after the edge partition. The subgraph is created via breadth-first-search [41] node selection to ensure connectivity. We sort the node sample $S$ in order to present a clearer view of the community structures (see the sketch below). Specifically, we prepare $K$ buckets; for node $u$, we compute the total interaction it engages in under metacommunity $k$ by $\mu_{u,k} := \sum_{v \in S} \mathbf{Z}_{u,k} \, \mathrm{diag}(\boldsymbol{\gamma}_k) \, \mathbf{Z}_{v,k}'$, and assign it to bucket $k'$, where $k' = \arg\max_k \mu_{u,k}$. We first sort the buckets in descending order of their counts of assigned nodes, then sort the nodes within bucket $k$, $k \in \{1, \ldots, K\}$, in descending order of $\mu_{u,k}$. The $\mathbf{Z}$ used for sorting is sampled at the end of training. The community structures that VEPM detects can be found in Figure 3a as the blocks on the main diagonal. The fact that most of the bright spots (i.e., edges) are located in these on-diagonal blocks reflects dense node connections internal to each community. Sparsely distributed off-diagonal spots provide evidence of overlapping membership between the communities, which is also taken into account by our graph generative model. Figures 3b to 3i show that the edge partitioner has done a sound job of separating the latent communities. Due to the limited size of the subgraph, some small metacommunities may not contain any large-weighted edges between the nodes in this subgraph, which accounts for the apparent emptiness of Figures 3h and 3i. This artifact disappears when we replace the subgraph with the full graph of Cora, as visualized in Figures 6 to 14 in Appendix G.
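The node-sorting scheme referenced above amounts to a bucket-then-sort procedure on µ. A minimal sketch follows, assuming Z is stored as (N, K, d) metacommunity segments and γ as a (K, d) array of activation levels; these shapes are illustrative conventions, not the paper's exact data layout.

```python
import numpy as np

def sort_nodes_for_plot(Z_blocks, gamma_blocks):
    """Order nodes by metacommunity for adjacency-matrix plots.

    Z_blocks: (N, K, d) affiliation segments; gamma_blocks: (K, d) activations.
    mu[u, k] = sum_v Z[u,k] diag(gamma_k) Z[v,k]' is node u's total
    interaction under metacommunity k.
    """
    weighted = Z_blocks * gamma_blocks[None, :, :]       # (N, K, d)
    totals = Z_blocks.sum(axis=0)                        # (K, d): sum over nodes v
    mu = (weighted * totals[None, :, :]).sum(axis=-1)    # (N, K)
    bucket = mu.argmax(axis=1)                           # hard metacommunity per node
    # Sort buckets by size (descending), then nodes by mu within each bucket.
    sizes = np.bincount(bucket, minlength=mu.shape[1])
    order = []
    for k in np.argsort(-sizes):
        members = np.where(bucket == k)[0]
        order.extend(members[np.argsort(-mu[members, k])])
    return np.array(order)
```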
Visualizing latent representations. The previous visualization experiment verifies the following propositions: (i) the identified hidden structures are latent communities, and (ii) the partitioned graphs are different from each other. We now study how they benefit representation learning. Cora and MUTAG are selected as the representatives of the node and graph classification benchmarks. For the MUTAG dataset, we remove graphs that contain node categories with fewer than 5 instances (fewer than 10 out of a total of 188) and randomly sample 10 graphs for the visualization experiments. We first visualize the $\mathbf{Z}$ obtained at the end of the unsupervised pretraining stage (Figures 4a and 4b), where the node-community affiliation vectors are projected to 2-D space via t-SNE [42]. Proposition (i) ensures that the node information carried by $\mathbf{Z}$ is about communities. We color-code the scatter points by node labels; both Figures 4a and 4b exhibit a strong correlation between the spatial clusters and the colors, which indicates that even without label supervision, the node information provided by $\mathbf{Z}$ has discriminative power for classifying nodes. We then visualize the community-specific node embeddings obtained at the end of the supervised fine-tuning stage (Figures 4c and 4d). Similarly, t-SNE is adopted to reduce the dimensionality of the obtained node embeddings. This time we color-code the scatter points by the latent metacommunity they correspond to. Both Figures 4c and 4d exhibit clear boundaries between the colored clusters, showing that the model is able to extract different information from multiple communities, which enhances the overall expressiveness of the learned node or graph representations and potentially leads to better model performance on the downstream tasks.

5 Exploratory Studies

Task-relevant communities. We assess the relevance of the learned communities to the task in an experiment on the Cora dataset. To see the statistical dependency between the communities and the task, we first hard-assign each node to the community of its highest affiliation strength, then compute the normalized mutual information (NMI) between the hard-partitioned communities and the node labels. NMI is a score between 0 and 1, with higher values indicating stronger statistical dependencies between two random variables. In this experiment, we obtain an NMI of 0.316 when the communities are learned solely by unsupervised pretraining; this value increases to 0.322 after supervised fine-tuning. The results show that the initial communities obtained from unsupervised pretraining are meaningful to the task, and that the relevance between the inferred communities and the task is enhanced by supervised fine-tuning.
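The relevance probe described above reduces to a hard argmax assignment followed by a call to scikit-learn; in the sketch below, random draws stand in for the trained Z and the Cora labels.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical stand-ins: Z is a sampled (N, K) affiliation matrix and
# labels holds the ground-truth node classes of the same N nodes.
rng = np.random.default_rng(0)
N, K = 100, 7
Z = rng.gamma(1.0, 1.0, size=(N, K))
labels = rng.integers(0, 7, size=N)

# Hard-assign each node to its strongest community, then score dependency.
hard_community = Z.argmax(axis=1)
nmi = normalized_mutual_info_score(labels, hard_community)
```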
Diverse community-specific information. To see the effect of the information provided by the different communities, we first train VEPM until convergence, then use the node representations learned by each community-GNN to train an SVM and visualize the averaged 10-fold cross-validation results with a confusion matrix. From the results shown in Figure 5, we can further conclude that most of the communities provide task-related information, as the diagonal cells in most of the confusion matrices have darker shades than the off-diagonal cells. Beyond that, the information extracted from different communities is complementary. For example, in the second column of Figure 5, information from the upper community can separate classes 3 and 4 well but would mix classes 4 and 5; information from the lower community, on the other hand, can separate classes 4 and 5 but is worse at separating classes 3 and 4.

The working mechanism of VEPM. In summary, the communities learned by VEPM have the following properties: (i) the communities are relevant to the task; and (ii) the information provided by different communities is complementary, so the overall amount of information for the task accumulates over the communities. Both properties are intuitively helpful for the downstream task.

6 Conclusion

Moving beyond treating the graph adjacency matrix as given, we develop variational edge partition models (VEPMs) to extract overlapping node communities and perform community-specific node feature aggregations. Specifically, we first utilize a GNN-based inference network to obtain node-community affiliation strengths, with which we augment the node attributes and partition the edges according to the intensities of node interactions with respect to each community. We learn GNN-based node embeddings for each community by aggregating node features with the corresponding partitioned graph, and aggregate all community-specific node embeddings for the downstream tasks. Extensive qualitative and quantitative experiments on both node-level and graph-level classification tasks are performed to illustrate the working mechanism and demonstrate the efficacy of VEPMs in supervised graph representation learning.

Acknowledgments

Y. He and M. Zhou acknowledge the support of NSF IIS 1812699 and 2212418, and the Texas Advanced Computing Center (TACC) for providing HPC resources that have contributed to the research results reported within this paper. This work was also supported in part by the National Natural Science Foundation of China under Grant U21B2006; in part by the Shaanxi Youth Innovation Team Project; in part by the 111 Project under Grant B18039; and in part by the Fundamental Research Funds for the Central Universities under Grant QTZX22160.
1. What is the focus and contribution of the paper on graph representation learning?
2. What are the strengths of the proposed approach, particularly in its idea and experimental validation?
3. What are the weaknesses of the paper regarding the method's complexity and space usage?
4. Do you have any concerns or questions about the method's ability to handle large graphs?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

This paper proposes a variational edge partition method for supervised graph representation learning. The proposed method learns the community affiliation matrix Φ, generates K partitioned graphs representing different communities, and learns from all of them with two stages of GNNs. The idea of partitioning the graph into overlapping latent communities is very interesting, and extensive experimental results showed the effectiveness of the proposed method.

Strengths And Weaknesses

Strengths:
s1. This paper is clearly written and easy to follow.
s2. The idea of edge partitioning with a generative method is very interesting.
s3. Extensive experiments validated the effectiveness of the proposed method.
s4. The case study in Figures 3 and 5-13 shows that the proposed method successfully separates the community structures in the graph.

Weaknesses:
w1. It seems the method needs to back-propagate through all K adjacency matrices A^(1), ..., A^(K), which could be expensive in space when the graph is large.
w2. The complexity analysis in Section 4.4 is pretty vague. I wonder if the authors can analyze it in exact terms such as N/K/F instead of GCN/GIN.

Questions

Other than the weaknesses in the previous section, I also have two more questions:
1. I wonder why many numbers are missing from Table 2. I would suggest the authors obtain them if possible.
2. In my understanding, the proposed method needs to back-propagate through all K adjacency matrices (as mentioned in w1). If this is correct, then the space complexity should be at least O(N^2) instead of the one given in Section 4.4 (assuming H ≪ N). If my understanding is wrong, please clarify.

Limitations

n/a
NIPS
Title
A Variational Edge Partition Model for Supervised Graph Representation Learning

Abstract
Graph neural networks (GNNs), which propagate the node features through the edges and learn how to transform the aggregated features under label supervision, have achieved great success in supervised feature extraction for both node-level and graph-level classification tasks. However, GNNs typically treat the graph structure as given and ignore how the edges are formed. This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities, each of which contributes to the edges via a logical OR mechanism. Based on this generative model, we partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs. A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN-based predictor that combines community-specific GNNs for the end classification task. Extensive evaluations on real-world graph datasets have verified the effectiveness of the proposed method in learning discriminative representations for both node-level and graph-level classification tasks.

∗Equal contribution. †Corresponding authors. Code available at https://github.com/YH-UtMSB/VEPM. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

1 Introduction

Many real-world entities are bonded by relations, e.g., the users in a social network service are connected by online friendships, and the atoms in molecules are held together by chemical bonds. Representing the entities by nodes and the relations by edges, a set of interconnected entities naturally forms a graph. Reasoning about these entities and their relations can be conducted with graph neural networks (GNNs) [1–4]. Instead of isolating the feature transformation for each individual entity, GNNs allow information to be exchanged between related entities. Models employing this strategy have achieved great success in a wide range of applications involving graph-structured data, such as classification [5, 6], link prediction [7], recommendation [8, 9], (overlapping) community detection [10–13], and drug discovery [14]. The essence of many graph-analytic tasks can be summarized as supervised graph representation learning, where the information on the graph is usually characterized by the node features, the adjacency matrix, and the node or graph labels. A supervised machine learning pipeline is often introduced to embed the nodes or the graph under label supervision for a specific classification task. For most GNN-based methods, the information on nodes and edges is composited through neighborhood aggregation, i.e., updating the features of each node by combining the features of the surrounding nodes. During this process, the graph structure mostly serves as a given source that indicates the nodes' neighborhoods [15] or provides the weights for aggregation [5]. Treating the graph structure as given overlooks the latent structures that control the formation of the graph, which, however, could provide valuable information for graph representation learning. In other words, failing to consider the latent graph structures may limit the ultimate potential of this line of work. One type of latent structure at the intersection of the graph structure and the node information is node communities.
Their connection to the graph can be explained under a latent-community-based graph generation process. For example, both the mixed-membership stochastic blockmodel (MMSB) [10] and the edge partition model (EPM) [11] explain the formation of edges by node interactions over overlapping latent communities. Consider person u in social network G: some of her social connections may be established through interactions with colleagues as a machine learning researcher, some may originate from her interactions with co-members of the same hobby club, and different types of interactions could overlap to strengthen certain social connections between the nodes. The node community structure also sets the aspects of node information. In the example of social network G, where person u is affiliated with both a research group and a few hobby groups, due to the diverse nature of these different communities, it is likely that she exhibits different characteristics when interacting with co-members of different communities. Namely, the information on u for the research group may be related to her research expertise, and the information on u for the hobby clubs may be related to her hobbies. Therefore, an ideal solution is to learn a different aspect of node properties for each community that a node is affiliated with, and to represent the overall node information as an aggregation of all community-specific node properties. Based on this insight, we develop a variational edge partition model (VEPM), which is a generative graph representation learning framework. VEPM models how the edges and labels are generated from overlapping latent communities. Instead of hard-assigning a node to a single community, VEPM encodes each node into a K-dimensional vector of scores measuring the strength with which the node is affiliated with each of the K communities. From these scores, we compute the intensities of pairwise interactions within each community. Given that information, under the Bernoulli–Poisson link, the edges can be modeled by the logical OR of independently generated binary latent edges [11]. The generation of labels includes community-specific node representation learning and aggregation. A key premise of the first part is to detect the hidden structure specific to each community. In our model, this is achieved through edge partition, i.e., decomposing each edge into a summation of link strengths according to the intensities of community-specific interactions. The edge partition step isolates the link strengths accumulated through each community. With the partitioned edges, we learn K separate GNNs to produce community-specific node representations, then compose them into the overall node embeddings to generate the labels. Evaluation shows that the proposed framework can achieve significant performance enhancement on various node and graph classification benchmarks. We summarize our main contributions as follows:
• We introduce VEPM, which utilizes the idea of edge partitioning to extract overlapping latent community structures; these are used not only to enrich node attributes with node-community affiliation scores, but also to define community-specific node feature aggregations.
• We formalize the training of VEPM in a variational inference framework, which is powered by GNNs in its latent community inference, representation generation, and label prediction.
• We analyze how VEPM works and evaluate it over various real-world network datasets.
Empirical results show that the proposed VEPM outperforms many previous methods in various node- and graph-level classification tasks, especially with limited labels.

2 Preliminaries and Related Work

Embedding nodes and graphs with GNNs. Given a node-attributed graph $\mathcal{G}$ with $N$ nodes, its information is generally expressed by a design matrix $\mathbf{X} \in \mathbb{R}^{N \times F}$, whose rows represent node features of dimension $F$, and an adjacency matrix $\mathbf{A}$ of shape $N \times N$. The nonzero values of $\mathbf{A}$ indicate the weights on the corresponding edges. GNNs [1–5, 15, 16] are multi-layer parametric models that map $(\mathbf{X}, \mathbf{A})$ to the embedding space. The update of the embedding of node $u$ at the $t$th GNN layer can be summarized as
$$\mathbf{h}_u^{(t)} = \mathrm{AGG}\big(f_\theta(\mathbf{h}_u^{(t-1)}), \{f_\theta(\mathbf{h}_v^{(t-1)})\}_{v \in N(u)}, \mathbf{A}_{u,:}\big), \qquad \mathbf{h}_u^{(0)} = \mathbf{x}_u.$$
Here $\mathrm{AGG}$ denotes the neighborhood aggregation function, $N(u)$ denotes the neighbors of $u$, and $f_\theta(\cdot)$ is a transformation function parameterized by $\theta$. In VEPM, we adopt the aggregation function introduced by Kipf and Welling [5], which can be expressed as
$$\mathbf{h}_u^{(t)} = \sum_{v \in \{u\} \cup N(u)} \tilde{A}_{u,v} \, f_\theta(\mathbf{h}_v^{(t-1)}),$$
where $\tilde{\mathbf{A}}$ is the normalized adjacency matrix of $\mathcal{G}$ augmented with self-loops. When the task requires a graph-level representation, all node embeddings are summarized into a single embedding vector. This process is called graph pooling [17–19]. In this work, we focus on improving the overall performance from the GNN's perspective and adopt a simple graph pooling method proposed by Xu et al. [6]. In the sequel, we unify the expression of models consisting of GNN layers as $\mathrm{GNN}_\theta(\mathbf{X}, \mathbf{A})$.

Community-regularized representation learning. Communities are latent groups of nodes in a graph [20]. The simplest way to use community information to help a supervised learning task is to regularize it with a community-detection-related task. For instance, Hasanzadeh et al. [21] and Wang et al. [22] jointly train a deep stochastic blockmodel and a classification model; a similar approach can be found in Liu et al. [23], where the graph reconstruction regularizer is replaced by a modularity metric. Either way, the node embeddings are expected to reflect patterns of the community structures and remain informative to the task at the same time. These methods generally treat the node-community affiliations as node embeddings and expect them to contain sufficient information for the downstream task. However, the affiliation strengths of nodes to a community may oversimplify the information that such a community can provide for the downstream task. Unlike these methods, VEPM embeds the node information of each community with a specific GNN, where the community structures are reflected by the partitioned adjacency matrices. Aggregating node features with these weighted adjacency matrices naturally incorporates community information into task-learning.

Multi-relational data analysis. Leveraging heterogeneous relations has been extensively studied in the literature on mining knowledge graphs [24–26]. The data for these models is usually organized as a multigraph, which is related to our perspective of the latent node interactions that generate the observed graph, because the interactions in different communities are potentially heterogeneous and are permitted to coexist between a pair of nodes. This leads to a shared high-level idea: break down the original heterogeneous graph into homogeneous factors and analyze each factor with a customized model.
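As a concrete reading of the aggregation rule above, the following minimal dense NumPy sketch implements one Kipf–Welling layer with symmetric normalization; the nonlinearity and layer stacking are omitted, and it is an illustration of the formula rather than the authors' implementation.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One aggregation step: h_u^(t) = sum_{v in {u} U N(u)} A~_{u,v} f_theta(h_v^(t-1)),
    with f_theta a linear map and A~ the symmetrically normalized A + I."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_tilde = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return A_tilde @ (H @ W)                        # aggregate transformed features
```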
1. What is the novelty of the proposed variational autoencoder model? 2. What are the strengths and weaknesses of the paper regarding its originality, quality, clarity, significance, and grammar? 3. What are some open questions or ablation studies that can be explored further? 4. How does the reviewer assess the scalability of the method in practical applications? 5. Are there any suggestions for improving the diagram in Figure 1? 6. What are the limitations of the approach, and how adequately have they been addressed by the authors?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work proposes a novel variational auto-encoder that learns to separate the graph into different communities, and then performs a graph learning task by aggregating over those separate inferred community subgraphs. The model works by learning a variational distribution for a node affiliation matrix, which essentially says how affiliated a node is with each community. From that, edges are sampled using a Poisson distribution based on how well adjacent nodes' affiliations overlap. The edges are then partitioned using an edge partitioner. Once partitioned, each community graph goes through a separate GNN encoder, and then the representations are composed using another model. The overall training algorithm has three objective terms corresponding to the classification task, the reconstruction task, and the KL divergence regularization. The model is evaluated on node and graph classification tasks against a suite of relevant baselines. In addition, multiple visualizations and ablations are performed. Strengths And Weaknesses Originality: ++ Seems to be a novel model overall and involves interesting ideas to learn these community associations through an unsupervised process. Quality: ++ Paper is a complete work and is well evaluated. In addition, the authors perform some good ablation studies and visualizations. -- I would like to see more ablations indicating where the model improvement comes from. How much does the edge partitioner matter? How about the composer? Would replacing these with super simple versions hurt performance? I think there are some questions potentially about where the gains in performance might be coming from. The analysis of space complexity is incorrect. It shows the parameter space complexity, not the space complexity. For example, if a model requires an N x N adjacency matrix in memory, that is O(N^2) space complexity even if there are no parameters. (This might just be a clarity issue; the authors should specify that this is an analysis of the number of parameters.) Clarity: ++ Paper is very well written, and is precise. Figure 1 is very helpful. Figure 1 could be improved by incorporating the different loss terms in the diagram. - Should probably specify that the optimization in equation 6 is maximization. Significance: ++ Model is definitely significant in that it improves the state of the art as well as introducing some new ideas. -- Unclear what the scalability of the method is in practice. No large graphs were tested. Grammar/Spelling: line 116: disjuction -> disjunction line 271 "amount of label" -> "amount of labelled data" Questions In no particular order: (Some of these are just open questions/ablations that do not necessarily have to be performed) In figure 1, should the concatenation of X and Phi be horizontally concatenated instead of vertically? (ablation) What would happen if there was no edge partitioner and all edges were passed to each GNN? Or randomly partitioned? (ablation) What if the composer was just a mean of the representations? Or a weighted sum of the representations based on each node's affiliation scores? This would further test the idea that the GNN community bank is doing a significant amount of work. Are there any datasets with community labels against which we could check whether the learned communities align? Limitations Adequately addressed
NIPS
Title A Variational Edge Partition Model for Supervised Graph Representation Learning Abstract Graph neural networks (GNNs), which propagate the node features through the edges and learn how to transform the aggregated features under label supervision, have achieved great success in supervised feature extraction for both node-level and graph-level classification tasks. However, GNNs typically treat the graph structure as given and ignore how the edges are formed. This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities, each of which contributes to the edges via a logical OR mechanism. Based on this generative model, we partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs. A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN-based predictor that combines community-specific GNNs for the end classification task. Extensive evaluations on real-world graph datasets have verified the effectiveness of the proposed method in learning discriminative representations for both node-level and graph-level classification tasks. Code is available at https://github.com/YH-UtMSB/VEPM. 1 Introduction Many real-world entities are bonded by relations, e.g., the users in a social network service are connected by online friendships, and the atoms in molecules are held together by chemical bonds. Representing the entities by nodes and relations by edges, a set of interconnected entities naturally forms a graph. Reasoning about these entities and their relations can be conducted with graph neural networks (GNNs) [1–4]. Instead of isolating the feature transformation for each individual entity, GNNs allow the information to be exchanged between related entities. Models employing this strategy have achieved great success in a wide range of applications involving graph-structured data, such as classification [5, 6], link prediction [7], recommendation [8, 9], (overlapping) community detection [10–13], and drug discovery [14]. The essence of many graph-analytic tasks could be summarized as supervised graph representation learning, where the information on the graph is usually characterized by the node features, adjacency matrix, and node or graph labels. A supervised machine learning pipeline is often introduced to embed the nodes or graph under label supervision for a specific classification task. For most of the GNN-based methods, the information on nodes and edges is composited through neighborhood aggregation, i.e., updating the features of each node by combining the features of the surrounding nodes. During this process, the graph structure mostly serves as a given source to indicate the nodes' neighborships [15] or provide the weights for aggregation [5]. Treating the graph structure as given overlooks the latent structures that control the formation of the graph, which, however, could provide valuable information for graph representation learning. In other words, failing to consider the latent graph structures may limit the ultimate potential of this line of work. One type of latent structure at the hinge of the graph structure and node information is node communities.
Their connection to the graph can be explained under a latent-community-based graph generation process. For example, both the mixed-membership stochastic blockmodel (MMSB) [10] and the edge partition model (EPM) [11] explain the formation of edges by node interactions over overlapping latent communities. Consider person u in social network G: some of her social connections may be established through interactions with colleagues as a machine learning researcher, some may originate from her interactions with co-members of the same hobby club, and different types of interactions could overlap to strengthen certain social connections between the nodes. The node community structure also shapes the aspects of node information. In the example of social network G, where person u is affiliated with both a research group and a few hobby groups, due to the diverse nature of these different communities, it is likely that she exhibits different characteristics when interacting with co-members of different communities. Namely, the information on u for the research group may be related to her research expertise, and the information on u for the hobby clubs may be related to her hobbies. Therefore, an ideal solution is to learn a different aspect of node properties for each community a node is affiliated with, and represent the overall node information as an aggregation of all community-specific node properties. Based on this insight, we develop a variational edge partition model (VEPM), which is a generative graph representation learning framework. VEPM models how the edges and labels are generated from overlapping latent communities. Instead of hard-assigning a node to a single community, VEPM encodes each node into a K-dimensional vector of scores measuring the strength with which the node is affiliated with each of the K communities. From these scores, we compute the intensities of pairwise interactions within each community. Given that information, under the Bernoulli-Poisson link, the edges can be modeled by the logical OR of independently generated binary latent edges [11]. The generation of labels includes community-specific node representation learning and aggregation. A key premise of the first part is to detect the hidden structure specified for each community. In our model, this is achieved through edge partition, i.e., decomposing each edge into a summation of link strengths according to the intensities of community-specific interactions. The edge partition step isolates the link strengths accumulated through each community. With the partitioned edges, we learn K separate GNNs to produce community-specific node representations, then compose them into the overall node embeddings to generate the labels. Evaluation shows that the proposed framework achieves significant performance enhancement on various node and graph classification benchmarks. We summarize our main contributions as follows: • We introduce VEPM, which utilizes the idea of edge partitioning to extract overlapping latent community structures; these are used not only to enrich node attributes with node-community affiliation scores, but also to define community-specific node feature aggregations. • We formalize the training of VEPM in a variational inference framework, which is powered by GNNs in its latent community inference, representation generation, and label prediction. • We analyze how VEPM works and evaluate it over various real-world network datasets.
Empirical results show that the proposed VEPM outperforms many previous methods in various node- and graph-level classification tasks, especially with limited labels. 2 Preliminaries and Related Work Embedding nodes and graphs with GNNs. Given a node-attributed graph G with N nodes, its information is generally expressed by a design matrix $X \in \mathbb{R}^{N\times F}$, whose rows represent node features of dimension F, and an adjacency matrix $A$ of shape $N \times N$. The nonzero values of $A$ indicate the weights on the corresponding edges. GNNs [1–5, 15, 16] are multi-layer parametric models that map $(X, A)$ to the embedding space. The update of the embedding of node $u$ at the $t$-th GNN layer can be summarized as $h_u^{(t)} = \mathrm{AGG}\big(f_\theta(h_u^{(t-1)}), \{f_\theta(h_v^{(t-1)})\}_{v\in N(u)}, A_{u,:}\big)$, with $h_u^{(0)} = x_u$. Here AGG denotes the neighborhood aggregation function, $N(u)$ denotes the neighbors of $u$, and $f_\theta(\cdot)$ is a transformation function parameterized by $\theta$. In VEPM, we adopt the aggregation function introduced by Kipf and Welling [5], which can be expressed as $h_u^{(t)} = \sum_{v\in\{u\}\cup N(u)} \tilde{A}_{u,v}\, f_\theta(h_v^{(t-1)})$, where $\tilde{A}$ is the normalized adjacency matrix of G augmented with self-loops. When the task requires a graph-level representation, all node embeddings are summarized into a single embedding vector. This process is called graph pooling [17–19]. In this work, we focus on improving the overall performance from the GNN's perspective and adopt a simple graph pooling method proposed by Xu et al. [6]. In the sequel, we unify the expression of models consisting of GNN layers as $\mathrm{GNN}_\theta(X, A)$. Community-regularized representation learning. Communities are latent groups of nodes in a graph [20]. The simplest way to use community information to help a supervised learning task is to regularize it with a community-detection-related task. For instance, Hasanzadeh et al. [21] and Wang et al. [22] jointly train a deep stochastic blockmodel and a classification model; a similar approach can be found in Liu et al. [23], where the graph reconstruction regularizer is replaced by a modularity metric. Either way, the node embeddings are expected to reflect patterns of community structure while remaining informative for the task. These methods generally treat node-community affiliations as node embeddings and expect them to contain sufficient information for the downstream task. However, the affiliation strengths of nodes to a community may oversimplify the information that such a community can provide for the downstream task. Unlike these methods, VEPM embeds node information on each community with a specific GNN, where the community structures are reflected by the partitioned adjacency matrices. Aggregating node features with these weighted adjacency matrices naturally incorporates community information into task learning. Multi-relational data analysis. Leveraging heterogeneous relations has been extensively studied in the literature on mining knowledge graphs [24–26]. The data of these models is usually organized as a multigraph, which is related to our perspective of the latent node interactions that generate the observed graph, because the interactions in different communities are potentially heterogeneous and are permitted to coexist between a pair of nodes. This leads to a shared high-level idea: break down the original heterogeneous graph into homogeneous factors and analyze each factor with a customized model.
However, unlike those multigraphs where the edge types are explicitly annotated, the overlapping heterogeneous node interactions are not observable from the data that VEPM aims to handle, which escalates the difficulty of our optimization problem. To deal with the latent variables, we formalize VEPM as a generative model and train it via variational inference. Graph factorization-based models. The existence of multiple hidden structures in graphs has also been noticed by some previous work [27, 28]. The architecture of these models starts with graph factorization, followed by a set of modules, each processing the information from one of the graph factors. In these methods, the only ground-truth information that trains the graph factorization comes from label supervision, making them rely on excessive label annotations, which may be hard to satisfy in practice and thus potentially hinders their application. On account of the potential heterogeneity of graph information across communities, VEPM adopts a similar pipeline to these methods, but the factorization of the original graph is driven not only by the supervised task but also by a graph generative model, which greatly reduces the dependency on the amount of labeled data. 3 Variational Edge Partition Model Many analytical tasks on node-attributed graphs boil down to (semi-)supervised classification problems: given $(X, A)$ and observed node or graph labels $y_o$, predict the unobserved labels $y_u$. In VEPM, we formalize the classification task as modeling the predictive distribution $p(y_u \mid X, A, y_o)$, and construct our variational supervised learning pipeline with GNNs. 3.1 Generative task-dependent graph representation learning with latent communities To model $p(y_u \mid X, A, y_o)$, we introduce $Z \in \mathbb{R}_+^{N\times K}$, a latent non-negative node-community affiliation matrix, whose entry at the $n$-th row and $k$-th column is interpreted as the strength with which the $n$-th node is affiliated with the $k$-th community. Given $Z$, we specify a generative network as $$A \sim p_\theta(A \mid Z), \quad y_o \sim p_\theta(y \mid X, A, Z) \quad (1)$$ to describe how the observed edges $A$ and labels $y_o$ are generated. Due to the complexity brought by the deep architecture of the generative network, analytically inferring the posterior of $Z$ is impractical. We hence approximate the posterior $p_\theta(Z \mid A, y_o)$ with a variational distribution, $q_\phi(Z \mid A, X)$, modeled by a separate GNN-based inference network. The subscripts $\theta$ and $\phi$ denote the parameters of the generative network and inference network, respectively. We illustrate the architecture of VEPM in Figure 1. We approximate the posterior predictive distribution using the Monte Carlo method as $$\hat{p}_\theta(y_u \mid X, A, y_o) \approx \tfrac{1}{S}\textstyle\sum_{s=1}^{S} p_\theta(y_u \mid X, A, Z^{(s)}), \quad (2)$$ where $Z^{(s)} \overset{iid}{\sim} q_\phi(Z \mid A, X)$ for $s = 1, \dots, S$. In what follows, we outline the data generation process and the corresponding modules in the generative network, introduce the module in the inference network, and describe how to train both networks. 3.2 Edge generation and label prediction To model the distribution of the edges given $Z$, we adopt the generative process developed in EPM [11], which explains the generation of the edges under the Bernoulli-Poisson link as $A_{i,j} = \mathbf{1}_{M_{i,j} \ge 1}$, $M_{i,j} \sim \mathrm{Poisson}\big(\sum_{k=1}^{K} \gamma_k Z_{i,k} Z_{j,k}\big)$. Here $\gamma_k$ is a positive community activation level indicator, which measures the member interaction frequency via community $k$, and in our practice is treated as a trainable parameter shared by both the inference network and the generative network. We interpret $\gamma_k Z_{i,k} Z_{j,k}$ as the interaction rate between nodes $i$ and $j$ via community $k$.
The EPM has an alternative representation that partitions each edge into the logical disjunction (i.e., logical OR) of $K$ latent binary edges [29], expressed as $A_{i,j} = \vee_{k=1}^{K} A_{i,j,k}$, $A_{i,j,k} \sim \mathrm{Bernoulli}(p_{ijk})$, where $p_{ijk} := 1 - e^{-\gamma_k Z_{i,k} Z_{j,k}}$. Thus nodes $i$ and $j$ would be connected as long as their interaction forms an edge in at least one community. In other words, they are disconnected only if their interactions in all communities fail to generate any edges. Under the EPM, we have the conditional probability $p_\theta(A_{i,j} = 1 \mid Z) = 1 - e^{-\sum_{k=1}^{K} \gamma_k Z_{i,k} Z_{j,k}}$. To complete the model, we set the prior distribution of the node-community affiliation matrix as $Z \sim \prod_{i=1}^{N}\prod_{k=1}^{K} \mathrm{Gamma}(Z_{i,k}; \alpha, \beta)$. Given $A$ and $Z$, we now describe the predictive process of the labels. The node-community affiliation impacts this process from two aspects: for each node, the corresponding row of $Z$ serves as side information that could enrich node attributes; for each pair of connected nodes, it could be used to derive the node interaction rates via each community, i.e., $\{\gamma_k Z_{i,k} Z_{j,k}\}_{k=1}^{K}$, which are further used to partition the edges into $K$ communities, carried out by the edge partitioner: Edge partitioner. The edge partitioner takes edges $A$ and node-community affiliation matrix $Z$ as inputs and returns $K$ positive-weighted edge sets $\{A^{(1)}, A^{(2)}, \dots, A^{(K)}\}$, s.t. $\sum_{k=1}^{K} A^{(k)} = A$. The edge partition function is $$A^{(k)}_{i,j} = A_{i,j} \cdot \frac{e^{(\gamma_k Z_{i,k} Z_{j,k})/\tau}}{\sum_{k'} e^{(\gamma_{k'} Z_{i,k'} Z_{j,k'})/\tau}}, \quad k \in \{1,\dots,K\},\; i,j \in \{1,\dots,N\}, \quad (3)$$ where $\tau$ is a "temperature" that controls the sharpness of the partition. The effect of the edge partitioner could be considered as a soft assignment of each edge into different communities. Setting temperature $\tau$ to be low could drive the soft assignment towards a hard assignment, and consequently, when aggregating node features with $\{A^{(1)}, A^{(2)}, \dots, A^{(K)}\}$, the edge partitioner would concentrate the information exchange between any two connected nodes at one community. In other words, the mutual influence between two nodes, measured by the edge weight, is relatively high in the community that contributes most of the interactions between the pair, while being relatively weak in the other communities. In our implementation, we further generalize Equation (3) to a metacommunity-based edge partition, specifically by replacing $\gamma_k Z_{i,k} Z_{j,k}$ with $Z_{i,k}\,\mathrm{diag}(\boldsymbol{\gamma}_k)\, Z_{j,k}'$. In the new expression, $Z_{i,k}$ denotes the $k$-th segment in node $i$'s community-affiliation encoding, and $\boldsymbol{\gamma}_k$ is a vector of the activation levels for communities in the $k$-th metacommunity. This generalization allows the total number of communities to be greater than $K$, enhancing the model implementation flexibility at a minor interpretability cost. The edge partitioner provides a soft separation of the neighborhood aggregation routes for each community, which is passed to the community-GNN bank: Community-GNN bank. The module takes $X^* := X \,\|\, Z$ as input, where the operator $\|$ denotes "concatenation," and learns community-specific node embeddings with $K$ separate GNNs, i.e., the outputs are $g^{(1)}_\theta(X^*), g^{(2)}_\theta(X^*), \dots, g^{(K)}_\theta(X^*)$, where $g^{(k)}_\theta(\cdot) := \mathrm{GNN}_\theta(\cdot, A^{(k)})$, $k \in \{1,\dots,K\}$. The intention of the edge partitioner and community-GNN bank is to capture the node information specific to each community.
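To make Equation (3) concrete, the following is a minimal PyTorch sketch of the (non-metacommunity) edge partitioner over a dense adjacency matrix; the released implementation likely uses sparse operations instead, and all tensor names here are illustrative.

```python
import torch

def partition_edges(A, Z, gamma, tau=1.0):
    """Soft-assign each observed edge to K communities (Eq. 3).

    A: (N, N) binary adjacency; Z: (N, K) non-negative affiliations;
    gamma: (K,) community activation levels; tau: temperature.
    Returns a (K, N, N) stack of weighted adjacencies summing to A.
    """
    # rates[k, i, j] = gamma_k * Z[i, k] * Z[j, k], the per-community
    # interaction rate of every node pair
    rates = torch.einsum('k,ik,jk->kij', gamma, Z, Z)
    # tempered softmax over the community dimension
    weights = torch.softmax(rates / tau, dim=0)
    # keep weights only where an edge actually exists
    return weights * A.unsqueeze(0)
```

Because the softmax over k sums to one for every node pair, the K partitioned matrices add back up to A exactly, matching the constraint stated above; a small tau sharpens the weights toward a hard assignment.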
For instance, in a social network, such information could be the different social roles of people when considered affiliated with different social groups, e.g., one may be a former computer science major student in an alumni group, a research scientist in a work group, and an amateur chess player in a hobby club group. As shown in Section 4.3, the node embeddings produced by the community-GNN bank are separable by communities. We finally introduce the representation composer, which embeds a combination of the information provided by each community into a global representation via a GNN: Representation composer. Let $H^{(k)} := g^{(k)}_\theta(X^*)$ denote the node representations learned from the $k$-th community, where $k = 1,\dots,K$, and let $f(\cdot)$ denote the representation composer, whose functionality is to project a composite of community-specific node representations onto one representation matrix, i.e., $H_V = f\big(H^{(1)}, H^{(2)}, \dots, H^{(K)}\big) := \mathrm{GNN}_\theta\big(\|_{k=1}^{K} H^{(k)}, A\big)$. For graph-level tasks, we further pool node embeddings into a single vector representation $h_G$, as in Xu et al. [6]. Taking a softmax over the feature dimension of $H_V$ or $h_G$ gives the predicted probabilities of labels, from which we are able to classify the unlabeled objects. Cascading the edge partitioner and community-GNN bank with the representation composer yields the probability $p_\theta(y \mid X, A, Z)$. 3.3 Variational latent community inference We use a community encoder as the module in the inference network to approximate the posterior distribution $p_\theta(Z \mid A, y)$ and provide the generative network with the critical latent variable $Z$: Community encoder. The community encoder models the variational posterior $q_\phi(Z \mid A, X)$ by a Weibull distribution with shape $K$ and scale $\Lambda$, whose parameters are learned by a GNN as $K \,\|\, \Lambda = \mathrm{GNN}_\phi(X, A)$. A Weibull random sample from $q_\phi(Z \mid A, X)$ can be created through the inverse-CDF transformation of a uniform random variable, given as follows: $$Z = \Lambda \odot \big(-\log(1 - U)\big)^{(1 \oslash K)}, \quad U_{i,k} \overset{iid}{\sim} \mathcal{U}(0,1), \;\forall (i,k) \in \{1,\dots,N\}\times\{1,\dots,K\}, \quad (4)$$ where $\odot$ and $\oslash$ denote element-wise multiplication and division. 3.4 The overall training algorithm and complexity analysis We train VEPM by optimizing the evidence lower bound (ELBO), decomposed into three terms, as $$\mathcal{L} = \mathcal{L}_{\mathrm{task}} + \mathcal{L}_{\mathrm{egen}} + \mathcal{L}_{\mathrm{KL}}, \quad (5)$$ where $\mathcal{L}_{\mathrm{task}} = \mathbb{E}_{q_\phi(Z\mid A,X)} \log p_\theta(y_o \mid A, Z, X)$, $\mathcal{L}_{\mathrm{egen}} = \mathbb{E}_{q_\phi(Z\mid A,X)} \log p_\theta(A \mid Z)$, and $\mathcal{L}_{\mathrm{KL}} = -D_{\mathrm{KL}}\big(q_\phi(Z \mid A, X) \,\|\, p(Z)\big)$. These three terms correspond to the classification task, edge generation, and KL regularization, respectively. Note that our specifications of $Z$'s prior and variational posterior yield an analytical expression for $\mathcal{L}_{\mathrm{KL}}$, as described in detail in Appendix B. Recall that $N$, $M$, $K$ denote the numbers of nodes, edges, and communities (or metacommunities) in the graph; $F$ denotes the feature dimension; and $L_1$, $L_2$, $L_3$ denote the numbers of layers in the community encoder, in the community-GNN bank, and in the representation composer. In VEPM, we limit the size of the hidden dimension in each community-GNN to $1/K$ of what is commonly used among the baselines, hence the time complexity of training VEPM is $O\big((L_1+L_2+L_3)MF + (L_1+L_2/K+L_3)NF^2 + N^2F\big)$. For a sparse graph where $N^2 \gg M$, the computational overhead can be reduced to $O(M)$ (treating the scale of $F$ as constant, which is ubiquitous in practice) if graph reconstruction is accelerated via subsampling the nodes to $O(\sqrt{M})$, as in Salha et al. [30]. Effects and implications of adopting such acceleration algorithms are discussed in Appendix A.
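The inverse-CDF transform in Equation (4) is straightforward to implement as a reparameterized, differentiable sampler. Below is a minimal PyTorch sketch (not the authors' code); `K_shape` and `Lam` stand for the positive shape and scale outputs of the encoder GNN, and the epsilon clamp is an added numerical-stability assumption.

```python
import torch

def sample_weibull(K_shape, Lam, eps=1e-8):
    """Reparameterized Weibull draw via inverse CDF (Eq. 4).

    K_shape, Lam: (N, K) positive shape and scale tensors from the
    encoder GNN. The sample is differentiable w.r.t. both, so the
    ELBO gradients flow through Z into the encoder parameters.
    """
    U = torch.rand_like(Lam).clamp(eps, 1.0 - eps)  # uniform noise
    return Lam * (-torch.log(1.0 - U)).pow(1.0 / K_shape)
```

A draw from this sampler plays both roles described above: it enters the edge likelihood through the Bernoulli-Poisson link and is concatenated with X to feed the community-specific GNNs; the KL term against the Gamma prior has the closed form given in Appendix B of the paper.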
The space complexity of VEPM is $O\big((L_1+L_2+L_3)NF + KM + (L_1+L_2/K+L_3)F^2\big)$, of which $O\big((L_1+L_2/K+L_3)F^2\big)$ is contributed by the model parameters. It is worth noting that the memory cost of graph reconstruction is manageable by computing the dense matrix multiplication block-wise with a fixed maximum block size. In all aspects of complexity, VEPM (with acceleration) is comparable to GATs [15, 31] and models involving graph factorization [27, 28]. 4 Empirical Evaluation 4.1 Node & graph classification Datasets & experimental settings. For node classification, we consider three citation networks (Cora, Citeseer, and Pubmed) and a Wikipedia-based online article network (WikiCS) [32], which provide either bag-of-words document representations or average word embeddings as node features, and (undirected) citations or hyperlinks as edges. For graph classification, we consider four bioinformatics datasets (MUTAG, PTC, NCI1, PROTEINS) and four social network datasets (IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-MULTI). The input node features are crafted in the same way as in Xu et al. [6]. Most of the baselines are compared following the 10-fold cross-validation-based evaluation protocol proposed by Xu et al. [6]. For the graph classification task, we also evaluate our model following Zhang and Chen [33], which conducts a more rigorous train-validation-test protocol. More details about the experiments are elaborated in Appendix C.2. Node classification. We use classification accuracy as the evaluation metric for node classification. Table 1 reports the average performance of VEPM (± standard error) against related baselines that are categorized into three groups. The first group consists of GCN [5] and its variant GCN-64, which expands the hidden dimension from 16 to 64. VEPM outperforms the first group by a significant margin. The explanation is twofold: (i) VEPM augments the node attributes with community information carried by the inferred node-community affiliations, and (ii) VEPM learns community-specific node embeddings. The second group includes SIG-VAE [21] and WGCAE [22], both of which learn node embeddings from graph generative models jointly optimized with a supervised loss. The performance gain that VEPM obtains can be attributed to our unique label predictive process, which not only appends extra community information to the node attributes but also injects structural patterns of communities into the neighborhood aggregation. The third group [15, 31, 27] is focused on learning node embeddings by leveraging heterogeneous hidden relations. When fitting the hidden relations via the attention mechanism, these methods only use information from the node features through label supervision, whereas our approach also takes the observed graph structure into account via a graph generative model. The additional information from the graph utilized by VEPM could explain the enhancement achieved by VEPM over the third group. Graph classification. For the first protocol, we compare VEPM with classical graph classification baselines [34–36], generic-GNN-based models [15, 31, 6], and GNNs with relation-based or task-driven graph factorization [25, 26, 28]. Results in Table 2 show that VEPM achieves the best graph classification performance on 5 out of 8 benchmarks, and the second-best performance on the other 3 benchmarks, including NCI1, where VEPM outperforms all the other GNN-based models.
For the second protocol, aside from a traditional method, WL [34], and a generic-GNN-based method, DGCNN [37], we compare VEPM with two GNNs [38, 33] that adapt capsule neural networks [39] to graph-structured data and aim to learn different aspects of graph properties via dynamic routing [40]. The results in Table 3 show that VEPM outperforms these GNN-based methods on most of the benchmarks. This indicates that the aspects of graph information, as defined with communities and learned via aggregating node features with partitioned graphs, are more pertinent to the end tasks. Classification with reduced labels. The gain of incorporating the graph generation process into VEPM is that the observed graph structure also provides information for hidden community detection and edge partition, which is beneficial if the amount of labeled data is not sufficient for both decomposing the graph into hidden factors and semi-supervised task learning. To illustrate this point, based on the evaluation protocol of Kipf and Welling [5] for node classification and Xu et al. [6] for graph classification, we re-evaluate the performance of VEPM under reduced training labels on Cora (for node classification) and MUTAG (for graph classification), along with generic-GNN-based methods [5, 15, 6] and decomposition-based methods [27, 28]. The results are shown in Figure 2. The performance of all models is negatively impacted by the reduction in training labels; DisenGCN [27] and FactorGCN [28], which decompose graphs solely based on label supervision, suffer the strongest decline. Although VEPM also performs graph factorization with limited labels, it consistently outperforms the other methods under all settings, and its superiority becomes even more evident as the amount of annotated data further decreases. We consider this enhanced robustness under sparsely labeled data a crucial feature for real-world graphs. 4.2 Ablation Studies Different edge partition schemes. In this part, we study how much a meaningful edge partition benefits the task. In addition to the partition obtained from training the regular VEPM, we test the performance of VEPM under two other edge partition schemes: even partition and random partition. Even partition means all edge weights are $1/K$. For random partition, we sample the unnormalized edge weights from $\mathcal{U}(0, 100)$, then normalize them along the community dimension with softmax. The random edge weights are fixed during training to simulate an undesirable convergence state of edge partition. The results are recorded in Table 4. Generally speaking, the performance of VEPM with even edge partition is comparable to that of GIN, our base model; when we substitute random partition for even partition, the model performance becomes worse than the base model. These ablation results suggest that VEPM benefits from enhancing the quality of edge partitions. Different representation composer. Next, we use the partition obtained from a regular VEPM to represent partitions that are meaningful for the task, and a random partition to represent "meaningless partitions." For each partition scheme, we contrast the performance of VEPM between two designs of the representation composer: (i) a GNN-based design, represented by the current design; (ii) a parsimonious design, represented by a fully-connected (FC) layer. The results are in Table 5.
When the edge partition is meaningful, i.e., relevant to the task, the overall performance with a simpler representation composer is generally slightly worse than with the current design, but still better than most of the baselines. However, if the partition is irrelevant to the task, the GNN-based representation composer would be the last thing standing between the model and a failed task, as the graph structure used for neighborhood aggregation is still informative to the task. Thus the current design slightly benefits VEPM in the general case and protects the performance of the model in the worst-case scenario. Different edge partitioning temperatures. We obtain meaningful and random partitions in the same way as in the previous study. In this study, we focus on the effect of different selections of $\tau$, the temperature parameter of the softmax. The value of $\tau$ controls the sharpness of the partitioned edge weights: a small $\tau$ drives the partitioned weights for the same edge towards a one-hot vector, whereas a large $\tau$ eliminates the divergence among the edge weights and makes the edge partition no different from an even partition. The results of this experiment are recorded in Table 6, the upper half of which is obtained from a meaningful edge partition, and the lower half from a random edge partition. We can infer from these results that when the basis for the edge partition, namely the inferred latent node interactions, is relevant to the task, a sharp edge partition that highlights the differences among the communities may be helpful for the task. Otherwise, a smooth edge partition might be more favorable. 4.3 Qualitative analysis Visualizing community structures. To show that through partitioning the edges, VEPM identifies each latent community from the original graph, in Figure 3 we plot the adjacency matrices of a 200-node subgraph of Cora, before and after edge partition. The subgraph is created via breadth-first-search [41] node selection to ensure connectivity. We sort the node sample S to present a clearer view of the community structures. Specifically, we prepare K buckets; for node $u$, we compute the total interaction it engages in under metacommunity $k$ by $\mu_{u,k} := \sum_{v\in S} Z_{u,k}\,\mathrm{diag}(\boldsymbol{\gamma}_k)\, Z'_{v,k}$, and assign it to bucket $k'$, where $k' = \arg\max_k \mu_{u,k}$. We first sort the buckets by descending counts of assigned nodes, then sort the nodes within bucket $k$, $k \in \{1,\dots,K\}$, by descending value of $\mu_{u,k}$. The $Z$ used for sorting is sampled at the end of training. The community structures that VEPM detects can be found in Figure 3a as the blocks on the main diagonal. The fact that most of the bright spots (i.e., edges) are located in these on-diagonal blocks reflects dense node connections internal to each community. Sparsely distributed off-diagonal spots provide evidence of overlapping membership between the communities, which is also taken into account by our graph generative model. Figures 3b to 3i show that the edge partitioner has done a sound job of separating the latent communities. Due to the limited size of the subgraph, some small metacommunities may not contain any large-weighted edges between the nodes in this subgraph, which accounts for the apparent emptiness in Figures 3h and 3i. This kind of artifact disappears when we replace the subgraph with the full graph of Cora, as visualized in Figures 6 to 14 in Appendix G. Visualizing latent representations.
The previous visualization experiment verifies the following propositions: (i) the identified hidden structures are latent communities, and (ii) the partitioned graphs are different from each other. We now study how they benefit representation learning. Cora and MUTAG are selected as the representatives of node and graph classification benchmarks. For the MUTAG dataset, we remove graphs that contain node categories with fewer than 5 instances (fewer than 10 out of 188 graphs in total) and randomly sample 10 graphs for the visualization experiments. We first visualize $Z$ obtained at the end of the unsupervised pretraining stage (Figures 4a and 4b), where the node-community affiliation vectors are projected to 2-D space via t-SNE [42]. Proposition (i) ensures that the node information carried by $Z$ is about communities. We color-code the scatter points by node labels; both Figures 4a and 4b exhibit a strong correlation between the spatial clusters and the colors, which indicates that even without label supervision, the node information provided by $Z$ has discriminative power for classifying nodes. We then visualize the community-specific node embeddings obtained at the end of the supervised finetuning stage (Figures 4c and 4d). Similarly, t-SNE is adopted to reduce the dimensionality of the obtained node embeddings. This time we color-code the scatter points by the latent metacommunity they correspond to. Both Figures 4c and 4d exhibit clear boundaries between the colored clusters, showing that the model is able to extract different information from the multiple communities; this enhances the overall expressiveness of the learned node or graph representations and potentially leads to better model performance on the downstream tasks. 5 Exploratory Studies Task-relevant communities. We assess the relevance of the learned communities to the task with an experiment on the Cora dataset. To see the statistical dependency between communities and the task, we first hard-assign each node to the community of highest affiliation strength, then compute the normalized mutual information (NMI) between the hard-partitioned communities and the node labels. NMI is a score between 0 and 1, with higher values indicating stronger statistical dependencies between two random variables. In this experiment, we obtain an NMI of 0.316 when the communities are learned solely by the unsupervised pretraining; this value increases to 0.322 after the supervised finetuning. These results show that the initial communities obtained from the unsupervised pretraining are meaningful to the task, and that the relevance between the inferred communities and the task is further enhanced by the supervised finetuning. Diverse community-specific information. To see the effect of the information provided by different communities, we first train VEPM until convergence, then use the node representations learned by each community GNN to train an SVM and visualize the averaged 10-fold cross-validation results as a confusion matrix. The results in Figure 5 let us further conclude that most of the communities provide task-related information: the diagonal cells in most of the confusion matrices have darker shades than the off-diagonal cells. Beyond that, the information extracted from different communities is complementary to each other.
For example, in the second column of Figure 5, information from the upper community separates classes 3 and 4 well but mixes classes 4 and 5; information from the lower community, on the other hand, separates classes 4 and 5 but does worse at separating classes 3 and 4. The working mechanism of VEPM. In summary, the communities learned by VEPM have the following properties: (i) the communities are relevant to the task; (ii) the information provided by different communities is complementary, so the overall amount of task-relevant information accumulates across communities. Both properties are intuitively helpful for the downstream task. 6 Conclusion Moving beyond treating the graph adjacency matrix as given, we develop variational edge partition models (VEPMs) to extract overlapping node communities and perform community-specific node feature aggregations. Specifically, we first utilize a GNN-based inference network to obtain node-community affiliation strengths, with which we augment node attributes and partition the edges according to the intensities of node interactions with respect to each community. We learn GNN-based node embeddings for each community by aggregating node features with the corresponding partitioned graph, and aggregate all community-specific node embeddings for the downstream tasks. Extensive qualitative and quantitative experiments on both node-level and graph-level classification tasks are performed to illustrate the working mechanism and demonstrate the efficacy of VEPMs in supervised graph representation learning. Acknowledgments Y. He and M. Zhou acknowledge the support of NSF IIS 1812699 and 2212418, and the Texas Advanced Computing Center (TACC) for providing HPC resources that have contributed to the research results reported within this paper. This work was also supported in part by the National Natural Science Foundation of China under Grant U21B2006; in part by the Shaanxi Youth Innovation Team Project; in part by the 111 Project under Grant B18039; and in part by the Fundamental Research Funds for the Central Universities (QTZX22160).
1. What is the main contribution of the paper regarding modeling community structures in graphs? 2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to relevant works? 3. Are there any concerns or suggestions regarding the training process, performance improvements, dataset choices, and analyses? 4. How does the captured community by the proposed VEPM compare to community patterns extracted from existing community detection methods? 5. Are there any questions or suggestions regarding the usage of generated edge weights in GNNs, the claims of state-of-the-art performance, temperature scaling values, and visualizations?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work aims at modeling the community structure of graphs via the latent variable of a generative model, which captures which nodes are assigned to which communities, and then leverages such information in the message passing of GNNs, all of which is trainable in an end-to-end fashion with a variational inference framework. In particular, the authors model the community structure in the latent variable of the graph generative process, where they first calculate the affinity scores of nodes to the communities and then use those scores for weighting edges during the aggregation of neighboring nodes in GNNs. The authors use different GNNs for different communities and then merge representations from them to make a prediction for downstream tasks. The authors formulate the proposed community-based generative model over a variational inference framework, making the proposed VEPM trainable. The authors verify the performance of the proposed VEPM on node and graph classification tasks and also conduct analyses to see its efficacy. Strengths And Weaknesses Strengths This work models the community structure in the latent variable of the graph generative process, which is reasonable, convincing, and interesting. I like the core idea that the method captures community structures based on edge weights calculated from affinity scores of nodes to communities, which are then used for aggregating neighborhood features in GNNs. This paper is well-structured and easy to follow. Weaknesses There is relevant work [1, 2] that considers latent community (cluster) structures of nodes, which should be discussed. There is relevant work [3] that breaks down the entire training process into unsupervised training for capturing graph (edge) structures and supervised training for performing on downstream tasks, which is similar to this work's training process in Lines 182-192. The performance improvements are not significant. Specifically, in Tables 2 and 3, the proposed VEPM has high variance, and, considering the variance, the results are not significant against baselines. Also, the powerful FactorGCN model is only compared on two of the eight datasets, namely IMDB-B and MUTAG; FactorGCN may outperform the proposed VEPM on the remaining ones. The authors should evaluate on recent benchmark datasets, namely OGB [4]. The used datasets, namely Cora, Citeseer, and Pubmed for node classification and the TU datasets for graph classification, are relatively easy and somewhat outdated. The analyses on the MUTAG dataset may be problematic. This is a very small dataset with fewer than 200 graphs, compared to other datasets (e.g., the PROTEINS dataset has more than 1,000 graphs), and it is also known to show high variance due to its size. Thus, I suggest the authors analyze their model with larger datasets. [1] StructPool: Structured Graph Pooling via Conditional Random Fields. ICLR 2020. [2] Accurate Learning of Graph Representations with Graph Multiset Pooling. ICLR 2021. [3] Data Augmentation for Graph Neural Networks. AAAI 2021. [4] Open Graph Benchmark: Datasets for Machine Learning on Graphs. NeurIPS 2021. Questions Major Questions and Suggestions Firstly, see the weaknesses above. As shown in Table 5, if the proposed VEPM, including inference and generative networks, is trained jointly without pre-training, it totally fails on downstream tasks compared to other baselines. More explanation is needed for this particular behavior.
I wonder whether the communities captured by the proposed VEPM follow the community patterns extracted by existing community detection methods, for example, the METIS graph partitioning algorithm. Minor Questions and Suggestions Explaining how the generated edge weights are used in GNNs may be necessary. The form of a GNN layer in equation (1) lacks an expression of edges during neighborhood aggregation, which might be confusing for readers who are not familiar with GNNs. The claims of "state-of-the-art graph classification performance" should be toned down. The compared works are mostly from before 2021, and there are lots of works achieving remarkable performances [2, 5]. More analysis of the temperature scaling value in equation (4) would be valuable. I would like to see the changing communities of the proposed VEPM as the temperature value varies (i.e., from soft to hard community assignments and their performances). The visualization in Figure 3 is extremely hard to read in a printed version. I would like to suggest the authors visualize fewer nodes (e.g., 50 instead of the current 200). [5] Maximum Entropy Weighted Independent Set Pooling for Graph Neural Networks. arXiv 2021. Limitations The authors discuss the limitations and potential societal impact of their work in Section F of the supplementary file.
NIPS
Title Differentiable Simulation of Soft Multi-body Systems Abstract We present a method for differentiable simulation of soft articulated bodies. Our work enables the integration of differentiable physical dynamics into gradient-based pipelines. We develop a top-down matrix assembly algorithm within Projective Dynamics and derive a generalized dry friction model for soft continuum using a new matrix splitting strategy. We derive a differentiable control framework for soft articulated bodies driven by muscles, joint torques, or pneumatic tubes. The experiments demonstrate that our designs make soft body simulation more stable and realistic compared to other frameworks. Our method accelerates the solution of system identification problems by more than an order of magnitude, and enables efficient gradient-based learning of motion control with soft robots. 1 Introduction Soft articulated bodies have been studied and utilized in a number of important applications, such as microsurgery [32], underwater robots [37], and adaptive soft grippers [20]. Since the compliance of deformable materials can enable robots to operate more robustly and adaptively, soft biomimetic robots are drawing a lot of attention and have made considerable progress. A snailfish robot dives at a depth of 10,900 meters in the Mariana Trench [43]. Drones equipped with soft manipulators grasp and transmit objects with a 91.7% success rate [17]. Soft hands with pneumatic actuators are able to grasp objects of different shapes, including water bottles, eyeglasses, and sheets of cloth [12]. To enable rapid prototyping of soft robots and efficient design of control algorithms through virtual experiments, we aim to create a realistic deformable multi-body dynamics framework, in which soft articulated robots can be simulated to learn powerful control policies. Design and control of soft robots are challenging because of their nonlinear dynamics and many degrees of freedom. Differentiable physics has shown great promise for dealing with such complex problems [4, 14, 31, 68]. One possibility is to treat soft bodies as volumes that are modeled as sets of particles or finite elements [29, 16]. These methods have made great progress, but the volumetric representations are difficult to scale to large multi-body systems and are poorly suited to modeling internal skeletons. Moreover, contact handling in recent differentiable physics frameworks [51, 30] often does not comply with Coulomb's Law, which is central to plausible visual realism and correct physical behavior. In this paper, we design a powerful and accurate differentiable simulator for soft multi-body dynamics. Since our entire framework is differentiable, our method can be embedded into gradient-based optimization and learning algorithms, supporting gradient-based system identification, motion planning, and motor control. Within the simulator, we first use tetrahedral meshes to enable adaptive resolution and more accurate modeling. Next, to couple soft materials with articulated skeletons, we design a top-down matrix assembly algorithm within the local steps of Projective Dynamics [5]. For accurate contact handling, we extend and generalize a dry friction model previously developed for cloth simulation [54] to soft solids and introduce a new matrix splitting strategy to stabilize the solver.
In addition, our simulation framework incorporates actuator models widely used in robotics, including muscles [40], joint torques [75], and pneumatic actuators [12]. With the support of the articulated skeleton constraints, dry frictional contact, and versatile actuators, our novel differentiable algorithm can simulate soft articulated robots and compute gradients for a wide range of applications. The key contributions of this work are as follows. • A top-down matrix assembly algorithm within Projective Dynamics to make soft-body dynamics compatible with reduced-coordinate articulated systems (Sec. 4). • An extended and generalized dry friction model for soft solids with a new matrix splitting strategy to stabilize the solver (Sec. 5). • Analytical models of muscles, joint torques, and pneumatic actuators to enable more realistic and stable simulation results (Sec. 4.3 and Appendix C & D). • A unified differentiable framework that incorporates skeletons, contact, and actuators to enable gradient computation for learning and optimization (Sec. 6). • Experimental validation demonstrating that differentiable physics accelerates system identification and motion control with soft articulated bodies by up to orders of magnitude (Sec. 6). In the remainder of the paper, Section 2 discusses related work in deformable body simulation and differentiable physics. Section 3 reviews Projective Dynamics, explaining the high-level simulation framework and defining notation used later. Sections 4 and 5 describe how our method handles articulated skeletons and contact. Section 6 presents ablation studies and comparison results with other learning methods. Code is available on our project page: https://github.com/YilingQiao/diff_fem 2 Related Work Deformable body simulation using the Finite Element Method (FEM) plays an important role in many scientific and engineering problems [32, 20, 57]. Previous works model soft bodies using different representations and methods for specific tasks. There are several kinds of approaches for modeling body actuation. Pneumatic-based methods [9] change the rest shape to produce reaction forces. Rigid bones attached within soft materials are also used to control the motion of deformable bodies [52, 38, 18, 45]. To further simulate biologically realistic motion, it is common to apply joint torques in articulated skeletons [36]. For example, [33, 76] use articulated body dynamics to govern the motion while handling collisions using soft contact. Inspired by animals, different designs of muscle-like actuators for soft-body simulations have also been proposed [41, 1, 42]. Regarding contact modeling, spring-based penalty forces are widely used [58, 67, 27] for their simplicity. More advanced algorithms include inelastic projection [6, 24] and barrier-based repulsion [47, 48]. However, these methods do not always conform to Coulomb's frictional law. We opt for a more realistic dry frictional model [44] to better handle collisions. Projective Dynamics [5] is widely used for its robustness and efficiency for implicit time integration. It has been extended to model muscles [59], rigid skeletons [46], realistic materials [53], and accurate contact forces [54]. Our method also adopts this framework for faster and more stable time integration.
In contrast to the aforementioned methods using Projective Dynamics, our algorithm is the first to enable joint actuation in articulated skeletons together with a generalized dry frictional contact for soft body dynamics. Differentiable physics has recently been successfully applied to solve control and optimization problems. There are several types of physically-based simulations that are differentiable, including rigid bodies [10, 11], soft bodies [30, 29, 39, 19, 16], cloth [51, 62], articulated bodies [21, 74, 63], and fluids [71, 73, 28, 70]. Differentiable physics simulation can be used for system identification [68, 26], control [69], and design [14, 49]. For differentiable soft-body dynamics, Du et al. [16] propose a system for FEM simulation represented by a volume mesh. This system has been applied to robot design [56] and control [15]. Different from this work [16], our approach uses tetrahedral meshes with adaptive resolution to model finer detail and scale better to complex articulated bodies. Hu et al. [30] and Krishna Murthy et al. [39] use source code transformation to differentiate the dynamics, but their contact model does not follow Coulomb's law. Geilinger et al. [19] simulate soft materials attached to rigid parts with penalty-based contact forces, but their use of maximal coordinates makes it difficult to incorporate joint torques. In comparison, our model has realistic contact handling, versatile actuators, and skeletons with joint constraints, thereby enabling our method to simulate a much wider range of soft, multi-body systems not possible before. There are other works that approximate physical dynamics using neural networks [50, 2, 72, 65]. These methods are inherently differentiable but cannot guarantee physical correctness outside the training distribution. 3 Soft Body Simulation Using Projective Dynamics We use Projective Dynamics [5] to model the physics of soft, multi-body systems because its efficient implicit time integration makes the simulation more stable. We briefly introduce Projective Dynamics below. The dynamics model can be written as $$M(q_{n+1} - q_n - h v_n) = h^2(\nabla E(q_{n+1}) + f_{ext}), \quad (1)$$ with $M$ being the mass matrix, $q_n$ the vertex locations at frame $n$, $h$ the time step, $v_n$ the velocity, $E$ the potential energy due to deformation, and $f_{ext}$ the external forces. We choose implicit Euler for stable time integration; the state $q_{n+1}$ can then be solved by $$q_{n+1} = \arg\min_q \tfrac{1}{2h^2} (q - s_n)^\top M (q - s_n) + E(q), \quad (2)$$ where $s_n = q_n + h v_n + h^2 M^{-1} f_{ext}$. Projective Dynamics reduces the computational cost by introducing an auxiliary variable $p$ to represent the internal energy as the Euclidean distance between $p$ and $q$ after a projection $G$: $$E(q) = \sum_i \tfrac{\omega_i}{2} \|G_i q - p_i\|_F^2, \quad (3)$$ where $\omega$ is a scalar weight, and $E$ contains internal energy from different sources, such as deformations, actuators, and constraints. The computation of $\omega$, $G$, and $p$ depends on the form of the energy. Combining all energy components into Eq. 2, we have $$q_{n+1} = \arg\min_q \tfrac{1}{2} q^\top \big(\tfrac{M}{h^2} + L\big) q - q^\top \big(\tfrac{M}{h^2} s_n + Jp\big), \quad (4)$$ where $L = \sum_i \omega_i G_i^\top G_i$ and $J = \sum_i \omega_i G_i^\top S_i$, and $S_i$ is the selector matrix. Since this is a quadratic optimization without constraints, its optimal point is given by the solution of the following linear system: $$\big(\tfrac{M}{h^2} + L\big) q_{n+1} = \tfrac{M}{h^2} s_n + Jp \quad (5)$$ Note that the estimation of $p$ is based on the current values of $q$. Therefore we need to alternate between computing $p$ and solving $q$ until convergence. Luckily, both steps are fast and easy to solve: solving q in Eq.
5 is easy because it is a simple linear system, and computing $p$ can be fast because it is local and can be parallelized. We show the generic Projective Dynamics method in Alg. 1.
Algorithm 1 Soft body simulation using Projective Dynamics
1: $q_1$ ← initial condition
2: for $t = 1$ to $n-1$ do
3:   while not converged do
4:     Compute $p_i$ for all energy components $i$ according to $q$ (Local step)
5:     Solve $q$ in Eq. 5 according to $p_i$ (Global step)
6:   end while
7: end for
4 Articulated Skeletons Skeletons are indispensable for vertebrate animals and modern articulated robots. However, adding skeletons into soft body simulation is challenging. First, the rigid bones cannot simply be replaced by soft materials with large stiffness, since this can make the system unstable and unrealistic. Second, joint connections between bones must be physically valid at all times, and thus also cannot be modeled as soft constraints. Moreover, the formulation should support joint actuation as torques to drive the multi-body system like an articulated robot. Li et al. [45] proposed a method for passive articulated soft-body simulation with ball joint constraints. We extend this method to enable rotational/prismatic joints, torque actuation, and precise joint connections without introducing extra constraints. 4.1 Rigid Body System When integrated with hard skeletons, vertices on the rigid parts can be expressed as $$q_k = Q T^r_k V_k, \quad (6)$$ where $Q = (I \;\; 0)$ is the projection from homogeneous coordinates to 3D coordinates, $T^r_k \in \mathbb{R}^{4\times 4}$ the rigid transformation matrix, and $V_k \in \mathbb{R}^{4\times m_k}$ the rest-pose homogeneous coordinates of the $k$-th rigid body. During the global step in Projective Dynamics, we do not directly solve for $T^r_k$, but for the increment $\Delta z_k$ in its degrees of freedom (DoF), to avoid nonlinearity. This formulation restricts the changes of the rigid vertices to the tangent space yielded by the current $T^r_k$: $$q_k^{i+1} = q_k^i + \Delta q_k^i \approx q_k^i + \tfrac{\partial q_k^i}{\partial z_k} \Delta z_k^i, \quad (7)$$ where $q_k^i$ are the vertex locations of the $k$-th body in the $i$-th iteration step, and $z_k$ is the variable defining the DoF of the $k$-th rigid body, including the rotation and the translation. The nonrigid part of the vertices can also be integrated with this formulation simply with $\tfrac{\partial q^i}{\partial z} = I$. Let $B = \tfrac{\partial q^i}{\partial z}$ be the Jacobian of the concatenated variables. Eq. 4 can be rewritten as $$\Delta z^i = \arg\min_{\Delta z} \tfrac{1}{2} \Delta z^\top B^\top \big(\tfrac{M}{h^2} + L\big) B \Delta z + \Delta z^\top B^\top \Big(\big(\tfrac{M}{h^2} + L\big) q^i - \big(\tfrac{M}{h^2} s_n + Jp\big)\Big) \quad (8)$$ After solving $\Delta z^i$, the new vertex states $q^{i+1}$ are derived from the new rigid transformation matrix $T^{r\prime}_k$ using Eq. 6, which is subsequently computed in the local step, discussed below. Local step. The variable $\Delta z_k^i$ is composed of the increment of rotation $\omega_k^i$ and translation $l_k^i$ based on the current transformation $T^r_k$: $$\Delta q_k^i = \big[-[q_k^i]_\times \;\; I\big] \begin{bmatrix} \omega_k^i \\ l_k^i \end{bmatrix}. \quad (9)$$ Here $[q_k^i]_\times$ is defined as the vertical stack of the cross-product matrices of all vertices in the $k$-th rigid body. During the local step, we compute the SVD of the new transformation matrix after integrating $\omega_k^i$ and $l_k^i$, $$T_k = \begin{bmatrix} I + [\omega_k^i]_\times & 0 \\ 0 & 1 \end{bmatrix} T^r_k + \begin{bmatrix} 0 & l_k^i \\ 0 & 0 \end{bmatrix} = U \Sigma V^\top, \quad (10)$$ and restrict it to SO(3) to obtain the new rigid transformation, $T^{r\prime}_k = U V^\top$. The local step of Projective Dynamics for a single rigid body is the same as in [46]. However, we propose Eq. 7 to generalize the coupling to kinematic trees [55] with precise and actuated articulation.
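The SO(3) restriction in Eq. 10 is the standard SVD-based projection onto the nearest rotation. The sketch below illustrates it in NumPy; it is not the paper's implementation, and the determinant fix (flipping a singular vector when $\det(UV^\top) < 0$) is an assumption added to guarantee a proper rotation rather than a reflection.

```python
import numpy as np

def project_to_rigid(T):
    """Project the rotational block of a 4x4 transform onto SO(3)
    via SVD, keeping the translation column (cf. Eq. 10)."""
    U, _, Vt = np.linalg.svd(T[:3, :3])
    R = U @ Vt
    if np.linalg.det(R) < 0.0:      # reflect back to a proper rotation
        U[:, -1] *= -1.0
        R = U @ Vt
    Tp = np.eye(4)
    Tp[:3, :3] = R                  # restricted rotation, R R^T = I
    Tp[:3, 3] = T[:3, 3]            # translation is kept as-is
    return Tp
```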
4.2 Top-down Matrix Assembly for Articulated Body Systems

The articulated-body formulation is similar to the rigid-body one, except that the transformation matrix is now chained:

T_k^r = \prod_{u \in U_k} A_u,    (11)

where U_k contains all ancestors of the k-th link (inclusive), and A_u is the local transformation matrix defined by joint u. For rigid bodies, the vertex locations of a body depend only on the body's own DoF variables. In an articulated system, however, they are also affected by the body's ancestors. Therefore, B changes from a block-diagonal matrix to a block lower-triangular matrix if the rigid-body vertices are ordered by their kinematic-tree depth. To compute the matrix B, consider a link u and one of its non-root ancestors v. By the definitions in Sec. 4.1, the corresponding block of B is

B_{u,v} = \frac{\partial T_u^r V_u}{\partial z_v} = Q P_v \frac{\partial A_v}{\partial z_v} S_{v,u} V_u,    (12)

where P_v is the prefix product of the local transformation matrices along the chain from the root to v (exclusive), and S_{v,u} is the suffix product from v to u. In the boundary case u = v, the formulation becomes

B_{u,u} = Q P_u \frac{\partial A_u}{\partial z_u} V_u.    (13)

When v is the root, which has the same DoFs as a free rigid body, the results of Sec. 4.1 simplify the formulation to

B_{u,root} = Q \begin{bmatrix} -[q_u]_\times & I \end{bmatrix}.    (14)

Computing Eq. 12 requires the matrix products P_v and S_{v,u} for every link chain (v, u) in the tree. A straightforward approach would incur O(N^3) complexity, where N is the number of links. By utilizing the kinematic tree and conducting the computation in top-down order, the complexity can be reduced to O(N^2), which is optimal. The key observation is that the prefix and suffix products can be computed recursively:

P_v = P_{v'} A_{v'},    (15)
S_{v',u} = A_v S_{v,u},    (16)

where v' is the parent link of v. When we traverse the kinematic tree in depth-first order, each prefix product is obtained in O(1), and the suffix products are accumulated as we iterate along the path back to the root. Algorithm 2 shows the matrix assembly, invoked from the root node:

Algorithm 2: Matrix assembly for the articulated system
1: Input: tree link u
2: Compute P_u using Eq. 15
3: v <- u
4: while v is not root do
5:   Compute S_{v,u} using Eq. 16
6:   Compute B_{u,v} using Eq. 12
7:   v <- parent(v)
8: end while
9: Compute B_{u,root} using Eq. 14
10: for each child s of u do
11:   Solve link s recursively
12: end for

The transformation matrix A of a joint and its Jacobian depend on the joint type; this is derived further in Appendix C.

4.3 Articulated Joint Actuation

Eq. 8 is a quadratic optimization, so the optimal \Delta z^i is given by the linear system

H \Delta z^i = k,    (17)

where H = B^\top \left( \frac{M}{h^2} + L \right) B and k = -B^\top \left( \left( \frac{M}{h^2} + L \right) q^i - \left( \frac{M}{h^2} s_n + J p \right) \right). Reordering the vertices into deformable and rigid sets yields the partitioning

\begin{bmatrix} H_d & H_c^\top \\ H_c & H_r \end{bmatrix} \begin{bmatrix} \Delta z_d^i \\ \Delta z_r^i \end{bmatrix} = \begin{bmatrix} k_d \\ k_r \end{bmatrix},    (18)

where the subscripts d and r denote the deformable and rigid parts, respectively. Joint actuation can be added directly to k_r, since the linear system is analogous to the basic formulation Ma = f, whose right-hand side is the sum of forces and torques. The formulation of pneumatic and muscle actuators can be found in Appendix D.
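Before moving on to contact, the recursion of Alg. 2 can be sketched over an explicit kinematic tree as below. The sketch assumes single-DoF joints (so \partial A_u / \partial z_u is one 4x4 matrix per link) and omits the special root block of Eq. 14; the `Link` fields are hypothetical names of ours, not the paper's data structures.

```python
import numpy as np
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Link:
    A: np.ndarray            # 4x4 local joint transform A_u
    dA: np.ndarray           # 4x4 derivative of A_u w.r.t. its scalar DoF
    V: np.ndarray            # 4xm rest-pose homogeneous vertices of the link
    parent: Optional[int] = None
    children: list = field(default_factory=list)

Q = np.hstack([np.eye(3), np.zeros((3, 1))])  # drop the homogeneous row

def assemble(u, links, P, B):
    """Fill row u of the block lower-triangular Jacobian B (Eqs. 12-16)."""
    par = links[u].parent
    # prefix product P_u = P_{u'} A_{u'} (Eq. 15); identity at the root
    P[u] = np.eye(4) if par is None else P[par] @ links[par].A
    B[(u, u)] = Q @ P[u] @ links[u].dA @ links[u].V          # diagonal block, Eq. 13
    S = links[u].A                                           # suffix product S_{parent(u),u}
    v = par
    while v is not None:                                     # walk back toward the root
        B[(u, v)] = Q @ P[v] @ links[v].dA @ S @ links[u].V  # off-diagonal block, Eq. 12
        S = links[v].A @ S                                   # grow the suffix, Eq. 16
        v = links[v].parent
    for c in links[u].children:                              # depth-first descent
        assemble(c, links, P, B)
```

Each link visits only its own ancestors, so the total work over all N links is O(N^2), matching the complexity argument above.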
5 Contact Modeling

We handle contact using Coulomb's frictional law via a Jacobi-style iteration. To compute the post-collision velocities, we split the left-hand side of Eq. 5 into the diagonal mass matrix M and the constraint matrix h^2 L, and move the latter to the right-hand side:

M v^{i+1} = f - h^2 L v^i + \xi^i,    (19)

where f = M s_n - (M + h^2 L) q_n + h^2 J p, and the contact force \xi^i is determined from f - h^2 L v^i (the current momentum) so as to enforce non-penetration and static/sliding friction. The idea is to enforce Coulomb's law at every iteration, which is ensured by solving for v^{i+1} using the inverse of the diagonal matrix M; as long as the solver converges, the final v and \xi conform to the frictional law. This method works well for cloth contact [54], but it cannot be applied directly to soft bodies: a solid continuum is much stiffer than a thin sheet, i.e., the elements of h^2 L on the right-hand side are much larger than those of M on the left-hand side, resulting in severe oscillation or even divergence of the iterative solve. We show that, to guarantee the convergence of Eq. 19, the time step h has to satisfy a certain condition:

Proposition 1. Assuming f and \xi are fixed, Eq. 19 converges if the time step h satisfies

h^2 < \frac{\rho}{24 \sqrt{3} \, T \mu \sum_{k=1}^{3} \| q_k - q_0 \|_2^2},    (20)

where \rho is the density, \mu the stiffness, T the number of tetrahedra, and q_0, ..., q_3 the vertex positions of a tetrahedron.

Details of the proof can be found in Appendix A. In the setting of our experiments, where T \approx 1000, \mu \approx 3 \times 10^5, \|q_k - q_0\|_2 \approx 10^{-2}, and \rho \approx 1, we would need h < 1/1934 to ensure convergence, which is too strict for the simulation to be useful in general applications.

Splitting scheme. We make this method compatible with soft-body dynamics by introducing a new splitting scheme. Eq. 19 is reformulated as

(M + h^2 D) v^{i+1} = f - h^2 (L - D) v^i + \xi^i,    (21)

where D contains the diagonal of L. Our key observations are that (a) the diagonal of L is necessary and sufficient to stabilize the Jacobi iteration, and (b) adding extra diagonal elements to the left-hand side does not break Coulomb's friction law. We show in Appendix B that, under the same assumptions as Proposition 1, our method is guaranteed to converge regardless of the size of h. This accelerates the simulation, since larger time steps mean faster computation. We also note that the new splitting scheme does not modify the behavior of the collision response: the fixed point of Eq. 19 is the same as that of Eq. 21, and thus Coulomb's law is still satisfied at convergence.
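A minimal sketch of the stabilized fixed-point solve follows; the per-node Coulomb projection of [54] is not reproduced here, so `friction` is a placeholder of ours.

```python
import numpy as np

def contact_iteration(M, L, f, v0, friction, h, iters=100):
    """Fixed-point contact solve with the splitting of Eq. 21.

    Keeping only M on the left (Eq. 19) diverges for stiff solids unless h
    is tiny (Prop. 1); moving the diagonal D of L to the left stabilizes
    the iteration for any h without changing the fixed point.
    """
    D = np.diag(np.diag(L))
    lhs = M + h**2 * D                       # diagonal, trivially invertible
    v = v0.copy()
    for _ in range(iters):
        xi = friction(f - h**2 * L @ v)      # Coulomb projection of the momentum
        v = np.linalg.solve(lhs, f - h**2 * (L - D) @ v + xi)
    return v
```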
6 Experiments

In this section, we first introduce our implementation and then report ablation studies that demonstrate the importance of skeletons and collision contacts in soft-body dynamics. Subsequently, we use the gradients computed by our method to perform system identification; specifically, we estimate the physical parameters of bridges. Finally, we perform gradient-based learning of grasping and motion planning on robots with various actuators, including a pneumatic gripper, an octopus with muscles, and a skeleton-driven fish. Our method converges more than an order of magnitude faster than reinforcement learning and derivative-free baselines.

6.1 Implementation

Our simulator is written in C++, the learning algorithms are implemented in PyTorch [60], and Pybind [34] is used as the interface. We run our experiments on two desktops, one with an Intel Xeon W-2123 CPU @ 3.6 GHz and the other with an Intel i9-10980XE @ 3.0 GHz. For differentiation, the numerical data structure in our simulator is templatized and integrated with the C++ Eigen library, so that our method can conveniently interoperate with autodiff tools to differentiate the dynamics. Our method can also run in pure C++ to perform forward simulation. Our implementation refers to the open-source code of [59] (Apache-2.0), [54] (GNU GPL v3.0), and [45] (MPL2); more details can be found in our code in the supplement.

To improve memory efficiency, we introduce a checkpointing scheme [7] into our pipeline. Instead of storing the entire simulation history, we store only the system's state at each step. During the backward pass, we reload the saved state vector and recompute the intermediate variables before backpropagation. This strategy saves a large portion of the memory compared to the brute-force implementation. We conducted an experiment comparing the memory consumption with and without this checkpointing scheme; the results are reported in Table 1, with CppAD [3] used to differentiate the simulation. In this experiment, we simulate a bridge and estimate its material properties, as shown in Figure 3(a). The results show that the memory footprint of the baseline scales linearly with simulation length, while our checkpointing scheme keeps memory consumption nearly constant.
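The checkpointing logic is independent of the autodiff backend. In schematic Python it amounts to the following, where `step` and `grad_step` are placeholders for the simulator's forward and adjoint kernels:

```python
def backward_with_checkpoints(x0, controls, step, grad_step, dL_dxN):
    """Backpropagate through a rollout while storing only per-step states.

    Forward: save one checkpoint per step (a state vector, not a full
    autodiff tape). Backward: reload the checkpoint for step t, redo the
    step to rebuild its intermediates, then apply the adjoint.
    """
    states = [x0]
    for u in controls:                        # forward pass with checkpoints
        states.append(step(states[-1], u))
    grad_x, grads_u = dL_dxN, []
    for t in reversed(range(len(controls))):  # backward pass, one recompute per step
        grad_x, g_u = grad_step(states[t], controls[t], grad_x)
        grads_u.append(g_u)
    return grad_x, grads_u[::-1]
```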
6.2 Ablation Study

Skeleton constraints. Controlling soft characters via skeletons is natural and convenient: vertebrate animals are soft, yet they are driven by piecewise-rigid skeletons. Our simulator supports skeletons and joint torques within soft bodies; this ablation study compares alternative designs with ours. In this experiment, a Baymax model [13] in its T-pose is released above the ground, as shown in Figure 1. We embed 5 bones inside Baymax (4 in the arms and legs, 1 in the torso). When Baymax falls to the ground, we also apply torques at its shoulders so that it lifts its arms to a target Y-pose. More details of the setting and qualitative results can be found in Appendix E and the supplementary video. Three metrics, summarized in Figure 1, are used to measure realism; they are averaged over 5 repetitions with different initial positions and velocities. For comparison, we simulate a 'No skeleton' Baymax without the support of rigid bones; its bone error is non-zero because of deformation. The Baymax in a differentiable rigid-body simulator [62] is entirely rigid, so its body-length error is non-zero. Li et al. [45] simulate the 'Passive' skeleton case, in which there is no joint actuation and the joint angles cannot be adjusted to the desired configuration. We also run DiffTaichi-MPM [30] by converting the mesh model to a point-based MPM representation; 'MPM' has no skeleton, so its errors are high, and since the arms detach from the body, the joint error is NaN. Our method attains the highest degree of physical realism and correctness overall.

Contact handling. Good contact handling is critical for simulating multi-body systems that interact with their environment. In this experiment, we throw a 3D soft ball against a 2D thin sheet. The metrics are the penetration error and indicators of vertical compression and horizontal stretching: zero penetration error is ideal, and 'Yes' for compression/stretching indicates that the simulator models the deformation of the soft ball correctly. The metrics are averaged over 5 experiments with different initial positions and velocities. The dry frictional contact model of Ly et al. [54] does not model the deformation of soft solids, and penetration can occur when the resolutions of the ball and the cloth differ greatly, due to the nodal collision-handling scheme. The rigid differentiable simulator of Qiao et al. [62] prevents interpenetration, but the ball remains rigid. MPM [30] models the deformation of both the ball and the cloth, but the cloth is torn apart by the ball and penetration cannot be quantified. In contrast, our method handles collisions accurately, avoiding interpenetration while correctly simulating the deformation of the ball.

6.3 Applications

System identification. Determining the material parameters of deformable objects can be challenging, given their high dimensionality and complex dynamics. In this experiment, we use our differentiable simulator to identify the material properties of each finite-element cell within a soft body. As shown in Figure 3, there are two bridges with unknown materials: a suspension bridge with both ends fixed and the entire bridge soft, and an arch bridge with three piers attached to the ground. Given that the displacement of the barycenter under gravity, relative to the rest pose, is \Delta x = 8 cm, we estimate the Young's modulus and Poisson's ratio of each finite-element cell in the bridge. The loss function is the distance from the actual barycenter to the target. The suspension bridge has n = 668 cells and the arch bridge n = 2911, so the number of unknowns is 2n. We compare our method with four derivative-free methods (CMA-ES [25], LEAP [8], BOBYQA [61], and Nelder-Mead [66]); each experiment is repeated 5 times with different random seeds. As shown in the figure, our method converges in ~10 iterations, while the others fail to converge even after 100 iterations, indicating that derivative-free methods become too inefficient in this high-dimensional setting to reach a reasonable solution. Using the gradients provided by our method, common gradient-based algorithms quickly reach the target configuration.

Motion planning. Controlling the motion of deformable bodies is challenging because of their flexible shapes. In this experiment, summarized in Figure 4, the task is to control robots with different actuator types. In general, given an initial state X_0, a control policy \phi_{\theta_1}(\cdot), and material parameters \theta_2, the simulator generates the trajectory of states at all time steps t: \{ sim_t(X_0, \phi_{\theta_1}(\cdot), \theta_2) \} = \{ X_1, X_2, \ldots, X_n \}. If we want the system to reach a target state X_{target} at the end of the simulation, we can define the objective

L(X_0, \theta_1, \theta_2) = \| sim_N(X_0, \phi_{\theta_1}(\cdot), \theta_2) - X_{target} \|^2,

where the optimization variables are \theta_1 and/or \theta_2. We compare our method with reinforcement-learning algorithms (SAC [23], SQL [22], and PPO [64]) and with CMA-ES, the best derivative-free optimization method from the previous experiment. We also tried MBPO [35], but found that it consumes too much memory to finish any test. All RL methods use the negative of the loss as the reward.

The pneumatic gripper in Figure 4(a) has 56 pneumatic cells in its four arms and is attached to an (invisible) drone, as in [17]. The pneumatic activation controls the volume of a tetrahedron: when the cells inflate, the arms move inward and hold the ball more tightly. We control the pneumatic activation as well as the movement of the drone to move the ball from the start (0, 0, 0) to the target (0, 0.3, 0) in 50 steps. The loss is the distance from the actual position to the target position.
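Concretely, one plausible setup (ours; the paper's exact training loop may differ) optimizes an open-loop control sequence by backpropagating through the differentiable rollout:

```python
import torch

def plan(sim_step, x0, x_target, n_steps, dim_u, lr=1e-2, episodes=100):
    """Gradient-based motion planning through a differentiable simulator.

    sim_step(x, u) is a placeholder for one differentiable frame of the
    simulator as exposed to PyTorch; theta_1 is the raw control sequence.
    """
    u = torch.zeros(n_steps, dim_u, requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(episodes):
        x = x0
        for t in range(n_steps):
            x = sim_step(x, u[t])                 # differentiable rollout
        loss = (x - x_target).pow(2).sum()        # ||X_n - X_target||^2
        opt.zero_grad()
        loss.backward()                           # gradients flow through the dynamics
        opt.step()
    return u.detach()
```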
Our method converges in 10 episodes, while CMA-ES and PPO converge gradually in 200 and 500 episodes, respectively.

The muscle-driven octopus in Figure 4(b) has 8 legs, each with 2 muscles inside. It moves forward by actuating its muscles, pushed by the drag and thrust forces induced by the water on its surface [59]. The octopus starts at (0, 0, 0), and the target location is (-0.4, 0.8, -0.4). We set the objective to be the distance between the current location and the desired location. The simulation is 400 steps long, and the control input at each step is 64-dimensional, so in total there are 64 x 400 = 25600 variables to optimize. Our method converges in 50 episodes, while the other methods fail to converge within 500 episodes.

The fish with an embedded skeleton in Figure 4(c) has 6 bones: 3 in the body, 2 in the fins, and 1 in the tail. The hydrodynamics in this environment is the same as in the octopus experiment. The fish starts at (0, 0, 0) at step 1, and the target location at step 100 is (0, 0, 0.15). The objective function is the distance from the actual location to the target location. At each step, a torque vector of size 5 specifies the joint actuation levels, so the optimization variable has 500 dimensions in total. Our method with gradient-based optimization converges in roughly 50 episodes, while the others do not converge even after 500 episodes.

In summary, gradient-free optimization methods and RL algorithms meet substantial difficulties on high-dimensional problems such as soft multi-body systems. Even when the action space is as small as in the gripper case, RL methods still fail to optimize the policy rapidly. By exploiting the gradients of the simulation, simple gradient-based optimization outperforms the other algorithms. We hope this work may also inspire improvements in RL algorithms for such high-dimensional problems.

7 Conclusion

We have developed a differentiable physics framework for soft, articulated bodies with dry frictional contact. To make the simulation realistic and easy to use, we designed a recursive matrix assembly algorithm and a generalized dry frictional model for soft continua, together with a new matrix splitting strategy. Integrated with joint, muscle, and pneumatic actuators, our method can simulate a variety of soft robots. Using our differentiable physics to enable gradient-based optimization, our method converges more than an order of magnitude faster than the baselines and other existing alternatives.

There are limitations in our contact handling and soft-body dynamics. Although our algorithm is more extensive and general than existing differentiable-physics algorithms, and our implementation handles the most common contact configuration, vertex-face collisions, edge-edge penetrations can still be missed in corner cases. Moreover, the Projective Dynamics pipeline limits the energy to the form E = \|Gq - p\|^2; some nonlinear material models (e.g., neo-Hookean) are not captured by this framework, and new models for differentiable physics will be required to handle nonlinear and heterogeneous materials. For future work, we aim to add edge-edge collision handling to the Projective Dynamics pipeline. The techniques in [53] can be used to incorporate additional material types, and GPU or other parallel implementations can boost the performance of gradient computation.
Acknowledgements. This research is supported in part by the Army Research Office, the National Science Foundation, and the Dr. Barry Mersky and Capital One Endowed Professorships.
1. What is the main contribution of the paper regarding differentiable simulators for soft bodies?
2. What are the strengths of the proposed approach, particularly in its ability to enable faster system identification and control?
3. Do you have any concerns or questions regarding the experiments and their explanations in the paper?
4. How does the paper situate itself in context with relevant prior works, and what are some areas that could be improved in terms of referencing and discussing different options?
5. Are there any areas in the paper where the writing could be expanded upon or clarified, such as in the explanation of the mathematical formulation or the system identification experiment?
6. What are the novel aspects of the paper beyond the reimplementation of well-understood ideas in mechanics?
Summary Of The Paper

This paper takes on the timely problem of devising differentiable simulators for soft bodies, with a focus on articulated bodies where the soft nature of the material comes with structure in the way different parts are connected together - hence requiring formulation of the equations of motion in the sense of multi-body dynamics. The core contribution is to formulate the simulation problem in a Projective Dynamics framework and to exploit the implicit matrix structure in the equations of motion to achieve computational speedups. This has been implemented in C++ libraries and demonstrated in a few different simulation settings, including a Baymax soft articulated body, a deformable ball falling on cloth, and a bridge - each representing a different form of structure. The main outcome is that this form of differentiable simulation enables faster system identification and control.

Review

I think this is a timely piece of work, and it is well motivated. The authors have situated their work in context by referencing the relevant prior works - this is a large area (i.e., there are lots of papers on multi-body simulation, FEM, and so on), so I am content that they have referenced significant representative works to discuss different options. This includes recent interest in neural approximations, which do not always guarantee physical realism. Also, I find the derivations to be sound. That said, I find that the following could be better developed and explained in the paper:

(1) The experiment section is interesting in that the authors seem to have made a good attempt at comprehensively selecting different scenarios and teasing apart the reasons why their approach is better. However, some areas are lacking in detail. For instance, a crucial claim is that "this approach" outperforms RL and derivative-free optimization. However, what is "this approach" in specific detail? The paper is almost entirely about the equations of motion and ways of simulating multi-body dynamics up to Sec. 4/5. Then, we do not have an optimization problem written down to say what the approach is for using the simulator to synthesise motion. I can imagine doing this by a one-shot trajectory optimization over a horizon, I can do this by MPC, and so on - what is actually done (I suspect the former, but is this stated)? In this sense, the point is not that this method outperforms RL but that a policy synthesis method with this simulator as the model performs better than policy synthesis without. Is this a fair characterisation of the claims (which I believe, but it isn't clearly stated)?

(2) Likewise, I find the system identification experiment interesting, because this is not just identification of a bulk modulus for the whole object but element-by-element Young's modulus and Poisson's ratio estimation - which is nice. It is good to see that the proposed method does a good job of quickly identifying this. However, here again I am a little bit confused - "our method" is shown to do better than derivative-free estimation, but what is the framing of the optimization or error using the projective dynamics and friction models developed in Sec. 4? In this sense, the writing of Sec. 6 seems incomplete and could benefit from revisiting.

(3) Without rederiving everything from first principles, I have studied the mathematical formulation - which seems sound. However, a few steps could be expanded on.
For instance, after Eq. 7, we are told that the non-rigid part of the vertices can be integrated by setting \partial q^i / \partial z to the identity. However, non-rigid dynamics will require additional consistency conditions, e.g., rod or rope dynamics - how does that factor into all this?

(4) Lastly, it isn't entirely clear from the writing which parts of the paper are entirely original and which parts are a competent reimplementation of well-understood ideas in mechanics. For instance, ideas like splitting schemes have a long tradition in applied mathematics, and the authors have drawn on this - which is great. However, it would be helpful to flag in the prose where the novel departures are (beyond the list at the end of Sec. 1).
1. What is the focus and contribution of the paper on soft multi-body differentiable physics simulation?
2. What are the strengths of the proposed framework, particularly in its application to machine learning and robotics?
3. Do you have any concerns regarding the suitability of the paper for NeurIPS, considering its focus on simulation?
4. Can you elaborate on the differences between the proposed algorithm for computing the Jacobian and previous methods used in rigid-body simulation?
5. How does the novel matrix splitting strategy impact the experimental and visual effects of the simulations? Are there any experiments included in the paper to support these claims?
6. What is the efficiency of the proposed algorithm compared to other methods, such as MPM implemented with Taichi? Is it suitable for real-time applications in reinforcement learning (RL) research?
Summary Of The Paper

This paper proposes a soft multi-body differentiable physics simulation framework based on Projective Dynamics. The authors propose a top-down matrix assembly algorithm for rigid-body simulation and a new matrix splitting strategy for a generalized dry friction model for soft continua. The experiments demonstrate that their designs make soft-body simulation more stable and realistic compared to other frameworks, and that the gradients help accelerate system identification and motion control of soft robots.

Review

Differentiation of Projective Dynamics and simulation of an articulated soft-body system are all noticeable features of the proposed framework and would benefit the machine learning and robotics communities. The authors also provide code in the supplement. I would be happy to see this at the NeurIPS conference. I am only a little bit worried about whether this paper fits NeurIPS's general audience: the paper focuses more on the simulation side, which might be better evaluated at other relevant venues such as SIGGRAPH or soft-robotics conferences. Here are some comments:

Algorithm 2 computes the Jacobian of each link. I wonder what the differences are between the proposed algorithm and the usual way of computing Jacobians in rigid-body simulation?

Section 5 mentions a novel matrix splitting method and shows that in theory it converges much faster. I wonder how this will affect the experimental/visual results. I hope the authors can include experiments to justify this contribution.

What is the efficiency of the proposed algorithm? How does it compare with MPM implemented in Taichi? Is it fast enough for RL research?
NIPS
Title Differentiable Simulation of Soft Multi-body Systems Abstract We present a method for differentiable simulation of soft articulated bodies. Our work enables the integration of differentiable physical dynamics into gradient-based pipelines. We develop a top-down matrix assembly algorithm within Projective Dynamics and derive a generalized dry friction model for soft continuum using a new matrix splitting strategy. We derive a differentiable control framework for soft articulated bodies driven by muscles, joint torques, or pneumatic tubes. The experiments demonstrate that our designs make soft body simulation more stable and realistic compared to other frameworks. Our method accelerates the solution of system identification problems by more than an order of magnitude, and enables efficient gradient-based learning of motion control with soft robots. 1 Introduction Soft articulated bodies have been studied and utilized in a number of important applications, such as microsurgery [32], underwater robots [37], and adaptive soft grippers [20]. Since the compliance of deformable materials can enable robots to operate more robustly and adaptively, soft biomimetic robots are drawing a lot of attention and have made considerable progress. A snailfish robot dives at a depth of 10,900 meters in the Mariana Trench [43]. Drones equipped with soft manipulators grasp and transmit objects with a 91.7% success rate [17]. Soft hands with pneumatic actuators are able to grasp objects of different shapes, including water bottles, eyeglasses, and sheets of cloth [12]. To enable rapid prototyping of soft robots and efficient design of control algorithms through virtual experiments, we aim to create a realistic deformable multi-body dynamics framework, in which soft articulated robots can be simulated to learn powerful control policies. Design and control of soft robots are challenging because of their nonlinear dynamics and many degrees of freedom. Differentiable physics has shown great promise to deal with such complex problems [4, 14, 31, 68]. One possibility is to treat soft bodies as volumes that are modeled as sets of particles or finite elements [29, 16]. These methods have made great progress, but the volumetric representations are difficult to scale to large multi-body systems and are poorly suited to modeling internal skeletons. Moreover, contact handling in recent differentiable physics frameworks [51, 30] often does not comply with Coulomb’s Law, which is central to plausible visual realism and correct physical behavior. In this paper, we design a powerful and accurate differentiable simulator for soft multi-body dynamics. Since our entire framework is differentiable, our method can be embedded with gradient-based optimization and learning algorithms, supporting gradient-based system identification, motion planning, and motor control. Within the simulator, we first use tetrahedral meshes to enable adaptive resolution and more accurate modeling. Next, to couple soft materials with articulated skeletons, we design a ∗Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). top-down matrix assembly algorithm within the local steps of Projective Dynamics [5]. For accurate contact handling, we extend and generalize a dry friction model previously developed for cloth simulation [54] to soft solids and introduce a new matrix splitting strategy to stabilize the solver. 
In addition, our simulation framework incorporates actuator models widely used in robotics, including muscles [40], joint torques [75], and pneumatic actuators [12]. With the support of the articulated skeleton constraints, dry frictional contact, and versatile actuators, our novel differentiable algorithm can simulate soft articulated robots and compute gradients for a wide range of applications. The key contributions of this work are as follows.

• A top-down matrix assembly algorithm within Projective Dynamics to make soft-body dynamics compatible with reduced-coordinate articulated systems (Sec. 4).
• An extended and generalized dry friction model for soft solids with a new matrix splitting strategy to stabilize the solver (Sec. 5).
• Analytical models of muscles, joint torques, and pneumatic actuators to enable more realistic and stable simulation results (Sec. 4.3 and Appendix C & D).
• A unified differentiable framework that incorporates skeletons, contact, and actuators to enable gradient computation for learning and optimization (Sec. 6).
• Experimental validation demonstrating that differentiable physics accelerates system identification and motion control with soft articulated bodies by up to orders of magnitude (Sec. 6).

In the remainder of the paper, Section 2 discusses related work in deformable body simulation and differentiable physics. Section 3 reviews Projective Dynamics as a preliminary, explaining the high-level simulation framework and defining the notation used later. Sections 4 and 5 describe how our method handles articulated skeletons and contact. Section 6 presents ablation studies and comparisons with other learning methods. Code is available on our project page: https://github.com/YilingQiao/diff_fem

2 Related Work

Deformable body simulation using the Finite Element Method (FEM) plays an important role in many scientific and engineering problems [32, 20, 57]. Previous works model soft bodies using different representations and methods for specific tasks. There are several kinds of approaches for modeling body actuation. Pneumatic-based methods [9] change the rest shape to produce reaction forces. Rigid bones attached within soft materials are also used to control the motion of deformable bodies [52, 38, 18, 45]. To further simulate biologically realistic motion, it is common to apply joint torques in articulated skeletons [36]. For example, [33, 76] use articulated body dynamics to govern the motion while handling collisions using soft contact. Inspired by animals, different designs of muscle-like actuators for soft-body simulation have also been proposed [41, 1, 42]. Regarding contact modeling, spring-based penalty forces are widely used [58, 67, 27] for their simplicity. More advanced algorithms include inelastic projection [6, 24] and barrier-based repulsion [47, 48]. However, these methods do not always conform to Coulomb's frictional law. We opt for a more realistic dry frictional model [44] to better handle collisions. Projective Dynamics [5] is widely used for its robustness and efficiency in implicit time integration. It has been extended to model muscles [59], rigid skeletons [46], realistic materials [53], and accurate contact forces [54]. Our method also adopts this framework for faster and more stable time integration.
In contrast to the aforementioned methods using Projective Dynamics, our algorithm is the first to enable joint actuation in articulated skeletons together with a generalized dry frictional contact model for soft body dynamics.

Differentiable physics has recently been successfully applied to solve control and optimization problems. There are several types of physically-based simulations that are differentiable, including rigid bodies [10, 11], soft bodies [30, 29, 39, 19, 16], cloth [51, 62], articulated bodies [21, 74, 63], and fluids [71, 73, 28, 70]. Differentiable physics simulation can be used for system identification [68, 26], control [69], and design [14, 49]. For differentiable soft-body dynamics, Du et al. [16] propose a system for FEM simulation represented by a volume mesh. This system has been applied to robot design [56] and control [15]. Different from this work [16], our approach uses tetrahedral meshes with adaptive resolution to model finer detail and scale better to complex articulated bodies. Hu et al. [30] and Krishna Murthy et al. [39] use source code transformation to differentiate the dynamics, but their contact model does not follow Coulomb's law. Geilinger et al. [19] simulate soft materials attached to rigid parts with penalty-based contact forces, but their use of maximal coordinates makes it difficult to incorporate joint torques. In comparison, our model has realistic contact handling, versatile actuators, and skeletons with joint constraints, thereby enabling our method to simulate a much wider range of soft, multi-body systems not possible before. There are other works that approximate physical dynamics using neural networks [50, 2, 72, 65]. These methods are inherently differentiable but cannot guarantee physical correctness outside the training distribution.

3 Soft Body Simulation Using Projective Dynamics

We use Projective Dynamics [5] to model the physics of soft, multi-body systems because its efficient implicit time integration makes the simulation more stable. We briefly introduce Projective Dynamics below. The dynamics model can be written as

$$M(q_{n+1} - q_n - h v_n) = h^2\left(-\nabla E(q_{n+1}) + f_{\text{ext}}\right), \quad (1)$$

with $M$ being the mass matrix, $q_n$ the vertex locations at frame $n$, $h$ the time step, $v_n$ the velocity, $E$ the potential energy due to deformation, and $f_{\text{ext}}$ the external forces. We choose implicit Euler for stable time integration; the state $q_{n+1}$ can then be solved for as

$$q_{n+1} = \arg\min_q \frac{1}{2h^2}(q - s_n)^\top M (q - s_n) + E(q), \quad (2)$$

where $s_n = q_n + h v_n + h^2 M^{-1} f_{\text{ext}}$. Projective Dynamics reduces the computational cost by introducing an auxiliary variable $p$ to represent the internal energy as the Euclidean distance between $p$ and $q$ after a projection $G$:

$$E(q) = \sum_i \frac{\omega_i}{2}\, \lVert G_i q - p_i \rVert_F^2, \quad (3)$$

where $\omega_i$ is a scalar weight, and $E$ contains internal energy from different sources, such as deformations, actuators, and constraints. The computation of $\omega$, $G$, and $p$ depends on the form of the energy. Combining all energy components into Eq. 2, we have

$$q_{n+1} = \arg\min_q \frac{1}{2} q^\top\left(\frac{M}{h^2} + L\right) q - q^\top\left(\frac{M}{h^2} s_n + J p\right), \quad (4)$$

where $L = \sum_i \omega_i G_i^\top G_i$, $J = \sum_i \omega_i G_i^\top S_i$, and $S_i$ is the selector matrix. Since this is a quadratic optimization without constraints, its optimal point is given by the solution of the following linear system:

$$\left(\frac{M}{h^2} + L\right) q_{n+1} = \frac{M}{h^2} s_n + J p. \quad (5)$$

Note that the estimation of $p$ is based on the current values of $q$. Therefore we need to alternate between computing $p$ and solving for $q$ until convergence.
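To make this alternation concrete, the following is a minimal NumPy sketch of one implicit time step for a toy mass-spring system; the spring rest-length projection stands in for the paper's more general energy terms, and all function and variable names are illustrative rather than taken from the released code:

```python
import numpy as np

def pd_step(q_n, v_n, h, m, edges, rest_len, w, f_ext, iters=10):
    """One implicit-Euler step of Projective Dynamics for a toy mass-spring system.
    q_n, v_n: (n, 3) positions and velocities; m: (n,) lumped masses.
    A real system assembles L and J once from the per-term G_i, S_i of Eq. 4."""
    n = len(q_n)
    s = q_n + h * v_n + (h * h) * f_ext / m[:, None]       # s_n from Eq. 2
    # L for springs is a weighted graph Laplacian (G_i takes the edge difference).
    L = np.zeros((n, n))
    for (a, b), wi in zip(edges, w):
        L[a, a] += wi; L[b, b] += wi
        L[a, b] -= wi; L[b, a] -= wi
    A = np.diag(m) / h**2 + L                              # constant; prefactor in practice
    q = q_n.copy()
    for _ in range(iters):
        Jp = np.zeros_like(q)
        for (a, b), wi, r in zip(edges, w, rest_len):      # local step (parallelizable)
            d = q[a] - q[b]
            p = r * d / (np.linalg.norm(d) + 1e-12)        # project the edge to rest length
            Jp[a] += wi * p
            Jp[b] -= wi * p
        rhs = (m / h**2)[:, None] * s + Jp                 # right-hand side of Eq. 5
        q = np.linalg.solve(A, rhs)                        # global step
    v = (q - q_n) / h
    return q, v
```

Because $M/h^2 + L$ is constant across iterations (and, for fixed topology, across time steps), practical implementations prefactorize it once, which is a major source of the efficiency of Projective Dynamics.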
Luckily, both steps are fast and easy to solve: solving for $q$ in Eq. 5 is easy because it is a simple linear system, and computing $p$ can be fast because it is local and can be parallelized. We show the generic Projective Dynamics method in Alg. 1.

Algorithm 1 Soft body simulation using Projective Dynamics
1: $q_1 \leftarrow$ initial condition
2: for $t = 1$ to $n-1$ do
3:   while not converged do
4:     Compute $p_i$ for all energy components $i$ according to $q$ (Local step)
5:     Solve for $q$ in Eq. 5 according to $p_i$ (Global step)
6:   end while
7: end for

4 Articulated Skeletons

Skeletons are indispensable for vertebrate animals and modern articulated robots. However, adding skeletons into soft body simulation is challenging. First, the rigid bones cannot simply be replaced by soft materials with large stiffness, since this can make the system unstable and unrealistic. Second, joint connections between bones must be physically valid at all times, and thus also cannot be modeled as soft constraints. Moreover, the formulation should support joint actuation via torques to drive the multi-body system like an articulated robot. Li et al. [45] proposed a method for passive articulated soft-body simulation with ball joint constraints. We extend this method to enable rotational/prismatic joints, torque actuation, and precise joint connections without introducing extra constraints.

4.1 Rigid Body System

When integrated with hard skeletons, vertices on the rigid parts can be expressed as

$$q_k = Q\, T^r_k V_k, \quad (6)$$

where $Q = (I \;\; 0)$ is the projection from homogeneous coordinates to 3D coordinates, $T^r_k \in \mathbb{R}^{4\times 4}$ the rigid transformation matrix, and $V_k \in \mathbb{R}^{4\times m_k}$ the rest-pose homogeneous coordinates of the $k$th rigid body. During the global step in Projective Dynamics, we do not directly solve for $T^r_k$, but for the increment $\Delta z_k$ in its degrees of freedom (DoF), to avoid nonlinearity. This formulation restricts the changes of the rigid vertices to the tangent space yielded by the current $T^r_k$:

$$q_k^{i+1} = q_k^i + \Delta q_k^i \approx q_k^i + \frac{\partial q_k^i}{\partial z_k} \Delta z_k^i, \quad (7)$$

where $q_k^i$ are the vertex locations of the $k$th body in the $i$th iteration step, and $z_k$ is the variable defining the DoF of the $k$th rigid body, including the rotation and the translation. The nonrigid part of the vertices can also be integrated with this formulation simply with $\frac{\partial q^i}{\partial z} = I$. Let $B = \frac{\partial q^i}{\partial z}$ be the Jacobian of the concatenated variables. Eq. 4 can be rewritten as

$$\Delta z^i = \arg\min_{\Delta z} \frac{1}{2} \Delta z^\top B^\top\left(\frac{M}{h^2} + L\right) B \Delta z + \Delta z^\top B^\top\left(\left(\frac{M}{h^2} + L\right) q^i - \left(\frac{M}{h^2} s_n + J p\right)\right). \quad (8)$$

After solving for $\Delta z^i$, the new vertex states $q^{i+1}$ are derived from the new rigid transformation matrix $T^{r\prime}_k$ using Eq. 6, which is subsequently computed in the local step, discussed below.

Local step. The variable $\Delta z_k^i$ is composed of the increment of rotation $\omega_k^i$ and translation $l_k^i$ based on the current transformation $T^r_k$:

$$\Delta q_k^i = \left[\, -[q_k^i]_\times \;\; I \,\right] \begin{bmatrix} \omega_k^i \\ l_k^i \end{bmatrix}. \quad (9)$$

Here $[q_k^i]_\times$ is defined as the vertical stack of the cross-product matrices of all vertices in the $k$th rigid body. During the local step, we compute the SVD of the new transformation matrix after integrating $\omega_k^i$ and $l_k^i$,

$$T_k = \begin{bmatrix} I + [\omega_k^i]_\times & 0 \\ 0 & 1 \end{bmatrix} T^r_k + \begin{bmatrix} 0 & l_k^i \\ 0 & 0 \end{bmatrix} = U \Sigma V^\top, \quad (10)$$

and restrict it to SO(3) to obtain the new rigid transformation, $T^{r\prime}_k = U V^\top$. The local step of Projective Dynamics for a single rigid body is the same as in [46]. However, we propose Eq. 7 to generalize the coupling to kinematic trees [55] with precise and actuated articulation.
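As a concrete illustration of this local step, here is a small NumPy sketch of Eq. 10 for a single body; the helper name and the reflection guard are our additions, not part of the paper's code:

```python
import numpy as np

def local_step_rigid(T_r, omega, l):
    """Integrate the DoF increment (omega, l) into the 4x4 transform T_r and
    project the result back to a valid rigid transformation (Eq. 10, sketch)."""
    W = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])     # cross-product matrix [omega]_x
    T = np.eye(4)
    T[:3, :3] = (np.eye(3) + W) @ T_r[:3, :3]      # linearized rotation update
    T[:3, 3] = (np.eye(3) + W) @ T_r[:3, 3] + l    # translation update
    U, _, Vt = np.linalg.svd(T[:3, :3])            # restrict the rotation block to SO(3)
    if np.linalg.det(U @ Vt) < 0:                  # guard against a reflection
        U[:, -1] *= -1
    T[:3, :3] = U @ Vt                             # T_r' = U V^T
    return T
```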
4.2 Top-down Matrix Assembly for Articulated Body Systems

The articulated body system formulation is similar to the rigid body one, except that the transformation matrix is now chained:

$$T^r_k = \prod_{u \in U_k} A_u, \quad (11)$$

where $U_k$ contains all ancestors of the $k$th link (inclusive), and $A_u$ is the local transformation matrix defined by joint $u$. For rigid bodies, the vertex locations of a body depend only on the body's own DoF variables. In the articulated system, however, they are also affected by the body's ancestors. Therefore, $B$ changes from a block-diagonal matrix to a block lower-triangular matrix if the rigid body vertices are ordered by their kinematic tree depth. To compute the matrix $B$, we consider a link $u$ with one of its non-root ancestors $v$. By the definitions in Sec. 4.1, the corresponding block in matrix $B$ is

$$B_{u,v} = \frac{\partial T^r_u V_u}{\partial z_v} = Q P_v \frac{\partial A_v}{\partial z_v} S_{v,u} V_u, \quad (12)$$

where $P_v$ is the prefix product of the local transformation matrices of the link chain from the root to $v$ (exclusive), and $S_{v,u}$ is the suffix product from $v$ to $u$. In the boundary case where $u = v$, the formulation becomes

$$B_{u,u} = Q P_u \frac{\partial A_u}{\partial z_u} V_u. \quad (13)$$

When $v$ is the root and thus has the same DoFs as a rigid body, using the results from Sec. 4.1, the formulation can be simplified to

$$B_{u,\text{root}} = Q \left[\, -[q_u]_\times \;\; I \,\right]. \quad (14)$$

Computing Eq. 12 requires the matrix products $P_v$ and $S_{v,u}$ of a link chain $(v, u)$ in the tree. Straightforward approaches here could result in $O(N^3)$ complexity, where $N$ is the number of links. However, by utilizing the kinematic tree and conducting the computation in top-down order, the complexity can be reduced to $O(N^2)$, which is optimal. The key observation is that the prefix and suffix products can be computed recursively:

$$P_v = P_{v'} A_{v'}, \quad (15)$$
$$S_{v',u} = A_v S_{v,u}, \quad (16)$$

assuming $v'$ is the parent link of $v$. When we traverse the kinematic tree in depth-first order, the prefix product can be computed in $O(1)$. The suffix product is also obtained as we iterate along the path back to the root. Algorithm 2 shows the matrix assembly method starting from the root node; a code sketch of this traversal is given at the end of Section 4.

Algorithm 2 Matrix Assembly for the Articulated System
1: Input: tree link $u$
2: Compute $P_u$ using Eq. 15
3: $v \leftarrow u$
4: while $v$ is not root do
5:   Compute $S_{v,u}$ using Eq. 16
6:   Compute $B_{u,v}$ using Eq. 12
7:   $v \leftarrow$ parent($v$)
8: end while
9: Compute $B_{u,\text{root}}$ using Eq. 14
10: for $s$ in descendants($u$) do
11:   Solve link $s$ recursively
12: end for

The transformation matrix $A$ and the Jacobian of a joint depend on the joint type. This is further derived in Appendix C.

4.3 Articulated Joint Actuation

Eq. 8 is a quadratic optimization, so the optimal $\Delta z^i$ is given by the linear system

$$H \Delta z^i = k, \quad (17)$$

where $H = B^\top\left(\frac{M}{h^2} + L\right) B$ and $k = -B^\top\left(\left(\frac{M}{h^2} + L\right) q^i - \left(\frac{M}{h^2} s_n + J p\right)\right)$. Reordering the vertices into sets of deformable and rigid ones yields the following partitioning of the matrix:

$$\begin{bmatrix} H_d & H_c^\top \\ H_c & H_r \end{bmatrix} \begin{bmatrix} \Delta z_d^i \\ \Delta z_r^i \end{bmatrix} = \begin{bmatrix} k_d \\ k_r \end{bmatrix}, \quad (18)$$

where $*_d$ and $*_r$ represent the deformable and the rigid parts, respectively. The joint actuation can be directly added to $k_r$, since the linear system is analogous to the basic formulation $Ma = f$, where the right-hand side represents the sum of forces and/or torques. The formulation of pneumatic and muscle actuators can be found in Appendix D.
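As promised in Sec. 4.2, here is a minimal Python sketch of the traversal in Algorithm 2. It assumes a single scalar DoF per joint, and for brevity treats every ancestor via Eq. 13, whereas the paper uses the dedicated free-body form of Eq. 14 for the root; the Link class and all names are illustrative:

```python
import numpy as np

class Link:
    """Illustrative kinematic-tree node: A is the 4x4 local joint transform,
    dA is its derivative dA/dz (one scalar DoF for simplicity), and V holds
    the 4 x m rest-pose homogeneous vertex coordinates."""
    def __init__(self, A, dA, V, children=()):
        self.A, self.dA, self.V, self.children = A, dA, V, list(children)

Q = np.hstack([np.eye(3), np.zeros((3, 1))])   # drop the homogeneous row

def assemble(u, P_u, ancestors, blocks):
    """Depth-first assembly of the blocks B_{u,v} (Eqs. 12-13, sketch).
    P_u is the prefix product for link u (Eq. 15); `ancestors` lists
    (v, P_v) pairs from the root down to parent(u). The suffix product
    S_{v,u} V_u is grown while walking back toward the root (Eq. 16)."""
    blocks[(u, u)] = Q @ P_u @ u.dA @ u.V            # Eq. 13 (case v = u)
    S = u.A @ u.V                                    # S_{parent(u),u} V_u
    for v, P_v in reversed(ancestors):               # walk from parent(u) to the root
        blocks[(u, v)] = Q @ P_v @ v.dA @ S          # Eq. 12
        S = v.A @ S                                  # Eq. 16: S_{v',u} = A_v S_{v,u}
    for c in u.children:                             # recurse top-down, O(N^2) overall
        assemble(c, P_u @ u.A, ancestors + [(u, P_u)], blocks)

# usage: blocks = {}; assemble(root, np.eye(4), [], blocks)
```

Each link walks back only along its own ancestor chain, so the total work is proportional to the number of (link, ancestor) pairs, matching the $O(N^2)$ bound stated above.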
5 Contact Modeling

We handle contact according to Coulomb's frictional law via a Jacobi-style iteration. To compute the velocities after collisions, we split the left-hand side of Equation 5 into the diagonal mass matrix $M$ and the constraint matrix $h^2 L$, and move the latter to the right-hand side:

$$M v^{i+1} = f - h^2 L v^i + \xi^i, \quad (19)$$

where $f = M s_n - (M + h^2 L) q_n + h^2 J p$, and the contact force $\xi^i$ is determined from $f - h^2 L v^i$ (the current momentum) to enforce non-penetration and static/sliding friction. The idea here is to enforce Coulomb's law at every iteration, which is ensured by solving for $v^{i+1}$ using the inverse of the diagonal matrix $M$. As long as the solver converges in the end, the final $v$ and $\xi$ will conform to the frictional law. This method works well for cloth contact [54], but cannot be directly applied to soft bodies, because a solid continuum is much stiffer than thin sheets, i.e., the elements in $h^2 L$ on the right-hand side are much larger than those in $M$ on the left-hand side, resulting in severe oscillation or even divergence during the iterative solve. We show that in order to guarantee the convergence of Equation 19, the time step $h$ has to satisfy a certain condition:

Proposition 1. Assuming $f$ and $\xi$ are fixed, Equation 19 converges if the time step $h$ satisfies

$$h^2 < \frac{\rho}{24\sqrt{3}\, T \mu \sum_{k=1}^{3} \lVert q_k - q_0 \rVert_2^2}, \quad (20)$$

where $\rho$ is the density, $\mu$ is the stiffness, $T$ is the number of tetrahedra, and the $q_k$ are vertex positions.

Details of the proof can be found in Appendix A. Using the setting in our experiments, where $T \approx 1000$, $\mu \approx 3 \times 10^5$, $\lVert q_k - q_0 \rVert_2 \approx 10^{-2}$, and $\rho \approx 1$, we would need to set $h < 1/1934$ to ensure convergence, which is too strict for the simulation to be useful in general applications.

Splitting scheme. We make this method compatible with soft body dynamics by introducing a new splitting scheme. Eq. 19 is reformulated as

$$(M + h^2 D) v^{i+1} = f - h^2 (L - D) v^i + \xi^i, \quad (21)$$

where $D$ is the diagonal of $L$. Our key observation is that (a) the diagonal of $L$ is necessary and sufficient to stabilize the Jacobi iteration, and (b) adding extra diagonal elements to the left-hand side does not break the Coulomb friction law. We show in Appendix B that, under the same assumptions as Proposition 1, our method is guaranteed to converge no matter how large $h$ is. This improvement accelerates the simulation since larger time steps mean faster computation. We also note that the new splitting scheme does not modify the behavior of the collision response, because the convergence point of Eq. 19 is the same as that of Eq. 21, and thus Coulomb's law is still satisfied at convergence.
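A minimal sketch of the stabilized iteration of Eq. 21 might look as follows; the Coulomb projection is abstracted into a callback, the state is a flattened DoF vector, and all names are ours rather than the paper's:

```python
import numpy as np

def contact_iterations(m, L, f, v0, h, coulomb_project, iters=200, tol=1e-8):
    """Jacobi-style contact solve with the splitting of Eq. 21 (sketch).
    m: (n,) lumped mass per DoF; L: (n, n) constraint matrix; v0: initial guess.
    `coulomb_project` maps the current momentum to a contact force xi that
    enforces non-penetration and static/sliding friction."""
    d = np.diag(L).copy()                 # D: diagonal of L
    off = L - np.diag(d)                  # L - D: off-diagonal part
    lhs = m + h * h * d                   # M + h^2 D stays diagonal, so inversion is cheap
    v = v0.copy()
    for _ in range(iters):
        momentum = f - h * h * (off @ v)
        xi = coulomb_project(momentum)    # contact force from the current momentum
        v_new = (momentum + xi) / lhs
        if np.linalg.norm(v_new - v) < tol:
            break                         # Eqs. 19 and 21 share this fixed point
        v = v_new
    return v
```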
6 Experiments

In this section, we first introduce our implementation and then report ablation studies that demonstrate the importance of skeletons and collision contacts in soft-body dynamics. Subsequently, we use the gradients computed by our method to perform system identification; specifically, we estimate the physical parameters of bridges. Finally, we perform gradient-based learning of grasping and motion planning on robots with various actuators, including a pneumatic gripper, an octopus with muscles, and a skeleton-driven fish. Our method converges more than an order of magnitude faster than reinforcement learning and derivative-free baselines.

6.1 Implementation

Our simulator is written in C++, the learning algorithms are implemented in PyTorch [60], and Pybind [34] is used as the interface. We run our experiments on two desktops, one with an Intel Xeon W-2123 CPU @ 3.6GHz and the other with an Intel i9-10980XE CPU @ 3.0GHz.

For differentiation, the numerical data structure in our simulator is templatized and built on the C++ Eigen library, such that our method can conveniently interoperate with autodiff tools to differentiate the dynamics. Our method can also run in pure C++ to perform forward simulation. We refer to the open-source code of [59] (Apache-2.0), [54] (GNU GPL v3.0), and [45] (MPL2) in our implementation. More details can be found in our code in the supplement.

To further improve memory efficiency, we introduce a checkpointing scheme [7] into our pipeline. Instead of storing the entire simulation history, we only store the system's state at each step. During the backward pass, we reload the saved state vector and recompute the intermediate variables before backpropagation. This strategy saves a large fraction of memory compared to the brute-force implementation. We conduct an experiment to compare the memory consumption with and without this checkpointing scheme; the results are reported in Table 1. CppAD [3] is used to differentiate the simulation here. In this experiment, we simulate a bridge and estimate its material properties as shown in Figure 3(a). The results show that the memory footprint of the baseline scales linearly with simulation length, while our checkpointing scheme keeps memory consumption nearly constant.
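Our implementation realizes this in C++ with CppAD, but the same idea can be sketched in a few lines of PyTorch; `step_fn` is a placeholder for one differentiable simulation step, and the names are illustrative:

```python
import torch
from torch.utils.checkpoint import checkpoint

def rollout(state0, params, step_fn, n_steps):
    """Differentiable rollout with per-step checkpointing (sketch of [7]).
    Only the states that cross step boundaries are stored; everything inside
    step_fn is recomputed from them during the backward pass."""
    state = state0
    for _ in range(n_steps):
        # use_reentrant=False is the recommended mode in recent PyTorch releases
        state = checkpoint(step_fn, state, params, use_reentrant=False)
    return state

# usage sketch:
# loss = ((rollout(x0, theta, step_fn, 400) - x_target) ** 2).sum()
# loss.backward()   # memory stays nearly constant in the number of steps
```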
6.2 Ablation Study

Skeleton constraints. Controlling soft characters via skeletons is natural and convenient: vertebrate animals are soft, but are driven by piecewise-rigid skeletons. Our simulator supports skeletons and joint torques within soft bodies. This ablation study compares other designs with ours. In this experiment, a Baymax model [13] in its T-pose is released from above the ground, as shown in Figure 1. We embed 5 bones inside Baymax (4 in the arms and legs, and 1 in the torso). When Baymax falls to the ground, we also apply torques at its shoulders so it can lift its arms to a target Y-pose. More details of the setting and qualitative results can be found in Appendix E and the supplementary video. Three metrics, summarized in Figure 1, are used to measure realism. The metrics are averaged over 5 repetitions with different initial positions and velocities. For comparison, we simulate a 'No skeleton' Baymax without the support of rigid bones. Its bone error is non-zero because of the deformation. The Baymax in a differentiable rigid body simulator [62] is rigid, so the body length error is non-zero. Li et al. [45] simulate the 'Passive' skeleton case, where there is no joint actuation and joint angles cannot be adjusted to the desired configuration. We also run DiffTaichi-MPM [30] by converting the mesh model to the point-based MPM representation. 'MPM' does not have skeletons, so the errors are high. The arms also detach from the body, so the joint error is NaN. Our method attains the highest degree of physical realism and correctness overall.

Contact handling. Good contact handling is critical for simulating multi-body systems that interact with their environment. In this experiment, we throw a 3D soft ball against a 2D thin sheet. Metrics in this experiment are penetration error and indicators of vertical compression and horizontal stretching. Zero penetration error is ideal. 'Yes' for compression/stretching indicates that the simulator can model the deformation of the soft ball correctly. The metrics are averaged over 5 experiments with different initial positions and velocities. The dry frictional contact model of Ly et al. [54] does not model the deformation of soft solids, and there can be penetration when the resolutions of the ball and cloth differ considerably, due to the nodal collision handling scheme. The rigid differentiable simulator of Qiao et al. [62] can prevent interpenetration, but the ball remains rigid. MPM [30] can model the deformation of both the ball and the cloth, but the cloth is torn apart by the ball and penetration cannot be quantified. In contrast, our method accurately handles collision to avoid interpenetration and correctly simulates the deformation of the ball.

6.3 Applications

System identification. Determining the material parameters of deformable objects can be challenging given their high dimensionality and complex dynamics. In this experiment, we use our differentiable simulator to identify the material properties of each finite element cell within the soft body. As shown in Figure 3, there are two bridges with unknown materials: a suspension bridge with both ends fixed and the entire bridge being soft, and an arch bridge that has three piers attached to the ground. Given that the movement of the barycenter under gravity, compared to its rest pose, is $\Delta x = 8$ cm, we estimate the Young's modulus and Poisson's ratio of each finite element cell in the bridge. The loss function is the distance from the actual barycenter to the target. The suspension bridge has $n = 668$ cells and the arch one has $n = 2911$ cells; the number of unknowns is $2n$. We compare our method with four derivative-free methods (CMA-ES [25], LEAP [8], BOBYQA [61], and Nelder-Mead [66]). Each experiment is repeated 5 times with different random seeds. As shown in the figure, our method converges in ∼10 iterations while the others fail to converge even after 100 iterations, indicating that derivative-free methods become too inefficient in this high-dimensional setting to converge to a reasonable solution. By making use of the gradients provided by our method, common gradient-based algorithms can quickly reach the target configuration.

Motion planning. Controlling the motion of deformable bodies is challenging due to their flexible shapes. In this experiment, summarized in Figure 4, the task is to control robots with different actuator types. In general, given an initial state $X_0$, control policy $\phi_{\theta_1}(\cdot)$, and material parameters $\theta_2$, the simulator can generate a trajectory of the states at all time steps $t$: $\{\text{sim}_t(X_0, \phi_{\theta_1}(\cdot), \theta_2)\} = \{X_1, X_2, \ldots, X_n\}$. If we want the system to reach a target state $X_{\text{target}}$ at the end of the simulation, we can define an objective function $L(X_0, \theta_1, \theta_2) = \lVert \text{sim}_N(X_0, \phi_{\theta_1}(\cdot), \theta_2) - X_{\text{target}} \rVert^2$, where the optimization variables are $\theta_1$ and/or $\theta_2$. We compare our method with Reinforcement Learning algorithms (SAC [23], SQL [22], and PPO [64]), and the best derivative-free optimization method from the previous experiment, CMA-ES. We also tried MBPO [35], but we found that this method takes too much memory and could not finish any test. All RL methods use the negative of the loss as the reward.
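To show how such an objective is used in practice, a minimal gradient-based optimization loop might look as follows; `sim_fn` stands in for a differentiable rollout through our simulator and is hypothetical here:

```python
import torch

def optimize_controls(x0, x_target, sim_fn, n_steps, act_dim,
                      lr=0.1, episodes=100):
    """Gradient-based motion planning (sketch). The per-step control tensor
    plays the role of theta_1 in the objective L defined above; sim_fn(x0,
    controls) returns the differentiable final state sim_N(X0, ...)."""
    controls = torch.zeros(n_steps, act_dim, requires_grad=True)
    opt = torch.optim.Adam([controls], lr=lr)
    for _ in range(episodes):
        opt.zero_grad()
        x_final = sim_fn(x0, controls)                # differentiable rollout
        loss = ((x_final - x_target) ** 2).sum()      # L = ||sim_N - X_target||^2
        loss.backward()                               # gradients flow through the simulator
        opt.step()
    return controls.detach()
```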
The pneumatic gripper in Figure 4(a) has 56 pneumatic cells in its four arms and is attached to an (invisible) drone as in [17]. The pneumatic activation can control the volume of a tetrahedron. When the cells inflate, the arms move inwards and hold the ball tighter. We control the pneumatic activation as well as the movement of the drone to move the ball from the start (0, 0, 0) to our target (0, 0.3, 0) in 50 steps. The loss is the distance from the actual position to the target position. Our method converges in 10 episodes, while CMA-ES and PPO gradually converge in 200 and 500 episodes, respectively.

The muscle-driven octopus in Figure 4(b) has 8 legs, each with 2 muscles inside. It moves forward by actuating the muscles, pushed by drag and thrust forces induced by the water on the octopus's surface [59]. The octopus starts at (0, 0, 0) and our target location is (−0.4, 0.8, −0.4). We set the objective to be the distance between the current location and the desired location. The simulation is 400 steps long, and the control input at each step is 64-dimensional; in total, there are 64 × 400 = 25,600 variables to optimize. Our method converges in 50 episodes, while the other methods fail to converge within 500 episodes.

The fish with an embedded skeleton in Figure 4(c) has 6 bones: 3 in its body, 2 in the fins, and 1 in the tail. The hydrodynamics in this environment is the same as in the octopus experiment. The fish starts at (0, 0, 0) in step 1, and the target location at step 100 is (0, 0, 0.15). The objective function is the distance from the actual location to the target location. At each step, a torque vector of size 5 represents the joint actuation level, so the optimization variable has 500 dimensions in total. Our method with gradient-based optimization converges in roughly 50 episodes, while the others cannot converge even after 500 episodes.

In summary, gradient-free optimization methods and RL algorithms meet substantial difficulties when tackling problems with high dimensionality, such as soft, multi-body systems. Even when the action space is as small as in the gripper case, RL methods still fail to rapidly optimize the policy. By introducing the gradients of the simulation, simple gradient-based optimization outperforms the other algorithms. We hope this work may inspire improvements in RL algorithms that tackle such high-dimensional problems.

7 Conclusion

We have developed a differentiable physics framework for soft, articulated bodies with dry frictional contact. To make the simulation realistic and easy to use, we designed a recursive matrix assembly algorithm and a generalized dry frictional model for soft continuum with a new matrix splitting strategy. Integrated with joint, muscle, and pneumatic actuators, our method can simulate a variety of soft robots. Using our differentiable physics to enable gradient-based optimization, our method converges more than an order of magnitude faster than the baselines and other existing alternatives.

There are some limitations in our contact handling and soft body dynamics. Although our algorithm is more extensive and generalized than existing differentiable physics algorithms, and our implementation handles the most common contact configuration, vertex-face collisions, edge-edge penetrations could still be missed in some corner cases. Moreover, the Projective Dynamics pipeline limits the energy to the form $E = \lVert Gq - p \rVert^2$. Some nonlinear material models (e.g., neo-Hookean) are not captured in this framework, and new models for differentiable physics will be required to handle nonlinear and heterogeneous materials. For future work, we aim to add edge-edge collision handling to the Projective Dynamics pipeline. The techniques in [53] can be used to incorporate additional material types. A GPU or other parallel implementation could be used to boost the performance of gradient computation.

Acknowledgements.
This research is supported in part by the Army Research Office, the National Science Foundation, and the Dr. Barry Mersky and Capital One Endowed Professorship.
1. What is the focus and contribution of the paper on soft-body physics simulation?
2. What are the strengths of the proposed approach, particularly in terms of its end-to-end differentiability and support for efficient system identification and control?
3. What are the weaknesses of the paper regarding its experimental evaluations and limitations?
4. Do you have any concerns about the applicability of the proposed method in real-world scenarios with different physics than the simulator?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper The paper presents a new soft-body physics simulator that is end-to-end differentiable, and shows how it supports efficient system identification and control. Experiments show the value of the simulator for a range of complex, realistic situations -- although only in simulation. The experiments also report ablations of key model components (skeletons, collision contacts). Review I am not an expert in the technical aspects of differentiable simulation or soft-body physics, so my comments should be taken with an appropriate grain of salt. But I have collaborated as a senior co-author on several projects using related simulators for AI and machine learning problems. I liked this paper a lot: the topic is important, the writing is clear, and the contribution is meaningful. The experiments are well done and well described. I can't judge originality like an expert in simulation could, but my impression is that the method is novel enough for a top conference contribution. I expect this work will be of interest to a number of different sub-communities at NeurIPS: RL, robotics, and maybe computer vision researchers interested in intuitive physics, and possibly some neuroscientists as well. The main thing I felt was missing is an evaluation of the system on scenarios that don't match the physics of the simulator: this could be either control of a simple real-world robot, or control or system identification in a simulator that has different physics than the differentiable simulator (i.e., simulating deployment of the model-based methods in the paper to real-world systems where there is some model mismatch). I don't see this as a fatal flaw, but it would have been very nice to see. Anything like this that could be added in revision might lead me to raise my score.
NIPS
Title Differentiable Simulation of Soft Multi-body Systems Abstract We present a method for differentiable simulation of soft articulated bodies. Our work enables the integration of differentiable physical dynamics into gradient-based pipelines. We develop a top-down matrix assembly algorithm within Projective Dynamics and derive a generalized dry friction model for soft continuum using a new matrix splitting strategy. We derive a differentiable control framework for soft articulated bodies driven by muscles, joint torques, or pneumatic tubes. The experiments demonstrate that our designs make soft body simulation more stable and realistic compared to other frameworks. Our method accelerates the solution of system identification problems by more than an order of magnitude, and enables efficient gradient-based learning of motion control with soft robots. 1 Introduction Soft articulated bodies have been studied and utilized in a number of important applications, such as microsurgery [32], underwater robots [37], and adaptive soft grippers [20]. Since the compliance of deformable materials can enable robots to operate more robustly and adaptively, soft biomimetic robots are drawing a lot of attention and have made considerable progress. A snailfish robot dives at a depth of 10,900 meters in the Mariana Trench [43]. Drones equipped with soft manipulators grasp and transmit objects with a 91.7% success rate [17]. Soft hands with pneumatic actuators are able to grasp objects of different shapes, including water bottles, eyeglasses, and sheets of cloth [12]. To enable rapid prototyping of soft robots and efficient design of control algorithms through virtual experiments, we aim to create a realistic deformable multi-body dynamics framework, in which soft articulated robots can be simulated to learn powerful control policies. Design and control of soft robots are challenging because of their nonlinear dynamics and many degrees of freedom. Differentiable physics has shown great promise to deal with such complex problems [4, 14, 31, 68]. One possibility is to treat soft bodies as volumes that are modeled as sets of particles or finite elements [29, 16]. These methods have made great progress, but the volumetric representations are difficult to scale to large multi-body systems and are poorly suited to modeling internal skeletons. Moreover, contact handling in recent differentiable physics frameworks [51, 30] often does not comply with Coulomb’s Law, which is central to plausible visual realism and correct physical behavior. In this paper, we design a powerful and accurate differentiable simulator for soft multi-body dynamics. Since our entire framework is differentiable, our method can be embedded with gradient-based optimization and learning algorithms, supporting gradient-based system identification, motion planning, and motor control. Within the simulator, we first use tetrahedral meshes to enable adaptive resolution and more accurate modeling. Next, to couple soft materials with articulated skeletons, we design a ∗Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). top-down matrix assembly algorithm within the local steps of Projective Dynamics [5]. For accurate contact handling, we extend and generalize a dry friction model previously developed for cloth simulation [54] to soft solids and introduce a new matrix splitting strategy to stabilize the solver. 
In addition, our simulation framework incorporates actuator models widely used in robotics, including muscles [40], joint torques [75], and pneumatic actuators [12]. With the support of the articulated skeleton constraints, dry frictional contact, and versatile actuators, our novel differentiable algorithm can simulate soft articulated robots and compute gradients for a wide range of applications. The key contributions of this work are as follows. • A top-down matrix assembly algorithm within Projective Dynamics to make soft-body dynamics compatible with reduced-coordinate articulated systems (Sec. 4). • An extended and generalized dry friction model for soft solids with a new matrix splitting strategy to stabilize the solver (Sec. 5). • Analytical models of muscles, joint torques, and pneumatic actuators to enable more realistic and stable simulation results (Sec. 4.3 and Appendix C & D). • A unified differentiable framework that incorporates skeletons, contact, and actuators to enable gradient computation for learning and optimization (Sec. 6). • Experimental validation demonstrating that differentiable physics accelerates system identification and motion control with soft articulated bodies up to orders of magnitude (Sec. 6). In the following paper, Section 2 will discuss related papers in deformable body simulation and differentiable physics. Section 3 is basically a preliminary of projective dynamics to explain the high-level simulation framework and define notations that will be used later. Section 4 and 5 are about how to deal with articulated skeleton and contact in our method. Section 6 will show the ablation studies and comparisons results with other learning methods. Code is available on our project page: https://github.com/YilingQiao/diff_fem 2 Related Work Deformable body simulation using Finite Element Method (FEM) plays an important role in many scientific and engineering problems [32, 20, 57]. Previous works model soft bodies using different representations and methods for specific tasks. There are several kinds of approaches for modeling body actuation. Pneumatic-based methods [9] change the rest shape to produce reaction forces. Rigid bones attached within soft materials are also used to control the motion of deformable bodies [52, 38, 18, 45]. To further simulate biologically realistic motion, it is common to apply joint torques in articulated skeletons [36]. For example, [33, 76] use articulated body dynamics to govern the motion while handling collisions using soft contact. Inspired by animals, different designs of muscle-like actuators for soft-body simulations were also proposed [41, 1, 42]. Regarding contact modeling, spring-based penalty forces are widely used [58, 67, 27] for their simplicity. More advanced algorithms include inelastic projection [6, 24] and barrier-based repulsion [47, 48]. However, these methods do not always conform to Coulomb’s frictional law. We opt for a more realistic dry frictional model [44] to better handle collisions. Projective Dynamics [5] is widely used for its robustness and efficiency for implicit time integration. It has been extended to model muscles [59], rigid skeletons [46], realistic materials [53], and accurate contact forces [54]. Our method also adopts this framework for faster and more stable time integration. 
In contrast to the aforementioned methods using Projective Dynamics, our algorithm is the first to enable joint actuation in articulated skeletons together with a generalized dry frictional contact for soft body dynamics. Differentiable physics has recently been successfully applied to solve control and optimization problems. There are several types of physically-based simulations that are differentiable, including rigid bodies [10, 11], soft bodies [30, 29, 39, 19, 16], cloth [51, 62], articulated bodies [21, 74, 63], and fluids [71, 73, 28, 70]. Differentiable physics simulation can be used for system identification [68, 26], control [69], and design [14, 49]. For differentiable soft-body dynamics, Du et al. [16] propose a system for FEM simulation represented by volume mesh. This system has been applied to robot design [56] and control [15]. Different from this work [16], our approach uses tetrahedral meshes with adaptive resolution to model finer detail and scale better to complex articulated bodies. Hu et al. [30], Krishna Murthy et al. [39] use source code transformation to differentiate the dynamics, but their contact model does not follow Coulomb’s law. Geilinger et al. [19] simulate soft materials attached to rigid parts with penalty-based contact force, but their use of maximal coordinates makes it difficult to incorporate joint torques. In comparison, our model has realistic contact handling, versatile actuators, and skeletons with joint constraints, thereby enabling our method to simulate a much wider range of soft, multi-body systems not possible before. There are other works that approximate physical dynamics using neural networks [50, 2, 72, 65]. These methods are inherently differentiable but cannot guarantee physical correctness outside the training distribution. 3 Soft Body Simulation Using Projective Dynamics We use Projective Dynamics [5] to model the physics of soft, multi-body systems because its efficient implicit time integration can make the simulation more stable. We briefly introduce Projective Dynamics below. The dynamics model can be written as M(qn+1 − qn − hvn) = h2(∇E(qn+1) + fext), (1) with M being the mass matrix, qn the vertex locations at frame n, h the time step, vn the velocity, E the potential energy due to deformation, and fext the external forces. We choose implict Euler for a stable time integration, then the state qn+1 can be solved by qn+1 = arg min q 1 2h2 (q− sn)>M(q− sn) + E(q), (2) where sn = qn + hvn + h2M−1fext. Projective Dynamics reduces the computational cost by introducing an auxiliary variable p to represent the internal energy as the Euclidean distance between p and q after a projection G: E(q) = ∑ i ωi 2 ‖Giq− pi‖2F , (3) where ω is a scalar weight, and E contains internal energy from different sources, such as deformations, actuators, and constraints. The computation of ω, G, and p is dependent on the form of the energy. Combining all energy components into Eq. 2, we have qn+1 = arg min q 1 2 q> ( M h2 + L ) q + q> ( M h2 sn + Jp ) , (4) where L = ∑ ωiG > i Gi and J = ∑ ωiG > i Si, and Si is the selector matrix. Since this is a quadratic optimization without constraints, its optimal point is given by the solution of the following linear system: ( M h2 + L ) qn+1 = M h2 sn + Jp (5) Note that the estimation of p is based on the current values of q. Therefore we need to alternate between computing p and solving q until convergence. Luckily, both steps are fast and easy to solve: solving q in Eq. 
5 is easy because it is a simple linear system, and computing p can be fast because it is local and can be parallelized. We show the generic Projective Dynamics method in Alg. 1. Algorithm 1 Soft body simulation using Projective Dynamics 1: x1 ← initial condition 2: for t = 1 to n− 1 do 3: while not converged do 4: Compute pi for all energy components i according to q (Local step) 5: Solve q in Eq. 5 according to pi (Global step) 6: end while 7: end for 4 Articulated Skeletons Skeletons are indispensable for vertebrate animals and nowadays articulated robots. However, adding skeletons into the soft body simulation is challenging. First, the rigid bones cannot be simply replaced by soft materials with large stiffness, since this can make the system unstable and unrealistic. Second, joint connections between bones must be physically valid at all times, and thus also cannot be modeled as soft constraints. Moreover, the formulation should support joint actuation as torques to drive the multi-body system like an articulated robot. Li et al. [45] proposed a method for passive articulated soft-body simulation with ball joint constraints. We extend this method to enable rotational/prismatic joints, torque actuation, and precise joint connections without introducing extra constraints. 4.1 Rigid Body System When integrated with hard skeletons, vertices on the rigid parts can be expressed as qk = QT r kVk, (6) where Q = (I 0) is the projection from homogeneous coordinates to 3D coordinates, Trk ∈ R4×4 the rigid transformation matrix, and Vk ∈ R4×mk the rest-pose homogeneous coordinates of the kth rigid body. During the global step in Projective Dynamics, we do not directly solve for Trk, but for the increment ∆zk in its degree-of-freedom (DoF), to avoid nonlinearity. This formulation restricts the changes of the rigid vertices to the tangent space yielded by the current Trk: qi+1k = q i k + ∆q i k ≈ qik + ∂qik ∂zk ∆zik, (7) where qik are the vertex locations of the k th body in the ith iteration step, and zk is the variable defining the DoF of the kth rigid body, including the rotation and the translation. The nonrigid part of the vertices can also be integrated with this formulation simply with ∂q i ∂z = I. Let B = ∂q i ∂z be the Jacobian of the concatenated variables. Eq. 4 can be rewritten as ∆zi = arg min ∆z 1 2 ∆z>B> ( M h2 + L ) B∆z + ∆z>B> (( M h2 + L ) qi − ( M h2 sn + Jp )) (8) After solving ∆zi, the new vertex states qi+1 are derived by the new rigid transformation matrix Tr′k using Eq. 6, which is subsequently computed in the local step, discussed below. Local step. The variable ∆zik is composed of the increment of rotation ωik and translation lik based on the current transformation Trk: ∆qik = [ −[qik]× I ] [ωik lik ] . (9) Here [qik]× is defined as the vertical stack of the cross product matrices of all vertices in the k th rigid body. During the local step, we compute the SVD of the new transformation matrix after integrating ωik and l i k, Tk = [ I + ωi∗k 0 0 1 ] Trk + [ 0 lik 0 0 ] = UΣV>, (10) and restrict it to SO(3) to obtain the new rigid transformation, Tr′k = UV >. The local step of Projective Dynamics for a single rigid body is the same as in [46]. However, we propose Eq. 7 to generalize the coupling to kinematic trees [55] with precise and actuated articulation. 
4.2 Top-down Matrix Assembly for Articulated Body Systems The articulated body system formulation is similar to the rigid body one, except that the transformation matrix is now chained: Trk = ∏ u∈Uk Au, (11) where Uk contains all ancestor of the kth link (inclusive), and Au is the local transformation matrix defined by joint u. For rigid bodies, the vertex locations of a rigid body only depend on the body’s own DoF variables. In the articulated system, however, they are also affected by the body’s ancestors. Therefore, B changes from a block diagonal matrix to a block lower triangular matrix if the rigid body vertices are ordered by their kinematic tree depth. To compute the matrix B, we consider a link u with one of its non-root ancestor v. By the definitions in Sec. 4.1, the corresponding block in matrix B is Bu,v = ∂TruVu ∂zv = QPv ∂Av ∂zv Sv,uVu, (12) where Pv is the prefix product of the local transformation matrices of the link chain from root to v (exclusive), and Sv,u is the suffix product from v to u. In boundary cases where u = v, the formulation becomes Bu,u = QPu ∂Au ∂zu Vu. (13) When v is the root and thus has the same DoFs as a rigid body, using the results from Sec. 4.1, the formulation can be simplified to Bu,root = Q [−[qu]× I] . (14) Computing Eq. 12 requires the matrix products Pv and Sv,u of a link chain (v, u) in the tree. Straightforward approaches here could result in O(N3) complexity, where N is the number of links. However, by utilizing the kinematic tree and conducting the computation in top-down order, the complexity can be reduced to O(N2), which is optimal. The key observation is that the prefix and suffix products can be computed recursively: Pv = Pv′Av′ (15) Sv′,u = AvSv,u, (16) assuming v′ is the parent link of v. When we traverse the kinematic tree in a depth-first order, the prefix product can be computed in O(1). The suffix product is also obtained as we iterate along the path back to the root. Algorithm 2 shows the matrix assembly method starting from the root node: Algorithm 2 Matrix Assembly for the Articulated System 1: Input: tree link u 2: Compute Pu using Eq. 15 3: v ← u 4: while v is not root do 5: Compute Sv,u using Eq. 16 6: Compute Bu,v using Eq. 12 7: v ← parent(v) 8: end while 9: Compute Bu,root using Eq. 14 10: for s in descendants(u) do 11: Solve link s recursively 12: end for The transformation matrix A and the Jacobian of a joint depend on the joint type. This is further derived in Appendix C. 4.3 Articulated Joint Actuation Eq. 8 is a quadratic optimization, so the optimal ∆zi is given by the linear system H∆zi = k, (17) where H = B> ( M h2 + L ) B and k = −B> (( M h2 + L ) qi − ( M h2 sn + Jp )) . Reordering the vertices into sets of deformable and rigid ones yields the following partitioning of the matrix:[ Hd H > c Hc Hr ] [ ∆zid ∆zir ] = [ kd kr ] , (18) where ∗d and ∗r represents the deformable and the rigid parts, respectively. The joint actuation can be directly added to kr since the linear system is analogous to the basic formulation Ma = f where the right hand side represents the sum of forces and/or torques. The formulation of pneumatic and muscle actuators can be found in Appendix D. 5 Contact Modeling We handle the contact using Coulomb’s frictional law via a Jacobian. 
To compute the velocities after collisions, we split the left-hand side of Equation 5 into the diagonal mass matrix M and the constraint matrix h2L, and move the latter to the right-hand side: Mvi+1 = f − h2Lvi + ξi, (19) where f = Msn − (M + h2L)qn + h2Jp and the contact force ξi is determined according to f − h2Lvi (the current momentum) to enforce non-penetration and static/sliding friction. The idea here is to enforce Coulomb’s law at every iteration, which is ensured by solving vi+1 using the inverse of a diagonal matrix M. As long as the solver converges at the end, the final v and ξ will conform to the frictional law. This method works well for cloth contacts [54], but cannot be directly applied to soft bodies, because solid continuum is much stiffer than thin sheets, i.e. the elements in h2L on the right-hand side are much larger than those in M on the left-hand side, resulting in severe oscillation or even divergence during the iterative solve. We show that in order to guarantee the convergence of Equation 19, the time step h has to satisfy a certain condition: Proposition 1. Assuming f and ξ are fixed, Equation 19 converges if the time step h satisfies h2 < ρ 24 √ 3Tµ ∑3 k=1 ‖qk − q0‖22 (20) where ρ is the density, µ is the stiffness, T is the number of tetrahedra, and qi are the vertex positions. Details of the proof can be found in Appendix A. Using the setting in our experiments, where T ≈ 1000, µ ≈ 3× 105, ‖qk−q0‖2 ≈ 10−2, and ρ ≈ 1, we would need to set h < 1/1934 in order to ensure the convergence, which is too strict for the simulation to be useful in general applications. Splitting scheme. We improve this method to be compatible with soft body dynamics by introducing a new splitting scheme. Eq. 19 is reformulated as (M + h2D)vi+1 = f − h2(L−D)vi + ξi, (21) where D are the diagonals of L. Our key observation is that (a) the diagonals of L are necessary and sufficient to stabilize the Jacobian iteration, and (b) adding extra diagonal elements to the left-hand side will not break the Coulomb friction law. We show in Appendix B that under the same assumption as Proposition 1, our method is guaranteed to converge no matter how big h is. This improvement accelerates the simulation since larger time steps mean faster computation. We also note that the new splitting scheme will not modify the behavior of the collision response because the convergence point of Eq. 19 is the same as that of Eq. 21, and thus Coulomb’s Law is still satisfied at convergence. 6 Experiments In this section, we first introduce our implementation and then report ablation studies that demonstrate the importance of skeletons and collision contacts in soft-body dynamics. Subsequently, we use the gradients computed by our method to perform system identification; specifically, we estimate the physical parameters of bridges. Finally, we perform gradient-based learning of grasping and motion planning on robots with various actuators, including a pneumatic gripper, an octopus with muscles, and a skeleton-driven fish. Our method can converge more than an order of magnitude faster than reinforcement learning and derivative-free baselines. 6.1 Implementation Our simulator is written in C++, the learning algorithms are implemented in PyTorch [60], and Pybind [34] is used as the interface. We run our experiments on two desktops, one with an Intel Xeon W-2123 CPU @ 3.6GHz and the other with an Intel i9-10980XE @ 3.0GHz, respectively. 
For differentiation, the numerical data structure in our simulator is templatized and integrated into the C++ Eigen library, such that our method can conveniently interoperate with autodiff tools to differentiate the dynamics. Our method can also run in pure C++ to perform forward simulation. We refer to the open-source code from [59] (Apache-2.0), [54] (GNU GPL v3.0), and [45] (MPL2) during our implementation. More details can be found in our code in the supplement. To further improve the memory efficiency, we introduce a checkpointing scheme [7] into our pipeline. Instead of storing the entire simulation history, we only store the system’s state in each step. During the backward pass, we reload the saved state vector and resume all the intermediate variables before the backpropagation. This strategy can save a major part of the memory, compared to the brute-force implementation. We conduct an experiment to compare the memory consumption with and without this check- pointing scheme. The results are reported in Table 1. CppAD [3] is used to differentiate the simulation here. In this experiment, we simulate a bridge and estimate its material properties as shown in Figure 3(a). The results show that the memory footprint of the baseline scales linearly with simulation length, while our checkpointing scheme keeps memory consumption nearly constant. 6.2 Ablation Study Skeleton constraints. Controlling soft characters via skeletons is natural and convenient: vertebrate animals are soft, but are driven by piecewise-rigid skeletons. Our simulator supports skeletons and joint torques within soft bodies. This ablation study compares other designs with ours. In this experiment, a Baymax model [13] in its T-pose is released from above the ground, as shown in Figure 1. We embed 5 bones inside Baymax (4 in arms and legs, and 1 in the torso). When Baymax falls to the ground, we also add torques on its shoulders so it can lift its arms to a target Y-pose. More details of the setting and qualitative results can be found in Appendix E and the supplementary video. Three metrics, summarized in Figure 1, are used to measure realism. The metrics are averaged over 5 repetitions with different initial positions and velocities. For comparison, we simulate a ‘No skeleton’ Baymax without the support of rigid bones. Its bone error is non-zero because of the deformation. The Baymax in a differentiable rigid body simulator [62] is rigid, so the body length error is non-zero. Li et al. [45] simulate the ‘Passive’ skeleton case where there is no joint actuation and joint angles cannot be adjusted to the desired configuration. We also run Difftaichi-MPM [30] by converting the mesh model to the point-based MPM representation. ‘MPM’ does not have skeletons so the errors are high. The arms also detach from the body so the joint error is NaN. Our method attains the highest degree of physical realism and correctness overall. Contact handling. Good contact handling is critical for simulating multi-body systems that interact with their environment. In this experiment, we throw a 3D soft ball against a 2D thin sheet. Metrics in this experiment are penetration error and indicators of vertical compression and horizontal stretching. Zero penetration error is ideal. ‘Yes’ for compression/stretching indicates that the simulator can model the deformation of the soft ball correctly. The metrics are averaged over 5 experiments with different initial positions and velocities. The dry frictional contact model of Ly et al. 
[54] does not model the deformation of soft solids, and there could be penetration when the resolutions of the ball and cloth differ a lot due to the nodal collision handling scheme. The rigid differentiable simulator of Qiao et al. [62] can prevent interpenetration, but the ball remains rigid. MPM [30] can model the deformation of both the ball and the cloth, but the cloth is torn apart by the ball and penetration cannot be quantified. In contrast, our method accurately handles collision to avoid interpenetration and correctly simulates the deformation of the ball. 6.3 Applications System identification. Determining the material parameters of deformable objects can be challenging given their high dimensionality and complex dynamics. In this experiment, we use our differentiable simulator to identify the material property of each finite element cell within the soft body. As shown in Figure 3, there are two bridges with unknown materials: a suspension bridge with both ends fixed and the entire bridge being soft, and an arch bridge that has three piers attached to the ground. Given that the movement of the barycenter under gravity, compared to its rest pose, is ∆x = 8cm, we estimate Young’s modulus and Poisson’s ratio of each finite element cell in the bridge. The loss function is the distance from the actual barycenter to the target. The suspension bridge has n = 668 cells and the arch one has n = 2911 cells. The number of unknowns is 2n. We compare our method with four derivative-free methods (CMA-ES [25], LEAP [8], BOBYQA [61], and Nelder-Mead [66]). Each experiment is repeated 5 times with different random seeds. As shown in the figure, our method converges in ∼10 iterations while others fail to converge even after 100 iterations, indicating that derivative-free methods in this high-dimensional setting become too inefficient to converge to a reasonable solution. By making use of the gradients provided by our method, common gradient-based algorithms can quickly reach the target configuration. Motion planning. Controlling the motion of deformable bodies is challenging due to their flexible shapes. In this experiment, summarized in Figure 4, the task is to control robots with different actuator types. In general, given an initial state X0, control policy φθ1(·), and material parameters θ2, the simulator can generate a trajectory of the states at all time steps t, {simt(X0, φθ1(·), θ2)} = {X1, X2, ..., Xn}. If we want the system to reach a target state Xtarget at the end of the simulation, we can define an objective function L(X0, θ1, θ2) = ||(simN (X0, φθ1(·), θ2)−Xtarget)||2, where the optimization variable are θ1 and/or θ2. We compare our method with Reinforcement Learning algorithms (SAC [23], SQL [22], and PPO [64]), and the best derivative-free optimization method from the last experiment, CMA-ES. We also tried MBPO [35], but we found that this method takes too much memory and could not finish any test. All RL methods use the negative of the loss as the reward. The pneumatic gripper in Figure 4(a) has 56 pneumatic cells in four arms and is attached to an (invisible) drone as in [17]. The pneumatic activation can control the volume of a tetrahedron. When the cells inflate, the arms will move inwards and hold the ball tighter. We control the pneumatic activation as well as the movement of the drone to move the ball from the start (0, 0, 0) to our target (0, 0.3, 0) in 50 steps. The loss is the distance from the actual position to the target position. 
Our method converges in 10 episodes, while CMA-ES and PPO gradually converge in 200 and 500 episodes, respectively.

The muscle-driven octopus in Figure 4(b) has 8 legs, each with 2 muscles inside. It moves forward by actuating the muscles and is pushed by drag and thrust forces induced by the water on the octopus's surface [59]. The octopus starts at (0, 0, 0) and the target location is (−0.4, 0.8, −0.4). We set the objective to be the distance between the current location and the desired location. The simulation is 400 steps long, and the control input in each step is 64-dimensional, so in total there are \(64 \times 400 = 25600\) variables to optimize. Our method converges in 50 episodes, while the other methods fail to converge in 500 episodes.

The fish with an embedded skeleton in Figure 4(c) has 6 bones: 3 in its body, 2 in the fins, and 1 in the tail. The hydrodynamics in this environment is the same as in the octopus experiment. The fish starts at (0, 0, 0) in step 1, and the target location in step 100 is (0, 0, 0.15). The objective function is the distance from the actual location to the target location. At each step, a torque vector of size 5 represents the joint actuation levels, so in total the optimization variable has 500 dimensions. Our method with gradient-based optimization converges in roughly 50 episodes, while the others cannot converge even after 500 episodes.

In summary, gradient-free optimization methods and RL algorithms meet substantial difficulties when tackling problems with high dimensionality, such as soft, multi-body systems. Even when the action space is as small as in the gripper case, RL methods still fail to rapidly optimize the policy. By introducing the gradients of the simulation, simple gradient-based optimization outperforms the other algorithms. We hope this work will inspire improvements in RL algorithms that tackle such high-dimensional problems.

7 Conclusion

Our paper has developed a differentiable physics framework for soft, articulated bodies with dry frictional contact. To make the simulation realistic and easy to use, we designed a recursive matrix assembly algorithm and a generalized dry frictional model for soft continua with a new matrix splitting strategy. Integrated with joint, muscle, and pneumatic actuators, our method can simulate a variety of soft robots. Using our differentiable physics to enable gradient-based optimization, our method converges more than an order of magnitude faster than the baselines and other existing alternatives.

There are some limitations in our contact handling and soft body dynamics. Although our algorithm is more extensive and general than existing differentiable physics algorithms, and our implementation handles the most commonly found contact configuration, vertex-face collisions, edge-edge penetrations could still be missed in some corner cases. Moreover, the Projective Dynamics pipeline limits the energy to the form \(E = \|Gq - p\|^2\). Some nonlinear material models (e.g., neo-Hookean) are not captured in this framework, and new models for differentiable physics will be required for handling nonlinear and heterogeneous materials. For future work, we aim to add edge-edge collision handling to the Projective Dynamics pipeline. The techniques in [53] can be used to incorporate additional material types. A GPU or other parallel computing implementation could be used to boost the performance of gradient computation.
Acknowledgements. This research is supported in part by the Army Research Office, the National Science Foundation, and the Dr. Barry Mersky and Capital One Endowed Professorship.
1. What is the focus and contribution of the paper regarding soft articulated bodies?
2. What are the strengths of the proposed method, particularly in terms of its technical soundness, writing quality, experiment setup, and presented results?
3. Are there any concerns or suggestions regarding the paper's content, such as the need for more clarification or motivation in certain sections?
4. How does the reviewer assess the impact and significance of the research direction addressed by the paper?
5. Do you have any questions or suggestions regarding the paper's discussion of related work and potential expansion of the results?
Summary Of The Paper Review
Summary Of The Paper
The proposed paper introduces a novel method for simulating soft articulated bodies with a differentiable solver. The key contributions of the introduced method are a top-down matrix assembly approach for projective dynamics, a generalized friction model, analytic models for muscles, and a unified framework that enables computing gradients in a differentiable manner. The method is evaluated on a number of meaningful experiments and ablation studies.

Review
Overall, this is an interesting paper that has all things going for it. The method is technically sound, the paper is well-written, and the experiment setup along with the presented results are convincing. As also stated in the paper, the research direction the paper addresses is important for robotics, and research in the direction of differentiable physics is potentially of high impact. W.r.t. the paper I specifically like the experiments on system identification and motion planning. So, in summary, I am in support of accepting this work to NeurIPS.

Specific comments:
At the beginning of Section 3 it would be helpful to first formally define the geometric representation (tetrahedral meshes). Also, at the beginning of Section 3 it is not clear why vertices are defined per frame; some introducing/clarifying sentences would be required here. It would also be helpful to discuss what implications exist due to relying on tetrahedral meshes. Can the method also be used for other (less constrained) geometric representations?
Section 3: I would also suggest moving the introduction to Projective Dynamics to the Appendix. Equations (1)-(3) are either direct copies or reformulations, and given that reference [5] is already provided, restating them here does not seem necessary. Similarly for Algorithm 1. For Equations (4)-(5) it would be helpful to describe more clearly what the differences to [5] are.
Section 4: Here it would also help to first motivate why it is necessary to add articulated skeletons prior to stating that it is challenging to add them.
L148,L149: It is not clear what is meant by '... if the rigid body vertices are ordered by their kinematic tree depth.'
L157: Please clearly state what 'the tree' is. Currently it is not defined.
L180: Missing ' ' between 'friction.The idea'
I appreciated the discussion in L306-311. It would even be interesting to further expand on the reasons as to why the algorithms perform as observed.
Given the results shown, the statement that the 'method is able to simulate a variety of soft robots' seems somewhat unsubstantiated and should be toned down.
The authors may want to add the following papers to their discussion of related work on deformable body simulation and differentiable physics:
T. Hädrich, B. Benes, O. Deussen, S. Pirk, Interactive Modeling and Authoring of Climbing Plants, Computer Graphics Forum (Proceedings of Eurographics), 2017
H. Shao, T. Kugelstadt, W. Palubicki, J. Bender, S. Pirk, D. L. Michels, Accurately Solving Physical Systems with Graph Learning, 2020
NIPS
Title
Optimization over Continuous and Multi-dimensional Decisions with Observational Data

Abstract
We consider the optimization of an uncertain objective over continuous and multi-dimensional decision spaces in problems in which we are only provided with observational data. We propose a novel algorithmic framework that is tractable, asymptotically consistent, and superior to comparable methods on example problems. Our approach leverages predictive machine learning methods and incorporates information on the uncertainty of the predicted outcomes for the purpose of prescribing decisions. We demonstrate the efficacy of our method on examples involving both synthetic and real data sets.

1 Introduction
We study the general problem in which a decision maker seeks to optimize a known objective function that depends on an uncertain quantity. The uncertain quantity has an unknown distribution, which may be affected by the action chosen by the decision maker. Many important problems across a variety of fields fit into this framework. In healthcare, for example, a doctor aims to prescribe drugs in specific dosages to regulate a patient's vital signs. In revenue management, a store owner must decide how to price various products in order to maximize profit. In online retail, companies decide which products to display for a user to maximize sales. The general problem we study is characterized by the following components:
• Decision variable: \(z \in \mathcal{Z} \subset \mathbb{R}^p\).
• Outcome: \(Y(z) \in \mathcal{Y}\). (We adopt the potential outcomes framework [20], in which \(Y(z)\) denotes the (random) quantity that would have been observed had decision \(z\) been chosen.)
• Auxiliary covariates (also called side-information or context): \(x \in \mathcal{X} \subset \mathbb{R}^d\).
• Cost function: \(c(z; y) : \mathcal{Z} \times \mathcal{Y} \to \mathbb{R}\). (This function is known a priori.)
We allow the auxiliary covariates, decision variable, and outcome to take values on multi-dimensional, continuous sets. A decision-maker seeks to determine the action that minimizes the conditional expected cost:
\(\min_{z \in \mathcal{Z}} \mathbb{E}[c(z; Y(z)) \mid X = x]. \quad (1)\)
Of course, the distribution of \(Y(z)\) is unknown, so it is not possible to solve this problem exactly. However, we assume that we have access to observational data, consisting of \(n\) independent and identically distributed observations, \((X_i, Z_i, Y_i)\) for \(i = 1, \ldots, n\). Each of these observations consists of an auxiliary covariate vector, a decision, and an observed outcome. This type of data presents two challenges that differentiate our problem from a predictive machine learning problem. First, it is incomplete: we only observe \(Y_i := Y_i(Z_i)\), the outcome associated with the applied decision, and do not observe what the outcome would have been under a different decision. Second, the decisions were not necessarily chosen independently of the outcomes, as they would have been in a randomized experiment, and we do not know how the decisions were assigned. Following common practice in the causal inference literature, we make the ignorability assumption of Hirano and Imbens [13].
Assumption 1 (Ignorability). \(Y(z) \perp Z \mid X \quad \forall z \in \mathcal{Z}\).
In other words, we assume that historically the decision \(Z\) has been chosen as a function of the auxiliary covariates \(X\); there were no unmeasured confounding variables that affected both the choice of decision and the outcome. Under this assumption, we are able to rewrite the objective of (1) as \(\mathbb{E}[c(z; Y) \mid X = x, Z = z]\).
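For completeness, the one-line derivation behind this rewriting, which combines Assumption 1 with the consistency relation \(Y = Y(Z)\):

```latex
\mathbb{E}\bigl[c(z; Y(z)) \mid X = x\bigr]
  = \mathbb{E}\bigl[c(z; Y(z)) \mid X = x,\, Z = z\bigr]   % Assumption 1
  = \mathbb{E}\bigl[c(z; Y) \mid X = x,\, Z = z\bigr].     % Y = Y(Z) on the event Z = z
```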
This form of the objective is easier to learn because it depends only on the observed outcome, not on the counterfactual outcomes. A direct approach to solving this problem is to use a regression method to predict the cost as a function of x and z and then choose z to minimize this predicted cost. If the selected regression method is uniformly consistent in z, then the action chosen by this method will be asymptotically optimal under certain conditions. (We will formalize this later.) However, this requires choosing a regression method that ensures the optimization problem is tractable. For this work, we restrict our attention to linear and tree-based methods, such as CART [7] and random forests [6], as they are both effective and tractable for many practical problems. A key issue with the direct approach is that it tries to learn too much: it tries to learn the expected outcome under every possible decision, and the level of uncertainty associated with the predicted expected cost can vary between decisions. This method can lead us to select a decision that has a small point estimate of the cost but a large uncertainty interval.

1.1 Notation
Throughout the paper, we use capital letters to refer to random quantities and lower-case letters to refer to deterministic quantities. Thus, we use \(Z\) to refer to the decision randomly assigned by the (unknown) historical policy and \(z\) to refer to a specific action. For a given auxiliary covariate vector, \(x\), and a proposed decision, \(z\), the conditional expectation \(\mathbb{E}[c(z;Y) \mid X = x, Z = z]\) means the expectation of the cost function \(c(z;Y)\) under the conditional measure in which \(X\) is fixed as \(x\) and \(Z\) is fixed as \(z\). We ignore details of measurability throughout and assume this conditional expectation is well defined. Throughout, all norms are \(\ell_2\) norms unless otherwise specified. We use \((X, Z)\) to denote vector concatenation.

1.2 Related Work
Recent years have seen tremendous interest in the area of data-driven optimization. Much of this work combines ideas from the statistics and machine learning literature with techniques from mathematical optimization. Bertsimas and Kallus [4] developed a framework that uses nonparametric machine learning methods to solve data-driven optimization problems in the presence of auxiliary covariates. They take advantage of the fact that, for many machine learning algorithms, the predictions are given by a linear combination of the training samples' target variables. Kao et al. [17] and Elmachtoub and Grigas [11] developed algorithms that make predictions tailored for use in specific optimization problems. However, they all deal with the setting in which the decision does not affect the outcome. This is insufficient for many applications, such as pricing, in which the demand for a product is clearly affected by its price. Bertsimas and Kallus [5] later studied the limitations of predictive approaches to pricing problems. In particular, they demonstrated that confounding in the data between the decision and outcome can lead to large optimality gaps if ignored. They proposed a kernel-based method for data-driven optimization in this setting, but it does not scale well with the dimension of the decision space. Misic [19] developed an efficient mixed-integer optimization formulation for problems in which the predicted cost is given by a tree ensemble model. This approach scales fairly well with the dimension of the decision space but does not consider the need for uncertainty penalization.
Another relevant area of research is causal inference (see Rosenbaum [20] for an overview), which concerns the study of causal effects from observational data. Much of the work in this area has focused on determining whether a treatment has a significant effect on the population as a whole. However, a growing body of work has focused on learning optimal, personalized treatments from observational data. Athey and Wager [1] proposed an algorithm that achieves optimal (up to a constant factor) regret bounds in learning a treatment policy when there are two potential treatments. Kallus [14] proposed an algorithm to efficiently learn a treatment policy when there is a finite set of potential treatments. Bertsimas et al. [3] developed a tree-based algorithm that learns to personalize treatment assignments from observational data. It is based on the optimal trees machine learning method [2] and has performed well in experiments. Considerably less attention has been paid to problems with a continuous decision space. Hirano and Imbens [13] introduced the problem of inference with a continuous treatment, and Flores [12] studied the problem of learning an optimal policy in this setting. Recently, Kallus and Zhou [16] developed an approach to policy learning with a continuous decision variable that generalizes the idea of inverse propensity score weighting. Our approach differs in that we focus on regression-based methods, which we believe scale better with the dimension of the decision space and avoid the need for density estimation. The idea of uncertainty penalization has been explored as an alternative to empirical risk minimization in statistical learning, starting with Maurer and Pontil [18]. Swaminathan and Joachims [21] applied uncertainty penalization to the offline bandit setting. Their setting is similar to the one we study: an agent seeks to minimize the prediction error of his/her decision, but only observes the loss associated with the selected decision. They assumed that the policy used in the training data is known, which allowed them to use inverse propensity weighting methods. In contrast, we assume ignorability, but not knowledge of the historical policy, and we allow for more complex decision spaces. We note that our approach bears a superficial resemblance to the upper confidence bound (UCB) algorithms for multi-armed bandits (cf. Bubeck et al. [8]). These algorithms choose the action with the highest upper confidence bound on its predicted expected reward. Our approach, in contrast, chooses the action with the highest lower confidence bound on its predicted expected reward (equivalently, the lowest upper confidence bound on predicted expected cost). The difference is that UCB algorithms choose actions with high upside to balance exploration and exploitation in the online bandit setting, whereas we work in the offline setting with a focus solely on exploitation.

1.3 Contributions
Our primary contribution is an algorithmic framework for observational-data-driven optimization that allows the decision variable to take values on continuous and multi-dimensional sets. We consider applications in personalized medicine, in which the decision is the dose of Warfarin to prescribe to a patient, and in pricing, in which the action is the list of prices for several products in a store.

2 Approach
In this section, we introduce the uncertainty penalization approach for optimization with observational data. Recall that the observational data consist of \(n\) i.i.d. observations, \((X_1, Z_1, Y_1), \ldots, (X_n, Z_n, Y_n)\).
For observation \(i\), \(X_i\) represents the pertinent auxiliary covariates, \(Z_i\) is the decision that was applied, and \(Y_i\) is the observed response. The first step of the approach is to train a predictive machine learning model to estimate \(\mathbb{E}[c(z;Y) \mid X = x, Z = z]\). When training the predictive model, the feature space is the Cartesian product of the auxiliary covariate space and the decision space, \(\mathcal{X} \times \mathcal{Z}\). We have several options for how to train the predictive model: we can train it to predict \(Y\), the cost \(c(Z; Y)\), or a combination of these two responses. In general, we denote the prediction of the ML algorithm as a linear combination of the cost function evaluated at the training examples,
\(\hat{\mu}(x, z) := \sum_{i=1}^{n} w_i(x, z)\, c(z; Y_i).\)
We require the predictive model to satisfy a generalization of the honesty property of Wager and Athey [23].
Assumption 2 (Honesty). The model trained on \((X_1, Z_1, Y_1), \ldots, (X_n, Z_n, Y_n)\) is honest, i.e., the weights, \(w_i(x, z)\), are determined independently of the outcomes, \(Y_1, \ldots, Y_n\).
This honesty assumption reduces the bias of the predictions of the cost. We also enforce several restrictions on the weight functions.
Assumption 3 (Weights). For all \((x, z) \in \mathcal{X} \times \mathcal{Z}\), \(\sum_{i=1}^{n} w_i(x, z) = 1\) and, for all \(i\), \(w_i(x, z) \in [0, 1/\gamma_n]\). In addition, \(\mathcal{X} \times \mathcal{Z}\) can be partitioned into \(\Gamma_n\) regions such that if \((x, z)\) and \((x, z')\) are in the same region, \(\|w(x, z) - w(x, z')\|_1 \le \alpha \|z - z'\|_2\).
The direct approach to solving (1) amounts to choosing \(z \in \mathcal{Z}\) that minimizes \(\hat{\mu}(x, z)\) for each new instance of auxiliary covariates, \(x\). However, the variance of the predicted cost, \(\hat{\mu}(x, z)\), can vary with the decision variable, \(z\). Especially with a small training sample size, the direct approach of minimizing \(\hat{\mu}(x, z)\) can give a decision with a small but highly uncertain predicted cost. We can reduce the expected regret of our action by adding a penalty term for the variance of the selected decision. If Assumption 2 holds, the conditional variance of \(\hat{\mu}(x, z)\) given \((X_1, Z_1), \ldots, (X_n, Z_n)\) is given by
\(V(x, z) := \sum_i w_i^2(x, z)\, \mathrm{Var}(c(z; Y_i) \mid X_i, Z_i).\)
In addition, \(\hat{\mu}(x, z)\) may not be an unbiased predictor, so we also introduce a term that penalizes the conditional bias of the predicted cost given \((X_1, Z_1), \ldots, (X_n, Z_n)\). Since the true cost is unknown, it is not possible to compute this bias exactly. Instead, we compute an upper bound under a Lipschitz assumption (details in Section 3):
\(B(x, z) := \sum_i w_i(x, z)\, \|(X_i, Z_i) - (x, z)\|_2.\)
Overall, given a new vector of auxiliary covariates, \(x \in \mathcal{X}\), our approach makes a decision by solving
\(\min_{z \in \mathcal{Z}} \hat{\mu}(x, z) + \lambda_1 \sqrt{V(x, z)} + \lambda_2 B(x, z), \quad (2)\)
where \(\lambda_1\) and \(\lambda_2\) are tuning parameters. As a concrete example, we can use the CART algorithm of Breiman et al. [7] or the optimal regression tree algorithm of Bertsimas and Dunn [2] as the predictive method. These algorithms work by partitioning the training examples into clusters, i.e., the leaves of the tree. For a new observation, a prediction of the response variable is made by averaging the responses of the training examples that are contained in the same leaf:
\(w_i(x, z) = \begin{cases} \frac{1}{N(x,z)}, & (X_i, Z_i) \in l(x, z), \\ 0, & \text{otherwise}, \end{cases}\)
where \(l(x, z)\) denotes the set of training examples that are contained in the same leaf of the tree as \((x, z)\), and \(N(x, z) = |l(x, z)|\). The variance term will be small when the leaf has a large number of training examples, and the bias term will be small when the diameter of the leaf is small.
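To fix ideas, here is a minimal sketch of the resulting prescription rule for a fitted tree over a discretized decision grid; `leaf_of` and `cost` are hypothetical stand-ins for the fitted tree's leaf assignment and for \(c(z; Y_i)\), and the homoscedastic variance `sigma2` is assumed known.

```python
import numpy as np

def prescribe(x, z_grid, leaf_of, data, lam1=1.0, lam2=1.0, sigma2=1.0):
    """Evaluate objective (2) over a grid of candidate decisions for one x.
    leaf_of(x, z) returns the leaf id of a fitted honest tree; cost(z, i)
    evaluates c(z; Y_i). Both are illustrative, not the paper's code."""
    X, Z, cost = data
    best_z, best_obj = None, np.inf
    for z in z_grid:
        leaf = leaf_of(x, z)
        idx = [i for i in range(len(X)) if leaf_of(X[i], Z[i]) == leaf]
        if not idx:
            continue
        w = 1.0 / len(idx)                         # CART weights within the leaf
        mu = sum(w * cost(z, i) for i in idx)      # plug-in cost estimate
        V = len(idx) * w**2 * sigma2               # sum_i w_i^2 * sigma^2
        B = sum(w * np.linalg.norm(np.append(X[i] - x, Z[i] - z)) for i in idx)
        obj = mu + lam1 * np.sqrt(V) + lam2 * B    # objective (2)
        if obj < best_obj:
            best_z, best_obj = z, obj
    return best_z
```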
Assumption 2 can be satisfied by ignoring the outcomes when selecting the splits, or by dividing the training data into two sets, one for making splits and one for making predictions. Assumption 3 is satisfied with \(\alpha = 0\) if the minimum number of training samples in each leaf is \(\gamma_n\) and the maximum number of leaves in the tree is \(\Gamma_n\).

2.1 Parameter Tuning
Before proceeding, we note that the variance terms, \(\mathrm{Var}(c(z; Y_i) \mid X_i, Z_i)\), are often unknown in practice. In the absence of further knowledge, we assume homoscedasticity, i.e., that \(\mathrm{Var}(Y_i \mid X_i, Z_i)\) is constant. It is possible to estimate this value by training a machine learning model to predict \(Y_i\) as a function of \((X_i, Z_i)\) and computing the mean squared error on the training set. However, it may be advantageous to absorb this value into the tuning parameter \(\lambda_1\). We have several options for tuning the parameters \(\lambda_1\) and \(\lambda_2\) (and whatever other parameters are associated with the predictive model). Because the counterfactual outcomes are unknown, it is not possible to use the standard approach of holding out a validation set during training and evaluating the error of the model on that validation set for each combination of possible parameters. One option is to tune the predictive model's parameters using cross-validation to maximize predictive accuracy and then select \(\lambda_1\) and \(\lambda_2\) using the theory we present in Section 3. Another option is to split the data into a training and a validation set and train a predictive model on the validation data to impute the counterfactual outcomes; we then select the model that minimizes the predicted cost on the validation set. For the examples in Section 4, we use a combination of these two ideas: we train a random forest model on the validation set (in order to impute counterfactual outcomes), and we then select the model that minimizes the sum of the mean squared error and the predicted cost on the validation data. In the supplementary materials, we include computations demonstrating that, for the Warfarin example of Section 4.2, the method is not too sensitive to the choice of \(\lambda_1\) and \(\lambda_2\).
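A minimal sketch of this tuning procedure, assuming for simplicity that the cost to be minimized equals the imputed outcome; `fit_policy` is a hypothetical helper that fits the penalized model on the training split and returns a cost predictor and a prescription function.

```python
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tune_lambdas(train, val, fit_policy, lambda1_grid, lambda2_grid):
    """Grid-search (lambda1, lambda2) as in Section 2.1.
    Xv is (n, d); Zv and Yv are (n, 1) arrays."""
    Xv, Zv, Yv = val
    # A random forest trained on the validation set imputes counterfactual outcomes.
    imputer = RandomForestRegressor().fit(np.hstack([Xv, Zv]), Yv.ravel())
    best, best_score = None, np.inf
    for lam1, lam2 in product(lambda1_grid, lambda2_grid):
        predict_cost, prescribe = fit_policy(train, lam1, lam2)
        mse = float(np.mean((predict_cost(Xv, Zv) - Yv.ravel()) ** 2))
        z_hat = prescribe(Xv)  # prescriptions for the validation covariates
        imputed_cost = float(np.mean(imputer.predict(np.hstack([Xv, z_hat]))))
        score = mse + imputed_cost  # MSE + predicted-cost selection criterion
        if score < best_score:
            best, best_score = (lam1, lam2), score
    return best
```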
3 Theory
In this section, we describe the theoretical motivation for our approach and provide finite-sample generalization and regret bounds. For notational convenience, we define
\(\mu(x, z) := \mathbb{E}[c(z; Y(z)) \mid X = x] = \mathbb{E}[c(z; Y) \mid X = x, Z = z],\)
where the second equality follows from the ignorability assumption. Before presenting the results, we first state a few additional assumptions.
Assumption 4 (Regularity). The set \(\mathcal{X} \times \mathcal{Z}\) is nonempty, closed, and bounded with diameter \(D\).
Assumption 5 (Objective Conditions). The objective function satisfies the following properties:
1. \(|c(z; y)| \le 1\) for all \(z, y\).
2. For all \(y \in \mathcal{Y}\), \(c(\cdot\,; y)\) is \(L\)-Lipschitz.
3. For any \(x, x' \in \mathcal{X}\) and any \(z, z' \in \mathcal{Z}\), \(|\mu(x, z) - \mu(x', z')| \le L \|(x, z) - (x', z')\|\).
These assumptions provide some conditions under which the generalization and regret bounds hold, but similar results hold under alternative sets of assumptions (e.g., if \(c(z;Y) \mid Z\) is subexponential instead of bounded). With these additional assumptions, we have the following generalization bound. All proofs are contained in the supplementary materials.
Theorem 1. Suppose Assumptions 1-5 hold. Then, with probability at least \(1 - \delta\),
\(\mu(x, z) - \hat{\mu}(x, z) \le \frac{4}{3\gamma_n} \ln(K_n/\delta) + 2\sqrt{V(x, z) \ln(K_n/\delta)} + L \cdot B(x, z) \quad \forall z \in \mathcal{Z},\)
where \(K_n = \Gamma_n \bigl( 9 D \gamma_n \bigl( \alpha(LD + 1 + \sqrt{2}) + L(\sqrt{2} + 3) \bigr) \bigr)^p\).
This result uniformly bounds, with high probability, the true cost of action \(z\) by the predicted cost, \(\hat{\mu}(x, z)\), a term depending on the uncertainty of that predicted cost, \(V(x, z)\), and a term proportional to the bias associated with that predicted cost, \(B(x, z)\). It is easy to see how this result motivates the approach described in (2). One can also verify that the generalization bound still holds if \((X_1, Z_1), \ldots, (X_n, Z_n)\) are chosen deterministically, as long as \(Y_1, \ldots, Y_n\) are still independent. Using Theorem 1, we are able to derive a finite-sample regret bound.
Theorem 2. Suppose Assumptions 1-5 hold. Define
\(z^* \in \arg\min_z \mu(x, z), \qquad \hat{z} \in \arg\min_z \hat{\mu}(x, z) + \lambda_1 \sqrt{V(x, z)} + \lambda_2 B(x, z).\)
If \(\lambda_1 = 2\sqrt{\ln(2K_n/\delta)}\) and \(\lambda_2 = L\), then with probability at least \(1 - \delta\),
\(\mu(x, \hat{z}) - \mu(x, z^*) \le \frac{2}{\gamma_n} \ln(2K_n/\delta) + 4\sqrt{V(x, z^*) \ln(2K_n/\delta)} + 2L \cdot B(x, z^*),\)
where \(K_n = \Gamma_n \bigl( 9 D \gamma_n \bigl( \alpha(LD + 1 + \sqrt{2}) + L(\sqrt{2} + 3) \bigr) \bigr)^p\).
By this result, the regret of the approach defined in (2) depends only on the variance and bias terms of the optimal action, \(z^*\). Because the predicted cost is penalized by \(V(x, z)\) and \(B(x, z)\), it does not matter how poor the prediction of the cost is at suboptimal actions. Theorem 2 immediately implies the following asymptotic result, assuming the auxiliary feature space and decision space are fixed as the training sample size grows to infinity.
Corollary 1. In the setting of Theorem 2, if \(\gamma_n = \Omega(n^\beta)\) for some \(\beta > 0\), \(\Gamma_n = O(n)\), and \(B(x, z^*) \to_p 0\) as \(n \to \infty\), then \(\mu(x, \hat{z}) \to_p \mu(x, z^*)\) as \(n \to \infty\).
The assumptions can be satisfied, for example, with CART or random forest as the learning algorithm with parameters set in accordance with Lemma 2 of Wager and Athey [23]. The next example demonstrates that there exist problems for which the regret of the uncertainty-penalized method is strictly better, asymptotically, than the regret of predicted cost minimization.
Example 1. Suppose there are \(m + 1\) different actions and two possible, equally probable states of the world. In one state, action 0 has a cost that is deterministically 1, and all other actions have a random cost drawn from a \(\mathcal{N}(0, 1)\) distribution. In the other state, action 0 has a cost that is deterministically 0, and all other actions have a random cost drawn from a \(\mathcal{N}(1, 1)\) distribution. Suppose the training data consist of \(m\) trials of each action. If \(\hat{\mu}(j)\) is the empirical average cost of action \(j\), then the predicted cost minimization algorithm selects the action that minimizes \(\hat{\mu}(j)\). The uncertainty penalization algorithm adds a penalty of the form suggested by Theorem 2, \(\lambda \sqrt{\sigma_j^2 \ln m / m}\). If \(\lambda \ge \sqrt{2}\), the (Bayesian) expected regret of the uncertainty penalization algorithm is asymptotically strictly less than the expected regret of the predicted cost minimization algorithm, \(\mathbb{E} R_{\mathrm{UP}} = o(\mathbb{E} R_{\mathrm{PCM}})\), where the expectations are taken over both the training data and the unknown state of the world.
This example is simple but demonstrates that there exist settings in which predicted cost minimization is asymptotically suboptimal to the method we have described. In addition, the proof illustrates how one can construct tighter regret bounds than the one in Theorem 2 for problems with specific structure.

3.1 Tractability
The tractability of (2) depends on the algorithm that is used as the predictive model. For many kernel-based methods, the resulting optimization problems are highly nonlinear and do not scale well when the dimension of the decision space is more than 2 or 3. For this reason, we advocate using tree-based and linear models as the predictive model.
Tree-based models partition the space \(\mathcal{X} \times \mathcal{Z}\) into \(\Gamma_n\) leaves, so there are only \(\Gamma_n\) possible values of \(w(x, z)\). Therefore, we can solve (2) separately for each leaf. For \(j = 1, \ldots, \Gamma_n\), we solve
\(\min \; \hat{\mu}(x, z) + \lambda_1 \sqrt{V(x, z)} + \lambda_2 B(x, z) \quad \text{s.t.} \; z \in \mathcal{Z}, \; (x, z) \in L_j, \quad (3)\)
where \(L_j\) denotes the subset of \(\mathcal{X} \times \mathcal{Z}\) that makes up leaf \(j\) of the tree. Because each split in the tree is a hyperplane, \(L_j\) is defined by an intersection of hyperplanes and is thus a polyhedral set. Clearly, \(B(x, z)\) is a convex function in \(z\), as it is a nonnegative linear combination of convex functions. If we assume homoscedasticity, then \(V(x, z)\) is constant for all \((x, z) \in L_j\). If \(c(z; y)\) is convex in \(z\) and \(\mathcal{Z}\) is a convex set, (3) is a convex optimization problem and can be solved by convex optimization techniques. Furthermore, since the \(\Gamma_n\) instances of (3) are all independent, we can solve them in parallel. Once (3) has been solved for all leaves, we select the solution from the leaf with the overall minimal objective value. For tree ensemble methods, such as random forests [6] or xgboost [9], optimization is more difficult. We compute optimal decisions using a coordinate-descent heuristic: from a random starting action, we cycle through the coordinates, holding all decision variables fixed except one and optimizing that coordinate over a discretization, and we repeat this until convergence from several different random starting decisions. A sketch of this heuristic is given below.
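A minimal sketch of the coordinate-descent heuristic, assuming a discretized grid per decision coordinate; `obj(z)` is a stand-in for the penalized objective evaluated under the fitted ensemble.

```python
import numpy as np

def coordinate_descent(obj, grids, n_restarts=5, max_cycles=50, seed=0):
    """Hold all but one decision coordinate fixed, optimize that coordinate
    over its grid, cycle until no coordinate changes, and keep the best of
    several random restarts."""
    rng = np.random.default_rng(seed)
    best_z, best_val = None, np.inf
    for _ in range(n_restarts):
        z = np.array([rng.choice(g) for g in grids], dtype=float)
        for _ in range(max_cycles):
            changed = False
            for j, grid in enumerate(grids):
                candidates = []
                for v in grid:
                    z_try = z.copy()
                    z_try[j] = v
                    candidates.append(obj(z_try))
                v_star = grid[int(np.argmin(candidates))]
                if v_star != z[j]:
                    z[j] = v_star
                    changed = True
            if not changed:
                break  # no coordinate improved: converged for this restart
        val = obj(z)
        if val < best_val:
            best_z, best_val = z.copy(), val
    return best_z
```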
For linear predictive models, the resulting problem is often a second-order conic optimization problem, which can be handled by off-the-shelf solvers (details are given in the supplementary materials).

4 Results
In this section, we demonstrate the effectiveness of our approach with two examples. In the first, we consider a pricing problem with synthetic data; in the second, we use real patient data for personalized Warfarin dosing.

[Figure 1: (a) Pricing example, expected revenue vs. number of training examples. (b) Warfarin example, MSE vs. number of training examples.]

4.1 Pricing
In this example, the decision variable, \(z \in \mathbb{R}^5\), is a vector of prices for a collection of products. The outcome, \(Y\), is a vector of demands for those products. The auxiliary covariates may contain data on the weather and other exogenous factors that may affect demand. The objective is to select prices to maximize revenue for a given vector of auxiliary covariates. The demand for a single product is affected by the auxiliary covariates, the price of that product, and the prices of one or more of the other products, but the mapping is unknown to the algorithm. Details on the data generation process can be found in the supplementary materials. In Figure 1a, we compare the expected revenues of the strategies produced by several algorithms. CART, RF, and Lasso refer to the direct methods of training, respectively, a decision tree, a random forest, and a lasso regression [22] to predict revenue, as a function of the auxiliary covariates and prices, and choosing prices, for each vector of auxiliary covariates in the test set, that maximize predicted revenue. (The revenues for CART and Lasso were too small to be displayed on the plot. Unsurprisingly, the linear model performs poorly because revenue does not vary linearly with price. We restrict all prices to be at most 50 to ensure the optimization problems are bounded.) UP-CART, UP-RF, and UP-Lasso refer to the uncertainty-penalized analogues in which the variance and bias terms are included in the objective. For each training sample size, \(n\), we average our results over one hundred separate training sets of size \(n\). At a training size of 2000, the uncertainty-penalized random forest method improves expected revenue by an average of $270 compared to the direct RF method. This improvement is statistically significant at the 0.05 significance level by the Wilcoxon signed-rank test (p-value \(4.4 \times 10^{-18}\), testing the null hypothesis that the mean improvement is 0 across 100 different training sets).

4.2 Warfarin Dosing
Warfarin is a commonly prescribed anticoagulant that is used to treat patients who have had blood clots or who have a high risk of stroke. Determining the optimal maintenance dose of Warfarin presents a challenge, as the appropriate dose varies significantly from patient to patient and is potentially affected by many factors, including age, gender, weight, health history, and genetics. However, this is a crucial task because a dose that is too low or too high can put the patient at risk for clotting or bleeding. The effect of a Warfarin dose on a patient is measured by the International Normalized Ratio (INR). Physicians typically aim for patients to have an INR in a target range of 2-3. In this example, we test the efficacy of our approach in learning optimal Warfarin dosing with data from Consortium et al. [10]. This publicly available data set contains the optimal stable dose, found by experimentation, for a diverse set of 5410 patients. In addition, the data set contains a variety of covariates for each patient, including demographic information, reason for treatment, medical history, current medications, and the genotype variant at CYP2C9 and VKORC1. It is unique because it contains the optimal dose for each patient, permitting the use of off-the-shelf machine learning methods to predict this optimal dose as a function of patient covariates. We instead use this data to construct a problem with observational data, which resembles the common problem practitioners face. Our access to the true optimal dose for each patient allows us to evaluate the performance of our method out-of-sample. This is a commonly used technique, and the resulting data set is sometimes called semi-synthetic. Several researchers have used the Warfarin data for developing personalized approaches to medical treatments. In particular, Kallus [15] and Bertsimas et al. [3] tested algorithms that learned to treat patients from semi-synthetic observational data. However, they both discretized the dosage into three categories, whereas we treat the dosage as a continuous decision variable. To begin, we split the data into a training set of 4000 patients and a test set of 1410 patients. We keep this split fixed throughout all of our experiments to prevent cheating by using insights gained from visualization and exploration on the training set. Similar to Kallus [15], we assume physicians prescribe Warfarin as a function of BMI. We assume the response that the physicians observe is related to the difference between the dose a patient was given and the true optimal dose for that patient. It is a noisy observation, but, on average, it gives directional information (whether the dose was too high or too low) and information on the magnitude of the distance from the optimal dose. The precise details of how we generate the data are given in the supplementary materials.
For all methods, we repeat our work across 100 randomizations of assigned training doses and responses. To measure the performance of our methods, we compute, on the test set, the mean squared error (MSE) of the prescribed doses relative to the true optimal doses. Using the notation described in Section 1, \(X_i \in \mathbb{R}^{99}\) represents the auxiliary covariates for patient \(i\). We work in normalized units so that the covariates all contribute equally to the bias penalty term. \(Z_i \in \mathbb{R}\) represents the assigned dose for patient \(i\), and \(Y_i \in \mathbb{R}\) represents the observed response for patient \(i\). The objective in this problem is to minimize \((\mathbb{E}[Y(z) \mid X = x])^2\) with respect to the dose, \(z\). (This objective differs slightly from the setting described in Section 3, in which the objective was to minimize the conditional expectation of a cost function. However, it is straightforward to modify the results to obtain the same regret bound, save a few constant factors, when minimizing \(g(\mathbb{E}[c(z; Y(z)) \mid X = x])\) for a Lipschitz function \(g\).) Figure 1b displays the results of several algorithms as a function of the number of training examples. We compare CART, without any penalization, to CART with uncertainty penalization (UP-CART), and we see that uncertainty penalization offers a consistent improvement. This improvement is greatest when the training sample size is smallest. (Note: for CART with no penalization, when multiple doses give the same optimal predicted response, we select the mean.) Similarly, when we compare the random forest and lasso methods with their uncertainty-penalizing analogues, we again see consistent improvements in MSE. The “Constant” line in the plot measures the performance of a baseline heuristic that assigns a fixed dose of 35 mg/week to all patients. The “LB” line provides an unattainable lower bound on the performance of all methods that use the observational data: for this method, we train a random forest to predict the optimal dose as a function of the patient covariates. We also compare our methods with the Counterfactual Risk Minimization (CRM) method of Swaminathan and Joachims [21]. We allow their method access to the true propensity scores that generated the data and optimize over all regularized linear policies for which the proposed dose is a linear function of the auxiliary covariates. We tried multiple combinations of tuning parameters, but the method always performed poorly out-of-sample; we suspect this is due to the size of the policy space. Our lasso-based method works best on this data set when the number of training samples is large, but the random-forest-based method is best for smaller sample sizes. With the maximal training set size of 4000, the improvements of the CART, random forest, and lasso uncertainty-penalized methods over their unpenalized analogues (2.2%, 8.6%, and 0.5%, respectively) are all statistically significant at the 0.05 family-wise error rate level by the Wilcoxon signed-rank test with Bonferroni correction (adjusted p-values \(2.1 \times 10^{-4}\), \(4.3 \times 10^{-16}\), and \(1.2 \times 10^{-6}\), respectively).

5 Conclusions
In this paper, we introduced a data-driven framework that combines ideas from predictive machine learning and causal inference to optimize an uncertain objective using observational data. Unlike most existing algorithms, our approach handles continuous and multi-dimensional decision variables by introducing terms that penalize the uncertainty associated with the predicted costs.
We proved finite-sample generalization and regret bounds and provided a sufficient set of conditions under which the resulting decisions are asymptotically optimal. We demonstrated, both theoretically and with real-world examples, the tractability of the approach and its benefit over unpenalized predicted cost minimization.
1. What are the strengths and weaknesses of the proposed algorithm for decision-making policies?
2. How does the paper extend prior works in learning from logged interventions?
3. What are the real-world applications of the problem setting addressed by the paper?
4. Are there any concerns regarding the reproducibility of the results?
5. How could the paper be improved further through additional analyses or modifications?
Review
Review
The paper proposes an algorithm that can learn good decision-making policies over a continuous set of decisions using only access to observational data. The problem is well-motivated, but the paper can be substantially strengthened.
Quality: Ok. The paper clearly motivates a weakness of direct regression (e.g., from context and decision, predicting expected cost). The regression models may have different uncertainty for different decisions, and so it is useful to include an empirical estimate of the variance and bias of the regression when selecting decisions. The paper would be more informative by highlighting several choices of regression models, each with different V (variance) and B (bias), and observing how lambda_1 and lambda_2 are tuned to pick low-cost decisions with high probability.
Clarity: Ok. The high-level message is clear, but several improvements can be made. E.g., till Line 123, I assumed that the cost c(z,y) is unknown and is being modeled. The equation for hat{mu}, however, seems to imply that the costs are known and only the outcome distribution is unknown (so that hat{mu} can be written as a combination of training responses c(z, Y_i) for each decision z). Knowing the cost function can be a substantially simpler problem than the bandit feedback setting.
Originality: Ok. The paper extends prior work on learning from logged interventions in two important ways -- what if the decision set is continuous and multi-dimensional, and what if the intervention policy is unknown. I expected semi-synthetic experiment comparisons with [16] (which assumed the intervention policy is known), and the straightforward modification of [16] with imputed generalized propensity scores to address the paper's problem setting.
Significance: Good. Several real-world problems (e.g., the semi-synthetic setup for Warfarin dosing) can benefit from better algorithms for the problem studied in the paper.
Reproducibility: It is not clear what hyper-parameters ended up getting selected in each of the two experiments. I would be more confident if there was code accompanying the paper (the datasets are public anyway).
I have read the other reviews and the author feedback. I maintain my borderline accept rating for the paper. The paper can be much stronger with a more careful analysis of the V(x,z) and B(x,z) terms in the objective. If there was a toy problem where we knew the bias in closed form, do we indeed see that penalizing bias helps (or is the Lipschitz upper bound on bias very loose, actually achieving a different regularization)? If we have a problem with heterogeneous outcomes, is the homoscedastic V estimate helping or hurting? There are heuristics to get variance estimates from regression models (and even conformal regression methods that provide an estimate and confidence interval). Do they work ok, or is the method reliant on the estimate of V using MSE on the training set? With such an analysis, the paper would be substantially stronger.
NIPS
Title Optimization over Continuous and Multi-dimensional Decisions with Observational Data Abstract We consider the optimization of an uncertain objective over continuous and multidimensional decision spaces in problems in which we are only provided with observational data. We propose a novel algorithmic framework that is tractable, asymptotically consistent, and superior to comparable methods on example problems. Our approach leverages predictive machine learning methods and incorporates information on the uncertainty of the predicted outcomes for the purpose of prescribing decisions. We demonstrate the efficacy of our method on examples involving both synthetic and real data sets. 1 Introduction We study the general problem in which a decision maker seeks to optimize a known objective function that depends on an uncertain quantity. The uncertain quantity has an unknown distribution, which may be affected by the action chosen by the decision maker. Many important problems across a variety of fields fit into this framework. In healthcare, for example, a doctor aims to prescribe drugs in specific dosages to regulate a patient’s vital signs. In revenue management, a store owner must decide how to price various products in order to maximize profit. In online retail, companies decide which products to display for a user to maximize sales. The general problem we study is characterized by the following components: • Decision variable: z ∈ Z ⊂ Rp, • Outcome: Y (z) ∈ Y (We adopt the potential outcomes framework [20], in which Y (z) denotes the (random) quantity that would have been observed had decision z been chosen.), • Auxiliary covariates (also called side-information or context): x ∈ X ⊂ Rd, • Cost function: c(z; y) : Z × Y → R. (This function is known a priori.) We allow the auxiliary covariates, decision variable, and outcome to take values on multi-dimensional, continuous sets. A decision-maker seeks to determine the action that minimizes the conditional expected cost: min z∈Z E[c(z;Y (z))|X = x]. (1) Of course, the distribution of Y (z) is unknown, so it is not possible to solve this problem exactly. However, we assume that we have access to observational data, consisting of n independent and identically distributed observations, (Xi, Zi, Yi) for i = 1, . . . , n. Each of these observations consists of an auxiliary covariate vector, a decision, and an observed outcome. This type of data presents two challenges that differentiate our problem from a predictive machine learning problem. First, it is incomplete. We only observe Yi := Yi(Zi), the outcome associated with the applied decision. We do 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. not observe what the outcome would have been under a different decision. Second, the decisions were not necessarily chosen independently of the outcomes, as they would have been in a randomized experiment, and we do not know how the decisions were assigned. Following common practice in the causal inference literature, we make the ignorability assumption of Hirano and Imbens [13]. Assumption 1 (Ignorability). Y (z) ⊥ Z | X ∀z ∈ Z In other words, we assume that historically the decision Z has been chosen as a function of the auxiliary covariates X . There were no unmeasured confounding variables that affected both the choice of decision and the outcome. Under this assumption, we are able to rewrite the objective of (1) as E[c(z;Y ) | X = x, Z = z]. 
This form of the objective is easier to learn because it depends only on the observed outcome, not on the counterfactual outcomes. A direct approach to solve this problem is to use a regression method to predict the cost as a function of x and z and then choose z to minimize this predicted cost. If the selected regression method is uniformly consistent in z, then the action chosen by this method will be asymptotically optimal under certain conditions. (We will formalize this later.) However, this requires choosing a regression method that ensures the optimization problem is tractable. For this work, we restrict our attention to linear and tree-based methods, such as CART [7] and random forests [6], as they are both effective and tractable for many practical problems. A key issue with the direct approach is that it tries to learn too much. It tries to learn the expected outcome under every possible decision, and the level of uncertainty associated with the predicted expected cost can vary between different decisions. This method can lead us to select a decision which has a small point estimate of the cost, but a large uncertainty interval. 1.1 Notation Throughout the paper, we use capital letters to refer to random quantities and lower case letters to refer to deterministic quantities. Thus, we use Z to refer to the decision randomly assigned by the (unknown) historical policy and z to refer to a specific action. For a given, auxiliary covariate vector, x, and a proposed decision, z, the conditional expectation E[c(z;Y )|X = z, Z = z] means the expectation of the cost function c(z;Y ) under the conditional measure in which X is fixed as x and Z is fixed as z. We ignore details of measurability throughout and assume this conditional expectation is well defined. Throughout, all norms are `2 norms unless otherwise specified. We use (X,Z) to denote vector concatenation. 1.2 Related Work Recent years have seen tremendous interest in the area of data-driven optimization. Much of this work combines ideas from the statistics and machine learning literature with techniques from mathematical optimization. Bertsimas and Kallus [4] developed a framework that uses nonparametric machine learning methods to solve data-driven optimization problems in the presence of auxiliary covariates. They take advantage of the fact that for many machine learning algorithms, the predictions are given by a linear combination of the training samples’ target variables. Kao et al. [17] and Elmachtoub and Grigas [11] developed algorithms that make predictions tailored for use in specific optimization problems. However, they all deal with the setting in which the decision does not affect the outcome. This is insufficient for many applications, such as pricing, in which the demand for a product is clearly affected by the price. Bertsimas and Kallus [5] later studied the limitations of predictive approaches to pricing problems. In particular, they demonstrated that confounding in the data between the decision and outcome can lead to large optimality gaps if ignored. They proposed a kernel-based method for data-driven optimization in this setting, but it does not scale well with the dimension of the decision space. Misic [19] developed an efficient mixed integer optimization formulation for problems in which the predicted cost is given by a tree ensemble model. This approach scales fairly well with the dimension of the decision space but does not consider the need for uncertainty penalization. 
Another relevant area of research is causal inference (see Rosenbaum [20] for an overview), which concerns the study of causal effects from observational data. Much of the work in this area has focused on determining whether a treatment has a significant effect on the population as a whole. However, a growing body of work has focused on learning optimal, personalized treatments from observational data. Athey and Wager [1] proposed an algorithm that achieves optimal (up to a constant factor) regret bounds in learning a treatment policy when there are two potential treatments. Kallus [14] proposed an algorithm to efficiently learn a treatment policy when there is a finite set of potential treatments. Bertsimas et al. [3] developed a tree-based algorithm that learns to personalize treatment assignments from observational data. It is based on the optimal trees machine learning method [2] and has performed well in experiments. Considerably less attention has been paid to problems with a continuous decision space. Hirano and Imbens [13] introduced the problem of inference with a continuous treatment, and Flores [12] studied the problem of learning an optimal policy in this setting. Recently, Kallus and Zhou [16] developed an approach to policy learning with a continuous decision variable that generalizes the idea of inverse propensity score weighting. Our approach differs in that we focus on regression-based methods, which we believe scale better with the dimension of the decision space and avoid the need for density estimation. The idea of uncertainty penalization has been explored as an alternative to empirical risk minimization in statistical learning, starting with Maurer and Pontil [18]. Swaminathan and Joachims [21] applied uncertainty penalization to the offline bandit setting. Their setting is similar to the one we study. An agent seeks to minimize the prediction error of his/her decision, but only observes the loss associated with the selected decision. They assumed that the policy used in the training data is known, which allowed them to use inverse propensity weighting methods. In contrast, we assume ignorability, but not knowledge of the historical policy, and we allow for more complex decision spaces. We note that our approach bears a superficial resemblance to the upper confidence bound (UCB) algorithms for multi-armed bandits (cf. Bubeck et al. [8]). These algorithms choose the action with the highest upper confidence bound on its predicted expected reward. Our approach, in contrast, chooses the action with the highest lower confidence bound on its predicted expected reward (or lowest upper confidence bound on predicted expected cost). The difference is that UCB algorithms choose actions with high upside to balance exploration and exploitation in the online bandit setting, whereas we work in the offline setting with a focus on solely exploitation. 1.3 Contributions Our primary contribution is an algorithmic framework for observational data driven optimization that allows the decision variable to take values on continuous and multidimensional sets. We consider applications in personalized medicine, in which the decision is the dose of Warfarin to prescribe to a patient, and in pricing, in which the action is the list of prices for several products in a store. 2 Approach In this section, we introduce the uncertainty penalization approach for optimization with observational data. Recall that the observational data consists of n i.i.d. observations, (X1, Z1, Y1), . . . , (Xn, Zn, Yn). 
For observation i, Xi represents the pertinent auxiliary covariates, Zi is the decision that was applied, and Yi is the observed response. The first step of the approach is to train a predictive machine learning model to estimate E[c(z;Y )|X = x, Z = z]. When training the predictive model, the feature space is the cartesian product of the auxiliary covariate space and the decision space, X × Z . We have several options for how to train the predictive model. We can train the model to predict Y , the cost c(Z, Y ), or a combination of these two responses. In general, we denote the prediction of the ML algorithm as a linear combination of the cost function evaluated at the training examples, µ̂(x, z) := n∑ i=1 wi(x, z)c(z;Yi). We require the predictive model to satisfy a generalization of the honesty property of Wager and Athey [23]. Assumption 2 (Honesty). The model trained on (X1, Z1, Y1), . . . , (Xn, Zn, Yn) is honest, i.e., the weights, wi(x, z), are determined independently of the outcomes, Y1, . . . , Yn. This honesty assumption reduces the bias of the predictions of the cost. We also enforce several restrictions on the weight functions. Assumption 3 (Weights). For all (x, z) ∈ X × Z , ∑n i=1 wi(x, z) = 1 and for all i, wi(x, z) ∈ [0, 1/γn]. In addition, X ×Z can be partitioned into Γn regions such that if (x, z) and (x, z′) are in the same region, ||w(x, z)− w(x, z′)||1 ≤ α||z − z′||2. The direct approach to solving (1) amounts to choosing z ∈ Z that minimizes µ̂(x, z), for each new instance of auxiliary covariates, x. However, the variance of the predicted cost, µ̂(x, z), can vary with the decision variable, z. Especially with a small training sample size, the direct approach, minimizing µ̂(x, z), can give a decision with a small, but highly uncertain, predicted cost. We can reduce the expected regret of our action by adding a penalty term for the variance of the selected decision. If Assumption 2 holds, the conditional variance of µ̂(x, z) given (X1, Z1), . . . , (Xn, Zn) is given by V (x, z) := ∑ i w2i (x, z)Var(c(z;Yi)|Xi, Zi). In addition, µ̂(x, z) may not be an unbiased predictor, so we also introduce a term that penalizes the conditional bias of the predicted cost given (X1, Z1), . . . , (Xn, Zn). Since the true cost is unknown, it is not possible to exactly compute this bias. Instead, we compute an upper bound under a Lipschitz assumption (details in Section 3). B(x, z) := ∑ i wi(x, z)||(Xi, Zi)− (x, z)||2. Overall, given a new vector of auxiliary covariates, x ∈ X , our approach makes a decision by solving min z∈Z µ̂(x, z) + λ1 √ V (x, z) + λ2B(x, z), (2) where λ1 and λ2 are tuning parameters. As a concrete example, we can use the CART algorithm of Breiman et al. [7] or the optimal regression tree algorithm of Bertsimas and Dunn [2] as the predictive method. These algorithms work by partitioning the training examples into clusters, i.e., the leaves of the tree. For a new observation, a prediction of the response variable is made by averaging the responses of the training examples that are contained in the same leaf. wi(x, z) = { 1 N(x,z) , (x, z) ∈ l(x, z), 0, otherwise, where l(x, z) denotes the set of training examples that are contained in the same leaf of the tree as (x, z), and N(x, z) = |l(x, z)|. The variance term will be small when the leaf has a large number of training examples, and the bias term will be small when the diameter of the leaf is small. 
Assumption 2 can be satisfied by ignoring the outcomes when selecting the splits or by dividing the training data into two sets, one for making splits and one for making predictions. Assumption 3 is satisfied with α = 0 if the minimum number of training samples in each leaf is γn and the maximum number of leaves in the tree is Γn. 2.1 Parameter Tuning Before proceeding, we note that the variance terms, Var(c(z;Yi) | Xi, Zi), are often unknown in practice. In the absence of further knowledge, we assume homoscedasticity, i.e., Var(Yi|Xi, Zi) is constant. It is possible to estimate this value by training a machine learning model to predict Yi as a function of (Xi, Zi) and computing the mean squared error on the training set. However, it may be advantageous to include this value with the tuning parameter λ1. We have several options for tuning parameters λ1 and λ2 (and whatever other parameters are associated with the predictive model). Because the counterfactual outcomes are unknown, it is not possible to use the standard approach of holding out a validation set during training and evaluating the error of the model on that validation set for each combination of possible parameters. One option is to tune the predictive model’s parameters using cross validation to maximize predictive accuracy and then select λ1 and λ2 using the theory we present in Section 3. Another option is to split the data into a training and validation set and train a predictive model on the validation data to impute the counterfactual outcomes. We then select the model that minimizes the predicted cost on the validation set. For the examples in Section 4, we use a combination of these two ideas. We train a random forest model on the validation set (in order to impute counterfactual outcomes), and we then select the model that minimizes the sum of the mean squared error and the predicted cost on the validation data. In the supplementary materials, we include computations that demonstrate, for the Warfarin example of Section 4.2, the method is not too sensitive to the choice of λ1 and λ2. 3 Theory In this section, we describe the theoretical motivation for our approach and provide finite-sample generalization and regret bounds. For notational convenience, we define µ(x, z) := E[c(z;Y (z))|X = x] = E[c(z;Y )|X = x, Z = z], where the second equality follows from the ignorability assumption. Before presenting the results, we first present a few additional assumptions. Assumption 4 (Regularity). The set X × Z is nonempty, closed, and bounded with diameter D. Assumption 5 (Objective Conditions). The objective function satisfies the following properties: 1. |c(z; y)| ≤ 1 ∀z, y. 2. For all y ∈ Y , c(·; y) is L-Lipschitz. 3. For any x, x′ ∈ X and any z, z′ ∈ Z , |µ(x, z)− µ(x′, z′)| ≤ L||(x, z)− (x′, z′)||. These assumptions provide some conditions under which the generalization and regret bounds hold, but similar results hold under alternative sets of assumptions (e.g. if c(z;Y )|Z is subexponential instead of bounded). With these additional assumptions, we have the following generalization bound. All proofs are contained in the supplementary materials. Theorem 1. Suppose assumptions 1-5 hold. Then, with probability at least 1− δ, µ(x, z)− µ̂(x, z) ≤ 4 3γn ln(Kn/δ) + 2 √ V (x, z) ln(Kn/δ) + L ·B(x, z) ∀z ∈ Z, where Kn = Γn ( 9Dγn ( α(LD + 1 + √ 2) + L( √ 2 + 3) ))p . 
3 Theory

In this section, we describe the theoretical motivation for our approach and provide finite-sample generalization and regret bounds. For notational convenience, we define
\mu(x, z) := E[c(z; Y(z)) \mid X = x] = E[c(z; Y) \mid X = x, Z = z],
where the second equality follows from the ignorability assumption. Before presenting the results, we first state a few additional assumptions.

Assumption 4 (Regularity). The set X × Z is nonempty, closed, and bounded with diameter D.

Assumption 5 (Objective Conditions). The objective function satisfies the following properties:
1. |c(z; y)| ≤ 1 for all z, y.
2. For all y ∈ Y, c(·; y) is L-Lipschitz.
3. For any x, x′ ∈ X and any z, z′ ∈ Z, |μ(x, z) − μ(x′, z′)| ≤ L \|(x, z) - (x', z')\|.

These assumptions provide some conditions under which the generalization and regret bounds hold, but similar results hold under alternative sets of assumptions (e.g., if c(z;Y) | Z is subexponential instead of bounded). With these additional assumptions, we have the following generalization bound. All proofs are contained in the supplementary materials.

Theorem 1. Suppose Assumptions 1–5 hold. Then, with probability at least 1 − δ,
\mu(x, z) - \hat{\mu}(x, z) \le \frac{4}{3\gamma_n} \ln(K_n/\delta) + 2\sqrt{V(x, z) \ln(K_n/\delta)} + L \cdot B(x, z) \quad \forall z \in Z,
where K_n = \Gamma_n \left( 9 D \gamma_n \left( \alpha(LD + 1 + \sqrt{2}) + L(\sqrt{2} + 3) \right) \right)^p.

This result uniformly bounds, with high probability, the true cost of action z by the predicted cost, \hat{\mu}(x, z), a term depending on the uncertainty of that predicted cost, V(x, z), and a term proportional to the bias associated with that predicted cost, B(x, z). It is easy to see how this result motivates the approach described in (2). One can also verify that the generalization bound still holds if (X_1, Z_1), ..., (X_n, Z_n) are chosen deterministically, as long as Y_1, ..., Y_n are still independent. Using Theorem 1, we are able to derive a finite-sample regret bound.

Theorem 2. Suppose Assumptions 1–5 hold. Define
z^* \in \arg\min_z \mu(x, z), \qquad \hat{z} \in \arg\min_z \; \hat{\mu}(x, z) + \lambda_1 \sqrt{V(x, z)} + \lambda_2 B(x, z).
If \lambda_1 = 2\sqrt{\ln(2K_n/\delta)} and \lambda_2 = L, then with probability at least 1 − δ,
\mu(x, \hat{z}) - \mu(x, z^*) \le \frac{2}{\gamma_n} \ln(2K_n/\delta) + 4\sqrt{V(x, z^*) \ln(2K_n/\delta)} + 2L \cdot B(x, z^*),
where K_n = \Gamma_n \left( 9 D \gamma_n \left( \alpha(LD + 1 + \sqrt{2}) + L(\sqrt{2} + 3) \right) \right)^p.

By this result, the regret of the approach defined in (2) depends only on the variance and bias terms of the optimal action, z^*. Because the predicted cost is penalized by V(x, z) and B(x, z), it does not matter how poor the prediction of cost is at suboptimal actions. Theorem 2 immediately implies the following asymptotic result, assuming the auxiliary feature space and decision space are fixed as the training sample size grows to infinity.

Corollary 1. In the setting of Theorem 2, if \gamma_n = \Omega(n^\beta) for some β > 0, \Gamma_n = O(n), and B(x, z^*) \to_p 0 as n → ∞, then \mu(x, \hat{z}) \to_p \mu(x, z^*) as n → ∞.

The assumptions can be satisfied, for example, with CART or random forest as the learning algorithm with parameters set in accordance with Lemma 2 of Wager and Athey [23]. The next example demonstrates that there exist problems for which the regret of the uncertainty-penalized method is strictly better, asymptotically, than the regret of predicted cost minimization.

Example 1. Suppose there are m + 1 different actions and two possible, equally probable states of the world. In one state, action 0 has a cost that is deterministically 1, and all other actions have a random cost drawn from a N(0, 1) distribution. In the other state, action 0 has a cost that is deterministically 0, and all other actions have a random cost drawn from a N(1, 1) distribution. Suppose the training data consist of m trials of each action. If \hat{\mu}(j) is the empirical average cost of action j, then the predicted cost minimization algorithm selects the action that minimizes \hat{\mu}(j). The uncertainty penalization algorithm adds a penalty of the form suggested by Theorem 2, \lambda \sqrt{\sigma_j^2 \ln m / m}. If λ ≥ \sqrt{2}, the (Bayesian) expected regret of the uncertainty penalization algorithm is asymptotically strictly less than the expected regret of the predicted cost minimization algorithm, ER_{UP} = o(ER_{PCM}), where the expectations are taken over both the training data and the unknown state of the world.

This example is simple but demonstrates that there exist settings in which predicted cost minimization is asymptotically suboptimal relative to the method we have described. In addition, the proof illustrates how one can construct tighter regret bounds than the one in Theorem 2 for problems with specific structure.
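As a sanity check on Example 1, a small Monte Carlo simulation of its setup might look like the following; the values of m, λ, and the number of trials are arbitrary illustrative choices, and at moderate m both average regrets are small (the example's claim concerns their asymptotic ratio), so this is an illustration rather than a proof.

```python
# Simulation of Example 1: predicted-cost minimization (PCM) vs. uncertainty
# penalization (UP) over m+1 actions and two equally probable world states.
import numpy as np

rng = np.random.default_rng(0)
m, lam, trials = 200, np.sqrt(2), 2000
regret_pcm, regret_up = 0.0, 0.0
for _ in range(trials):
    state = rng.integers(2)                        # two equally probable states
    cost0 = 1.0 if state == 0 else 0.0             # deterministic cost of action 0
    mean_rest = 0.0 if state == 0 else 1.0         # mean cost of actions 1..m
    mu_hat = np.empty(m + 1)
    mu_hat[0] = cost0                              # noiseless trials of action 0
    mu_hat[1:] = mean_rest + rng.standard_normal(m) / np.sqrt(m)  # m trials each
    true_cost = np.full(m + 1, mean_rest)
    true_cost[0] = cost0
    best = true_cost.min()
    # Penalty lam * sqrt(sigma_j^2 * ln(m) / m); sigma_0^2 = 0, sigma_j^2 = 1 else.
    penalty = lam * np.sqrt(np.log(m) / m) * (np.arange(m + 1) > 0)
    regret_pcm += true_cost[np.argmin(mu_hat)] - best
    regret_up += true_cost[np.argmin(mu_hat + penalty)] - best
print(regret_pcm / trials, regret_up / trials)
```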
3.1 Tractability

The tractability of (2) depends on the algorithm that is used as the predictive model. For many kernel-based methods, the resulting optimization problems are highly nonlinear and do not scale well when the dimension of the decision space is more than 2 or 3. For this reason, we advocate using tree-based and linear models as the predictive model.

Tree-based models partition the space X × Z into Γ_n leaves, so there are only Γ_n possible values of w(x, z). Therefore, we can solve (2) separately for each leaf. For j = 1, ..., Γ_n, we solve
\min \; \hat{\mu}(x, z) + \lambda_1 \sqrt{V(x, z)} + \lambda_2 B(x, z) \quad \text{s.t.} \quad z \in Z, \; (x, z) \in L_j,   (3)
where L_j denotes the subset of X × Z that makes up leaf j of the tree. Because each split in the tree is a hyperplane, L_j is defined by an intersection of hyperplanes and thus is a polyhedral set. Clearly, B(x, z) is a convex function of z, as it is a nonnegative linear combination of convex functions. If we assume homoscedasticity, then V(x, z) is constant for all (x, z) ∈ L_j. If c(z; y) is convex in z and Z is a convex set, (3) is a convex optimization problem and can be solved by convex optimization techniques. Furthermore, since the Γ_n instances of (3) are all independent, we can solve them in parallel. Once (3) has been solved for all leaves, we select the solution from the leaf with the overall minimal objective value.

For tree ensemble methods, such as random forest [6] or xgboost [9], optimization is more difficult. We compute optimal decisions using a coordinate descent heuristic (a sketch is given below): from a random starting action, we cycle through the decision variables, holding all fixed except one and optimizing that coordinate using discretization, and we repeat this until convergence from several different random starting decisions. For linear predictive models, the resulting problem is often a second-order conic optimization problem, which can be handled by off-the-shelf solvers (details are given in the supplementary materials).
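The coordinate-descent heuristic for tree ensembles can be sketched as follows; objective is assumed to evaluate (2) for the fitted ensemble, and the grid resolution, restart count, and tolerance are illustrative choices of ours.

```python
# Coordinate descent with random restarts over a box Z = [z_low, z_high]^p.
import numpy as np

def coordinate_descent(objective, x, z_low, z_high, n_grid=50, n_restarts=10, tol=1e-6):
    p = len(z_low)
    best_z, best_val = None, np.inf
    for _ in range(n_restarts):                   # several random starting actions
        z = np.random.uniform(z_low, z_high)
        val = objective(x, z)
        improved = True
        while improved:
            improved = False
            for j in range(p):                    # optimize one coordinate at a time
                grid = np.linspace(z_low[j], z_high[j], n_grid)
                cand = np.array([objective(x, np.r_[z[:j], g, z[j + 1:]]) for g in grid])
                if cand.min() < val - tol:
                    z[j], val, improved = grid[cand.argmin()], cand.min(), True
        if val < best_val:
            best_z, best_val = z.copy(), val
    return best_z, best_val
```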
4 Results

In this section, we demonstrate the effectiveness of our approach with two examples. In the first, we consider a pricing problem with synthetic data; in the second, we use real patient data for personalized Warfarin dosing.

[Figure 1: (a) Pricing example: expected revenue vs. number of training examples for RF, UP-CART, UP-Lasso, and UP-RF. (b) Warfarin example: MSE vs. number of training examples for CART, Constant, CRM, Lasso, LB, RF, UP-CART, UP-Lasso, and UP-RF.]

4.1 Pricing

In this example, the decision variable, z ∈ R^5, is a vector of prices for a collection of products. The outcome, Y, is a vector of demands for those products. The auxiliary covariates may contain data on the weather and other exogenous factors that may affect demand. The objective is to select prices to maximize revenue for a given vector of auxiliary covariates. The demand for a single product is affected by the auxiliary covariates, the price of that product, and the price of one or more of the other products, but the mapping is unknown to the algorithm. The details of the data generation process can be found in the supplementary materials.

In Figure 1a, we compare the expected revenues of the strategies produced by several algorithms. CART, RF, and Lasso refer to the direct methods of training, respectively, a decision tree, a random forest, and a lasso regression [22] to predict revenue as a function of the auxiliary covariates and prices, and then choosing prices, for each vector of auxiliary covariates in the test set, that maximize predicted revenue. (The revenues for CART and Lasso were too small to be displayed on the plot. Unsurprisingly, the linear model performs poorly because revenue does not vary linearly with price. We restrict all prices to be at most 50 to ensure the optimization problems are bounded.) UP-CART, UP-RF, and UP-Lasso refer to the uncertainty-penalized analogues, in which the variance and bias terms are included in the objective. For each training sample size, n, we average our results over one hundred separate training sets of size n. At a training size of 2000, the uncertainty-penalized random forest method improves expected revenue by an average of $270 compared to the direct RF method. This improvement is statistically significant at the 0.05 significance level by the Wilcoxon signed-rank test (p-value 4.4 × 10^{-18}, testing the null hypothesis that the mean improvement is 0 across 100 different training sets).

4.2 Warfarin Dosing

Warfarin is a commonly prescribed anticoagulant used to treat patients who have had blood clots or who have a high risk of stroke. Determining the optimal maintenance dose of Warfarin presents a challenge, as the appropriate dose varies significantly from patient to patient and is potentially affected by many factors including age, gender, weight, health history, and genetics. This is a crucial task because a dose that is too low or too high can put the patient at risk for clotting or bleeding. The effect of a Warfarin dose on a patient is measured by the International Normalized Ratio (INR). Physicians typically aim for patients to have an INR in a target range of 2–3.

In this example, we test the efficacy of our approach in learning optimal Warfarin dosing with data from Consortium et al. [10]. This publicly available data set contains the optimal stable dose, found by experimentation, for a diverse set of 5410 patients. In addition, the data set contains a variety of covariates for each patient, including demographic information, reason for treatment, medical history, current medications, and the genotype variant at CYP2C9 and VKORC1. It is unique because it contains the optimal dose for each patient, permitting the use of off-the-shelf machine learning methods to predict this optimal dose as a function of patient covariates. We instead use this data to construct a problem with observational data, which resembles the common problem practitioners face. Our access to the true optimal dose for each patient allows us to evaluate the performance of our method out-of-sample. This is a commonly used technique, and the resulting data set is sometimes called semi-synthetic.

Several researchers have used the Warfarin data to develop personalized approaches to medical treatments. In particular, Kallus [15] and Bertsimas et al. [3] tested algorithms that learned to treat patients from semi-synthetic observational data. However, they both discretized the dosage into three categories, whereas we treat the dosage as a continuous decision variable.

To begin, we split the data into a training set of 4000 patients and a test set of 1410 patients. We keep this split fixed throughout all of our experiments to prevent cheating by using insights gained from visualization and exploration of the training set. Similar to Kallus [15], we assume physicians prescribe Warfarin as a function of BMI. We assume the response that the physicians observe is related to the difference between the dose a patient was given and the true optimal dose for that patient. It is a noisy observation, but, on average, it gives directional information (whether the dose was too high or too low) and information on the magnitude of the distance from the optimal dose. The precise details of how we generate the data are given in the supplementary materials.
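Since the exact generation process lives in the supplementary materials, the following is a purely illustrative sketch of a semi-synthetic dosing environment of this kind; the BMI-based assignment rule, the noise scales, and the functional form of the response are our own assumptions, not the paper's recipe.

```python
# Illustrative semi-synthetic setup: historical dose depends on BMI, and the
# observed response is a noisy signal of the signed distance to the optimum.
import numpy as np

def simulate_observational_data(bmi, optimal_dose, rng):
    assigned = 10.0 + 1.0 * bmi + rng.normal(0.0, 3.0, size=len(bmi))  # assumed policy
    response = (assigned - optimal_dose) + rng.normal(0.0, 5.0, size=len(bmi))
    return assigned, response  # Z_i and Y_i; Y_i is centered at the dosing error

rng = np.random.default_rng(1)
bmi = rng.normal(27.0, 5.0, size=4000)
optimal = rng.normal(35.0, 10.0, size=4000)  # stand-in for the recorded optimal doses
Z, Y = simulate_observational_data(bmi, optimal, rng)
```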
For all methods, we repeat our work across 100 randomizations of assigned training doses and responses. To measure the performance of our methods, we compute, on the test set, the mean squared error (MSE) of the prescribed doses relative to the true optimal doses. Using the notation described in Section 1, X_i ∈ R^{99} represents the auxiliary covariates for patient i. We work in normalized units so that the covariates all contribute equally to the bias penalty term. Z_i ∈ R represents the assigned dose for patient i, and Y_i ∈ R represents the observed response for patient i. The objective in this problem is to minimize (E[Y(z) | X = x])^2 with respect to the dose, z. (This objective differs slightly from the setting described in Section 3, in which the objective was to minimize the conditional expectation of a cost function. However, it is straightforward to modify the results to obtain the same regret bound, save a few constant factors, when minimizing g(E[c(z; Y(z)) | X = x]) for a Lipschitz function, g.)

Figure 1b displays the results of several algorithms as a function of the number of training examples. We compare CART, without any penalization, to CART with uncertainty penalization (UP-CART), and we see that uncertainty penalization offers a consistent improvement. This improvement is greatest when the training sample size is smallest. (Note: for CART with no penalization, when multiple doses give the same optimal predicted response, we select the mean.) Similarly, when we compare the random forest and Lasso methods with their uncertainty-penalizing analogues, we again see consistent improvements in MSE. The “Constant” line in the plot measures the performance of a baseline heuristic that assigns a fixed dose of 35 mg/week to all patients. The “LB” line provides an unattainable lower bound on the performance of all methods that use the observational data: for this method, we train a random forest to predict the optimal dose as a function of the patient covariates.

We also compare our methods with the Counterfactual Risk Minimization (CRM) method of Swaminathan and Joachims [21]. We allow their method access to the true propensity scores that generated the data and optimize over all regularized linear policies for which the proposed dose is a linear function of the auxiliary covariates. We tried multiple combinations of tuning parameters, but the method always performed poorly out-of-sample; we suspect this is due to the size of the policy space. Our lasso-based method works best on this data set when the number of training samples is large, but the random forest based method is best for smaller sample sizes. With the maximal training set size of 4000, the improvements of the CART, random forest, and lasso uncertainty-penalized methods over their unpenalized analogues (2.2%, 8.6%, and 0.5%, respectively) are all statistically significant at the 0.05 family-wise error rate level by the Wilcoxon signed-rank test with Bonferroni correction (adjusted p-values 2.1 × 10^{-4}, 4.3 × 10^{-16}, and 1.2 × 10^{-6}, respectively).
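The evaluation protocol can be sketched as follows: compute the test-set MSE of prescribed doses for each randomization, then compare two methods with a Wilcoxon signed-rank test. The two arrays of per-randomization MSEs are assumed inputs from running the penalized and unpenalized methods.

```python
# Test-set MSE per randomization and paired significance test.
import numpy as np
from scipy.stats import wilcoxon

def dose_mse(prescribed, optimal):
    return float(np.mean((prescribed - optimal) ** 2))

# mse_direct[r], mse_penalized[r]: test MSE of each method on randomization r.
def compare(mse_direct, mse_penalized):
    stat, p = wilcoxon(mse_direct, mse_penalized)          # paired signed-rank test
    improvement = 1.0 - np.mean(mse_penalized) / np.mean(mse_direct)
    return improvement, p
```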
5 Conclusions

In this paper, we introduced a data-driven framework that combines ideas from predictive machine learning and causal inference to optimize an uncertain objective using observational data. Unlike most existing algorithms, our approach handles continuous and multi-dimensional decision variables by introducing terms that penalize the uncertainty associated with the predicted costs. We proved finite-sample generalization and regret bounds and provided a sufficient set of conditions under which the resulting decisions are asymptotically optimal. We demonstrated, both theoretically and with real-world examples, the tractability of the approach and the benefit of the approach over unpenalized predicted cost minimization.
1. What is the focus of the paper regarding off-policy learning?
2. Are there any concerns about the novelty of the proposed approach compared to prior works such as Swaminathan and Joachims [21]?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What are the minor points raised by the reviewer regarding nomenclature and definitions?
5. Is there any confusion regarding the cost function and its relationship with the predictive model?
Review
This paper explores off-policy learning. It proposes an optimization framework that incorporates information on the uncertainty of the predicted outcomes to learn better policies. It is based on the fact that minimizing only the cost function would result in a low-bias but high-variance estimator. The idea is to add the estimator’s variance and bias as penalty terms to the objective function, to trade off between learning estimators with small cost and those with high variance/bias.

However, it is not clear whether the findings are novel, as Swaminathan and Joachims [21] also consider incorporating the variance of the response into the objective function to attain models that exhibit lower variance. While the authors also mention that Swaminathan and Joachims [21] assume that the underlying mechanism of action selection (i.e., propensities) is known, their proposed Counterfactual Risk Minimization framework is independent of how the response variable (u in [21]) is defined (i.e., whether it requires the propensity score) and therefore does not rely on this assumption. The authors need to better explain how their work differs from that of [21].

Unfortunately, many parts of the paper are quite hard to follow. The authors should clarify the following points:

L22: Is the cost function known/given? E.g., is it c = (y - \hat{y})^2, where y is the true outcome and \hat{y} is the predicted outcome by a regression fit? Or is it determined arbitrarily? It is very important to define the cost clearly, as the rest of the equations are heavily dependent on it. The experiments do not shed light on what the cost function is either.

L117: “The first step …” → Are there any next steps? My understanding is that learning the predictive model is intertwined with optimizing for z, since there is nothing else to learn other than c (and consequently w, due to the way it’s defined in L149.5) in equation (2).

L169-170: If we could find a model that predicts the counterfactual outcomes accurately, we could use that to select the best z; the problem is that, due to sample selection bias (i.e., dependence of z on x), we cannot be sure that the regression fit would give accurate estimates of the counterfactuals. Therefore, it is wrong to base the model selection on this.

Unless the above-mentioned points are clarified, it is hard to judge the quality of this work.

Minor points:
L40: parameters are “learned”; the objective function is “optimized”.
L120-123: Unclear nomenclature: what is the “response variable”? What is the “prediction of the algorithm”? And what is the “training response”?
L128: “reduces bias of the predictions” → predictions of what? Outcome or cost?
L142.5: Did you mean B(x,z) := \sum_{i} || \hat{\mu(x,z)} - \mu(x,z) ||_2 ?
L160-161: Isn’t this the role of the predictive model? Isn’t this already done?

Typos:
L124: satisfies → to satisfy
L211: probably → probable

====

In light of the authors’ discussion in the rebuttal, I am convinced that the contribution of this paper is novel (compared to [21]). Regarding my L169-170 comment: although the authors indicated that they get good empirical results, I am still not convinced that this way of tuning parameters is valid, that is, tuning hyperparameters based on predicted counterfactuals, especially when the sample selection bias is not accounted for in the prediction procedure. I expect this method to fail in case of high selection bias in the data, and I suspect that this was not the case in the paper’s experiments.
In general, it was difficult to understand some main parts of the submission. I believe the paper would benefit from revision for greater clarity and better flow. Given the authors’ comments, I am willing to revise my score to 5.
NIPS
Title Optimization over Continuous and Multi-dimensional Decisions with Observational Data

Abstract We consider the optimization of an uncertain objective over continuous and multi-dimensional decision spaces in problems in which we are only provided with observational data. We propose a novel algorithmic framework that is tractable, asymptotically consistent, and superior to comparable methods on example problems. Our approach leverages predictive machine learning methods and incorporates information on the uncertainty of the predicted outcomes for the purpose of prescribing decisions. We demonstrate the efficacy of our method on examples involving both synthetic and real data sets.

1 Introduction

We study the general problem in which a decision maker seeks to optimize a known objective function that depends on an uncertain quantity. The uncertain quantity has an unknown distribution, which may be affected by the action chosen by the decision maker. Many important problems across a variety of fields fit into this framework. In healthcare, for example, a doctor aims to prescribe drugs in specific dosages to regulate a patient’s vital signs. In revenue management, a store owner must decide how to price various products in order to maximize profit. In online retail, companies decide which products to display for a user to maximize sales. The general problem we study is characterized by the following components:
• Decision variable: z ∈ Z ⊂ R^p;
• Outcome: Y(z) ∈ Y (we adopt the potential outcomes framework [20], in which Y(z) denotes the (random) quantity that would have been observed had decision z been chosen);
• Auxiliary covariates (also called side-information or context): x ∈ X ⊂ R^d;
• Cost function: c(z; y) : Z × Y → R (this function is known a priori).

We allow the auxiliary covariates, decision variable, and outcome to take values on multi-dimensional, continuous sets. A decision-maker seeks to determine the action that minimizes the conditional expected cost:
\min_{z \in Z} \; E[c(z; Y(z)) \mid X = x].   (1)
Of course, the distribution of Y(z) is unknown, so it is not possible to solve this problem exactly. However, we assume that we have access to observational data, consisting of n independent and identically distributed observations, (X_i, Z_i, Y_i) for i = 1, ..., n. Each of these observations consists of an auxiliary covariate vector, a decision, and an observed outcome. This type of data presents two challenges that differentiate our problem from a predictive machine learning problem. First, it is incomplete: we only observe Y_i := Y_i(Z_i), the outcome associated with the applied decision. We do not observe what the outcome would have been under a different decision. Second, the decisions were not necessarily chosen independently of the outcomes, as they would have been in a randomized experiment, and we do not know how the decisions were assigned. Following common practice in the causal inference literature, we make the ignorability assumption of Hirano and Imbens [13].

Assumption 1 (Ignorability). Y(z) ⊥ Z | X for all z ∈ Z.

In other words, we assume that historically the decision Z has been chosen as a function of the auxiliary covariates X. There were no unmeasured confounding variables that affected both the choice of decision and the outcome. Under this assumption, we are able to rewrite the objective of (1) as E[c(z;Y) | X = x, Z = z].

This form of the objective is easier to learn because it depends only on the observed outcome, not on the counterfactual outcomes. A direct approach to solving this problem is to use a regression method to predict the cost as a function of x and z and then choose z to minimize this predicted cost. If the selected regression method is uniformly consistent in z, then the action chosen by this method will be asymptotically optimal under certain conditions (we will formalize this later). However, this requires choosing a regression method that ensures the optimization problem is tractable. For this work, we restrict our attention to linear and tree-based methods, such as CART [7] and random forests [6], as they are both effective and tractable for many practical problems.

A key issue with the direct approach is that it tries to learn too much. It tries to learn the expected outcome under every possible decision, and the level of uncertainty associated with the predicted expected cost can vary between different decisions. This method can lead us to select a decision which has a small point estimate of the cost, but a large uncertainty interval.
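A minimal sketch of this direct (unpenalized) approach follows, assuming a random forest regression of cost on (x, z) and a finite grid of candidate decisions; the forest size and the grid are illustrative choices of ours.

```python
# Direct approach: regress cost on (x, z), then pick the grid point with the
# lowest predicted cost for a new covariate vector.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def direct_policy(X_train, Z_train, cost_train, x_new, z_grid):
    model = RandomForestRegressor(n_estimators=500)
    model.fit(np.hstack([X_train, Z_train]), cost_train)
    features = np.hstack([np.tile(x_new, (len(z_grid), 1)), z_grid])
    return z_grid[np.argmin(model.predict(features))]  # argmin of predicted cost
```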
1.1 Notation

Throughout the paper, we use capital letters to refer to random quantities and lower case letters to refer to deterministic quantities. Thus, we use Z to refer to the decision randomly assigned by the (unknown) historical policy and z to refer to a specific action. For a given auxiliary covariate vector, x, and a proposed decision, z, the conditional expectation E[c(z;Y) | X = x, Z = z] means the expectation of the cost function c(z;Y) under the conditional measure in which X is fixed as x and Z is fixed as z. We ignore details of measurability throughout and assume this conditional expectation is well defined. Throughout, all norms are ℓ2 norms unless otherwise specified. We use (X, Z) to denote vector concatenation.

1.2 Related Work

Recent years have seen tremendous interest in the area of data-driven optimization. Much of this work combines ideas from the statistics and machine learning literature with techniques from mathematical optimization. Bertsimas and Kallus [4] developed a framework that uses nonparametric machine learning methods to solve data-driven optimization problems in the presence of auxiliary covariates. They take advantage of the fact that, for many machine learning algorithms, the predictions are given by a linear combination of the training samples’ target variables. Kao et al. [17] and Elmachtoub and Grigas [11] developed algorithms that make predictions tailored for use in specific optimization problems. However, they all deal with the setting in which the decision does not affect the outcome. This is insufficient for many applications, such as pricing, in which the demand for a product is clearly affected by the price. Bertsimas and Kallus [5] later studied the limitations of predictive approaches to pricing problems. In particular, they demonstrated that confounding in the data between the decision and outcome can lead to large optimality gaps if ignored. They proposed a kernel-based method for data-driven optimization in this setting, but it does not scale well with the dimension of the decision space. Misic [19] developed an efficient mixed integer optimization formulation for problems in which the predicted cost is given by a tree ensemble model. This approach scales fairly well with the dimension of the decision space but does not consider the need for uncertainty penalization.
Another relevant area of research is causal inference (see Rosenbaum [20] for an overview), which concerns the study of causal effects from observational data. Much of the work in this area has focused on determining whether a treatment has a significant effect on the population as a whole. However, a growing body of work has focused on learning optimal, personalized treatments from observational data. Athey and Wager [1] proposed an algorithm that achieves optimal (up to a constant factor) regret bounds in learning a treatment policy when there are two potential treatments. Kallus [14] proposed an algorithm to efficiently learn a treatment policy when there is a finite set of potential treatments. Bertsimas et al. [3] developed a tree-based algorithm that learns to personalize treatment assignments from observational data. It is based on the optimal trees machine learning method [2] and has performed well in experiments. Considerably less attention has been paid to problems with a continuous decision space. Hirano and Imbens [13] introduced the problem of inference with a continuous treatment, and Flores [12] studied the problem of learning an optimal policy in this setting. Recently, Kallus and Zhou [16] developed an approach to policy learning with a continuous decision variable that generalizes the idea of inverse propensity score weighting. Our approach differs in that we focus on regression-based methods, which we believe scale better with the dimension of the decision space and avoid the need for density estimation.

The idea of uncertainty penalization has been explored as an alternative to empirical risk minimization in statistical learning, starting with Maurer and Pontil [18]. Swaminathan and Joachims [21] applied uncertainty penalization to the offline bandit setting. Their setting is similar to the one we study: an agent seeks to minimize the prediction error of his/her decision, but only observes the loss associated with the selected decision. They assumed that the policy used in the training data is known, which allowed them to use inverse propensity weighting methods. In contrast, we assume ignorability, but not knowledge of the historical policy, and we allow for more complex decision spaces.

We note that our approach bears a superficial resemblance to the upper confidence bound (UCB) algorithms for multi-armed bandits (cf. Bubeck et al. [8]). These algorithms choose the action with the highest upper confidence bound on its predicted expected reward. Our approach, in contrast, chooses the action with the highest lower confidence bound on its predicted expected reward (or, equivalently, the lowest upper confidence bound on predicted expected cost). The difference is that UCB algorithms choose actions with high upside to balance exploration and exploitation in the online bandit setting, whereas we work in the offline setting with a focus solely on exploitation.

1.3 Contributions

Our primary contribution is an algorithmic framework for observational-data-driven optimization that allows the decision variable to take values on continuous and multi-dimensional sets. We consider applications in personalized medicine, in which the decision is the dose of Warfarin to prescribe to a patient, and in pricing, in which the action is the list of prices for several products in a store.

2 Approach

In this section, we introduce the uncertainty penalization approach for optimization with observational data. Recall that the observational data consists of n i.i.d. observations, (X_1, Z_1, Y_1), ..., (X_n, Z_n, Y_n).
1. What is the main contribution of the paper in the context of optimal treatment rules?
2. What is the significance of considering a continuous treatment choice instead of a two-arm version?
3. How does the proposed approach address the problem of sub-optimal actions with difficult-to-estimate rewards?
4. How does the method penalize certain actions and prefer others based on estimation uncertainty?
5. How does the paper provide theoretical and empirical support for the proposed method?
6. In what ways does the paper build upon previous research in uncertainty penalization?
7. What are some potential advantages of using this method in real-world scenarios?
Review
This paper is about learning optimal treatment rules with a continuous treatment, in the potential outcomes setup with ignorability. There is a considerable amount of literature on the two-arm version of this problem; however, the present setup (with a continuous treatment choice) has been less explored.

Call mu(x, z) the expected cost of deploying decision z for a sample with covariates x. A simple approach would be to estimate \hat{mu}(x, z) and then choose \hat{z}(x) to optimize the estimated function. A problem with this approach, however, is that if the rewards of some sub-optimal actions are hard to estimate, we might accidentally pick a very bad action. The proposal of the paper is to penalize actions z(x) for which \hat{mu}(x, z) is very variable or biased and, in case of uncertainty, to prefer actions whose rewards can be accurately estimated. The authors provide both theoretical and empirical backing for this method.

Conceptually, the paper builds on work from Maurer & Pontil and Swaminathan & Joachims on uncertainty penalization; the setup here is different in that the data-collection policy may not be known and the action space may be continuous. Overall, this method is principled and may work well in many areas. The paper is also very well written. One advantage in particular is that, in many situations, we might have the most data from the “status quo” policy, and this method will correctly preserve the status quo unless there is strong evidence that another policy is better.
NIPS
Title FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence

Abstract Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model’s performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model’s predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 labels – just 4 labels per class. We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch’s success. The code is available at https://github.com/google-research/fixmatch.

1 Introduction

Deep neural networks have become the de facto model for computer vision applications. Their success is partially attributable to their scalability, i.e., the empirical observation that training them on larger datasets produces better performance [30, 20, 42, 55, 41, 21]. Deep networks often achieve their strong performance through supervised learning, which requires a labeled dataset. The performance benefit conferred by the use of a larger dataset can therefore come at a significant cost, since labeling data often requires human labor. This cost can be particularly extreme when labeling must be done by an expert (for example, a doctor in medical applications).

A powerful approach for training models on a large amount of data without requiring a large amount of labels is semi-supervised learning (SSL). SSL mitigates the requirement for labeled data by providing a means of leveraging unlabeled data. Since unlabeled data can often be obtained with minimal human labor, any performance boost conferred by SSL often comes with low cost. This has led to a plethora of SSL methods that are designed for deep networks [33, 46, 24, 51, 4, 54, 3, 25, 45, 52].

A popular class of SSL methods can be viewed as producing an artificial label for unlabeled images and training the model to predict the artificial label when fed unlabeled images as input. For example, pseudo-labeling [25] (also called self-training [32, 55, 44, 47]) uses the model’s class prediction as a label to train against. Similarly, consistency regularization [2, 46, 24] obtains an artificial label using the model’s predicted distribution after randomly modifying the input or model function.

In this work, we break the trend of recent state-of-the-art methods that combine increasingly complex mechanisms [4, 54, 3] and produce a method that is simpler, but also more accurate. Our algorithm, FixMatch, produces artificial labels using both consistency regularization and pseudo-labeling. Crucially, the artificial label is produced based on a weakly-augmented unlabeled image (e.g., using only flip-and-shift data augmentation), which is used as a target when the model is fed a strongly-augmented version of the same image.
Inspired by UDA [54] and ReMixMatch [3], we leverage Cutout [14], CTAugment [3], and RandAugment [11] for strong augmentation, all of which produce heavily-distorted versions of a given image. Following the approach of pseudo-labeling [25], we only retain an artificial label if the model assigns a high probability to one of the possible classes. A diagram of FixMatch is shown in fig. 1.

Despite its simplicity, we show that FixMatch obtains state-of-the-art performance on the most commonly-studied SSL benchmarks. For example, FixMatch achieves 94.93% accuracy on CIFAR-10 with 250 labeled examples, compared to the previous state of the art of 93.73% [3] in the standard experimental setting from [36]. We also explore the limits of our approach by applying it in the extremely-scarce-labels regime, obtaining 88.61% accuracy on CIFAR-10 with only 4 labels per class. Since FixMatch is a simplification of existing approaches but achieves substantially better performance, we include an extensive ablation study to determine which factors contribute the most to its success. A key benefit of FixMatch being a simplification of existing methods is that it requires many fewer additional hyperparameters. As such, it allows us to perform an extensive ablation study of each of them. Our ablation study also includes basic fully-supervised learning experimental choices that are often ignored or not reported when new SSL methods are proposed (such as the optimizer or learning rate schedule).

2 FixMatch

FixMatch is a combination of two approaches to SSL: consistency regularization and pseudo-labeling. Its main novelty comes from the combination of these two ingredients, as well as the use of separate weak and strong augmentations when performing consistency regularization. In this section, we first review consistency regularization and pseudo-labeling before describing FixMatch in detail. We also describe the other factors, such as regularization, that contribute to FixMatch’s empirical success.

For an L-class classification problem, let X = {(x_b, p_b) : b ∈ (1, ..., B)} be a batch of B labeled examples, where the x_b are training examples and the p_b are one-hot labels. Let U = {u_b : b ∈ (1, ..., μB)} be a batch of μB unlabeled examples, where μ is a hyperparameter that determines the relative sizes of X and U. Let p_m(y | x) be the predicted class distribution produced by the model for input x. We denote the cross-entropy between two probability distributions p and q as H(p, q). We perform two types of augmentations as part of FixMatch, strong and weak, denoted by A(·) and α(·), respectively. We describe the forms of augmentation used for A and α in Section 2.3.

2.1 Background

Consistency regularization is an important component of recent state-of-the-art SSL algorithms. It utilizes unlabeled data by relying on the assumption that the model should output similar predictions when fed perturbed versions of the same image. This idea was first proposed in [2] and popularized by [46, 24], where the model is trained both via a standard supervised classification loss and on unlabeled data via the loss function
\sum_{b=1}^{\mu B} \| p_m(y \mid \alpha(u_b)) - p_m(y \mid \alpha(u_b)) \|_2^2.   (1)
Note that both α and p_m are stochastic functions, so the two terms in eq. (1) will indeed have different values.
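A short sketch of the squared-ℓ2 consistency loss in eq. (1) is given below; weak_augment is assumed to be a stochastic, batch-wise augmentation, so the two forward passes see different perturbations. This illustrates the classic form of the loss, not FixMatch itself.

```python
# Classic consistency regularization: penalize disagreement between two
# stochastic forward passes on the same unlabeled batch.
import torch
import torch.nn.functional as F

def consistency_loss(model, u_batch, weak_augment):
    p1 = F.softmax(model(weak_augment(u_batch)), dim=-1)
    p2 = F.softmax(model(weak_augment(u_batch)), dim=-1)  # a second stochastic draw
    return ((p1 - p2) ** 2).sum(dim=-1).mean()
```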
Extensions to consistency regularization include using an adversarial transformation in place of $\alpha$ [33], using a running average or past model predictions for one invocation of $p_m$ [51, 24], using a cross-entropy loss in place of the squared $\ell_2$ loss [33, 54, 3], using stronger forms of augmentation [54, 3], and using consistency regularization as a component in a larger SSL pipeline [4, 3].

Pseudo-labeling leverages the idea of using the model itself to obtain artificial labels for unlabeled data [32, 47]. Specifically, this refers to the use of “hard” labels (i.e., the arg max of the model’s output) and only retaining artificial labels whose largest class probability falls above a predefined threshold [25]. Letting $q_b = p_m(y \mid u_b)$, pseudo-labeling uses the following loss function:

$\frac{1}{\mu B} \sum_{b=1}^{\mu B} \mathbb{1}(\max(q_b) \geq \tau)\, H(\hat{q}_b, q_b) \qquad (2)$

where $\hat{q}_b = \arg\max(q_b)$ and $\tau$ is the threshold. For simplicity, we assume that $\arg\max$ applied to a probability distribution produces a valid “one-hot” probability distribution. The use of a hard label makes pseudo-labeling closely related to entropy minimization [17, 45], where the model’s predictions are encouraged to be low-entropy (i.e., high-confidence) on unlabeled data.

2.2 Our Algorithm: FixMatch
The loss function for FixMatch consists of two cross-entropy loss terms: a supervised loss $\ell_s$ applied to labeled data and an unsupervised loss $\ell_u$. Specifically, $\ell_s$ is just the standard cross-entropy loss on weakly augmented labeled examples:

$\ell_s = \frac{1}{B} \sum_{b=1}^{B} H(p_b, p_m(y \mid \alpha(x_b))) \qquad (3)$

FixMatch computes an artificial label for each unlabeled example (in practice, we include all labeled data as part of the unlabeled data, without their labels, when constructing $\mathcal{U}$), which is then used in a standard cross-entropy loss. To obtain an artificial label, we first compute the model’s predicted class distribution given a weakly-augmented version of a given unlabeled image: $q_b = p_m(y \mid \alpha(u_b))$. Then, we use $\hat{q}_b = \arg\max(q_b)$ as a pseudo-label, except we enforce the cross-entropy loss against the model’s output for a strongly-augmented version of $u_b$:

$\ell_u = \frac{1}{\mu B} \sum_{b=1}^{\mu B} \mathbb{1}(\max(q_b) \geq \tau)\, H(\hat{q}_b, p_m(y \mid \mathcal{A}(u_b))) \qquad (4)$

where $\tau$ is a scalar hyperparameter denoting the threshold above which we retain a pseudo-label. The loss minimized by FixMatch is simply $\ell_s + \lambda_u \ell_u$, where $\lambda_u$ is a fixed scalar hyperparameter denoting the relative weight of the unlabeled loss. We present a complete algorithm for FixMatch in algorithm 1 of the supplementary material.

While eq. (4) is similar to the pseudo-labeling loss in eq. (2), it is crucially different in that the artificial label is computed based on a weakly-augmented image and the loss is enforced against the model’s output for a strongly-augmented image. This introduces a form of consistency regularization which, as we will show in section 5, is crucial to FixMatch’s success. We also note that it is typical in modern SSL algorithms to increase the weight of the unlabeled loss term ($\lambda_u$) during training [51, 24, 4, 3, 36]. We found that this was unnecessary for FixMatch, which may be due to the fact that $\max(q_b)$ is typically less than $\tau$ early in training. As training progresses, the model’s predictions become more confident and it is more frequently the case that $\max(q_b) > \tau$. This suggests that pseudo-labeling may produce a natural curriculum “for free”. Similar justifications have been used in the past for ignoring low-confidence predictions in visual domain adaptation [15].
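To ground eqs. (3) and (4), here is a minimal PyTorch sketch of the combined FixMatch objective; the function name, the weak/strong augmentation callables, and the use of integer class targets are our own simplifications rather than the authors’ code:

```python
import torch
import torch.nn.functional as F

def fixmatch_loss(model, x, p, u, weak, strong, tau=0.95, lambda_u=1.0):
    """Sketch of eqs. (3)-(4): supervised cross-entropy on weakly augmented
    labeled data plus thresholded pseudo-label cross-entropy on strongly
    augmented unlabeled data. Here p holds integer class indices."""
    # supervised loss l_s (eq. 3)
    l_s = F.cross_entropy(model(weak(x)), p)

    # pseudo-labels from the weakly augmented unlabeled batch (eq. 4);
    # the target is computed without gradients (arg max is
    # non-differentiable in any case)
    with torch.no_grad():
        q = torch.softmax(model(weak(u)), dim=1)
        confidence, q_hat = q.max(dim=1)
        mask = (confidence >= tau).float()

    per_example = F.cross_entropy(model(strong(u)), q_hat, reduction="none")
    l_u = (per_example * mask).mean()
    return l_s + lambda_u * l_u
```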
2.3 Augmentation in FixMatch
FixMatch leverages two kinds of augmentations: “weak” and “strong”. In all of our experiments, weak augmentation is a standard flip-and-shift augmentation strategy. Specifically, we randomly flip images horizontally with a probability of 50% on all datasets except SVHN, and we randomly translate images by up to 12.5% vertically and horizontally.

For “strong” augmentation, we experiment with two methods based on AutoAugment [10], each followed by Cutout [14]. AutoAugment uses reinforcement learning to find an augmentation strategy comprising transformations from the Python Imaging Library (https://www.pythonware.com/products/pil/). This requires labeled data to learn the augmentation strategy, making it problematic to use in SSL settings where limited labeled data is available. As a result, variants of AutoAugment which do not require the augmentation strategy to be learned ahead of time with labeled data, such as RandAugment [11] and CTAugment [3], have been proposed. Instead of using a learned strategy, both RandAugment and CTAugment randomly select transformations for each sample. For RandAugment, the magnitude that controls the severity of all distortions is randomly sampled from a pre-defined range (RandAugment with random magnitude was also used for UDA by [54]), whereas the magnitudes of individual transformations are learned on-the-fly for CTAugment. Refer to appendix E for more details.

2.4 Additional important factors
Semi-supervised performance can be substantially impacted by factors other than the SSL algorithm used, because considerations like the amount of regularization can be particularly important in the low-label regime. This is compounded by the fact that the performance of deep networks trained for image classification can heavily depend on the architecture, optimizer, training schedule, etc. These factors are typically not emphasized when new SSL algorithms are introduced. Instead, we endeavor to quantify their importance and highlight which ones have a significant impact on performance. Most analysis is performed in section 5; in this section we identify a few key considerations.

First, as mentioned above, we find that regularization is particularly important. In all of our models and experiments, we use simple weight decay regularization. We also found that using the Adam optimizer [22] resulted in worse performance, and instead use standard SGD with momentum [50, 40, 34]. We did not find a substantial difference between standard and Nesterov momentum. For a learning rate schedule, we use a cosine learning rate decay [28] which sets the learning rate to

$\eta \cos\!\left(\frac{7\pi k}{16K}\right)$

where $\eta$ is the initial learning rate, $k$ is the current training step, and $K$ is the total number of training steps. Finally, we report final performance using an exponential moving average of model parameters.
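For reference, the cosine decay schedule above can be written in a few lines of Python; this is a sketch of the stated formula, with an illustrative step count:

```python
import math

def cosine_lr(step, total_steps, eta=0.03):
    """Cosine decay from section 2.4: eta * cos(7*pi*k / (16*K))."""
    return eta * math.cos(7 * math.pi * step / (16 * total_steps))

# the rate decays from eta down to eta * cos(7*pi/16) ~= 0.195 * eta
print(cosine_lr(0, 2**20), cosine_lr(2**20, 2**20))
```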
2.5 Extensions of FixMatch
Due to its simplicity, FixMatch can be readily extended with techniques from the SSL literature. For example, both Augmentation Anchoring (where $M$ strong augmentations are used for consistency regularization for each unlabeled example) and Distribution Alignment (which encourages the model predictions to have the same class distribution as the labeled set) from ReMixMatch [3] can be straightforwardly applied to FixMatch. Moreover, one may replace strong augmentations in FixMatch with modality-agnostic augmentation strategies, such as MixUp [59] or adversarial perturbations [33]. We present some exploration and experiments with these extensions in appendix D.

3 Related work
Semi-supervised learning is a mature field with a huge diversity of approaches. In this review, we focus on methods closely related to FixMatch; broader introductions are provided in [60, 61, 6]. The idea behind self-training has been around for decades [47, 32]. The generality of self-training (i.e., using a model’s predictions to obtain artificial labels for unlabeled data) has led it to be applied in many domains including NLP [31], object detection [44], image classification [25, 55], and domain adaptation [62], to name a few. Pseudo-labeling refers to a specific variant where model predictions are converted to hard labels [25], which is often used along with confidence-based thresholding that retains unlabeled examples only when the classifier is sufficiently confident (e.g., [44]). While some studies have suggested that pseudo-labeling is not competitive against other modern SSL algorithms on its own [36], recent SSL algorithms have used pseudo-labeling as a part of their pipeline to produce better results [1, 39]. As mentioned above, pseudo-labeling results in a form of entropy minimization [17], which has been used as a component of many SSL techniques [33].

Consistency regularization was first proposed by [2] and later referred to as “Transformation/Stability” (or TS for short) [46] or the “Π-Model” [43]. Early extensions included using an exponential moving average of model parameters [51] or using previous model checkpoints [24] when producing artificial labels. Several methods have been used to produce random perturbations, including data augmentation [15], stochastic regularization (e.g., Dropout [49]) [46, 24], and adversarial perturbations [33]. More recently, it has been shown that using strong data augmentation can produce better results [54, 3]. These heavily-augmented examples are almost certainly outside of the data distribution, which has in fact been shown to be beneficial for SSL [12]. Noisy Student [55] has integrated these techniques into a self-training framework and demonstrated impressive performance on ImageNet with an additional massive amount of unlabeled data.

Of the aforementioned work, FixMatch bears the closest resemblance to two recent methods: Unsupervised Data Augmentation (UDA) [54] and ReMixMatch [3]. They both use a weakly-augmented example to generate an artificial label and enforce consistency against strongly-augmented examples. Neither of them uses pseudo-labeling, but both approaches “sharpen” the artificial label to encourage the model to produce high-confidence predictions. UDA in particular also only enforces consistency when the highest probability in the predicted class distribution for the artificial label is above a threshold; the thresholded pseudo-labeling of FixMatch has a similar effect to sharpening. In addition, ReMixMatch anneals the weight of the unlabeled data loss, which we omit from FixMatch because we posit that the thresholding used in pseudo-labeling has a similar effect (as mentioned in section 2.2). These similarities suggest that FixMatch can be viewed as a substantially simplified version of UDA and ReMixMatch, where we have combined two common techniques (pseudo-labeling and consistency regularization) while removing many components (sharpening, training signal annealing from UDA, distribution alignment and the rotation loss from ReMixMatch, etc.). Since the core of FixMatch is a simple combination of two existing techniques, it also bears substantial similarities to many previously-proposed SSL algorithms.
We provide a concise comparison of each of these techniques in table 1, where we list the augmentation used for the artificial label, the model’s prediction, and any post-processing applied to the artificial label. A more thorough comparison of these different algorithms and their constituent approaches is provided in the following section.

4 Experiments
We evaluate the efficacy of FixMatch on several SSL image classification benchmarks. Specifically, we perform experiments with varying amounts of labeled data and augmentation strategies on CIFAR-10/100 [23], SVHN [35], STL-10 [9], and ImageNet [13], following standard SSL evaluation protocols [36, 4, 3]. In many cases, we perform experiments with fewer labels than previously considered, since FixMatch shows promise in extremely label-scarce settings. Note that we use an identical set of hyperparameters ($\lambda_u = 1$, $\eta = 0.03$, $\beta = 0.9$, $\tau = 0.95$, $\mu = 7$, $B = 64$, $K = 2^{20}$, where $\beta$ is the momentum of the SGD optimizer and the other hyperparameters are defined in section 2) across all amounts of labeled examples and datasets other than ImageNet. A complete list of hyperparameters is reported in appendix B.1. We include an extensive ablation study in section 5 to tease apart the importance of the different components and hyperparameters of FixMatch, including factors that are not explicitly part of the SSL algorithm, such as the optimizer and learning rate.
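For convenience, the shared hyperparameter setting above can be captured as a small configuration dictionary; the key names are our own shorthand for the symbols defined in section 2:

```python
# shared FixMatch hyperparameters for all datasets except ImageNet
HPARAMS = {
    "lambda_u": 1,         # weight of the unlabeled loss
    "lr": 0.03,            # initial learning rate (eta)
    "momentum": 0.9,       # SGD momentum (beta)
    "tau": 0.95,           # confidence threshold for pseudo-labels
    "mu": 7,               # ratio of unlabeled to labeled batch size
    "batch_size": 64,      # labeled batch size B
    "total_steps": 2**20,  # number of training steps K
}
```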
4.1 CIFAR-10, CIFAR-100, and SVHN
We compare FixMatch to various existing methods on the standard CIFAR-10, CIFAR-100, and SVHN benchmarks. As suggested by [36], we reimplemented all existing baselines and performed all experiments using the same codebase. In particular, we use the same network architecture and training protocol, including the optimizer, learning rate schedule, data preprocessing, etc., across all SSL methods. Following [4], we used a Wide ResNet-28-2 [56] with 1.5M parameters for CIFAR-10 and SVHN, WRN-28-8 for CIFAR-100, and WRN-37-2 for STL-10. For baselines, we consider methods that are similar to FixMatch and/or are state-of-the-art: Π-Model [43], Mean Teacher [51], Pseudo-Label [25], MixMatch [4], UDA [54], and ReMixMatch [3]. Besides [3], previous work has not considered fewer than 25 labels per class on these benchmarks. Performing better with less supervision is the central goal of SSL in practice since it alleviates the need for labeled data. We therefore also consider the setting where only 4 labeled images are given for each class on each dataset. As far as we are aware, we are the first to run any experiments at 4 labels per class on CIFAR-100.

We report the performance of all baselines along with FixMatch in table 2. We compute the mean and variance of accuracy when training on 5 different “folds” of labeled data. We omit results with 4 labels per class for Π-Model, Mean Teacher, and Pseudo-Labeling since their performance was already poor at 250 labels. MixMatch, ReMixMatch, and UDA all perform reasonably well with 40 and 250 labels, but we find that FixMatch substantially outperforms each of these methods while nevertheless being simpler. For example, FixMatch achieves an average error rate of 11.39% on CIFAR-10 with 4 labels per class. As a point of reference, among the methods studied in [36] (where the same network architecture was used), the lowest error rate achieved on CIFAR-10 with 400 labels per class was 13.13%. Our results also compare favorably to recent state-of-the-art results achieved by ReMixMatch [3], despite the fact that we omit various components such as the self-supervised loss.

Our results are state-of-the-art on all datasets except for CIFAR-100, where ReMixMatch performs a bit better. To understand why ReMixMatch performs better than FixMatch, we experimented with a few variants of FixMatch which copy various components of ReMixMatch into FixMatch. We find that the most important term is Distribution Alignment (DA), which encourages the model predictions to have the same class distribution as the labeled set. Combining FixMatch with DA reaches a 40.14% error rate with 400 labeled examples, which is substantially better than the 44.28% achieved by ReMixMatch. We find that in most cases the performance of FixMatch using CTAugment and RandAugment is similar, except in the settings where we have 4 labels per class. This may be explained by the fact that these results are particularly high-variance. For example, the variance over 5 different folds for CIFAR-10 with 4 labels per class is 3.35%, which is significantly higher than that with 25 labels per class (0.33%). The error rates are also affected significantly by the random seeds when the number of labeled examples per class is extremely small, as shown in table 8 of the supplementary material.

4.2 STL-10
The STL-10 dataset contains 5,000 labeled images of size 96×96 from 10 classes and 100,000 unlabeled images. There exist out-of-distribution images in the unlabeled set, making it a more realistic and challenging test of SSL performance. We test SSL algorithms on five of the predefined folds of 1,000 labeled images each. Following [4], we use a WRN-37-2 network comprising 5.9M parameters (we clarify that both FixMatch and ReMixMatch [3], which reported an incorrect parameter count of 23.8M, are tested with this same 5.9M-parameter architecture). As in table 2, FixMatch matches the state-of-the-art performance of ReMixMatch [3] despite being significantly simpler.

4.3 ImageNet
We evaluate FixMatch on ImageNet to verify that it performs well on a larger and more complex dataset. Following [54], we use 10% of the training data as labeled and treat the rest as unlabeled examples. We use a ResNet-50 network architecture and RandAugment [11] as strong augmentation for this experiment. We include additional implementation details in appendix C. FixMatch achieves a top-1 error rate of 28.54 ± 0.52%, which is 2.68% better than UDA [54]. Our top-5 error rate is 10.87 ± 0.28%. While S4L [57] holds the state-of-the-art on semi-supervised ImageNet with a 26.79% error rate, it leverages 2 additional training phases (pseudo-label re-training and supervised fine-tuning) to significantly lower the error rate from 30.27% after its first phase. FixMatch outperforms S4L after its first phase, and it is possible that a similar performance gain could be achieved by incorporating these techniques into FixMatch.

4.4 Barely Supervised Learning
To test the limits of our proposed approach, we applied FixMatch to CIFAR-10 with only one example per class. The experimental protocol of barely supervised learning (BSL) shares similarities with that of few-shot learning (FSL) [37], as both assume a limited availability (e.g., 1 or 5) of labeled examples from the categories of interest; the critical difference is that in FSL one is provided with extra labeled training examples from regular classes, whereas in BSL one is given additional unlabeled training examples. We conduct two sets of experiments. First, we create four datasets by randomly selecting one example per class. We train on each dataset four times and reach between 48.58% and 85.32% test accuracy with a median of 64.28%. The intra-dataset variance is much lower, however; for example, the four models trained on the first dataset all reach between 61% and 67% accuracy, and the second dataset reaches between 68% and 75%.
We hypothesize that this variability is caused by the quality of the 10 labeled examples comprising each dataset, and that sampling low-quality examples might make it more difficult for the model to learn some particular class effectively. To test this, we construct eight new training datasets with examples ranging in “prototypicality” (i.e., how representative they are of the underlying class). Specifically, we take the ordering of the CIFAR-10 training set from [5] that sorts examples from those that are most representative to those that are least. This example ordering was determined after training many CIFAR-10 models with all labeled data. We thus do not envision this as a practical method for choosing examples for use in SSL, but rather as a way to experimentally verify that examples that are more representative are better suited for low-label training. We divide this ordering evenly into eight buckets (so all of the most representative examples are in the first bucket, and all of the outliers in the last). We then create eight labeled training sets by randomly selecting one labeled example of each class from the same bucket. Using the same hyperparameters, the model trained only on the most prototypical examples reaches a median of 78% accuracy (with a maximum of 84% accuracy); training on the middle of the distribution reaches 65% accuracy; and training on only the outliers fails to converge entirely, with 10% accuracy. Figure 2 shows the full labeled training dataset for the split where FixMatch achieved a median accuracy of 78%. Further analysis is presented in appendix B.7.

5 Ablation Study
Since FixMatch comprises a simple combination of two existing techniques, we perform an extensive ablation study to better understand why it is able to obtain state-of-the-art results. Due to the number of experiments in our ablation study, we focus on a single 250-label split from CIFAR-10 and only report results using CTAugment. Note that FixMatch with default parameters achieves a 4.84% error rate on this particular split. We present complete ablation results, including the optimizer (appendix B.3), learning rate decay schedule (appendix B.4), weight decay (appendix B.6), and labeled-to-unlabeled data ratio µ (appendix B.5), in the supplementary material.

5.1 Sharpening and Thresholding
A “soft” version of pseudo-labeling can be designed by sharpening the predicted distribution. This formulation appears in UDA and is of general interest since other methods such as MixMatch and ReMixMatch also make use of sharpening (albeit without thresholding). Using sharpening instead of an arg max introduces a hyperparameter: the temperature $T$ [4, 54, 3]. We study the interactions between the temperature $T$ and the confidence threshold $\tau$. Note that pseudo-labeling in FixMatch is recovered as $T \to 0$. The results are presented in fig. 3a and fig. 3b.
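For reference, the sharpening operation can be sketched in a few lines (our own illustration; as $T \to 0$ it recovers the hard arg max used by FixMatch):

```python
import numpy as np

def sharpen(probs, T):
    """Temperature sharpening as used by UDA/MixMatch/ReMixMatch: raise each
    probability to the power 1/T and renormalize."""
    p = probs ** (1.0 / T)
    return p / p.sum(axis=1, keepdims=True)

q = np.array([[0.6, 0.3, 0.1]])
print(sharpen(q, T=0.5))   # sharper than q: approx. [[0.783 0.196 0.022]]
print(sharpen(q, T=0.05))  # nearly one-hot, i.e. close to pseudo-labeling
```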
The threshold value of 0.95 shows the lowest error rate, and increasing it to 0.97 or 0.99 did not hurt much. In contrast, accuracy drops by more than 1.5% when using a small threshold value. Note that the threshold value controls the trade-off between the quality and the quantity of pseudo-labels. As discussed in appendix B.2, the accuracy of pseudo-labels for unlabeled data increases with higher threshold values, while the amount of unlabeled data contributing to $\ell_u$ in eq. (4) decreases. This suggests that the quality of pseudo-labels is more important than their quantity for reaching high accuracy. Sharpening, on the other hand, did not show a significant difference in performance when a confidence threshold is used. In summary, we observe that swapping pseudo-labeling for sharpening and thresholding would introduce a new hyperparameter while achieving no better performance.

5.2 Augmentation Strategy
We conduct an ablation study on different strong data augmentation policies, as augmentation plays a key role in FixMatch. Specifically, we chose RandAugment [11] and CTAugment [3], which have been used for state-of-the-art SSL algorithms such as UDA [54] and ReMixMatch [3] respectively. On CIFAR-10, CIFAR-100, and SVHN we observed highly comparable results between the two policies, whereas on STL-10 (table 2) we observe a significant gain from using CTAugment.

We measure the effect of Cutout in table 3; Cutout is applied by default after strong augmentation in both RandAugment and CTAugment. We find that both Cutout and CTAugment are required to obtain the best performance; removing either results in a significant increase in error rate. We also study different combinations of weak and strong augmentations for pseudo-label generation and prediction (i.e., the upper and lower paths in fig. 1). When we replaced the weak augmentation for label guessing with strong augmentation, we found that the model diverged early in training. Conversely, when replacing weak augmentation with no augmentation, the model overfits the guessed unlabeled labels. Using weak augmentation in place of strong augmentation to generate the model’s prediction for training peaked at 45% accuracy but was not stable and progressively collapsed to 12%, suggesting the importance of strong data augmentation. This observation is well-aligned with those from supervised learning [10].

6 Conclusion
There has been rapid recent progress in SSL. Unfortunately, much of this progress comes at the cost of increasingly complicated learning algorithms with sophisticated loss terms and numerous difficult-to-tune hyperparameters. We introduce FixMatch, a simpler SSL algorithm that achieves state-of-the-art results across many datasets. We show how FixMatch can begin to bridge the gap between low-label semi-supervised learning and few-shot learning or clustering: we obtain surprisingly high accuracy with just one label per class. Using only standard cross-entropy losses on both labeled and unlabeled data, FixMatch’s training objective can be written in just a few lines of code. Because of this simplicity, we are able to thoroughly investigate how FixMatch works. We find that certain design choices are important (and often underemphasized) – most importantly, weight decay and the choice of optimizer. The importance of these factors means that even when controlling for model architecture, as is recommended in [36], the same technique cannot always be directly compared across different implementations.
On the whole, we believe that the existence of such simple but performant semi-supervised learning algorithms will allow machine learning to be deployed in an increasing number of practical domains where labels are expensive or difficult to obtain.

Broader Impact
FixMatch helps democratize machine learning in two ways: first, its simplicity makes it accessible to a wider audience, and second, its accuracy with only a few labels means that it can be applied to domains where machine learning was previously not feasible. The flip side of the democratization of machine learning research is that it becomes easy for both good and bad actors to apply. We hope that this ability will be used for good – for example, obtaining medical scans is often far cheaper than paying an expert doctor to label every image. However, it is possible that more advanced techniques for semi-supervised learning will enable more advanced surveillance: for example, the efficacy of our one-shot classification might allow for more accurate person identification from a few images. Broadly speaking, any progress on semi-supervised learning will have these same consequences.

Funding Disclosure
Google is the sole source of funding for this work.

Acknowledgments
We thank Qizhe Xie, Avital Oliver, Quoc V. Le, and Sercan Arik for their feedback on this paper.
1. What is the focus and contribution of the paper on semi-supervised learning? 2. What are the strengths of the proposed approach, particularly in its simplicity and efficiency? 3. What are the weaknesses of the paper, especially regarding its novelty and applicability to other domains? 4. Do you have any concerns about the reliance on specific domain knowledge for the success of the proposed method? 5. How realistic is the scenario considered in the paper for practical applications?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper proposes a rather simple but efficient algorithm for semi-supervised learning. The algorithm is based on the previously proposed teacher-student architecture. The novelty is 1) that the teacher always receives weakly augmented samples (flip and shift) while the student receives strongly augmented samples (via the previously proposed RandAugment and CTAugment); and 2) that instead of computing the loss for all unlabeled examples, the loss is computed only for unlabeled examples for which the teacher is confident, and the target for the student is a one-hot label instead of a distribution (as was done previously).

Strengths
- The proposed algorithm is simple and efficient. The experimental results are good.
- It is very nice that one does not have to use ramp-ups for the weight of the consistency cost. The proposed solution is more elegant.
- I liked the ablation study on optimizers (Table 7 in the appendix). That study shows quite high sensitivity of the SSL performance to the parameters of the optimizer.

Weaknesses
- The results are similar to the previous state of the art. The proposed method seems like a small modification of existing algorithms. Below are some points that apply to many SSL papers published recently:
- It is unclear whether the proposed algorithm can be extended to other domains (not image classification).
- I wonder how much of the success of this and other similar SSL methods is due to our knowledge of the domain (image classification) that comes from training on large labeled datasets. Specifically, we know which architectural choices do and do not work on particular datasets when training with many labels. One indicator of this is the use of different hyperparameters for the smaller datasets and ImageNet in the paper.
- Is the scenario considered in the paper realistic for many practical applications? I think that in most applications, the unlabeled samples do not come from the same classes.
NIPS
Title FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence Abstract Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model’s performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model’s predictions on weaklyaugmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 – just 4 labels per class. We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch’s success. The code is available at https://github.com/google-research/fixmatch. 1 Introduction Deep neural networks have become the de facto model for computer vision applications. Their success is partially attributable to their scalability, i.e., the empirical observation that training them on larger datasets produces better performance [30, 20, 42, 55, 41, 21]. Deep networks often achieve their strong performance through supervised learning, which requires a labeled dataset. The performance benefit conferred by the use of a larger dataset can therefore come at a significant cost since labeling data often requires human labor. This cost can be particularly extreme when labeling must be done by an expert (for example, a doctor in medical applications). A powerful approach for training models on a large amount of data without requiring a large amount of labels is semi-supervised learning (SSL). SSL mitigates the requirement for labeled data by providing a means of leveraging unlabeled data. Since unlabeled data can often be obtained with minimal human labor, any performance boost conferred by SSL often comes with low cost. This has led to a plethora of SSL methods that are designed for deep networks [33, 46, 24, 51, 4, 54, 3, 25, 45, 52]. A popular class of SSL methods can be viewed as producing an artificial label for unlabeled images and training the model to predict the artificial label when fed unlabeled images as input. For example, pseudo-labeling [25] (also called self-training [32, 55, 44, 47]) uses the model’s class prediction as a label to train against. Similarly, consistency regularization [2, 46, 24] obtains an artificial label using the model’s predicted distribution after randomly modifying the input or model function. ∗Equal contribution. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this work, we break the trend of recent state-of-the-art methods that combine increasingly complex mechanisms [4, 54, 3] and produce a method that is simpler, but also more accurate. Our algorithm, FixMatch, produces artificial labels using both consistency regularization and pseudo-labeling. Crucially, the artificial label is produced based on a weakly-augmented unlabeled image (e.g., using only flip-and-shift data augmentation) which is used as a target when the model is fed a stronglyaugmented version of the same image. 
Inspired by UDA [54] and ReMixMatch [3], we leverage Cutout [14], CTAugment [3], and RandAugment [11] for strong augmentation, which all produce heavily-distorted versions of a given image. Following the approach of pseudo-labeling [25], we only retain an artificial label if the model assigns a high probability to one of the possible classes. A diagram of FixMatch is shown in fig. 1. Despite its simplicity, we show that FixMatch obtains state-of-the-art performance on the most commonly-studied SSL benchmarks. For example, FixMatch achieves 94.93% accuracy on CIFAR-10 with 250 labeled examples compared to the previous state-of-the-art of 93.73% [3] in the standard experimental setting from [36]. We also explore the limits of our approach by applying it in the extremely-scarce-labels regime, obtaining 88.61% accuracy on CIFAR-10 with only 4 labels per class. Since FixMatch is a simplification of existing approaches but achieves substantially better performance, we include an extensive ablation study to determine which factors contribute the most to its success. A key benefit of FixMatch being a simplification of existing methods is that it requires many fewer additional hyperparameters. As such, it allows us to perform an extensive ablation study of each of them. Our ablation study also includes basic fully-supervised learning experimental choices that are often ignored or not reported when new SSL methods are proposed (such as the optimizer or learning rate schedule). 2 FixMatch FixMatch is a combination of two approaches to SSL: Consistency regularization and pseudo-labeling. Its main novelty comes from the combination of these two ingredients as well as the use of a separate weak and strong augmentation when performing consistency regularization. In this section, we first review consistency regularization and pseudo-labeling before describing FixMatch in detail. We also describe the other factors, such as regularization, which contribute to FixMatch’s empirical success. For an L-class classification problem, let X = { (xb, pb) : b ∈ (1, . . . , B) } be a batch of B labeled examples, where xb are the training examples and pb are one-hot labels. Let U = { ub : b ∈ (1, . . . , µB) } be a batch of µB unlabeled examples where µ is a hyperparameter that determines the relative sizes of X and U . Let pm(y | x) be the predicted class distribution produced by the model for input x. We denote the cross-entropy between two probability distributions p and q as H(p, q). We perform two types of augmentations as part of FixMatch: strong and weak, denoted by A(·) and α(·) respectively. We describe the form of augmentation we use for A and α in section 2.3. 2.1 Background Consistency regularization is an important component of recent state-of-the-art SSL algorithms. Consistency regularization utilizes unlabeled data by relying on the assumption that the model should output similar predictions when fed perturbed versions of the same image. This idea was first proposed in [2] and popularized by [46, 24], where the model is trained both via a standard supervised classification loss and on unlabeled data via the loss function µB∑ b=1 ‖pm(y|α(ub))− pm(y|α(ub))‖22 (1) Note that both α and pm are stochastic functions, so the two terms in eq. (1) will indeed have different values. 
Extensions to this idea include using an adversarial transformation in place of α [33], using a running average or past model predictions for one invocation of pm [51, 24], using a cross-entropy loss in place of the squared `2 loss [33, 54, 3], using stronger forms of augmentation [54, 3], and using consistency regularization as a component in a larger SSL pipeline [4, 3]. Pseudo-labeling leverages the idea of using the model itself to obtain artificial labels for unlabeled data [32, 47]. Specifically, this refers to the use of “hard” labels (i.e., the arg max of the model’s output) and only retaining artificial labels whose largest class probability fall above a predefined threshold [25]. Letting qb = pm(y|ub), pseudo-labeling uses the following loss function: 1 µB µB∑ b=1 1(max(qb) ≥ τ) H(q̂b, qb) (2) where q̂b = arg max(qb) and τ is the threshold. For simplicity, we assume that arg max applied to a probability distribution produces a valid “one-hot” probability distribution. The use of a hard label makes pseudo-labeling closely related to entropy minimization [17, 45], where the model’s predictions are encouraged to be low-entropy (i.e., high-confidence) on unlabeled data. 2.2 Our Algorithm: FixMatch The loss function for FixMatch consists of two cross-entropy loss terms: a supervised loss `s applied to labeled data and an unsupervised loss `u. Specifically, `s is just the standard cross-entropy loss on weakly augmented labeled examples: `s = 1 B B∑ b=1 H(pb, pm(y | α(xb))) (3) FixMatch computes an artificial label for each unlabeled example2 which is then used in a standard cross-entropy loss. To obtain an artificial label, we first compute the model’s predicted class distribution given a weakly-augmented version of a given unlabeled image: qb = pm(y | α(ub)). Then, we use q̂b = arg max(qb) as a pseudo-label, except we enforce the cross-entropy loss against the model’s output for a strongly-augmented version of ub: `u = 1 µB µB∑ b=1 1(max(qb) ≥ τ) H(q̂b, pm(y | A(ub))) (4) where τ is a scalar hyperparameter denoting the threshold above which we retain a pseudo-label. The loss minimized by FixMatch is simply `s + λu`u where λu is a fixed scalar hyperparameter denoting the relative weight of the unlabeled loss. We present a complete algorithm for FixMatch in algorithm 1 of the supplementary material. While eq. (4) is similar to the pseudo-labeling loss in eq. (2), it is crucially different in that the artificial label is computed based on a weakly-augmented image and the loss is enforced against the model’s output for a strongly-augmented image. This introduces a form of consistency regularization which, as we will show in section 5, is crucial to FixMatch’s success. We also note that it is typical in modern SSL algorithms to increase the weight of the unlabeled loss term (λu) during training [51, 24, 4, 3, 36]. We found that this was unnecessary for FixMatch, which may be due to the fact 2In practice, we include all labeled data as part of unlabeled data without their labels when constructing U . that max(qb) is typically less than τ early in training. As training progresses, the model’s predictions become more confident and it is more frequently the case that max(qb) > τ . This suggests that pseudo-labeling may produce a natural curriculum “for free”. Similar justifications have been used in the past for ignoring low-confidence predictions in visual domain adaptation [15]. 2.3 Augmentation in FixMatch FixMatch leverages two kinds of augmentations: “weak” and “strong”. 
In all of our experiments, weak augmentation is a standard flip-and-shift augmentation strategy. Specifically, we randomly flip images horizontally with a probability of 50% on all datasets except SVHN and we randomly translate images by up to 12.5% vertically and horizontally. For “strong” augmentation, we experiment with two methods based on AutoAugment [10], which are then followed by the Cutout [14]. AutoAugment uses reinforcement learning to find an augmentation strategy comprising transformations from the Python Imaging Library.3 This requires labeled data to learn the augmentation strategy, making it problematic to use in SSL settings where limited labeled data is available. As a result, variants of AutoAugment which do not require the augmentation strategy to be learned ahead of time with labeled data, such as RandAugment [11] and CTAugment [3], have been proposed. Instead of using a learned strategy, both RandAugment and CTAugment randomly select transformations for each sample. For RandAugment, the magnitude that controls the severity of all distortions is randomly sampled from a pre-defined range (RandAugment with random magnitude was also used for UDA by [54]), whereas the magnitudes of individual transformations are learned on-the-fly for CTAugment. Refer to appendix E for more details. 2.4 Additional important factors Semi-supervised performance can be substantially impacted by factors other than the SSL algorithm used because considerations like the amount of regularization can be particularly important in the low-label regime. This is compounded by the fact that the performance of deep networks trained for image classification can heavily depend on the architecture, optimizer, training schedule, etc. These factors are typically not emphasized when new SSL algorithms are introduced. Instead, we endeavor to quantify their importance and highlight which ones have a significant impact on performance. Most analysis is performed in section 5. In this section we identify a few key considerations. First, as mentioned above, we find that regularization is particularly important. In all of our models and experiments, we use simple weight decay regularization. We also found that using the Adam optimizer [22] resulted in worse performance and instead use standard SGD with momentum [50, 40, 34]. We did not find a substantial difference between standard and Nesterov momentum. For a learning rate schedule, we use a cosine learning rate decay [28] which sets the learning rate to η cos ( 7πk 16K ) where η is the initial learning rate, k is the current training step, and K is the total number of training steps. Finally, we report final performance using an exponential moving average of model parameters. 2.5 Extensions of FixMatch Due to its simplicity, FixMatch can be readily extended with techniques in SSL literature. For example, both Augmentation Anchoring (where M strong augmentations are used for consistency regularization for each unlabeled example) and Distribution Alignment (which encourages the model predictions to have the same class distribution as the labeled set) from ReMixMatch [3] can be straightforwardly applied to FixMatch. Moreover, one may replace strong augmentations in FixMatch with modality-agnostic augmentation strategies, such as MixUp [59] or adversarial perturbations [33]. We present some exploration and experiments with these extensions in appendix D. 3 Related work Semi-supervised learning is a mature field with a huge diversity of approaches. 
In this review, we focus on methods closely related to FixMatch. Broader introductions are provided in [60, 61, 6]. The idea behind self-training has been around for decades [47, 32]. The generality of self-training (i.e., using a model’s predictions to obtain artificial labels for unlabeled data) has led it to be applied in many domains including NLP [31], object detection [44], image classification [25, 55], domain 3https://www.pythonware.com/products/pil/ adaptation [62], to name a few. Pseudo-labeling refers to a specific variant where model predictions are converted to hard labels [25], which is often used along with a confidence-based thresholding that retains unlabeled examples only when the classifier is sufficiently confident (e.g., [44]). While some studies have suggested that pseudo-labeling is not competitive against other modern SSL algorithms on its own [36], recent SSL algorithms have used pseudo-labeling as a part of their pipeline to produce better results [1, 39]. As mentioned above, pseudo-labeling results in a form of entropy minimization [17] which has been used as a component for many SSL techniques [33]. Consistency regularization was first proposed by [2] and later referred to as “Transformation/Stability” (or TS for short) [46] or the “Π-Model” [43]. Early extensions included using an exponential moving average of model parameters [51] or using previous model checkpoints [24] when producing artificial labels. Several methods have been used to produce random perturbations including data augmentation [15], stochastic regularization (e.g. Dropout [49]) [46, 24], and adversarial perturbations [33]. More recently, it has been shown that using strong data augmentation can produce better results [54, 3]. These heavily-augmented examples are almost certainly outside of the data distribution, which has in fact been shown to be beneficial for SSL [12]. Noisy Student [55] has integrated these techniques into a self-training framework and demonstrated impressive performance on ImageNet with additional massive amount of unlabeled data. Of the aforementioned work, FixMatch bears the closest resemblance to two recent methods: Unsupervised Data Augmentation (UDA) [54] and ReMixMatch [3]. They both use a weakly-augmented example to generate an artificial label and enforce consistency against strongly-augmented examples. Neither of them uses pseudo-labeling, but both approaches “sharpen” the artificial label to encourage the model to produce high-confidence predictions. UDA in particular also only enforces consistency when the highest probability in the predicted class distribution for the artificial label is above a threshold. The thresholded pseudo-labeling of FixMatch has a similar effect to sharpening. In addition, ReMixMatch anneals the weight of the unlabeled data loss, which we omit from FixMatch because we posit that the thresholding used in pseudo-labeling has a similar effect (as mentioned in section 2.2). These similarities suggest that FixMatch can be viewed as a substantially simplified version of UDA and ReMixMatch, where we have combined two common techniques (pseudo-labeling and consistency regularization) while removing many components (sharpening, training signal annealing from UDA, distribution alignment and the rotation loss from ReMixMatch, etc.). Since the core of FixMatch is a simple combination of two existing techniques, it also bears substantial similarities to many previously-proposed SSL algorithms. 
We provide a concise comparison of each of these techniques in table 1 where we list the augmentation used for the artificial label, the model’s prediction, and any post-processing applied to the artificial label. A more thorough comparison of these different algorithms and their constituent approaches is provided in the following section. 4 Experiments We evaluate the efficacy of FixMatch on several SSL image classification benchmarks. Specifically, we perform experiments with varying amounts of labeled data and augmentation strategies on CIFAR10/100 [23], SVHN [35], STL-10 [9], and ImageNet [13], following standard SSL evaluation protocols [36, 4, 3]. In many cases, we perform experiments with fewer labels than previously considered since FixMatch shows promise in extremely label-scarce settings. Note that we use an identical set of hyperparameters (λu = 1, η= 0.03, β= 0.9, τ = 0.95, µ= 7, B= 64, K = 220)4 across all amounts of labeled examples and datasets other than ImageNet. A complete list of hyperparameters is reported in appendix B.1. We include an extensive ablation study in section 5 to tease apart the importance of the different components and hyperparameters of FixMatch, including factors that are not explicitly part of the SSL algorithm such as the optimizer and learning rate. 4.1 CIFAR-10, CIFAR-100, and SVHN We compare FixMatch to various existing methods on the standard CIFAR-10, CIFAR-100, and SVHN benchmarks. As suggested by [36], we reimplemented all existing baselines and performed all experiments using the same codebase. In particular, we use the same network architecture and training protocol, including the optimizer, learning rate schedule, data preprocessing, etc. across all SSL methods. Following [4], we used a Wide ResNet-28-2 [56] with 1.5M parameters for CIFAR-10 and SVHN, WRN-28-8 for CIFAR-100, and WRN-37-2 for STL-10. For baselines, we consider methods that are similar to FixMatch and/or are state-of-the-art: Π-Model [43], Mean Teacher [51], Pseudo-Label [25], MixMatch [4], UDA [54], and ReMixMatch [3]. Besides [3], previous work has not considered fewer than 25 labels per class on these benchmarks. Performing better with less supervision is the central goal of SSL in practice since it alleviates the need for labeled data. We also consider the setting where only 4 labeled images are given for each class on each dataset. As far as we are aware, we are the first to run any experiments at 4 labels per class on CIFAR-100. We report the performance of all baselines along with FixMatch in table 2. We compute the mean and variance of accuracy when training on 5 different “folds” of labeled data. We omit results with 4 labels per class for Π-Model, Mean Teacher, and Pseudo-Labeling since the performance was poor at 250 labels. MixMatch, ReMixMatch, and UDA all perform reasonably well with 40 and 250 labels, but we find that FixMatch substantially outperforms each of these methods while nevertheless being simpler. For example, FixMatch achieves an average error rate of 11.39% on CIFAR-10 with 4 labels per class. As a point of reference, among the methods studied in [36] (where the same network architecture was used), the lowest error rate achieved on CIFAR-10 with 400 labels per class was 13.13%. Our results also compare favorably to recent state-of-the-art results achieved by ReMixMatch [3], despite the fact that we omit various components such as the self-supervised loss. 
Our results are state-of-the-art on all datasets except for CIFAR-100 where ReMixMatch performs a bit better. To understand why ReMixMatch performs better than FixMatch, we experimented with a few variants of FixMatch which copy various components of ReMixMatch into FixMatch. We find that the most important term is Distribution Alignment (DA), which encourages the model predictions to have the same class distribution as the labeled set. Combining FixMatch with DA reaches a 40.14% error rate with 400 labeled examples, which is substantially better than the 44.28% achieved by ReMixMatch. We find that in most cases the performance of FixMatch using CTAugment and RandAugment is similar, except in the settings where we have 4 labels per class. This may be explained by the fact that these results are particularly high-variance. For example, the variance over 5 different folds for CIFAR-10 with 4 labels per class is 3.35%, which is significantly higher than that with 25 labels per class (0.33%). The error rates are also affected significantly by the random seeds when the number of labeled examples per class is extremely small, as shown in table 8 of supplementary material. 4β refers to a momentum in SGD optimizer. The definition of other hyperparameters are found in section 2. 4.2 STL-10 The STL-10 dataset contains 5,000 labeled images of size 96×96 from 10 classes and 100,000 unlabeled images. There exist out-of-distribution images in the unlabeled set, making it a more realistic and challenging test of SSL performance. We test SSL algorithms on five of the predefined folds of 1,000 labeled images each. Following [4], we use a WRN-37-2 network (comprising 5.9M parameters).5 As in table 2, FixMatch achieves the state-of-the-art performance of ReMixMatch [3] despite being significantly simpler. 4.3 ImageNet We evaluate FixMatch on ImageNet to verify that it performs well on a larger and more complex dataset. Following [54], we use 10% of the training data as labeled and treat the rest as unlabeled examples. We use a ResNet-50 network architecture and RandAugment [11] as strong augmentation for this experiment. We include additional implementation details in appendix C. FixMatch achieves a top-1 error rate of 28.54± 0.52%, which is 2.68% better than UDA [54]. Our top-5 error rate is 10.87± 0.28%. While S4L [57] holds state-of-the-art on semi-supervised ImageNet with a 26.79% error rate, it leverages 2 additional training phases (pseudo-label re-training and supervised finetuning) to significantly lower the error rate from 30.27% after the first phase. FixMatch outperforms S4L after its first phase, and it is possible that a similar performance gain could be achieved by incorporating these techniques into FixMatch. 4.4 Barely Supervised Learning To test the limits of our proposed approach, we applied FixMatch to CIFAR-10 with only one example per class.6 We conduct two sets of experiments. First, we create four datasets by randomly selecting one example per class. We train on each dataset four times and reach between 48.58% and 85.32% test accuracy with a median of 64.28%. The inter-dataset variance is much lower, however; for example, the four models trained on the first dataset all reach between 61% and 67% accuracy, and the second dataset reaches between 68% and 75%. 
We hypothesize that this variability is caused by the quality of the 10 labeled examples comprising each dataset and that sampling low-quality examples might make it more difficult for the model to learn some particular class effectively. To test this, we construct eight new training datasets with examples ranging in “prototypicality” (i.e., representative of the underlying class). Specifically, we take the ordering of the CIFAR-10 training set from [5] that sorts examples from those that are most representative to those that are least. This example ordering was determined after training many CIFAR-10 models with all labeled data. We thus do not envision this as a practical method for choosing examples for use in SSL, but rather to experimentally verify that examples that are more representative are better suited for low-label training. We divide this ordering evenly into eight buckets (so all of the most representative examples are in the first bucket, and all of the outliers in the last). We then create eight labeled training sets by randomly selecting one labeled example of each class from the same bucket. Using the same hyperparameters, the model trained only on the most prototypical examples reaches a median of 78% accuracy (with a maximum of 84% accuracy); training on the middle of the distribution reaches 65% accuracy; and training on only the outliers fails to converge completely, with 10% accuracy. Figure 2 shows the full labeled training dataset for the split where FixMatch achieved a median accuracy of 78%. Further analysis is presented in Appendix B.7. 5We clarify that both FixMatch and ReMixMatch [3], which has reported an incorrect number of network parameters (23.8M), are tested with the same network architecture containing 5.9M parameters. 6The experimental protocol of barely supervised learning (BSL) shares similarities to those of few-shot learning (FSL) [37] as they both assume a limited availability (e.g., 1 or 5) of labeled examples from categories of interest. However, two protocols have a critical difference, where for FSL one is provided with extra labeled training examples from regular classes, whereas for BSL one is given additional unlabeled training examples. 5 Ablation Study Since FixMatch comprises a simple combination of two existing techniques, we perform an extensive ablation study to better understand why it is able to obtain state-of-the-art results. Due to the number of experiments in our ablation study, we focus on studying with a single 250 label split from CIFAR10 and only report results using CTAugment. Note that FixMatch with default parameters achieves 4.84% error rate on this particular split. We present complete ablation results, including optimizer (appendix B.3), learning rate decay schedule (appendix B.4), weight decay (appendix B.6), labeled to unlabeled data ratio µ (appendix B.5), in the supplementary material. 5.1 Sharpening and Thresholding A “soft” version of pseudo-labeling can be designed by sharpening the predicted distribution. This formulation appears in UDA and is of general interest since other methods such as MixMatch and ReMixMatch also make use of sharpening (albeit without thresholding). Using sharpening instead of an arg max introduces a hyper-parameter: the temperature T [4, 54, 3]. We study the interactions between the temperature T and the confidence threshold τ . Note that pseudo-labeling in FixMatch is recovered as T → 0. The results are presented in fig. 3a and fig. 3b. 
The threshold value of 0.95 shows the lowest error rate, though increasing it to 0.97 or 0.99 did not hurt much. In contrast, accuracy drops by more than 1.5% when using a small threshold value. Note that the threshold value controls the trade-off between the quality and the quantity of pseudo-labels. As discussed in appendix B.2, the accuracy of pseudo-labels for unlabeled data increases with higher threshold values, while the amount of unlabeled data contributing to `u in eq. (4) decreases. This suggests that the quality of pseudo-labels is more important than the quantity for reaching a high accuracy. Sharpening, on the other hand, did not show a significant difference in performance when a confidence threshold is used. In summary, we observe that swapping pseudo-labeling for sharpening and thresholding would introduce a new hyperparameter while achieving no better performance. 5.2 Augmentation Strategy We conduct an ablation study on different strong data augmentation policies as it plays a key role in FixMatch. Specifically, we chose RandAugment [11] and CTAugment [3], which have been used for state-of-the-art SSL algorithms such as UDA [54] and ReMixMatch [4] respectively. On CIFAR-10, CIFAR-100, and SVHN we observed highly comparable results between the two policies, whereas in STL-10 (table 2), we observe a significant gain by using CTAugment. We measure the effect of Cutout in table 3, which is used by default after strong augmentation in both RandAugment and CTAugment. We find that both Cutout and CTAugment are required to obtain the best performance; removing either results in a significant increase in error rate. We also study different combinations of weak and strong augmentations for pseudo-label generation and prediction (i.e., the upper and lower paths in fig. 1). When we replaced the weak augmentation for label guessing with strong augmentation, we found that the model diverged early in training. Conversely, when replacing weak augmentation with no augmentation, the model overfits the guessed unlabeled labels. Using weak augmentation in place of strong augmentation to generate the model’s prediction for training peaked at 45% accuracy but was not stable and progressively collapsed to 12%, suggesting the importance of strong data augmentation. This observation is well-aligned with those from supervised learning [10]. 6 Conclusion There has been rapid recent progress in SSL. Unfortunately, much of this progress comes at the cost of increasingly complicated learning algorithms with sophisticated loss terms and numerous difficult-totune hyper-parameters. We introduce FixMatch, a simpler SSL algorithm that achieves state-of-the-art results across many datasets. We show how FixMatch can begin to bridge the gap between low-label semi-supervised learning and few-shot learning or clustering: we obtain surprisingly-high accuracy with just one label per class. Using only standard cross-entropy losses on both labeled and unlabeled data, FixMatch’s training objective can be written in just a few lines of code. Because of this simplicity, we are able to thoroughly investigate how FixMatch works. We find that certain design choices are important (and often underemphasized) – most importantly, weight decay and the choice of optimizer. The importance of these factors means that even when controlling for model architecture as is recommended in [36], the same technique can not always be directly compared across different implementations. 
6 Conclusion

There has been rapid recent progress in SSL. Unfortunately, much of this progress comes at the cost of increasingly complicated learning algorithms with sophisticated loss terms and numerous difficult-to-tune hyperparameters. We introduce FixMatch, a simpler SSL algorithm that achieves state-of-the-art results across many datasets. We show how FixMatch can begin to bridge the gap between low-label semi-supervised learning and few-shot learning or clustering: we obtain surprisingly high accuracy with just one label per class. Using only standard cross-entropy losses on both labeled and unlabeled data, FixMatch's training objective can be written in just a few lines of code. Because of this simplicity, we are able to thoroughly investigate how FixMatch works. We find that certain design choices are important (and often underemphasized), most importantly weight decay and the choice of optimizer. The importance of these factors means that, even when controlling for model architecture as recommended in [36], the same technique cannot always be directly compared across different implementations.

On the whole, we believe that the existence of such simple but performant semi-supervised learning algorithms will help machine learning be deployed in the growing number of practical domains where labels are expensive or difficult to obtain.

Broader Impact

FixMatch helps democratize machine learning in two ways: first, its simplicity makes it available to a wider audience, and second, its accuracy with only a few labels means that it can be applied to domains where machine learning was previously not feasible. The flip side of democratizing machine learning research is that it becomes easier for both good and bad actors to apply. We hope that this ability will be used for good; for example, obtaining medical scans is often far cheaper than paying an expert doctor to label every image. However, it is possible that more advanced techniques for semi-supervised learning will enable more advanced surveillance: for example, the efficacy of our one-shot classification might allow more accurate person identification from a few images. Broadly speaking, any progress on semi-supervised learning will have these same consequences.

Funding Disclosure

Google is the sole source of funding for this work.

Acknowledgment

We thank Qizhe Xie, Avital Oliver, Quoc V. Le, and Sercan Arik for their feedback on this paper.
1. What is the main contribution of the paper in semi-supervised learning?
2. What are the strengths of the proposed method, particularly in its simplicity and performance?
3. What are the weaknesses of the paper, especially regarding its comparison with prior works such as ReMixMatch?
4. Do you have any concerns about the experimental setup or results presented in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

The work combines the pseudo-labeling and consistency regularization techniques in SSL to propose a simple method for high-performing semi-supervised learning. The main idea is to use weakly-augmented images to produce pseudo-labels, and then to train the model on strongly-augmented images with these labels.

Strengths

The paper presents a simplified version of earlier works, such as UDA and ReMixMatch, while achieving performance on par with ReMixMatch. Additionally, some experiments applying the method to a few-shot learning task are presented. An extensive ablation study (in the supplementary material) is provided.

Weaknesses

1. I quote from a ReMixMatch figure caption: "Augmentation anchoring. We use the prediction for a weakly augmented image (green, middle) as the target for predictions on strong augmentations of the same image". This sounds to me like a summary of the presented work, and as such I consider it a special case of ReMixMatch. The authors have discussed the differences between their work and ReMixMatch, mentioning that (1) "ReMixMatch doesn't use pseudo-labeling", and (2) ReMixMatch uses sharpening of pseudo-labels and weight annealing of the unlabeled data loss. However, section 3.2.1 of ReMixMatch states that the guessed labels are used as targets (for strongly augmented images) with a cross-entropy loss. I believe this is called self-training with pseudo-labeling, just as this work proposes.

2. It is stated (lines 213-215) that FixMatch substantially outperforms MixMatch, ReMixMatch, and UDA with 40 and 250 labels, but this is incorrect. The performance of ReMixMatch is very close to that of FixMatch in this regime (and ReMixMatch outperforms FixMatch with more data).

3. The "Barely supervised learning" section describes 1-shot experiments in a setting that approximates the standard few-shot training/test regime (i.e., with episodes). The authors are encouraged to align with standard few-shot protocols and compare their performance to other methods in the data-starved regime (e.g., ReMixMatch).
NIPS
1. What is the main contribution of the paper in semi-supervised learning?
2. What are the strengths of the proposed method, particularly its novelty and effectiveness?
3. What are the weaknesses of the paper regarding the explanation of the method's success and the definition of weak and strong augmentations?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

They propose a new approach for semi-supervised learning (SSL) that assigns a pseudo-label to a strongly-augmented image using the pseudo-label of the corresponding weakly-augmented image. Despite the simplicity of their method, they achieve very strong results in SSL benchmark settings.

Strengths

1. The proposed method can be a good direction for semi-supervised learning. Although there are several SSL methods that effectively use data augmentation, such as mixup, the proposed approach seems to have different aspects from previous works. FixMatch is a simple yet effective SSL method.
2. The empirical evaluation is very carefully designed and sufficiently shows the effectiveness of the approach.
3. The analyses of the augmentation strategy and of sharpening also provide good insights.

Weaknesses

1. The paper does not offer a good explanation of why guiding predictions for strongly-augmented images with those for weakly-augmented images works so well. Although this issue may be common to other works, it would be good if the authors provided empirical or theoretical analysis on this point.
2. Is it always easy to define "weak" and "strong" augmentation? The two kinds of augmentation are defined heuristically, and for some datasets the chosen "weak" augmentation might in fact act as a "strong" one. I cannot come up with a good example, but the appropriate augmentations may differ from dataset to dataset.
NIPS
Title FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence Abstract Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model’s performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model’s predictions on weaklyaugmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 – just 4 labels per class. We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch’s success. The code is available at https://github.com/google-research/fixmatch. 1 Introduction Deep neural networks have become the de facto model for computer vision applications. Their success is partially attributable to their scalability, i.e., the empirical observation that training them on larger datasets produces better performance [30, 20, 42, 55, 41, 21]. Deep networks often achieve their strong performance through supervised learning, which requires a labeled dataset. The performance benefit conferred by the use of a larger dataset can therefore come at a significant cost since labeling data often requires human labor. This cost can be particularly extreme when labeling must be done by an expert (for example, a doctor in medical applications). A powerful approach for training models on a large amount of data without requiring a large amount of labels is semi-supervised learning (SSL). SSL mitigates the requirement for labeled data by providing a means of leveraging unlabeled data. Since unlabeled data can often be obtained with minimal human labor, any performance boost conferred by SSL often comes with low cost. This has led to a plethora of SSL methods that are designed for deep networks [33, 46, 24, 51, 4, 54, 3, 25, 45, 52]. A popular class of SSL methods can be viewed as producing an artificial label for unlabeled images and training the model to predict the artificial label when fed unlabeled images as input. For example, pseudo-labeling [25] (also called self-training [32, 55, 44, 47]) uses the model’s class prediction as a label to train against. Similarly, consistency regularization [2, 46, 24] obtains an artificial label using the model’s predicted distribution after randomly modifying the input or model function. ∗Equal contribution. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this work, we break the trend of recent state-of-the-art methods that combine increasingly complex mechanisms [4, 54, 3] and produce a method that is simpler, but also more accurate. Our algorithm, FixMatch, produces artificial labels using both consistency regularization and pseudo-labeling. Crucially, the artificial label is produced based on a weakly-augmented unlabeled image (e.g., using only flip-and-shift data augmentation) which is used as a target when the model is fed a stronglyaugmented version of the same image. 
Inspired by UDA [54] and ReMixMatch [3], we leverage Cutout [14], CTAugment [3], and RandAugment [11] for strong augmentation, which all produce heavily-distorted versions of a given image. Following the approach of pseudo-labeling [25], we only retain an artificial label if the model assigns a high probability to one of the possible classes. A diagram of FixMatch is shown in fig. 1. Despite its simplicity, we show that FixMatch obtains state-of-the-art performance on the most commonly-studied SSL benchmarks. For example, FixMatch achieves 94.93% accuracy on CIFAR-10 with 250 labeled examples compared to the previous state-of-the-art of 93.73% [3] in the standard experimental setting from [36]. We also explore the limits of our approach by applying it in the extremely-scarce-labels regime, obtaining 88.61% accuracy on CIFAR-10 with only 4 labels per class. Since FixMatch is a simplification of existing approaches but achieves substantially better performance, we include an extensive ablation study to determine which factors contribute the most to its success. A key benefit of FixMatch being a simplification of existing methods is that it requires many fewer additional hyperparameters. As such, it allows us to perform an extensive ablation study of each of them. Our ablation study also includes basic fully-supervised learning experimental choices that are often ignored or not reported when new SSL methods are proposed (such as the optimizer or learning rate schedule). 2 FixMatch FixMatch is a combination of two approaches to SSL: Consistency regularization and pseudo-labeling. Its main novelty comes from the combination of these two ingredients as well as the use of a separate weak and strong augmentation when performing consistency regularization. In this section, we first review consistency regularization and pseudo-labeling before describing FixMatch in detail. We also describe the other factors, such as regularization, which contribute to FixMatch’s empirical success. For an L-class classification problem, let X = { (xb, pb) : b ∈ (1, . . . , B) } be a batch of B labeled examples, where xb are the training examples and pb are one-hot labels. Let U = { ub : b ∈ (1, . . . , µB) } be a batch of µB unlabeled examples where µ is a hyperparameter that determines the relative sizes of X and U . Let pm(y | x) be the predicted class distribution produced by the model for input x. We denote the cross-entropy between two probability distributions p and q as H(p, q). We perform two types of augmentations as part of FixMatch: strong and weak, denoted by A(·) and α(·) respectively. We describe the form of augmentation we use for A and α in section 2.3. 2.1 Background Consistency regularization is an important component of recent state-of-the-art SSL algorithms. Consistency regularization utilizes unlabeled data by relying on the assumption that the model should output similar predictions when fed perturbed versions of the same image. This idea was first proposed in [2] and popularized by [46, 24], where the model is trained both via a standard supervised classification loss and on unlabeled data via the loss function µB∑ b=1 ‖pm(y|α(ub))− pm(y|α(ub))‖22 (1) Note that both α and pm are stochastic functions, so the two terms in eq. (1) will indeed have different values. 
Extensions to this idea include using an adversarial transformation in place of α [33], using a running average or past model predictions for one invocation of pm [51, 24], using a cross-entropy loss in place of the squared `2 loss [33, 54, 3], using stronger forms of augmentation [54, 3], and using consistency regularization as a component in a larger SSL pipeline [4, 3]. Pseudo-labeling leverages the idea of using the model itself to obtain artificial labels for unlabeled data [32, 47]. Specifically, this refers to the use of “hard” labels (i.e., the arg max of the model’s output) and only retaining artificial labels whose largest class probability fall above a predefined threshold [25]. Letting qb = pm(y|ub), pseudo-labeling uses the following loss function: 1 µB µB∑ b=1 1(max(qb) ≥ τ) H(q̂b, qb) (2) where q̂b = arg max(qb) and τ is the threshold. For simplicity, we assume that arg max applied to a probability distribution produces a valid “one-hot” probability distribution. The use of a hard label makes pseudo-labeling closely related to entropy minimization [17, 45], where the model’s predictions are encouraged to be low-entropy (i.e., high-confidence) on unlabeled data. 2.2 Our Algorithm: FixMatch The loss function for FixMatch consists of two cross-entropy loss terms: a supervised loss `s applied to labeled data and an unsupervised loss `u. Specifically, `s is just the standard cross-entropy loss on weakly augmented labeled examples: `s = 1 B B∑ b=1 H(pb, pm(y | α(xb))) (3) FixMatch computes an artificial label for each unlabeled example2 which is then used in a standard cross-entropy loss. To obtain an artificial label, we first compute the model’s predicted class distribution given a weakly-augmented version of a given unlabeled image: qb = pm(y | α(ub)). Then, we use q̂b = arg max(qb) as a pseudo-label, except we enforce the cross-entropy loss against the model’s output for a strongly-augmented version of ub: `u = 1 µB µB∑ b=1 1(max(qb) ≥ τ) H(q̂b, pm(y | A(ub))) (4) where τ is a scalar hyperparameter denoting the threshold above which we retain a pseudo-label. The loss minimized by FixMatch is simply `s + λu`u where λu is a fixed scalar hyperparameter denoting the relative weight of the unlabeled loss. We present a complete algorithm for FixMatch in algorithm 1 of the supplementary material. While eq. (4) is similar to the pseudo-labeling loss in eq. (2), it is crucially different in that the artificial label is computed based on a weakly-augmented image and the loss is enforced against the model’s output for a strongly-augmented image. This introduces a form of consistency regularization which, as we will show in section 5, is crucial to FixMatch’s success. We also note that it is typical in modern SSL algorithms to increase the weight of the unlabeled loss term (λu) during training [51, 24, 4, 3, 36]. We found that this was unnecessary for FixMatch, which may be due to the fact 2In practice, we include all labeled data as part of unlabeled data without their labels when constructing U . that max(qb) is typically less than τ early in training. As training progresses, the model’s predictions become more confident and it is more frequently the case that max(qb) > τ . This suggests that pseudo-labeling may produce a natural curriculum “for free”. Similar justifications have been used in the past for ignoring low-confidence predictions in visual domain adaptation [15]. 2.3 Augmentation in FixMatch FixMatch leverages two kinds of augmentations: “weak” and “strong”. 
In all of our experiments, weak augmentation is a standard flip-and-shift augmentation strategy. Specifically, we randomly flip images horizontally with a probability of 50% on all datasets except SVHN and we randomly translate images by up to 12.5% vertically and horizontally. For “strong” augmentation, we experiment with two methods based on AutoAugment [10], which are then followed by the Cutout [14]. AutoAugment uses reinforcement learning to find an augmentation strategy comprising transformations from the Python Imaging Library.3 This requires labeled data to learn the augmentation strategy, making it problematic to use in SSL settings where limited labeled data is available. As a result, variants of AutoAugment which do not require the augmentation strategy to be learned ahead of time with labeled data, such as RandAugment [11] and CTAugment [3], have been proposed. Instead of using a learned strategy, both RandAugment and CTAugment randomly select transformations for each sample. For RandAugment, the magnitude that controls the severity of all distortions is randomly sampled from a pre-defined range (RandAugment with random magnitude was also used for UDA by [54]), whereas the magnitudes of individual transformations are learned on-the-fly for CTAugment. Refer to appendix E for more details. 2.4 Additional important factors Semi-supervised performance can be substantially impacted by factors other than the SSL algorithm used because considerations like the amount of regularization can be particularly important in the low-label regime. This is compounded by the fact that the performance of deep networks trained for image classification can heavily depend on the architecture, optimizer, training schedule, etc. These factors are typically not emphasized when new SSL algorithms are introduced. Instead, we endeavor to quantify their importance and highlight which ones have a significant impact on performance. Most analysis is performed in section 5. In this section we identify a few key considerations. First, as mentioned above, we find that regularization is particularly important. In all of our models and experiments, we use simple weight decay regularization. We also found that using the Adam optimizer [22] resulted in worse performance and instead use standard SGD with momentum [50, 40, 34]. We did not find a substantial difference between standard and Nesterov momentum. For a learning rate schedule, we use a cosine learning rate decay [28] which sets the learning rate to η cos ( 7πk 16K ) where η is the initial learning rate, k is the current training step, and K is the total number of training steps. Finally, we report final performance using an exponential moving average of model parameters. 2.5 Extensions of FixMatch Due to its simplicity, FixMatch can be readily extended with techniques in SSL literature. For example, both Augmentation Anchoring (where M strong augmentations are used for consistency regularization for each unlabeled example) and Distribution Alignment (which encourages the model predictions to have the same class distribution as the labeled set) from ReMixMatch [3] can be straightforwardly applied to FixMatch. Moreover, one may replace strong augmentations in FixMatch with modality-agnostic augmentation strategies, such as MixUp [59] or adversarial perturbations [33]. We present some exploration and experiments with these extensions in appendix D. 3 Related work Semi-supervised learning is a mature field with a huge diversity of approaches. 
In this review, we focus on methods closely related to FixMatch. Broader introductions are provided in [60, 61, 6]. The idea behind self-training has been around for decades [47, 32]. The generality of self-training (i.e., using a model’s predictions to obtain artificial labels for unlabeled data) has led it to be applied in many domains including NLP [31], object detection [44], image classification [25, 55], domain 3https://www.pythonware.com/products/pil/ adaptation [62], to name a few. Pseudo-labeling refers to a specific variant where model predictions are converted to hard labels [25], which is often used along with a confidence-based thresholding that retains unlabeled examples only when the classifier is sufficiently confident (e.g., [44]). While some studies have suggested that pseudo-labeling is not competitive against other modern SSL algorithms on its own [36], recent SSL algorithms have used pseudo-labeling as a part of their pipeline to produce better results [1, 39]. As mentioned above, pseudo-labeling results in a form of entropy minimization [17] which has been used as a component for many SSL techniques [33]. Consistency regularization was first proposed by [2] and later referred to as “Transformation/Stability” (or TS for short) [46] or the “Π-Model” [43]. Early extensions included using an exponential moving average of model parameters [51] or using previous model checkpoints [24] when producing artificial labels. Several methods have been used to produce random perturbations including data augmentation [15], stochastic regularization (e.g. Dropout [49]) [46, 24], and adversarial perturbations [33]. More recently, it has been shown that using strong data augmentation can produce better results [54, 3]. These heavily-augmented examples are almost certainly outside of the data distribution, which has in fact been shown to be beneficial for SSL [12]. Noisy Student [55] has integrated these techniques into a self-training framework and demonstrated impressive performance on ImageNet with additional massive amount of unlabeled data. Of the aforementioned work, FixMatch bears the closest resemblance to two recent methods: Unsupervised Data Augmentation (UDA) [54] and ReMixMatch [3]. They both use a weakly-augmented example to generate an artificial label and enforce consistency against strongly-augmented examples. Neither of them uses pseudo-labeling, but both approaches “sharpen” the artificial label to encourage the model to produce high-confidence predictions. UDA in particular also only enforces consistency when the highest probability in the predicted class distribution for the artificial label is above a threshold. The thresholded pseudo-labeling of FixMatch has a similar effect to sharpening. In addition, ReMixMatch anneals the weight of the unlabeled data loss, which we omit from FixMatch because we posit that the thresholding used in pseudo-labeling has a similar effect (as mentioned in section 2.2). These similarities suggest that FixMatch can be viewed as a substantially simplified version of UDA and ReMixMatch, where we have combined two common techniques (pseudo-labeling and consistency regularization) while removing many components (sharpening, training signal annealing from UDA, distribution alignment and the rotation loss from ReMixMatch, etc.). Since the core of FixMatch is a simple combination of two existing techniques, it also bears substantial similarities to many previously-proposed SSL algorithms. 
We provide a concise comparison of each of these techniques in table 1 where we list the augmentation used for the artificial label, the model’s prediction, and any post-processing applied to the artificial label. A more thorough comparison of these different algorithms and their constituent approaches is provided in the following section. 4 Experiments We evaluate the efficacy of FixMatch on several SSL image classification benchmarks. Specifically, we perform experiments with varying amounts of labeled data and augmentation strategies on CIFAR10/100 [23], SVHN [35], STL-10 [9], and ImageNet [13], following standard SSL evaluation protocols [36, 4, 3]. In many cases, we perform experiments with fewer labels than previously considered since FixMatch shows promise in extremely label-scarce settings. Note that we use an identical set of hyperparameters (λu = 1, η= 0.03, β= 0.9, τ = 0.95, µ= 7, B= 64, K = 220)4 across all amounts of labeled examples and datasets other than ImageNet. A complete list of hyperparameters is reported in appendix B.1. We include an extensive ablation study in section 5 to tease apart the importance of the different components and hyperparameters of FixMatch, including factors that are not explicitly part of the SSL algorithm such as the optimizer and learning rate. 4.1 CIFAR-10, CIFAR-100, and SVHN We compare FixMatch to various existing methods on the standard CIFAR-10, CIFAR-100, and SVHN benchmarks. As suggested by [36], we reimplemented all existing baselines and performed all experiments using the same codebase. In particular, we use the same network architecture and training protocol, including the optimizer, learning rate schedule, data preprocessing, etc. across all SSL methods. Following [4], we used a Wide ResNet-28-2 [56] with 1.5M parameters for CIFAR-10 and SVHN, WRN-28-8 for CIFAR-100, and WRN-37-2 for STL-10. For baselines, we consider methods that are similar to FixMatch and/or are state-of-the-art: Π-Model [43], Mean Teacher [51], Pseudo-Label [25], MixMatch [4], UDA [54], and ReMixMatch [3]. Besides [3], previous work has not considered fewer than 25 labels per class on these benchmarks. Performing better with less supervision is the central goal of SSL in practice since it alleviates the need for labeled data. We also consider the setting where only 4 labeled images are given for each class on each dataset. As far as we are aware, we are the first to run any experiments at 4 labels per class on CIFAR-100. We report the performance of all baselines along with FixMatch in table 2. We compute the mean and variance of accuracy when training on 5 different “folds” of labeled data. We omit results with 4 labels per class for Π-Model, Mean Teacher, and Pseudo-Labeling since the performance was poor at 250 labels. MixMatch, ReMixMatch, and UDA all perform reasonably well with 40 and 250 labels, but we find that FixMatch substantially outperforms each of these methods while nevertheless being simpler. For example, FixMatch achieves an average error rate of 11.39% on CIFAR-10 with 4 labels per class. As a point of reference, among the methods studied in [36] (where the same network architecture was used), the lowest error rate achieved on CIFAR-10 with 400 labels per class was 13.13%. Our results also compare favorably to recent state-of-the-art results achieved by ReMixMatch [3], despite the fact that we omit various components such as the self-supervised loss. 
Our results are state-of-the-art on all datasets except for CIFAR-100, where ReMixMatch performs a bit better. To understand why ReMixMatch performs better than FixMatch, we experimented with a few variants of FixMatch that copy various components of ReMixMatch into FixMatch. We find that the most important term is Distribution Alignment (DA), which encourages the model predictions to have the same class distribution as the labeled set. Combining FixMatch with DA reaches a 40.14% error rate with 400 labeled examples, which is substantially better than the 44.28% achieved by ReMixMatch. We find that in most cases the performance of FixMatch using CTAugment and RandAugment is similar, except in the settings where we have 4 labels per class. This may be explained by the fact that these results are particularly high-variance. For example, the variance over 5 different folds for CIFAR-10 with 4 labels per class is 3.35%, which is significantly higher than that with 25 labels per class (0.33%). The error rates are also affected significantly by the random seeds when the number of labeled examples per class is extremely small, as shown in table 8 of the supplementary material. 4.2 STL-10 The STL-10 dataset contains 5,000 labeled images of size 96×96 from 10 classes and 100,000 unlabeled images. There exist out-of-distribution images in the unlabeled set, making it a more realistic and challenging test of SSL performance. We test SSL algorithms on five of the predefined folds of 1,000 labeled images each. Following [4], we use a WRN-37-2 network (comprising 5.9M parameters).5 As shown in table 2, FixMatch matches the state-of-the-art performance of ReMixMatch [3] despite being significantly simpler. 4.3 ImageNet We evaluate FixMatch on ImageNet to verify that it performs well on a larger and more complex dataset. Following [54], we use 10% of the training data as labeled and treat the rest as unlabeled examples. We use a ResNet-50 network architecture and RandAugment [11] as strong augmentation for this experiment. We include additional implementation details in appendix C. FixMatch achieves a top-1 error rate of 28.54 ± 0.52%, which is 2.68% better than UDA [54]. Our top-5 error rate is 10.87 ± 0.28%. While S4L [57] holds state-of-the-art on semi-supervised ImageNet with a 26.79% error rate, it leverages two additional training phases (pseudo-label re-training and supervised finetuning) to significantly lower the error rate from 30.27% after the first phase. FixMatch outperforms S4L after its first phase, and it is possible that a similar performance gain could be achieved by incorporating these techniques into FixMatch. 4.4 Barely Supervised Learning To test the limits of our proposed approach, we applied FixMatch to CIFAR-10 with only one example per class.6 We conduct two sets of experiments. First, we create four datasets by randomly selecting one example per class. We train on each dataset four times and reach between 48.58% and 85.32% test accuracy with a median of 64.28%. The intra-dataset variance is much lower, however; for example, the four models trained on the first dataset all reach between 61% and 67% accuracy, and the second dataset reaches between 68% and 75%.
We hypothesize that this variability is caused by the quality of the 10 labeled examples comprising each dataset and that sampling low-quality examples might make it more difficult for the model to learn some particular class effectively. To test this, we construct eight new training datasets with examples ranging in “prototypicality” (i.e., how representative they are of the underlying class). Specifically, we take the ordering of the CIFAR-10 training set from [5] that sorts examples from those that are most representative to those that are least. This example ordering was determined after training many CIFAR-10 models with all labeled data. We thus do not envision this as a practical method for choosing examples for use in SSL, but rather use it to experimentally verify that examples that are more representative are better suited for low-label training. We divide this ordering evenly into eight buckets (so all of the most representative examples are in the first bucket, and all of the outliers in the last). We then create eight labeled training sets by randomly selecting one labeled example of each class from the same bucket. Using the same hyperparameters, the model trained only on the most prototypical examples reaches a median of 78% accuracy (with a maximum of 84% accuracy); training on the middle of the distribution reaches 65% accuracy; and training on only the outliers fails to converge entirely, with 10% (chance-level) accuracy. Figure 2 shows the full labeled training dataset for the split where FixMatch achieved a median accuracy of 78%. Further analysis is presented in Appendix B.7. 5 We clarify that both FixMatch and ReMixMatch [3], which reported an incorrect number of network parameters (23.8M), are tested with the same network architecture, containing 5.9M parameters. 6 The experimental protocol of barely supervised learning (BSL) shares similarities with that of few-shot learning (FSL) [37], as both assume limited availability (e.g., 1 or 5) of labeled examples from the categories of interest. However, the two protocols differ critically: in FSL, one is provided with extra labeled training examples from regular classes, whereas in BSL, one is given additional unlabeled training examples. 5 Ablation Study Since FixMatch comprises a simple combination of two existing techniques, we perform an extensive ablation study to better understand why it is able to obtain state-of-the-art results. Due to the number of experiments in our ablation study, we focus on a single 250-label split from CIFAR-10 and only report results using CTAugment. Note that FixMatch with default parameters achieves a 4.84% error rate on this particular split. We present complete ablation results, including optimizer (appendix B.3), learning rate decay schedule (appendix B.4), weight decay (appendix B.6), and the labeled-to-unlabeled data ratio µ (appendix B.5), in the supplementary material. 5.1 Sharpening and Thresholding A “soft” version of pseudo-labeling can be designed by sharpening the predicted distribution. This formulation appears in UDA and is of general interest since other methods, such as MixMatch and ReMixMatch, also make use of sharpening (albeit without thresholding). Using sharpening instead of an arg max introduces a hyper-parameter: the temperature T [4, 54, 3]. We study the interactions between the temperature T and the confidence threshold τ. Note that pseudo-labeling in FixMatch is recovered as T → 0. The results are presented in fig. 3a and fig. 3b.
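To make the two post-processing options under study concrete, the following hedged sketch (with a made-up prediction) contrasts temperature sharpening with the thresholded arg max of FixMatch; sharpen is our illustrative helper name, not a function from the paper's code.

import torch

def sharpen(probs, T):
    # Temperature sharpening of a predicted distribution; the arg-max
    # pseudo-label of FixMatch is recovered in the limit T -> 0.
    p = probs ** (1.0 / T)
    return p / p.sum(dim=-1, keepdim=True)

probs = torch.tensor([[0.10, 0.15, 0.75]])  # made-up class distribution
print(sharpen(probs, T=0.5))   # mass concentrates on the confident class
print(sharpen(probs, T=0.1))   # already nearly one-hot
conf, pseudo = probs.max(dim=-1)
mask = conf >= 0.95            # FixMatch's tau: this example would be dropped

With a confidence threshold in place, the two behave similarly, which is consistent with the observations below.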
The threshold value of 0.95 shows the lowest error rate, though increasing it to 0.97 or 0.99 did not hurt much. In contrast, accuracy drops by more than 1.5% when using a small threshold value. Note that the threshold value controls the trade-off between the quality and the quantity of pseudo-labels. As discussed in appendix B.2, the accuracy of pseudo-labels for unlabeled data increases with higher threshold values, while the amount of unlabeled data contributing to ℓu in eq. (4) decreases. This suggests that the quality of pseudo-labels is more important than their quantity for reaching high accuracy. Sharpening, on the other hand, did not show a significant difference in performance when a confidence threshold is used. In summary, we observe that swapping pseudo-labeling for sharpening and thresholding would introduce a new hyperparameter while achieving no better performance. 5.2 Augmentation Strategy We conduct an ablation study on different strong data augmentation policies, as strong augmentation plays a key role in FixMatch. Specifically, we chose RandAugment [11] and CTAugment [3], which have been used for state-of-the-art SSL algorithms such as UDA [54] and ReMixMatch [3], respectively. On CIFAR-10, CIFAR-100, and SVHN we observed highly comparable results between the two policies, whereas on STL-10 (table 2), we observe a significant gain by using CTAugment. In table 3 we measure the effect of Cutout, which is used by default after strong augmentation in both RandAugment and CTAugment. We find that both Cutout and CTAugment are required to obtain the best performance; removing either results in a significant increase in error rate. We also study different combinations of weak and strong augmentations for pseudo-label generation and prediction (i.e., the upper and lower paths in fig. 1). When we replaced the weak augmentation for label guessing with strong augmentation, we found that the model diverged early in training. Conversely, when replacing weak augmentation with no augmentation, the model overfits the guessed unlabeled labels. Using weak augmentation in place of strong augmentation to generate the model’s prediction for training peaked at 45% accuracy but was not stable and progressively collapsed to 12%, suggesting the importance of strong data augmentation. This observation is well-aligned with those from supervised learning [10]. 6 Conclusion There has been rapid recent progress in SSL. Unfortunately, much of this progress comes at the cost of increasingly complicated learning algorithms with sophisticated loss terms and numerous difficult-to-tune hyper-parameters. We introduce FixMatch, a simpler SSL algorithm that achieves state-of-the-art results across many datasets. We show how FixMatch can begin to bridge the gap between low-label semi-supervised learning and few-shot learning or clustering: we obtain surprisingly high accuracy with just one label per class. Using only standard cross-entropy losses on both labeled and unlabeled data, FixMatch’s training objective can be written in just a few lines of code. Because of this simplicity, we are able to thoroughly investigate how FixMatch works. We find that certain design choices are important (and often underemphasized): most importantly, weight decay and the choice of optimizer. The importance of these factors means that even when controlling for model architecture, as is recommended in [36], the same technique cannot always be directly compared across different implementations.
On the whole, we believe that the existence of such simple but performant semi-supervised machine learning algorithms will help to allow machine learning to be deployed in increasingly many practical domains where labels are expensive or difficult to obtain. Broader Impact FixMatch helps democratize machine learning in two ways: first, its simplicity makes it available to a wider audience, and second, its accuracy with only a few labels means that it can be applied to domains where machine learning was previously not feasible. The flip side of the democratization of machine learning research is that it becomes easy for both good and bad actors to apply it. We hope that this ability will be used for good: for example, obtaining medical scans is often far cheaper than paying an expert doctor to label every image. However, it is possible that more advanced techniques for semi-supervised learning will allow for more advanced surveillance: for example, the efficacy of our one-shot classification might allow for more accurate person identification from a few images. Broadly speaking, any progress on semi-supervised learning will have these same consequences. Funding Disclosure Google is the sole source of funding for this work. Acknowledgments We thank Qizhe Xie, Avital Oliver, Quoc V. Le, and Sercan Arik for their feedback on this paper.
1. What is the focus and contribution of the paper on semi-supervised learning? 2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness? 3. What are the weaknesses of the paper, especially regarding the novelty and performance comparisons with other works? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any minor comments or suggestions for improving the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a method called FixMatch for SSL. It simply treats the arg-max predictions on weakly augmented images as pseudo-labels, and minimizes the loss between these pseudo-labels and the predictions on strongly augmented images. The method achieves promising results on several image recognition datasets. Strengths 1. The paper is well written and easy to follow. 2. The performance on several datasets is promising. Weaknesses 1. The novelty of the proposed method is incremental. Both hard pseudo-labels and consistency ideas were proposed in the prior literature. 2. Why does hard pseudo-labeling perform better than soft sharpening? Soft labels are widely used in image classification tasks (in both supervised and SSL settings) to improve the model's generalization capacity. When there are only a few labels, there is a high chance that predictions on unlabeled samples are incorrect. The wrong predictions may lead to noisier labels compared to soft labels. 3. The performance of the proposed method on CIFAR-100 is inferior to ReMixMatch. The authors demonstrate that the model can achieve the best performance with distribution alignment. Why does the result vary so heavily? Does this happen on ImageNet too? Is this related to the number of classes? Minor Comments 1. It would be better if the results on ImageNet were compared in a table.
NIPS
Title Post-Training Sparsity-Aware Quantization Abstract Quantization is a technique used in deep neural networks (DNNs) to increase execution performance and hardware efficiency. Uniform post-training quantization (PTQ) methods are common, since they can be implemented efficiently in hardware and do not require extensive hardware resources or a training set. Mapping FP32 models to INT8 using uniform PTQ yields models with negligible accuracy degradation; however, reducing precision below 8 bits with PTQ is challenging, as accuracy degradation becomes noticeable, due to the increase in quantization noise. In this paper, we propose a sparsity-aware quantization (SPARQ) method, in which the unstructured and dynamic activation sparsity is leveraged in different representation granularities. 4-bit quantization, for example, is employed by dynamically examining the bits of 8-bit values and choosing a window of 4 bits, while first skipping zero-value bits. Moreover, instead of quantizing activation-by-activation to 4 bits, we focus on pairs of 8-bit activations and examine whether one of the two is equal to zero. If one is equal to zero, the second can opportunistically use the other’s 4-bit budget; if neither equals zero, then each is dynamically quantized to 4 bits, as described. SPARQ achieves minor accuracy degradation and a practical hardware implementation. The code is available at https://github.com/gilshm/sparq. 1 Introduction Deep neural networks (DNNs) are at the heart of numerous applications, such as image classification and object detection [8], image synthesis [30], and recommendation systems [7]. DNNs, however, require abundant computations, as, for example, billions of multiply-and-accumulate (MAC) operations are required to assign a 224×224 color image from the ImageNet dataset to one of its thousand possible classes. Limited computational resources, such as those in edge devices, latency constraints, and higher input resolutions, are all catalysts for the development of methods that increase the ratio of DNN execution performance to hardware area, with as little impact on model accuracy as possible. One common method of doing so is quantization. Quantization is commonly used to map the 32-bit floating-point (FP32) activations and weights in convolutional neural networks (CNNs) to 8-bit integers (INT8), which is known to result in minor or no degradation in model accuracy while easing hardware implementation [14]. Going below 8 bits, however, is not trivial, as quantization noise leads to a noticeable decrease in model accuracy. Quantization-aware training (QAT) methods employ training for quantization, to decrease quantization noise and recoup model accuracy [3, 25, 42]. Nevertheless, it is not always possible to employ training, for reasons such as lack of hardware resources, time, power, energy, dataset availability, or skilled manpower. Post-training quantization (PTQ) methods circumvent these issues [1, 5, 6]. PTQ methods essentially search for the optimal tensor clipping values to minimize quantization noise [1, 5]. They usually employ uniform quantization, since computing a dot product (DP) of evenly-spaced integer values can be implemented efficiently in hardware.
DNN tensor distributions, however, are known to follow a bell-shaped distribution, such as Gaussian or Laplacian; i.e., uniform quantization, while hardware-friendly, may not be the best choice for minimizing the noise induced by the quantization process. To mitigate this mismatch, PTQ methods were proposed that break tensor distributions into different quantization regions [6, 12, 24]. Computing a DP that comprises values from different quantization regions is not trivial, though, since each activation-weight multiplication result may correspond to a different scaling factor, i.e., it induces a multiplication by a different FP value per quantization region. In this paper, we propose sparsity-aware quantization (SPARQ), which leverages inherent and dynamic activation sparsity at granularities ranging from entire 8-bit integer values (vSPARQ) down to zero-value bits of the INT8 representation (bSPARQ). With bSPARQ, instead of quantizing every activation to, for example, 4 bits according to a predetermined scaling factor, activations are first quantized to 8 bits and then dynamically quantized to 4 bits by choosing the most significant consecutive 4 bits while skipping leading zero bits (Figure 1). bSPARQ effectively achieves a number of quantization ranges while still enabling a practical hardware implementation. Moreover, inspired by [32], we also leverage the entire 8-bit activation sparsity with vSPARQ, for additional mitigation of quantization noise. Instead of quantizing activation-by-activation to 4 bits, activations are quantized to 4 bits in pairs. If one activation is zero, then the other can span its bits across the first, and thereby still be represented by 8 bits to avoid additional quantization noise. If, however, both activations are non-zero, both are quantized to 4 bits by bSPARQ. We experiment with vSPARQ and bSPARQ in configurations of 4, 3, and 2 data bits. This paper makes the following contributions: • Sparsity-aware quantization (SPARQ). We present a sparsity-aware quantization method, in which n-bit quantization takes place by picking the most significant n bits from the 8-bit value representation, while skipping leading zero-value bits. Moreover, since many activations are zero-valued, we consider pairs of activations in the quantization process. If one activation is zero, the other can use the entire 2n-bit budget. We experiment with a number of bit-group selection options and activation bit-widths that demonstrate the trade-off between model accuracy and hardware overhead. • Practical hardware implementation. We implement SPARQ on top of a systolic array (SA), inspired by Google TPUs, and on top of a Tensor Core (TC) DP unit, inspired by NVIDIA GPUs, and show that SPARQ is practical in terms of area overhead. In addition, we also discuss SPARQ implementation on top of NVIDIA Sparse TCs (STCs), thus leveraging activation sparsity on top of weight sparsity. • Comprehensive evaluation. We evaluate our method on a variety of image classification models, with numerous configurations and activation bit-widths, and compare it with previous PTQ works. 2 Related Work PTQ methods are the works most closely related to ours. ACIQ [1] analytically extracts the optimal quantization clipping values from the tensors’ distributions and uses per-channel bit-allocation and per-channel quantization of activations.
LBQ [5] formulates a minimum-MSE optimization problem that is then solved numerically per layer, and assigns additional low-precision tensors to sensitive layers. AdaQuant [10] and AdaRound [21] optimize the common round-to-nearest rounding scheme to reduce quantization noise. BRECQ [16] analyzes the second-order error and optimizes the quantization at block granularity. Conceptually, both vSPARQ and bSPARQ can be employed on top of any of the above quantizations (for simplicity’s sake, we use a simple 8b-8b min-max symmetric quantization, as we also describe in Section 5). Other works, such as OLAccel [24], PWLQ [6], and BiScaled-DNN [12], divide the tensor distribution into two regions. OLAccel divides the tensor distribution into a low-precision region that contains the majority of data, and a high-precision region that contains a small portion of the data (e.g., 3%), which they define as outliers. PWLQ and BiScaled-DNN, on the other hand, divide the tensor distribution into two regions with the same bit-width. BiScaled-DNN uses different scale factors on overlapping regions and implements a ratio heuristic to set the breakpoint between the regions, whereas PWLQ picks the appropriate breakpoint via minimization of the quantization error. Interestingly, PWLQ is capable of breaking the distribution into more than two regions; however, the authors state that from a hardware perspective, this may not be feasible. Following OLAccel, OverQ [41] leverages activation sparsity to avoid the dedicated outlier datapath used in OLAccel. In this work, we employ a simple rounding mechanism and bit-level sparsity to mitigate noise in the event that a zero value does not exist, and we propose a parallel implementation rather than a serial one. SySMT [32] leverages sparsity in the quantization of both activations and weights to 4 bits. Their method incurs relatively high area overheads, since the quantization logic has to be scaled with the number of processing units. Moreover, SySMT incurs relatively high degradation in accuracy, since quantization to 4 bits is implemented by trimming either the 4-bit most significant bits (MSBs) or the 4-bit least significant bits (LSBs). These two options are not optimal, since we find that, for example, with ResNet-18 and ILSVRC-2012, 67% of the non-zero activation values have at least one of the 4-bit MSBs toggled (i.e., equal to one), even though 90% of the time, the two MSBs are not toggled. That is, the two MSBs are most likely not toggled when the 4-bit MSBs are chosen. 3 The Basic Principle of SPARQ SPARQ comprises two orthogonal techniques: bSPARQ and vSPARQ. The former leverages zero-value bits to trim an 8-bit value to an n-bit value; the latter leverages zero-value activations. Below, we describe both in detail. Throughout this work, we focus on quantizing the activations and leveraging only their sparsity, i.e., no correlation is made with the weight values, unless otherwise stated. 3.1 bSPARQ: Leveraging Bit Sparsity Consider an already quantized 8-bit activation, x, and quantization to 4 bits (i.e., n = 4). bSPARQ trims the activation from 8 bits to 4 bits by inspecting the activation bits and choosing the most significant consecutive 4 bits within it, which, in practice, is achieved by searching for the first most significant toggled bit (see the sketch below).
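As an illustration only (a sketch, not the authors' implementation), the following Python function mimics this trimming for the 5opt variant; the round-to-nearest step and the window-placement options it assumes are the ones described in the next paragraph, and the name bsparq is ours.

def bsparq(x, n=4, max_shift=4):
    # Sketch of bSPARQ trimming for an unsigned 8-bit activation x (0..255):
    # select the most significant consecutive n bits by locating the first
    # toggled bit, then round to nearest using the residual LSBs.
    # max_shift=4 corresponds to the 5opt configuration (shift options 0..4).
    assert 0 <= x < 256
    msb = x.bit_length() - 1                    # index of first toggled bit
    shift = min(max(0, msb - (n - 1)), max_shift)
    window = (x >> shift) & ((1 << n) - 1)
    if shift > 0 and (x >> (shift - 1)) & 1:    # round up on the dropped MSB
        window = min(window + 1, (1 << n) - 1)  # saturate; hardware may differ
    return window, shift                        # value approximated as window << shift

For x = 00011011₂ (27), the window lands at shift 1, giving 1101₂ << 1 = 26, which matches the example in the next paragraph; the rounding step then bumps the window to 1110₂, i.e., 28, since the dropped bit is 1.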
The motivation behind bSPARQ is twofold: first, activations usually follow a bell-shaped distribution, meaning that the MSBs are usually equal to zero and, therefore, can be skipped; and second, if the MSBs are toggled, the LSBs’ contribution to the entire value is insignificant. For example, given the value 00011011₂ (27₁₀), the 4-bit window will be positioned at bits [4:1] (1101₂), thus achieving the approximated value 26₁₀. Notice that since there are five window position options, the 4-bit window is accompanied by a 3-bit identifier that corresponds to the window position, that is, how much shift-left is required on top of the four trimmed bits. In addition, to further reduce the dynamic quantization noise, we round the value within the chosen window according to the residual LSBs. bSPARQ is visually demonstrated in Figure 1. Supporting five window options requires additional circuitry compared with, for example, three window options, since each additional placement option requires additional hardware support in the shift-left unit. The trade-off is, however, improved accuracy, since additional placement options introduce less quantization noise. We experiment with five, three, and two placement options, denoted as 5opt, 3opt, and 2opt, respectively. With the 3opt configuration, bits [7:4], [5:2], or [3:0] are chosen, and with the 2opt configuration, either bits [7:4] or [3:0] are chosen (we leave the analysis of asymmetrical configurations for future work). For example, given the previous value, 00011011₂, 3opt will choose bits [5:2] (0110₂), whereas 2opt will choose bits [7:4] (0001₂). Relation to piecewise linear quantization. To mitigate quantization errors, previous works suggest dividing the tensor distributions into different quantization regions, each with a scaling factor of its own [6, 12, 24]. In a sense, bSPARQ is somewhat similar to those. First, each activation is assigned to a quantization range according to its value; however, we break the distributions into hardware-oriented power-of-two regions. For example, for the 5opt case, the regions are [0, 2¹ − 1], [2¹, 2² − 1], and so on. As a result, values are mapped to their appropriate range by simply counting the leading zero bits. In addition, we avoid any need for preprocessing that searches for the distribution breakpoints to minimize the quantization noise. Second, each region has an individual scaling factor; however, each region’s scaling factor is a product of a base scaling factor with the corresponding power of two. For example, in the 5opt configuration, the scaling factor of the decimal number 33₁₀ = 00100001₂ is the original scaling factor times 2². This enables a relatively simple implementation with up to five regions when considering 4-bit activations, and even six and seven regions when considering 3- and 2-bit activations, respectively, as opposed to the two quantization regions used by previous works. 3.2 vSPARQ: Leveraging Sparsity with Pairs of Activations Consider an 8-bit unsigned activation vector, X = (x_1, · · · , x_L), and an 8-bit signed weight vector, W = (w_1, · · · , w_L), both of length L. Also, consider a single MAC unit that computes a single activation-weight multiplication per cycle. vSPARQ, similar to [32, 34, 41], groups activations in pairs to leverage the dynamic and unstructured activation sparsity. That is, the DP calculations can be formulated as

X \cdot W = \sum_{i\,\mathrm{even}}^{L} \left( x_i w_i + x_{i+1} w_{i+1} \right) = y \,, \quad (1)

where y is the DP scalar result and, in our context, an output activation.
For some i, if x_i = 0, then x_{i+1} can be used with 8-bit representation, and vice versa. If, however, both x_i ≠ 0 and x_{i+1} ≠ 0, and given that, for example, bSPARQ is employed, then the precision of both x_i and x_{i+1} is reduced to 4 bits. For a certain i, the vSPARQ operation can also be formulated as

x_i w_i + x_{i+1} w_{i+1} =
\begin{cases}
x_i w_i, & \text{if } x_{i+1} = 0 \\
x_{i+1} w_{i+1}, & \text{if } x_i = 0 \\
\mathrm{bSPARQ}(x_i)\, w_i + \mathrm{bSPARQ}(x_{i+1})\, w_{i+1}, & \text{otherwise.}
\end{cases} \quad (2)

Notice that the first two cases correspond to an 8b-8b computation, whereas the last case corresponds to two 4b-8b computations. The latter case is possible, since two 4b-8b multiplications are logically equivalent to a single 8b-8b multiplication, as we describe next. 8b-8b = 2x4b-8b. Given an 8-bit unsigned activation, x, and an 8-bit signed weight, w, the activation-weight multiplication can be formulated as

x_{[7:0]} \cdot w_{[7:0]} = \sum_{i=0}^{7} 2^i x_i \cdot w_{[7:0]} = \left( \sum_{i=0}^{3} 2^{i+4} x_{i+4} + \sum_{i=0}^{3} 2^i x_i \right) \cdot w_{[7:0]} = 2^4 x_{[7:4]} \cdot w_{[7:0]} + x_{[3:0]} \cdot w_{[7:0]} \,, \quad (3)

where the [b:a] notation represents the b-to-a bit range, the two activation-weight multiplications are 4b-8b wide, and the 2^4 factor is equivalent to a 4-bit shift-left operation. By considering an additional weight input as well as dynamic shift-left operations, we can reuse the multipliers and achieve a multiplier capable of either one 8b-8b multiplication or two independent 4b-8b multiplications with a dynamic range:

2^{opt_1} x_{in1,4b} \cdot w_{in1,8b} + 2^{opt_2} x_{in2,4b} \cdot w_{in2,8b} \,, \quad (4)

where the activation and weight inputs are 4 bits and 8 bits long, respectively. Equation (4) resembles an FP representation; however, the “opt” configurations are not necessarily continuous, as in 3opt and 2opt. Figure 2 illustrates how Equation (4) is mapped to hardware. The two 4b-8b multipliers correspond to x_{in1} · w_{in1} and x_{in2} · w_{in2}, and the two shift-left units correspond to 2^{opt_1} and 2^{opt_2}. The adder corresponds to the addition of the two groups, and the multiplexers, which are not explicitly formulated in Equation (4), are used to dynamically select w_{in1}, w_{in2}, or both during execution. We use this multiplier instead of the conventional one used in well-known hardware structures. 4 Case Studies In this section, we examine SPARQ on top of two well-known matrix multiplication accelerator implementations: systolic arrays (SAs) and Tensor Cores (TCs). These accelerators are commonly used for CNNs, since it is standard practice to map the convolution operation to matrix multiplication [2, 18, 39]. Our focus here is on the processing engines (PEs) that comprise each of these structures and are responsible for individual DPs. Both implementations are fully equivalent from a mathematical point of view. Systolic arrays. SAs consist of a large monolithic network of PEs designed for fast and efficient processing of systematic algorithms that execute the same computations with different data at different time instances [15]. The topology of SAs, illustrated in Figure 3, consists of a homogeneous network of tightly coupled PEs, each performing a MAC operation. PEs work in tandem: each PE in the SA receives data from its upstream neighbors, performs a MAC operation, and forwards the data downstream. In our PE design, also known as an output-stationary SA, each PE will eventually hold the result of a DP, and the entire SA will comprise a tile of the result matrix. Google’s TPUv2 and TPUv3, for example, consist of 128×128 SA arrays [22].
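Tying eq. (2) back to code, here is a hedged functional model of a vSPARQ dot product, not the hardware datapath; it restates the illustrative bsparq helper compactly so the snippet is self-contained.

def bsparq_approx(x, n=4, max_shift=4):
    # Compact restatement of the earlier bSPARQ sketch, returning the integer
    # approximation (window << shift) of an 8-bit unsigned value.
    msb = x.bit_length() - 1
    shift = min(max(0, msb - (n - 1)), max_shift)
    window = (x >> shift) & ((1 << n) - 1)
    if shift > 0 and (x >> (shift - 1)) & 1:
        window = min(window + 1, (1 << n) - 1)
    return window << shift

def vsparq_dot(xs, ws, n=4):
    # Functional model of eq. (2): walk the vectors in pairs; when one member
    # of a pair is zero, the other keeps its exact 8-bit value; otherwise both
    # fall back to their bSPARQ-trimmed n-bit approximations.
    assert len(xs) == len(ws) and len(xs) % 2 == 0
    y = 0
    for i in range(0, len(xs), 2):
        x0, x1 = xs[i], xs[i + 1]
        if x1 == 0:
            y += x0 * ws[i]            # full 8-bit precision
        elif x0 == 0:
            y += x1 * ws[i + 1]        # full 8-bit precision
        else:
            y += bsparq_approx(x0, n) * ws[i] + bsparq_approx(x1, n) * ws[i + 1]
    return y

For example, in vsparq_dot([27, 0, 200, 3], [5, -2, 1, 7]), the first pair keeps 27 exact because its partner is zero, while both members of the second pair are trimmed.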
To deploy SPARQ, the conventional multiplier in each PE is replaced with the one presented in Figure 2, the weight bandwidth is doubled, and the activation bandwidth does not change. Tensor cores. TCs were first introduced in NVIDIA’s Volta architecture to accelerate matrix operations [4, 13, 19]. TCs multiply two 4×4 matrices and add an additional one to the multiplication result. The specific implementation details of TCs are not publicly disclosed; however, a proposed architecture that fits the original TC performance is suggested in [27]. In the proposed TC architecture, there are a number of DP units. Each DP unit performs four parallel activation-weight multiplications, accumulating them in an adder tree together with an additional third value. In this work, we focus on the architecture of a single DP, as presented in Figure 4. To enable SPARQ, the multipliers are replaced and the weight bandwidth is doubled, similar to the SA. NVIDIA also recently introduced weight sparsity acceleration in its Ampere microarchitecture [20, 23]. The Sparse TC (STC) hardware achieves a 2× speedup over the original TC by essentially skipping 50% of the computations (Figure 5). STC requires 50% weight structured pruning at a granularity of four elements, i.e., every four adjacent weights must have two zero-value weights. Only the non-zero-value weights are stored, with additional coordinates. In Figure 5, the two leftmost weights and the two rightmost weights correspond to the four leftmost activations and the four rightmost activations, respectively. The stored coordinates indicate which activations are picked, since they are to be multiplied by non-zero-value weights. After filtering the activations, they are passed with the weights to the DP unit for further processing. Notice, however, that activation sparsity may still exist even after the selection process. 5 Experiments We evaluate the impact on model accuracy using PyTorch [26], the ILSVRC-2012 dataset [28], and various CNN models [8, 9, 11, 37, 38] (see Table 1). All models are quantized using a simple uniform min-max quantization, employing symmetric unsigned per-layer quantization for activations and symmetric signed per-kernel quantization for weights. The min-max statistics are gathered during a quick preprocessing stage on 2K randomly picked images from the training set. In addition, during preprocessing, we recalibrate the BatchNorm layers’ running mean and running variance statistics [29, 33, 35, 36]. In all models, the first convolution layer is left intact, since its input activations, which correspond to the image pixels, do not include many zero values, if any. Quantization is, therefore, performed on all convolution layers, with the exception of the first layer. We present the quantization results in Table 1. Throughout this section, we use SPARQ on top of the 8-bit models (A8W8) and report the accuracy degradation relative to the corresponding FP32 model. A4W8 and A8W4 are presented in Table 1 as references for the worst-case accuracy. In Section 5.3, we experiment with 2:4 structured pruning [23]. To achieve the sparse model with the baseline accuracy, we prune the network based on its pretrained weights and retrain the model from scratch for 90 epochs with a learning rate starting from 0.1 and divided by 10 at epochs 30 and 60. Weight decay and momentum are set to 0.0001 and 0.9, respectively.
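For reference, here is one plausible reading (a sketch, not the paper's calibration code) of the simple min-max symmetric quantization used as the 8-bit baseline above: unsigned per-layer for activations and signed per-kernel for weights.

import torch

def minmax_symmetric_quantize(x, num_bits=8, signed=True):
    # Min-max symmetric quantization sketch: a single scale maps the observed
    # extreme value onto the integer grid. Use signed=True per kernel for
    # weights and signed=False per layer for (post-ReLU) activations.
    if signed:
        qmax = 2 ** (num_bits - 1) - 1                  # 127 for INT8
        scale = x.abs().max().clamp(min=1e-8) / qmax
        q = torch.round(x / scale).clamp(-qmax - 1, qmax)
    else:
        qmax = 2 ** num_bits - 1                        # 255 for unsigned INT8
        scale = x.clamp(min=0).max().clamp(min=1e-8) / qmax
        q = torch.round(x.clamp(min=0) / scale).clamp(0, qmax)
    return q, scale                                     # dequantize with q * scale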
The different designs are implemented using SystemVerilog and synthesized using Synopsys® Design Compiler® and the Virage (now Synopsys) 65nm standard cell library. We use a frequency of 500MHz at slow and fast corners for setup and hold timing closure, respectively. Area estimates were extracted after place-and-route using Cadence® Innovus™. We assume that the overall hardware overhead related to activation trimming and rounding is relatively negligible with respect to the SA and TC, since (1) the trimming and rounding unit involves a simple hardware scheme; and (2) it is performed at a significantly lower processing rate. We validated our multiplier against our PyTorch CUDA implementation with cycle-accurate testbenches to verify calculation integrity. 5.1 Accuracy Results In Table 2, we present our method’s results for the 5opt, 3opt, and 2opt configurations, with and without rounding (±R), as described in Section 3.1, and without vSPARQ (-vS). As expected, we observe that (1) better accuracy is achieved as the number of window placement options increases; (2) overall, rounding further reduces quantization noise, which leads to smaller accuracy degradation; and (3) the contribution of vSPARQ is noticeable mainly in configurations with relatively high quantization noise. In addition, we observe a large impact on accuracy in the transition from 2opt to 3opt, since there is a high probability that at least one of the 4-bit MSBs will be toggled. For example, given the non-zero activations in ResNet-18 with the ILSVRC-2012 dataset, we measure that bits 7, 6, 5, and 4 are toggled 0.5%, 9.2%, 33.8%, and 44.8% of the time, respectively. Assuming the bit values are statistically independent, the probability of at least one toggled bit is 67%. Notice the clear redundancy when the 2opt configuration picks the 4-bit MSBs, since the two MSBs are toggled only 10% of the time. Computationally, SPARQ may be considered a dynamic 4b-8b PTQ method, in which quantization to 4 bits from 8 bits is conducted occasionally, in the event of two adjacent non-zero activations. The upside of conventional PTQ methods, however, is the reduction in memory footprint, where the dynamic method falls short due to the additional metadata. For example, the 3opt configuration requires an additional 3 bits of metadata per 4-bit activation (2-bit ShiftCtrl and 1-bit MuxCtrl). Still, the memory footprint may be reduced by grouping the metadata for several activations, which we leave for future exploration. In Table 3, we present our results compared with previous related works [1, 5, 6, 31]. We would like to point out that SySMT is similar to the 2opt configuration. The slightly different results are due to the different BatchNorm calibrations and the slightly different 8-bit quantized models. Regarding ResNet-50, SySMT quantizes its weights, whereas SPARQ focuses on quantizing activations. Reducing the bit width: 3 bits and 2 bits. To further challenge SPARQ’s efficiency, we experiment with 3-bit and 2-bit configurations. The lower budget leads to increased quantization noise even when one of the activations within the activation pair has a zero value, since the total window sizes are 6 and 4 bits for the 3-bit and 2-bit configurations, respectively. In Table 4, we present SPARQ accuracy results compared with other methods that reported sub-4b quantization results. In contrast to Table 2, we observe that the impact of vSPARQ is more significant at lower bit-widths.
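As a side note, the 67% figure quoted above follows directly from the measured per-bit toggle rates under the stated independence assumption:

P(\text{at least one of bits 7..4 toggled}) = 1 - (1 - 0.005)(1 - 0.092)(1 - 0.338)(1 - 0.448) \approx 1 - 0.330 \approx 0.67.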
5.2 Hardware Evaluation Table 5 summarizes the area overhead, normalized to the MAC throughput, of SPARQ for both the SA and TC use cases. The SA and TC baselines are conventional 8b-8b SA and TC PEs, respectively. Memory, such as SRAMs, is not considered in the analysis (including it would decrease the area overhead percentages). The 2×4b-8b design is presented as a reference implementation for the case of 4b-8b quantized values with throughput equivalent to the design in Figure 2. For the sake of fair comparison, there is a single psum in the 2×4b-8b design. With respect to the SA, the 2×4b-8b PE requires approximately half the area per MAC operation compared with the 8b-8b PE. On the one hand, the total multiplier area of the 2×4b-8b PE is significantly smaller; on the other hand, the 2×4b-8b PE employs a 3-input adder. The shift-left logic is the main contributor to the increasing area overhead of the 2opt through 5opt configurations. As the number of shift-left options increases, the shift logic becomes more complex and occupies a larger logic area. Regarding the 6opt (3-bit) and 7opt (2-bit) configurations, even though they require additional window placement options, the overall area decreases, since the area of the multipliers, registers, and multiplexers within the shift-left units is reduced. Also, our 2opt scheme introduces a significantly smaller area overhead compared with SySMT, since SySMT required the trimming and rounding hardware to operate at the same high throughput rate as the SA. Regarding the TC, the 2×4b-8b implementation requires half the (normalized) area of the 8b-8b TC baseline PE. Similar to the SA use case, the 2×4b-8b PE multipliers are smaller; however, this time the 2×4b-8b PE adder tree grows. Interestingly, the relative area of 5opt without vSPARQ (-vS) is only slightly higher than that of the “full” 3opt SPARQ implementation. Given the accuracy differences between the two configurations (Table 2), the 3opt SPARQ operating point presented in this work may not be a good trade-off between accuracy and area. 5.3 Leveraging Activation Sparsity on Top of Sparse Tensor Cores We simulate SPARQ on top of an STC with models pruned with 2:4 structured pruning. As presented in Figure 5, activations are first filtered through the multiplexers according to the non-zero-value weight coordinates. Then, vSPARQ comes into play, inspecting pairs of activations, as described in Section 3. Since in the STC the trimming and rounding logic has to be replicated for each DP unit, we implemented and synthesized the trimming and rounding unit to estimate its area overhead. The unit area, relative to the conventional TC (Figure 4), is 17%, 12%, and 9% for the 5opt, 3opt, and 2opt configurations, respectively. The relative area may be even smaller if we consider the entire STC design (Figure 5). SPARQ is, therefore, beneficial in terms of performance-to-area when attached to an STC. In Table 6, we report the pruned models’ FP32 and A8W8 quantized accuracies, and repeat all experiments described thus far. Interestingly, the relative accuracy degradation of the pruned models is slightly higher than that of the unpruned models in Table 3 [17, 40]. Nevertheless, SPARQ still achieves less than 1% relative degradation in accuracy with 4-bit 5opt and 3opt, and with 3-bit 6opt. 6 Limitations and Societal Impacts SPARQ has two main limitations: (1) It does not achieve the memory footprint decrease that native 4b-8b quantization methods do, because of the additional metadata that accompanies each value, as discussed in Section 5.1.
The memory footprint may be decreased by giving up vSPARQ or by sharing ShiftCtrl across a number of activations. We leave these research directions for future work. (2) From a hardware perspective, SPARQ requires hardware support, i.e., it cannot run on today’s commodity hardware. In addition, compared with native 4b-8b quantizations, our hardware implementation incurs some overhead, as described in Section 5.2. As for the societal impacts, quantization methods, in general, increase the effective amount of available computing resources, since the execution requirements of quantized models are lower. The effective increase in computing power may be put to negative uses, such as surveillance and fake-profile generation. 7 Conclusion We present SPARQ, a sparsity-aware quantization method that dynamically leverages sparsity at different granularities: from the entire 8-bit value down to the individual bits. Thanks to the inherent activation sparsity, quantization to n bits occurs only occasionally. When quantization to n bits does occur, bit-level sparsity is leveraged by trimming leading zero bits and picking the most significant consecutive n bits. SPARQ induces minor accuracy degradation and is hardware-friendly. Acknowledgements We thank the anonymous reviewers for their comments and suggestions. We also thank Moran Shkolnik, Mario Shalabi, and Michael Behar for their valuable feedback.
1. What are the key contributions and novel aspects introduced by the paper in post-training quantization? 2. What are the strengths of the proposed approach, particularly in terms of accuracy and hardware efficiency? 3. Do you have any concerns or critiques regarding the paper, such as the area overhead of the control bits or the comparison with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper The authors propose a post-training quantization scheme comprising two orthogonal ideas: bSPARQ and vSPARQ. bSPARQ is a dynamic quantization technique. Instead of rounding values to the required bitwidth (equivalent to keeping the MSBs), the technique keeps the most significant consecutive non-zero bits (i.e., the sequence of bits starting from the first non-zero bit). This is equivalent to making a power-of-two adjustment to the scale factor for each individual value. vSPARQ is a dynamic sparsity technique meant for activations. First, adjacent pairs of activations are grouped (e.g., [x1, x2, x3, x4] -> [x1x2, x3x4]). In each pair, if one of the two activations is zero, the other can make use of its compute resources. The idea is detailed in Equation (2) and can be realized in hardware with the architecture in Figure 2. The authors demonstrate SOTA accuracy results on CNN image classification for very low activation bitwidths (4/3/2-bit activations and 8-bit weights). However, the technique also incurs significant area overhead: 22% for the 3opt design point. Review This is a very interesting paper that at first glance combines two unrelated techniques: bSPARQ shifts each value and vSPARQ exploits dynamic sparsity. However, both techniques can make use of a shifter after the multiplier (see Figure 2), which is what makes combining them a compelling idea. Overall a strong paper with solid results on both accuracy and hardware. Some comments/critiques of the paper: The overhead of the control bits is significant (3 bits per 4-bit activation with the 3opt configuration). The area results on the multiplier unit show that this is not an issue for compute, but for routing and memory it feels like a big problem. In particular, the paper doesn't describe the datapath at the output of the conv/matmul unit, which must re-quantize the wide accumulator outputs back to 4-bit for the next layer. This datapath will likely be significantly larger due to the overhead bits and the logic to generate them. I think this is what may kill the idea in practice. The accuracy comparison in Table 2 is a bit misleading. ACIQ, for example, incurs no area overhead. This should be pointed out in Table 3 (which specifically only mentions SySMT to make the comparison look better). The authors should cite https://arxiv.org/abs/1910.06909, which proposes a very similar idea to vSPARQ, with more analysis. The hardware evaluation methodology section in the supplements should be moved to the main paper to assure readers that a solid hardware evaluation was done.
NIPS
Title Post-Training Sparsity-Aware Quantization Abstract Quantization is a technique used in deep neural networks (DNNs) to increase execution performance and hardware efficiency. Uniform post-training quantization (PTQ) methods are common, since they can be implemented efficiently in hardware and do not require extensive hardware resources or a training set. Mapping FP32 models to INT8 using uniform PTQ yields models with negligible accuracy degradation; however, reducing precision below 8 bits with PTQ is challenging, as accuracy degradation becomes noticeable, due to the increase in quantization noise. In this paper, we propose a sparsity-aware quantization (SPARQ) method, in which the unstructured and dynamic activation sparsity is leveraged in different representation granularities. 4-bit quantization, for example, is employed by dynamically examining the bits of 8-bit values and choosing a window of 4 bits, while first skipping zero-value bits. Moreover, instead of quantizing activation-byactivation to 4 bits, we focus on pairs of 8-bit activations and examine whether one of the two is equal to zero. If one is equal to zero, the second can opportunistically use the other’s 4-bit budget; if both do not equal zero, then each is dynamically quantized to 4 bits, as described. SPARQ achieves minor accuracy degradation and a practical hardware implementation. The code is available at https://github.com/gilshm/sparq. 1 Introduction Deep neural networks (DNNs) are at the heart of numerous applications, such as image classification and object detection [8], image synthesis [30], and recommendation systems [7]. DNNs, however, require abundant computations, as, for example, billions of multiply-and-accumulate (MAC) operations are required to assign a 224×224 colored image from the ImageNet dataset to one of its thousand possible classes. Limited computational resources, such as those in edge devices, latency constraints, and higher input resolutions, are all catalysts for development of methods that increase the ratio between DNN execution performance to hardware area, with as minimal impact on model accuracy as possible. One common method of doing so is quantization. Quantization is commonly used to map the 32-bit floating-point (FP32) activations and weights in convolutional neural networks (CNNs) to 8-bit integers (INT8), which is known to result in minor or no degradation in model accuracy while easing hardware implementation [14]. Going below 8 bits, however, is not trivial, as quantization noise leads to a noticeable decrease in model accuracy. Quantization-aware training (QAT) methods employ training for quantization, to decrease quantization noise and recoup model accuracy [3, 25, 42]. Nevertheless, it is not always possible to employ training, for reasons such as lack of hardware resources, time, power, energy, dataset availability, or skilled manpower. Post-training quantization (PTQ) methods circumvent these issues [1, 5, 6]. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). PTQ methods, basically, search for the optimal tensor clipping values to minimize quantization noise [1, 5]. They usually employ uniform quantization, since computing a dot product (DP) of evenly-spaced integer values can be implemented efficiently in hardware. 
DNN tensor distributions, however, are known to follow a bell-shaped distribution, such as Gaussian or Laplacian, i.e., the uniform quantization that is, on one hand, hardware-friendly, may not be, on the other hand, the best choice for minimizing the noise induced by the quantization process. To solve this mismatch, to some extent, PTQ methods that break tensor distributions into different quantization regions were proposed [6, 12, 24]. Computing a DP comprising values from different quantizations is not trivial though, since each activation-weight multiplication result may correspond to a different scaling factor, i.e., it will induce a multiplication by a different FP value per quantization region. In this paper, we propose sparsity-aware quantization (SPARQ), which leverages the inherent and dynamic activation sparsity from granularities of entire integer 8-bit values (vSPARQ), down to INT8 representation zero-value bits (bSPARQ). With bSPARQ, instead of quantizing every activation to, for example, 4 bits according to a predetermined scaling factor, activations are first quantized to 8 bits and then dynamically quantized to 4 bits by choosing the most significant consecutive 4 bits while skipping leading zero bits (Figure 1). bSPARQ effectively achieves a number of quantization ranges while still enabling a practical hardware implementation. Moreover, inspired by [32], we also leverage the entire 8-bit activation sparsity with vSPARQ, for additional mitigation of quantization noise. Instead of quantizing activation-by-activation to 4 bits, activations are quantized to 4 bits in pairs. If one activation is zero, then the other can span its bits across the first, and thereby still be represented by 8 bits to avoid additional quantization noise. If, however, both activations are non-zero, both are quantized to 4 bits by bSPARQ. We experiment with vSPARQ and bSPARQ in configurations of 4, 3, and 2 data bits. This paper makes the following contributions: • Sparsity-aware quantization (SPARQ). We present a sparsity-aware quantization method, in which n-bit quantization takes place by picking the most significant n bits from the 8-bit value representation, while skipping leading zero-value bits. Moreover, since many activations are zero-value, we consider pairs of activations in the quantization process. If one activation is zero, the other can use the entire 2n-bit budget. We experiment with a number of bit-group selection options and activation bit-widths that demonstrates the trade-off between model accuracy and hardware overhead. • Practical hardware implementation. We implement SPARQ on top of a systolic array (SA), inspired by Google TPUs, and on top of a Tensor Core (TC) DP unit, inspired by NVIDIA GPUs, and show that SPARQ is practical in terms of area overheads. In addition, we also discuss SPARQ implementation on top of NVIDIA Sparse TCs (STCs), thus leveraging activation sparsity on top of weight sparsity. • Comprehensive evaluation. We evaluate our method on a variety of image classification models, with numerous configurations and activation bit-widths, and compare it with previous PTQ works. 2 Related Work PTQ methods are the most relevant works that are related to this work. ACIQ [1] analytically extracts the optimal quantization clipping values from the tensors’ distributions and uses per-channel bit-allocation and per-channel quantization of activations. 
LBQ [5] formulates a minimum MSE optimization problem that is then solved numerically per layer, and employs additional low-precision tensors to sensitive layers. AdaQuant [10] and AdaRound [21] optimize the common round-to-nearest rounding scheme to reduce quantization noise. BRECQ [16] analyzes the second-order error and optimizes the quantization at block granularity. Conceptually, both vSPARQ and bSPARQ can be employed on top of any of the above quantizations (for simplicity’s sake, we use a simple 8b-8b min-max symmetric quantization, as we also describe in Section 5). Other works, such as OLAccel [24], PWLQ [6], and BiScaled-DNN [12], divide the tensor distribution into two regions. OLAccel divides the tensor distribution into a low-precision region that contains the majority of data, and a high-precision region that contains a small portion of the data (e.g., 3%), which they define as outliers. PWLQ and BiScaled-DNN, on the other hand, divide the tensor distribution into two regions with the same bit-width. BiScaled-DNN uses different scale factors on overlapping regions and implements a ratio heuristic to set the breakpoint between the regions, whereas PWLQ picks the appropriate breakpoint via minimization of the quantization error. Interestingly, PWLQ is capable of breaking the distribution into more than two regions; however, the authors state that from a hardware perspective, this may not be feasible. Following OLAccel, OverQ [41] leverages activation sparsity to avoid the dedicated outlier datapath used in OLAccel. In this work, we employ a simple rounding mechanism and bit-level sparsity to mitigate noise in the occasion a zero-value does not exist, and we propose a parallel implementation rather than a serial one. SySMT [32] leverages sparsity in quantization of both activations and weights to 4 bits. Their method incurs relatively high area overheads, since the quantization logic has to be scaled with the number of processing units. Moreover, SySMT incurs relatively high degradation in accuracy, since quantization to 4 bits is implemented by trimming either the 4-bit most significant bits (MSBs) or the 4-bit least significant bits (LSBs). These two options are not optimal, since we find that, for example, with ResNet-18 and ILSVRC-2012, 67% of the non-zero-value activation values have at least one of the 4-bit MSBs toggled (i.e., equal to one), even though 90% of the time, the two MSBs are not toggled. That is, the two MSBs are most likely not toggled when the 4-bit MSBs are chosen. 3 The Basic Principle of SPARQ SPARQ comprises two orthogonal techniques: bSPARQ and vSPARQ. The former leverages zerovalue bits to trim an 8-bit value to an n-bit value; and the latter leverages zero-value activations. Below, we describe both in detail. Throughout this work, we focus on quantizing the activations and leveraging only their sparsity, i.e., no correlation is made with the weight values, unless otherwise stated. 3.1 bSPARQ: Leveraging Bit Sparsity Consider an already quantized 8-bit activation, x, and quantization to 4 bits (i.e., n = 4). bSPARQ trims the activation from 8 bits to 4 bits by inspecting the activation bits and choosing the most significant consecutive 4 bits within it, which, in practice, is achieved by searching for the first most significant toggled bit. 
The motivation behind bSPARQ is twofold: first, activations usually follow a bell-shaped distribution, meaning that the MSBs are usually equal to zero and, therefore, can be skipped; and second, if the MSBs are toggled, the LSBs’ contribution to the entire value is insignificant. For example, given the value 000110112 (2710), the 4-bit window will be positioned at bits [4:1] (000110112), thus achieving the approximated value 2610. Notice that since there are five window position options, the 4-bit window is accompanied by a 3-bit identifier that corresponds to the window position—that is, how much shift-left is required on top of the four trimmed bits. In addition, to further reduce the dynamic quantization noise, we round the value within the chosen window according to the residual LSBs. bSPARQ is visually demonstrated in Figure 1. Supporting five window options requires additional circuitry compared with, for example, three window options, since additional placement options require additional hardware support by the shift-left unit. The trade-off is, however, improved accuracy, since additional placement options introduce less quantization noise. We experiment with five, three, and two placement options, denoted as 5opt, 3opt, and 2opt, respectively. With the 3opt configuration, [7:4], [5:2], or [3:0] are chosen, and with the 2opt configuration, either [7:4] or [3:0] are chosen (we leave the analysis of asymmetrical configurations for future work). For example, given the previous value, 000110112, 3opt will choose bits [5:2] (000110112), whereas 2opt will choose bits [7:4] (000110112). Relation to piecewise linear quantization. To mitigate quantization errors, previous works suggest dividing the tensor distributions into different quantization regions, each with a scaling factor of its own [6, 12, 24]. In a sense, bSPARQ is somewhat similar to those. First, each activation is assigned to a quantization range according to its value; however, we break the distributions into hardware-oriented regions of power of two. For example, for the 5opt case, the regions are [0, 21 − 1], [21, 22 − 1], and so on. As a result, values are mapped to their appropriate range by simply counting the leading zero bits. In addition, we avoid any need for preprocessing that searches for the distribution breakpoints to minimize the quantization noise. Second, each region has an individual scaling factor; however, each region scaling factor is a product of a base scaling factor with the corresponding power of two. For example, in the 5opt configuration, the scaling factor of the decimal number 3310 = 001000012 is the original scaling factor times 22. This enables a relatively simple implementation with up to five regions when considering 4-bit activations, and even six and seven regions when considering 3- and 2-bit activations, respectively—as opposed to the two quantization regions used by previous works. 3.2 vSPARQ: Leveraging Sparsity with Pairs of Activations Consider an 8-bit unsigned activation vector, X = (x1, · · · , xL), and an 8-bit signed weight vector, W = (w1, · · · , wL), both of length L. Also, consider a single MAC unit that computes a single activation-weight multiplication per cycle. vSPARQ, similar to [32, 34, 41], groups activations in pairs, to leverage the dynamic and unstructured activation sparsity. That is, the DP calculations can be formulated as: X ·W = L∑ i even xiwi + xi+1wi+1 = y , (1) where y is the DP scalar result, and in our context, an output activation. 
For some $i$, if $x_i = 0$, then $x_{i+1}$ can be used with 8-bit representation, and vice versa. If, however, both $x_i \neq 0$ and $x_{i+1} \neq 0$, and given that, for example, bSPARQ is employed, then the precision of both $x_i$ and $x_{i+1}$ is reduced to 4 bits. For a certain $i$, the vSPARQ operation can also be formulated as:

$$x_i w_i + x_{i+1} w_{i+1} =
\begin{cases}
x_i w_i, & \text{if } x_{i+1} = 0 \\
x_{i+1} w_{i+1}, & \text{if } x_i = 0 \\
\mathrm{bSPARQ}(x_i)\, w_i + \mathrm{bSPARQ}(x_{i+1})\, w_{i+1}, & \text{otherwise.}
\end{cases} \qquad (2)$$

Notice that the first two cases correspond to an 8b-8b computation, whereas the last case corresponds to two 4b-8b computations. The latter case is possible, since two 4b-8b multiplications are logically equivalent to a single 8b-8b multiplication, as we describe next.

8b-8b = 2x4b-8b. Given an 8-bit unsigned activation, $x$, and an 8-bit signed weight, $w$, the activation-weight multiplication can be formulated as

$$x_{[7:0]} \cdot w_{[7:0]} = \sum_{i=0}^{7} 2^i x_i \cdot w_{[7:0]} = \left( \sum_{i=0}^{3} 2^{i+4} x_{i+4} + \sum_{i=0}^{3} 2^i x_i \right) \cdot w_{[7:0]} = 2^4 x_{[7:4]} \cdot w_{[7:0]} + x_{[3:0]} \cdot w_{[7:0]} \,, \qquad (3)$$

where the $[b:a]$ notation represents the b-to-a range in bits, the two activation-weight multiplications are 4b-8b wide, and the $2^4$ term is equivalent to a 4-bit shift-left operation. By considering an additional weight input as well as dynamic shift-left operations, we can reuse the multipliers and achieve a multiplier capable of either one 8b-8b multiplication or two independent 4b-8b multiplications with a dynamic range:

$$2^{\mathrm{opt}_1} x_{\mathrm{in1},4b} \cdot w_{\mathrm{in1},8b} + 2^{\mathrm{opt}_2} x_{\mathrm{in2},4b} \cdot w_{\mathrm{in2},8b} \,, \qquad (4)$$

where the activation and weight inputs are 4 bits and 8 bits long, respectively. Equation (4) resembles an FP representation; however, the "opt" configurations are not necessarily continuous, as in 3opt and 2opt. Figure 2 illustrates how Equation (4) is mapped to hardware. The two 4b-8b multipliers correspond to $x_{\mathrm{in1}} \cdot w_{\mathrm{in1}}$ and $x_{\mathrm{in2}} \cdot w_{\mathrm{in2}}$, and the two shift-left units correspond to $2^{\mathrm{opt}_1}$ and $2^{\mathrm{opt}_2}$. The adder corresponds to the addition of the two groups, and the multiplexers, which are not explicitly formulated in Equation (4), are used to choose dynamically between $w_{\mathrm{in1}}$, $w_{\mathrm{in2}}$, or both, during execution. We use this multiplier instead of the conventional one used in well-known hardware structures.

4 Case Studies

In this section, we examine SPARQ on top of two well-known matrix multiplication accelerator implementations: systolic arrays (SAs) and Tensor Cores (TCs). These accelerators are commonly used for CNNs, since it is a standard practice to map the convolution operation to matrix multiplication [2, 18, 39]. Our focus here is on the processing engines (PEs) that comprise each of these structures and are responsible for single DPs. Both implementations are fully equivalent from a mathematical point of view.

Systolic arrays. SAs consist of a large monolithic network of PEs designed for fast and efficient processing of systematic algorithms that execute the same computations with different data at different time instances [15]. The topology of SAs, illustrated in Figure 3, consists of a homogeneous network of tightly coupled PEs, each performing a MAC operation. PEs work in tandem: each PE in the SA receives data from its upstream neighbors, performs a MAC operation, and forwards the data downstream. In our PE design, also known as an output-stationary SA, each PE will eventually hold the result of a DP, and the entire SA will comprise a tile of a result matrix. Google's TPUv2 and TPUv3, for example, consist of 128×128 SA arrays [22].
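Before looking at how this multiplier is deployed, the following sketch renders the case analysis of Equation (2) and sanity-checks the 2x4b-8b identity of Equation (3); `vsparq_pair` is our name, and the sketch models values rather than the hardware datapath:

```python
def vsparq_pair(x0: int, x1: int, w0: int, w1: int, **opts) -> int:
    """Equation (2): if one activation of the pair is zero, the other keeps
    its full 8-bit precision; otherwise both are trimmed by bSPARQ."""
    if x1 == 0:
        return x0 * w0
    if x0 == 0:
        return x1 * w1
    (a0, s0), (a1, s1) = bsparq_trim(x0, **opts), bsparq_trim(x1, **opts)
    return (a0 << s0) * w0 + (a1 << s1) * w1

# Equation (3): one 8b-8b product equals two 4b-8b products, with the high
# nibble's partial product shifted left by four bits.
x, w = 0b10110101, -73
assert x * w == (((x >> 4) * w) << 4) + (x & 0xF) * w
```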
To deploy SPARQ, the conventional multiplier in each PE is replaced with the one presented in Figure 2, the weight bandwidth is doubled, and the activation bandwidth does not change.

Tensor cores. TCs were first introduced in NVIDIA's Volta architecture to accelerate matrix operations [4, 13, 19]. TCs multiply two 4×4 matrices and add a third matrix to the product. The specific implementation details of TCs are not publicly disclosed; however, a proposed architecture that fits the original TC performance is suggested in [27]. In the proposed TC architecture, there are a number of DP units. Each DP unit performs four parallel activation-weight multiplications, accumulating them in an adder tree together with an additional third value. In this work, we focus on the architecture of a single DP, as presented in Figure 4. To enable SPARQ, the multipliers are replaced and the weight bandwidth is doubled, similar to the SA.

NVIDIA also recently introduced weight sparsity acceleration in its Ampere microarchitecture [20, 23]. The Sparse TC (STC) hardware achieves a 2× speedup over the original TC by essentially skipping 50% of the computations (Figure 5). STC requires 50% structured weight pruning at a granularity of four elements, i.e., every four adjacent weights must contain two zero-value weights. Only the non-zero-value weights are stored, with additional coordinates. In Figure 5, the two leftmost and two rightmost weights correspond to the four leftmost and four rightmost activations, respectively. The stored coordinates indicate which activations are picked, since those are to be multiplied by the non-zero-value weights. After filtering the activations, they are passed with the weights to the DP unit for further processing. Notice, however, that activation sparsity may still exist even after the selection process.

5 Experiments

We evaluate the impact on model accuracy using PyTorch [26], the ILSVRC-2012 dataset [28], and various CNN models [8, 9, 11, 37, 37, 38] (see Table 1). All models are quantized using a simple uniform min-max quantization, employing symmetric unsigned per-layer quantization for activations and symmetric signed per-kernel quantization for weights. The min-max statistics are gathered during a quick preprocessing stage on 2K randomly picked images from the training set. In addition, during preprocessing, we recalibrate the BatchNorm layers' running mean and running variance statistics [29, 33, 35, 36]. In all models, the first convolution layer is left intact, since its input activations, which correspond to the image pixels, do not include many zero values, if any. Quantization is, therefore, performed on all convolution layers, with the exception of the first layer. We present the quantization results in Table 1. Throughout this section, we use SPARQ on top of the 8-bit models (A8W8) and report the accuracy degradation relative to the corresponding FP32 model. A4W8 and A8W4 are presented in Table 1 as references to the worst-case accuracy.

In Section 5.3, we experiment with 2:4 structured pruning [23]. To achieve a sparse model with the baseline accuracy, we prune the network based on its pretrained weights and retrain the model from scratch for 90 epochs with a learning rate starting from 0.1 and divided by 10 at epochs 30 and 60. Weight decay and momentum are set to 0.0001 and 0.9, respectively.
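The 2:4 recipe above can be sketched in a few lines of PyTorch; the magnitude-based keep rule and all names here are illustrative assumptions of ours (the paper specifies the retraining schedule, not the selection code):

```python
import torch

def prune_mask_2of4(w: torch.Tensor) -> torch.Tensor:
    """2:4 structured-sparsity mask: in every group of four consecutive
    weights, keep the two with the largest magnitude."""
    groups = w.reshape(-1, 4)
    keep = groups.abs().topk(k=2, dim=1).indices
    mask = torch.zeros_like(groups).scatter_(1, keep, 1.0)
    return mask.reshape(w.shape)

# STC-style execution (sketch): only the two non-zero weights per group are
# stored, with coordinates that route the matching activations to the DP unit.
w = torch.randn(1024)               # a flattened kernel (length divisible by 4)
x = torch.rand(1024)                # the corresponding activations
gw = (w * prune_mask_2of4(w)).reshape(-1, 4)
gx = x.reshape(-1, 4)
coords = gw.abs().topk(k=2, dim=1).indices.sort(dim=1).values
packed_w = gw.gather(1, coords)     # the kept (non-zero) weights
packed_x = gx.gather(1, coords)     # activations that reach the DP unit
```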
The different designs are implemented in SystemVerilog and synthesized using Synopsys® Design Compiler® and the Virage (now Synopsys) 65nm standard cell library. We use a frequency of 500 MHz at slow and fast corners for setup and hold timing closure, respectively. Area estimates were extracted after place-and-route using Cadence® Innovus™. We assume that the overall hardware overhead related to activation trimming and rounding is relatively negligible with respect to the SA and TC, since (1) the trimming and rounding unit involves a simple hardware scheme; and (2) it is performed at a significantly lower processing rate. We validated our multiplier against our PyTorch CUDA implementation with cycle-accurate testbenches to verify calculation integrity.

5.1 Accuracy Results

In Table 2, we present our method's results for the 5opt, 3opt, and 2opt configurations, with and without rounding (±R), as described in Section 3.1, and without vSPARQ (-vS). As expected, we observe that (1) better accuracy is achieved as the number of window placement options increases; (2) overall, rounding further reduces quantization noise, which leads to smaller accuracy degradation; and (3) the vSPARQ contribution is noticeable mainly in configurations with relatively high quantization noise. In addition, we observe a large impact on accuracy in the transition from 2opt to 3opt, since there is a high probability that at least one of the 4-bit MSBs will be toggled. For example, given the non-zero-valued activations in ResNet-18 with the ILSVRC-2012 dataset, we measure that bits 7, 6, 5, and 4 are toggled 0.5%, 9.2%, 33.8%, and 44.8% of the time, respectively. Assuming the bit values are statistically independent, the probability of at least one toggled bit is 67% (1 − 0.995 · 0.908 · 0.662 · 0.552 ≈ 0.67). Notice that there is a clear redundancy when the 2opt configuration picks the 4-bit MSBs, since the two MSBs are toggled only 10% of the time.

Computationally, SPARQ may be considered a dynamic 4b-8b PTQ, in which quantization from 8 bits to 4 bits is conducted occasionally, in the event of two adjacent non-zero-value activations. The upside of conventional PTQ methods, however, is the reduction in memory footprint, where the dynamic method falls short due to the additional metadata. For example, the 3opt configuration requires additional 3-bit metadata per 4-bit activation data (2-bit ShiftCtrl and 1-bit MuxCtrl). Still, the memory footprint may be reduced by grouping the metadata for several activations, which we leave for future exploration.

In Table 3, we present our results compared with previous related works [1, 5, 6, 31]. We would like to point out that SySMT is similar to the 2opt configuration. The slightly different results are due to the different BatchNorm calibrations and the slightly different 8-bit quantized models. Regarding ResNet-50, SySMT also quantizes the weights, whereas SPARQ focuses on quantizing activations.

Reducing the bit width: 3 bits and 2 bits. To further challenge SPARQ's efficiency, we experiment with 3-bit and 2-bit configurations. The lower budget leads to increased quantization noise even when one of the activations within the activation pair has a zero value, since the total window sizes are 6 and 4 bits for the 3-bit and 2-bit configurations, respectively. In Table 4, we present SPARQ accuracy results compared with other methods that reported sub-4b quantization results. In contrast to Table 2, we observe that the vSPARQ impact is more significant at lower bit-widths.
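The metadata accounting above extends to the other configurations if the same ShiftCtrl-plus-MuxCtrl bookkeeping is assumed (only the 3opt figure is stated explicitly in the text, so the rest is our extrapolation):

```python
import math

def bits_per_activation(n_data: int, n_options: int) -> int:
    """n data bits + ceil(log2(#window options)) ShiftCtrl bits + 1 MuxCtrl bit,
    following the 3opt example above (4 + 2 + 1 = 7)."""
    return n_data + math.ceil(math.log2(n_options)) + 1

for name, n, opts in [("5opt", 4, 5), ("3opt", 4, 3), ("2opt", 4, 2),
                      ("6opt", 3, 6), ("7opt", 2, 7)]:
    print(name, bits_per_activation(n, opts))   # 8, 7, 6, 7, 6
```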
5.2 Hardware Evaluation

Table 5 summarizes the area overhead of SPARQ, normalized to the MAC throughput, for both the SA and TC use cases. The SA and TC baselines are conventional 8b-8b SA and TC PEs, respectively. Memories, such as SRAMs, are not considered in the analysis (which could decrease the area overhead percentages). The 2×4b-8b design is presented as a reference implementation for the case of 4b-8b quantized values with throughput equivalent to the design in Figure 2. For the sake of fair comparison, there is a single psum in the 2×4b-8b design.

With respect to the SA, the 2×4b-8b PE requires approximately half the area per MAC operation compared with the 8b-8b PE. On the one hand, the total multiplier area of the 2×4b-8b PE is significantly smaller; on the other hand, the 2×4b-8b PE employs a 3-input adder. The shift-left logic is the main contributor to the increasing area overhead from 2opt through 5opt. As the number of shift-left options increases, the shift logic becomes more complex and occupies a larger logic area. Regarding the 6opt (3-bit) and 7opt (2-bit) configurations, even though they require additional window placement options, the overall area decreases, since the area of the multipliers, registers, and multiplexers within the shift-left units is reduced. Also, our 2opt scheme introduces a significantly smaller area overhead compared with SySMT, due to the fact that SySMT required the trimming and rounding hardware to operate at the same high throughput rate as the SA.

Regarding the TC, the 2×4b-8b implementation requires half the (normalized) area of the TC 8b-8b baseline PE. Similar to the SA use case, the 2×4b-8b PE multipliers are smaller; however, this time the 2×4b-8b PE adder tree grows. Interestingly, the relative area of 5opt without vSPARQ (-vS) is only slightly higher than that of the "full" 3opt SPARQ implementation. Given the accuracy differences between the two configurations (Table 2), the 3opt SPARQ operating point presented in this work may not be a good trade-off between accuracy and area.

5.3 Leveraging Activation Sparsity on Top of Sparse Tensor Cores

We simulate SPARQ on top of an STC with models pruned with 2:4 structured pruning. As presented in Figure 5, activations are first filtered through the multiplexers according to the non-zero-value weight coordinates. Then, vSPARQ comes into play, inspecting pairs of activations, as described in Section 3. Since in the STC the trimming and rounding logic should be replicated for each DP unit, we implemented and synthesized the trimming and rounding unit to estimate its area overhead. The unit area, relative to the conventional TC (Figure 4), is 17%, 12%, and 9% for the 5opt, 3opt, and 2opt configurations, respectively. The relative area may be even smaller if we consider the entire STC design (Figure 5). SPARQ is, therefore, beneficial in terms of performance-to-area when attached to an STC. In Table 6, we report the pruned models' FP32 and A8W8 quantized accuracies, and repeat all experiments described thus far. Interestingly, the relative accuracy degradation of the pruned models is slightly higher than that of the unpruned models in Table 3 [17, 40]. Nevertheless, SPARQ still achieves less than 1% relative degradation in accuracy with 4-bit 5opt and 3opt, and with 3-bit 6opt.

6 Limitations and Societal Impacts

SPARQ has two main limitations: (1) It does not achieve the memory footprint decrease that native 4b-8b quantization methods do, because of the additional metadata that accompanies each value, as discussed in Section 5.1.
The memory footprint may be decreased by giving up vSPARQ or by sharing ShiftCtrl among a number of activations. We leave these research directions for future work. (2) From a hardware perspective, SPARQ requires dedicated hardware support, i.e., it cannot run on today's commodity hardware. In addition, compared with native 4b-8b quantizations, our hardware implementation incurs some overhead, as described in Section 5.2.

As for the societal impacts, quantization methods, in general, increase the effective amount of available computing resources, since the execution requirements of quantized models are lower. The effective increase in computing power may be targeted towards negative uses, such as surveillance and fake profile generation.

7 Conclusion

We present SPARQ, a sparsity-aware quantization method that dynamically leverages sparsity at different granularities, from entire 8-bit values down to individual bits. Thanks to the inherent activation sparsity, quantization to n bits occurs only occasionally. When quantization to n bits does occur, bit-level sparsity is leveraged by trimming leading zero bits and picking the most significant consecutive n bits. SPARQ induces minor accuracy degradation and is hardware-friendly.

Acknowledgements

We thank the anonymous reviewers for their comments and suggestions. We also thank Moran Shkolnik, Mario Shalabi, and Michael Behar for their valuable feedback.
1. What is the focus and contribution of the paper on neural network quantization?
2. What are the strengths of the proposed approach, particularly in terms of hardware implementation and area cost analysis?
3. What are the weaknesses of the paper regarding its experimental evaluation and ablation study?
4. Do you have any concerns about the fairness of comparing the proposed method with other quantization techniques?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

This paper proposes a post-training sparsity-aware quantization algorithm, SPARQ, for neural networks. SPARQ leverages bit-wise sparsity by skipping leading zero-value bits during quantization. SPARQ also leverages element-wise sparsity by dynamically increasing the quantization precision when one activation in a pair is zero. The paper also presents a practical hardware implementation for supporting SPARQ. The experiments show a 0.18% accuracy drop for 4-bit quantized ResNet-50 on ImageNet.

Review

Strengths:
- The writing is clear. The paper is well structured and easy to follow.
- The paper presents the corresponding hardware design and analyzes the extra area cost of supporting the proposed quantization method.

Weaknesses:
- The evaluation is mainly conducted on large-scale neural networks such as ResNet and GoogleNet. It is unclear how SPARQ performs on lightweight networks such as SqueezeNet and MobileNets.
- The evaluation lacks an ablation study. It is unclear how much improvement comes from bSPARQ and how much comes from vSPARQ.
- Each bSPARQ-quantized value requires additional index bits. For example, 4-bit bSPARQ with 5opt is actually 7-bit quantization, and 2-bit with 7opt is 5-bit quantization. It remains questionable whether it is fair to compare against other 4-bit quantization methods.
- Moreover, it is unclear how the speedup is calculated. According to the roofline model, most neural network computation is memory-bound. Since the memory footprint does not decrease much, what is the real benefit of SPARQ in terms of both latency and energy? Although this is acknowledged in the paper's limitations section, it remains a severe problem for the proposed method.
NIPS
Title
Post-Training Sparsity-Aware Quantization

Abstract
Quantization is a technique used in deep neural networks (DNNs) to increase execution performance and hardware efficiency. Uniform post-training quantization (PTQ) methods are common, since they can be implemented efficiently in hardware and do not require extensive hardware resources or a training set. Mapping FP32 models to INT8 using uniform PTQ yields models with negligible accuracy degradation; however, reducing precision below 8 bits with PTQ is challenging, as accuracy degradation becomes noticeable, due to the increase in quantization noise. In this paper, we propose a sparsity-aware quantization (SPARQ) method, in which the unstructured and dynamic activation sparsity is leveraged in different representation granularities. 4-bit quantization, for example, is employed by dynamically examining the bits of 8-bit values and choosing a window of 4 bits, while first skipping zero-value bits. Moreover, instead of quantizing activation-by-activation to 4 bits, we focus on pairs of 8-bit activations and examine whether one of the two is equal to zero. If one is equal to zero, the second can opportunistically use the other's 4-bit budget; if both do not equal zero, then each is dynamically quantized to 4 bits, as described. SPARQ achieves minor accuracy degradation and a practical hardware implementation. The code is available at https://github.com/gilshm/sparq.

1 Introduction

Deep neural networks (DNNs) are at the heart of numerous applications, such as image classification and object detection [8], image synthesis [30], and recommendation systems [7]. DNNs, however, require abundant computations, as, for example, billions of multiply-and-accumulate (MAC) operations are required to assign a 224×224 colored image from the ImageNet dataset to one of its thousand possible classes. Limited computational resources, such as those in edge devices, latency constraints, and higher input resolutions, are all catalysts for the development of methods that increase the ratio of DNN execution performance to hardware area, with as little impact on model accuracy as possible. One common method of doing so is quantization.

Quantization is commonly used to map the 32-bit floating-point (FP32) activations and weights in convolutional neural networks (CNNs) to 8-bit integers (INT8), which is known to result in minor or no degradation in model accuracy while easing hardware implementation [14]. Going below 8 bits, however, is not trivial, as quantization noise leads to a noticeable decrease in model accuracy. Quantization-aware training (QAT) methods employ training for quantization, to decrease quantization noise and recoup model accuracy [3, 25, 42]. Nevertheless, it is not always possible to employ training, for reasons such as lack of hardware resources, time, power, energy, dataset availability, or skilled manpower. Post-training quantization (PTQ) methods circumvent these issues [1, 5, 6]. PTQ methods essentially search for the optimal tensor clipping values to minimize quantization noise [1, 5]. They usually employ uniform quantization, since computing a dot product (DP) of evenly-spaced integer values can be implemented efficiently in hardware.
DNN tensor distributions, however, are known to follow bell-shaped distributions, such as Gaussian or Laplacian; that is, the uniform quantization that is hardware-friendly on the one hand may not, on the other hand, be the best choice for minimizing the noise induced by the quantization process. To mitigate this mismatch, PTQ methods that break tensor distributions into different quantization regions have been proposed [6, 12, 24]. Computing a DP comprising values from different quantizations is not trivial, though, since each activation-weight multiplication result may correspond to a different scaling factor, i.e., it will induce a multiplication by a different FP value per quantization region.

In this paper, we propose sparsity-aware quantization (SPARQ), which leverages the inherent and dynamic activation sparsity at granularities ranging from entire 8-bit integer values (vSPARQ) down to zero-value bits of the INT8 representation (bSPARQ). With bSPARQ, instead of quantizing every activation to, for example, 4 bits according to a predetermined scaling factor, activations are first quantized to 8 bits and then dynamically quantized to 4 bits by choosing the most significant consecutive 4 bits while skipping leading zero bits (Figure 1). bSPARQ effectively achieves a number of quantization ranges while still enabling a practical hardware implementation. Moreover, inspired by [32], we also leverage the entire 8-bit activation sparsity with vSPARQ, for additional mitigation of quantization noise. Instead of quantizing activation-by-activation to 4 bits, activations are quantized to 4 bits in pairs. If one activation is zero, then the other can span its bits across the first, and thereby still be represented by 8 bits to avoid additional quantization noise. If, however, both activations are non-zero, both are quantized to 4 bits by bSPARQ. We experiment with vSPARQ and bSPARQ in configurations of 4, 3, and 2 data bits.

This paper makes the following contributions:

• Sparsity-aware quantization (SPARQ). We present a sparsity-aware quantization method, in which n-bit quantization takes place by picking the most significant n bits from the 8-bit value representation, while skipping leading zero-value bits. Moreover, since many activations are zero-valued, we consider pairs of activations in the quantization process. If one activation is zero, the other can use the entire 2n-bit budget. We experiment with a number of bit-group selection options and activation bit-widths, demonstrating the trade-off between model accuracy and hardware overhead.

• Practical hardware implementation. We implement SPARQ on top of a systolic array (SA), inspired by Google TPUs, and on top of a Tensor Core (TC) DP unit, inspired by NVIDIA GPUs, and show that SPARQ is practical in terms of area overheads. In addition, we also discuss a SPARQ implementation on top of NVIDIA Sparse TCs (STCs), thus leveraging activation sparsity on top of weight sparsity.

• Comprehensive evaluation. We evaluate our method on a variety of image classification models, with numerous configurations and activation bit-widths, and compare it with previous PTQ works.

2 Related Work

PTQ methods are the works most relevant to ours. ACIQ [1] analytically extracts the optimal quantization clipping values from the tensors' distributions and uses per-channel bit-allocation and per-channel quantization of activations.
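For reference, the uniform min-max baseline that SPARQ is later applied on top of (Section 5: symmetric unsigned per-layer quantization for activations, symmetric signed per-kernel quantization for weights) might look roughly like the following fake-quantization sketch; the function names and the exact clamping bounds are assumptions of ours:

```python
import torch

def quantize_activations(x: torch.Tensor, max_val: float) -> torch.Tensor:
    """Symmetric unsigned per-layer 8-bit fake quantization; `max_val` comes
    from min-max statistics gathered on a small calibration set."""
    scale = max_val / 255.0
    return torch.clamp(torch.round(x / scale), 0, 255) * scale

def quantize_weights(w: torch.Tensor) -> torch.Tensor:
    """Symmetric signed per-kernel 8-bit fake quantization: one scale per
    output channel of a conv weight shaped [out, in, kh, kw]."""
    scale = w.abs().amax(dim=(1, 2, 3), keepdim=True) / 127.0
    return torch.clamp(torch.round(w / scale), -127, 127) * scale
```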
1. What is the main contribution of the paper regarding post-training quantization?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any questions or concerns regarding the experimental results and hardware implementations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any missing ablation studies or comparisons with recent works that could enhance the paper's findings?
Summary Of The Paper Review
Summary Of The Paper

This paper presents a sparsity-aware post-training quantization approach that considers sparsity at both the digit-representation level and the activation level. The algorithm can thus select the most significant bits from the original 8-bit value representation and dynamically adjust the bit-budget allocation for activation pairs during multiplication. The authors also discuss the hardware implementation of the proposed approach based on a systolic array and a Tensor Core DP unit. Experimental results show the effectiveness of the proposed method on various network architectures. Nonetheless, the paper still has some unclear issues regarding the experimental results and hardware implementations.

Review

Strengths:
- The paper proposes a novel solution that leverages the sparsity of bit representations and activations to enhance the quantized representation dynamically.
- Comprehensive empirical evaluations are provided to verify the proposed approach.
- The paper is overall easy to follow.

Weaknesses:
- The paper is missing some recent efforts on post-training quantization, listed below [1, 2, 3, 4]. These works largely follow uniform quantization and are therefore friendly to general hardware.

[1] Hubara I, Nahshan Y, Hanani Y, et al. Improving post training neural quantization: Layer-wise calibration and integer programming. ICML, 2021.
[2] Li Y, Gong R, Tan X, et al. BRECQ: Pushing the limit of post-training quantization by block reconstruction. ICLR, 2021.
[3] Nagel M, Amjad R A, Van Baalen M, et al. Up or down? Adaptive rounding for post-training quantization. ICML, 2020.
[4] Nahshan Y, Chmiel B, Baskin C, et al. Loss aware post-training quantization. arXiv preprint arXiv:1911.07190, 2019.

- The hardware implementation and evaluation are not clear enough:
  - Regarding the hardware evaluation, please provide more details on how the hardware area is calculated for the proposed method, which would be clearer to a general audience.
  - It is still unclear why the proposed method enjoys a 2x speedup. Is it due to the design of STC (L195)? Does "performance" refer to acceleration in L251?
  - What about the time consumed on hardware to search for the most significant bits in bSPARQ (L111)? Do you measure the practical hardware speedup of the proposed algorithm?
- More issues with the experiments:
  - What is the absolute accuracy of the quantized models? For now the results are mostly reported as accuracy degradation; however, the original models might have different full-precision accuracies in the first place.
  - It would be more comprehensive to compare with the missing recent works [1, 2, 3, 4]. In particular, BRECQ [2] also considers low-bit post-training quantization on ResNet-18 and ResNet-50. It would also be convenient to compare with these approaches with absolute accuracy values reported.
  - Ablation studies for results without bSPARQ or vSPARQ are missing. The ablation could be presented for low-bit quantization (e.g., Table 5), so that the gains of bSPARQ and vSPARQ are shown more clearly.

Detailed comments:
- It would be more inspiring to show the statistics of activation sparsity for different layers or network architectures. This would illustrate how much we can benefit from vSPARQ (Equation 2).
- Is the proposed method only designed for activation quantization? What are the potential issues if it is applied to weight quantization?
- The term "bit sparsity" under sparsity-aware post-training quantization seems somewhat misleading to me, since it simply refers to the zeros among the binary digits of the 8-bit representation.
- For bSPARQ, what about trimming the bit-width with non-consecutive but equally spaced digits, e.g., 4 bits taken at digits 1, 3, 5, 7 or 2, 4, 6, 8 of the 8-bit representation?
NIPS
Title Post-Training Sparsity-Aware Quantization Abstract Quantization is a technique used in deep neural networks (DNNs) to increase execution performance and hardware efficiency. Uniform post-training quantization (PTQ) methods are common, since they can be implemented efficiently in hardware and do not require extensive hardware resources or a training set. Mapping FP32 models to INT8 using uniform PTQ yields models with negligible accuracy degradation; however, reducing precision below 8 bits with PTQ is challenging, as accuracy degradation becomes noticeable, due to the increase in quantization noise. In this paper, we propose a sparsity-aware quantization (SPARQ) method, in which the unstructured and dynamic activation sparsity is leveraged in different representation granularities. 4-bit quantization, for example, is employed by dynamically examining the bits of 8-bit values and choosing a window of 4 bits, while first skipping zero-value bits. Moreover, instead of quantizing activation-byactivation to 4 bits, we focus on pairs of 8-bit activations and examine whether one of the two is equal to zero. If one is equal to zero, the second can opportunistically use the other’s 4-bit budget; if both do not equal zero, then each is dynamically quantized to 4 bits, as described. SPARQ achieves minor accuracy degradation and a practical hardware implementation. The code is available at https://github.com/gilshm/sparq. 1 Introduction Deep neural networks (DNNs) are at the heart of numerous applications, such as image classification and object detection [8], image synthesis [30], and recommendation systems [7]. DNNs, however, require abundant computations, as, for example, billions of multiply-and-accumulate (MAC) operations are required to assign a 224×224 colored image from the ImageNet dataset to one of its thousand possible classes. Limited computational resources, such as those in edge devices, latency constraints, and higher input resolutions, are all catalysts for development of methods that increase the ratio between DNN execution performance to hardware area, with as minimal impact on model accuracy as possible. One common method of doing so is quantization. Quantization is commonly used to map the 32-bit floating-point (FP32) activations and weights in convolutional neural networks (CNNs) to 8-bit integers (INT8), which is known to result in minor or no degradation in model accuracy while easing hardware implementation [14]. Going below 8 bits, however, is not trivial, as quantization noise leads to a noticeable decrease in model accuracy. Quantization-aware training (QAT) methods employ training for quantization, to decrease quantization noise and recoup model accuracy [3, 25, 42]. Nevertheless, it is not always possible to employ training, for reasons such as lack of hardware resources, time, power, energy, dataset availability, or skilled manpower. Post-training quantization (PTQ) methods circumvent these issues [1, 5, 6]. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). PTQ methods, basically, search for the optimal tensor clipping values to minimize quantization noise [1, 5]. They usually employ uniform quantization, since computing a dot product (DP) of evenly-spaced integer values can be implemented efficiently in hardware. 
DNN tensor distributions, however, are known to follow a bell-shaped distribution, such as Gaussian or Laplacian, i.e., the uniform quantization that is, on one hand, hardware-friendly, may not be, on the other hand, the best choice for minimizing the noise induced by the quantization process. To solve this mismatch, to some extent, PTQ methods that break tensor distributions into different quantization regions were proposed [6, 12, 24]. Computing a DP comprising values from different quantizations is not trivial though, since each activation-weight multiplication result may correspond to a different scaling factor, i.e., it will induce a multiplication by a different FP value per quantization region. In this paper, we propose sparsity-aware quantization (SPARQ), which leverages the inherent and dynamic activation sparsity from granularities of entire integer 8-bit values (vSPARQ), down to INT8 representation zero-value bits (bSPARQ). With bSPARQ, instead of quantizing every activation to, for example, 4 bits according to a predetermined scaling factor, activations are first quantized to 8 bits and then dynamically quantized to 4 bits by choosing the most significant consecutive 4 bits while skipping leading zero bits (Figure 1). bSPARQ effectively achieves a number of quantization ranges while still enabling a practical hardware implementation. Moreover, inspired by [32], we also leverage the entire 8-bit activation sparsity with vSPARQ, for additional mitigation of quantization noise. Instead of quantizing activation-by-activation to 4 bits, activations are quantized to 4 bits in pairs. If one activation is zero, then the other can span its bits across the first, and thereby still be represented by 8 bits to avoid additional quantization noise. If, however, both activations are non-zero, both are quantized to 4 bits by bSPARQ. We experiment with vSPARQ and bSPARQ in configurations of 4, 3, and 2 data bits. This paper makes the following contributions: • Sparsity-aware quantization (SPARQ). We present a sparsity-aware quantization method, in which n-bit quantization takes place by picking the most significant n bits from the 8-bit value representation, while skipping leading zero-value bits. Moreover, since many activations are zero-value, we consider pairs of activations in the quantization process. If one activation is zero, the other can use the entire 2n-bit budget. We experiment with a number of bit-group selection options and activation bit-widths that demonstrates the trade-off between model accuracy and hardware overhead. • Practical hardware implementation. We implement SPARQ on top of a systolic array (SA), inspired by Google TPUs, and on top of a Tensor Core (TC) DP unit, inspired by NVIDIA GPUs, and show that SPARQ is practical in terms of area overheads. In addition, we also discuss SPARQ implementation on top of NVIDIA Sparse TCs (STCs), thus leveraging activation sparsity on top of weight sparsity. • Comprehensive evaluation. We evaluate our method on a variety of image classification models, with numerous configurations and activation bit-widths, and compare it with previous PTQ works. 2 Related Work PTQ methods are the most relevant works that are related to this work. ACIQ [1] analytically extracts the optimal quantization clipping values from the tensors’ distributions and uses per-channel bit-allocation and per-channel quantization of activations. 
LBQ [5] formulates a minimum-MSE optimization problem that is then solved numerically per layer, and employs additional low-precision tensors for sensitive layers. AdaQuant [10] and AdaRound [21] optimize the common round-to-nearest rounding scheme to reduce quantization noise. BRECQ [16] analyzes the second-order error and optimizes the quantization at block granularity. Conceptually, both vSPARQ and bSPARQ can be employed on top of any of the above quantization schemes (for simplicity's sake, we use a simple 8b-8b min-max symmetric quantization, as we also describe in Section 5). Other works, such as OLAccel [24], PWLQ [6], and BiScaled-DNN [12], divide the tensor distribution into two regions. OLAccel divides the tensor distribution into a low-precision region that contains the majority of the data, and a high-precision region that contains a small portion of the data (e.g., 3%), which they define as outliers. PWLQ and BiScaled-DNN, on the other hand, divide the tensor distribution into two regions with the same bit-width. BiScaled-DNN uses different scale factors on overlapping regions and implements a ratio heuristic to set the breakpoint between the regions, whereas PWLQ picks the appropriate breakpoint via minimization of the quantization error. Interestingly, PWLQ is capable of breaking the distribution into more than two regions; however, the authors state that from a hardware perspective, this may not be feasible. Following OLAccel, OverQ [41] leverages activation sparsity to avoid the dedicated outlier datapath used in OLAccel. In this work, we employ a simple rounding mechanism and bit-level sparsity to mitigate noise in cases where a zero value does not exist, and we propose a parallel implementation rather than a serial one. SySMT [32] leverages sparsity in the quantization of both activations and weights to 4 bits. Their method incurs relatively high area overhead, since the quantization logic has to be scaled with the number of processing units. Moreover, SySMT incurs relatively high accuracy degradation, since quantization to 4 bits is implemented by trimming either the four most significant bits (MSBs) or the four least significant bits (LSBs). Neither option is optimal: for example, with ResNet-18 and ILSVRC-2012, we find that 67% of the non-zero activation values have at least one of the four MSBs toggled (i.e., equal to one), even though 90% of the time the two topmost bits are not toggled. That is, trimming the LSBs discards significant information, while keeping the four MSBs usually wastes the two topmost bits. 3 The Basic Principle of SPARQ SPARQ comprises two orthogonal techniques: bSPARQ and vSPARQ. The former leverages zero-value bits to trim an 8-bit value to an n-bit value; the latter leverages zero-value activations. Below, we describe both in detail. Throughout this work, we focus on quantizing the activations and leveraging only their sparsity, i.e., no correlation is made with the weight values, unless otherwise stated. 3.1 bSPARQ: Leveraging Bit Sparsity Consider an already quantized 8-bit activation, x, and quantization to 4 bits (i.e., n = 4). bSPARQ trims the activation from 8 bits to 4 bits by inspecting the activation bits and choosing the most significant consecutive 4 bits within it, which, in practice, is achieved by searching for the most significant toggled bit.
The motivation behind bSPARQ is twofold: first, activations usually follow a bell-shaped distribution, meaning that the MSBs are usually equal to zero and can therefore be skipped; and second, if the MSBs are toggled, the LSBs' contribution to the entire value is insignificant. For example, given the value $00011011_2$ ($27_{10}$), the 4-bit window is positioned at bits [4:1], achieving the approximated value $26_{10}$. Notice that since there are five window position options, the 4-bit window is accompanied by a 3-bit identifier that encodes the window position, that is, how much shift-left is required on top of the four trimmed bits. In addition, to further reduce the dynamic quantization noise, we round the value within the chosen window according to the residual LSBs. bSPARQ is visually demonstrated in Figure 1. Supporting five window options requires additional circuitry compared with, for example, three window options, since additional placement options require additional hardware support in the shift-left unit. The trade-off is, however, improved accuracy, since additional placement options introduce less quantization noise. We experiment with five, three, and two placement options, denoted 5opt, 3opt, and 2opt, respectively. With the 3opt configuration, bits [7:4], [5:2], or [3:0] are chosen, and with the 2opt configuration, either bits [7:4] or [3:0] are chosen (we leave the analysis of asymmetrical configurations for future work). For example, given the previous value $00011011_2$, 3opt chooses bits [5:2], whereas 2opt chooses bits [7:4]. Relation to piecewise linear quantization. To mitigate quantization errors, previous works suggest dividing the tensor distributions into different quantization regions, each with a scaling factor of its own [6, 12, 24]. In a sense, bSPARQ is somewhat similar to these. First, each activation is assigned to a quantization range according to its value; however, we break the distributions into hardware-oriented regions of powers of two. For example, for the 5opt case, the regions are $[0, 2^4 - 1]$, $[2^4, 2^5 - 1]$, and so on. As a result, values are mapped to their appropriate range by simply counting the leading zero bits. In addition, we avoid any need for preprocessing that searches for the distribution breakpoints to minimize the quantization noise. Second, each region has an individual scaling factor; however, each region's scaling factor is the product of a base scaling factor and the corresponding power of two. For example, in the 5opt configuration, the scaling factor of the decimal number $33_{10} = 00100001_2$ is the original scaling factor times $2^2$. This enables a relatively simple implementation with up to five regions when considering 4-bit activations, and even six and seven regions when considering 3- and 2-bit activations, respectively, as opposed to the two quantization regions used by previous works. 3.2 vSPARQ: Leveraging Sparsity with Pairs of Activations Consider an 8-bit unsigned activation vector, $X = (x_1, \dots, x_L)$, and an 8-bit signed weight vector, $W = (w_1, \dots, w_L)$, both of length $L$. Also, consider a single MAC unit that computes a single activation-weight multiplication per cycle. vSPARQ, similar to [32, 34, 41], groups activations in pairs to leverage the dynamic and unstructured activation sparsity. That is, the DP calculation can be formulated as

$$X \cdot W = \sum_{i\ \mathrm{even}} \left( x_i w_i + x_{i+1} w_{i+1} \right) = y, \quad (1)$$

where $y$ is the DP scalar result, and in our context, an output activation.
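To make the bSPARQ window selection concrete, here is a minimal Python sketch of the trimming-and-rounding step. The function name, the round-half-up choice, and the saturation handling are our own illustration, not the paper's reference implementation.

def bsparq_trim(x, n=4, shifts=(4, 3, 2, 1, 0)):
    # Pick the most significant consecutive n bits of an 8-bit value x,
    # skipping leading zero bits, and round using the residual LSBs.
    # `shifts` lists the allowed window positions (LSB index of the window);
    # (4, 3, 2, 1, 0) corresponds to the 5opt configuration.
    assert 0 <= x < 256
    msb = x.bit_length() - 1                  # index of the most significant toggled bit
    # smallest allowed shift whose window [shift+n-1 : shift] still covers msb
    shift = min((s for s in shifts if msb < s + n), default=max(shifts))
    window = (x >> shift) & ((1 << n) - 1)    # the n trimmed bits
    if shift > 0 and (x >> (shift - 1)) & 1:  # round up on the residual LSBs
        window += 1
        if window == (1 << n):                # rounding overflow: move window up
            window >>= 1                      # (saturation at 255 omitted here)
            shift += 1
    return window, shift                      # approximated value is window << shift

# Example from the text: 0b00011011 (27); the window at bits [4:1] holds 13
# (value 26); the residual LSB then rounds it up to 14 (value 28).
w, s = bsparq_trim(0b00011011)
print(w << s)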
For some $i$, if $x_i = 0$, then $x_{i+1}$ can be used with 8-bit representation, and vice versa. If, however, both $x_i \ne 0$ and $x_{i+1} \ne 0$, and given that, for example, bSPARQ is employed, then the precision of both $x_i$ and $x_{i+1}$ is reduced to 4 bits. For a certain $i$, the vSPARQ operation can therefore be formulated as

$$x_i w_i + x_{i+1} w_{i+1} = \begin{cases} x_i w_i, & \text{if } x_{i+1} = 0 \\ x_{i+1} w_{i+1}, & \text{if } x_i = 0 \\ \mathrm{bSPARQ}(x_i)\, w_i + \mathrm{bSPARQ}(x_{i+1})\, w_{i+1}, & \text{otherwise.} \end{cases} \quad (2)$$

Notice that the first two cases correspond to an 8b-8b computation, whereas the last case corresponds to two 4b-8b computations. The latter case is possible, since two 4b-8b multiplications are logically equivalent to a single 8b-8b multiplication, as we describe next. 8b-8b = 2x4b-8b. Given an 8-bit unsigned activation, $x$, and an 8-bit signed weight, $w$, the activation-weight multiplication can be formulated as

$$x_{[7:0]} \cdot w_{[7:0]} = \sum_{i=0}^{7} 2^i x_i \cdot w_{[7:0]} = \left( \sum_{i=0}^{3} 2^{i+4} x_{i+4} + \sum_{i=0}^{3} 2^i x_i \right) \cdot w_{[7:0]} = 2^4 x_{[7:4]} \cdot w_{[7:0]} + x_{[3:0]} \cdot w_{[7:0]}, \quad (3)$$

where the $[b:a]$ notation represents the $b$-to-$a$ bit range, the two activation-weight multiplications are 4b-8b wide, and the $2^4$ factor is equivalent to a 4-bit shift-left operation. By considering an additional weight input as well as dynamic shift-left operations, we can reuse the multipliers and achieve a multiplier capable of either one 8b-8b multiplication or two independent 4b-8b multiplications with a dynamic range:

$$2^{opt_1} x_{in1,4b} \cdot w_{in1,8b} + 2^{opt_2} x_{in2,4b} \cdot w_{in2,8b}, \quad (4)$$

where the activation and weight inputs are 4 bits and 8 bits long, respectively. Equation (4) resembles an FP representation; however, the "opt" shift amounts are not necessarily contiguous, as in 3opt and 2opt. Figure 2 illustrates how Equation (4) is mapped to hardware. The two 4b-8b multipliers correspond to $x_{in1} \cdot w_{in1}$ and $x_{in2} \cdot w_{in2}$, and the two shift-left units correspond to $2^{opt_1}$ and $2^{opt_2}$. The adder corresponds to the addition of the two groups, and the multiplexers, which are not explicitly formulated in Equation (4), are used to dynamically choose between $w_{in1}$, $w_{in2}$, or both, during execution. We use this multiplier instead of the conventional one in well-known hardware structures. 4 Case Studies In this section, we examine SPARQ on top of two well-known matrix-multiplication accelerator implementations: systolic arrays (SAs) and Tensor Cores (TCs). These accelerators are commonly used for CNNs, since it is standard practice to map the convolution operation to matrix multiplication [2, 18, 39]. Our focus here is on the processing engines (PEs) that comprise each of these structures and are responsible for individual DPs. Both implementations are fully equivalent from a mathematical point of view. Systolic arrays. SAs consist of a large monolithic network of PEs designed for fast and efficient processing of systematic algorithms that execute the same computations with different data at different time instances [15]. The topology of SAs, illustrated in Figure 3, consists of a homogeneous network of tightly coupled PEs, each performing a MAC operation. PEs work in tandem: each PE in the SA receives data from its upstream neighbors, performs a MAC operation, and forwards the data downstream. In our PE design, also known as an output-stationary SA, each PE eventually holds the result of a DP, and the entire SA comprises a tile of a result matrix. Google's TPUv2 and TPUv3, for example, consist of 128×128 SAs [22].
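The 8b-8b = 2×4b-8b identity and the vSPARQ pair rule in Equations (2)-(4) can be checked directly in software. Below is a small Python sketch of both; the helper names are ours, and bsparq_trim refers to the hypothetical trimming sketch above.

def split_mul(x, w):
    # Equation (3): one 8b-8b product as two 4b-8b products plus a shift.
    hi, lo = (x >> 4) & 0xF, x & 0xF
    return (hi * w << 4) + lo * w            # equals x * w for 0 <= x < 256

def vsparq_pair(x0, x1, w0, w1):
    # Equation (2): pair-wise opportunistic use of the 2x4-bit budget.
    if x1 == 0:
        return split_mul(x0, w0)             # x0 keeps full 8-bit precision
    if x0 == 0:
        return split_mul(x1, w1)             # x1 keeps full 8-bit precision
    q0, s0 = bsparq_trim(x0)                 # both non-zero: 4b-8b each,
    q1, s1 = bsparq_trim(x1)                 # with dynamic shifts (Eq. 4)
    return (q0 * w0 << s0) + (q1 * w1 << s1)

assert split_mul(27, -5) == 27 * -5          # sanity check of the identity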
To deploy SPARQ, the conventional multiplier in each PE is replaced with the one presented in Figure 2, the weight bandwidth is doubled, and the activation bandwidth is unchanged. Tensor cores. TCs were first introduced in NVIDIA's Volta architecture to accelerate matrix operations [4, 13, 19]. TCs multiply two 4×4 matrices and add an additional one to the multiplication result. The specific implementation details of TCs are not publicly disclosed; however, an architecture that fits the original TC performance is proposed in [27]. In the proposed TC architecture, there are a number of DP units. Each DP unit performs four parallel activation-weight multiplications, accumulating them in an adder tree together with an additional third value. In this work, we focus on the architecture of a single DP, as presented in Figure 4. To enable SPARQ, the multipliers are replaced and the weight bandwidth is doubled, similar to the SA. NVIDIA also recently introduced weight sparsity acceleration in its Ampere microarchitecture [20, 23]. The Sparse TC (STC) hardware achieves 2× speedup over the original TC by essentially skipping 50% of the computations (Figure 5). STC requires 50% weight structured pruning at a granularity of four elements, i.e., every four adjacent weights must include two zero-valued weights. Only the non-zero weights are stored, along with additional coordinates. In Figure 5, the two leftmost and two rightmost weights correspond to the four leftmost and four rightmost activations, respectively. The stored coordinates indicate which activations are picked, since those are the ones to be multiplied by non-zero weights. After filtering, the activations are passed with the weights to the DP unit for further processing. Notice, however, that activation sparsity may still exist even after the selection process. 5 Experiments We evaluate the impact on model accuracy using PyTorch [26], the ILSVRC-2012 dataset [28], and various CNN models [8, 9, 11, 37, 38] (see Table 1). All models are quantized using a simple uniform min-max quantization, employing symmetric unsigned per-layer quantization for activations and symmetric signed per-kernel quantization for weights. The min-max statistics are gathered during a quick preprocessing stage on 2K randomly picked images from the training set. In addition, during preprocessing, we recalibrate the BatchNorm layers' running mean and running variance statistics [29, 33, 35, 36]. In all models, the first convolution layer is left intact, since its input activations, which correspond to the image pixels, do not include many zero values, if any. Quantization is, therefore, performed on all convolution layers except the first. We present the quantization results in Table 1. Throughout this section, we use SPARQ on top of the 8-bit models (A8W8) and report the accuracy degradation relative to the corresponding FP32 model. A4W8 and A8W4 are presented in Table 1 as references for the worst-case accuracy. In Section 5.3, we experiment with 2:4 structured pruning [23]. To achieve a sparse model with the baseline accuracy, we prune the network based on its pretrained weights and retrain the model from scratch for 90 epochs with a learning rate starting at 0.1 and divided by 10 at epochs 30 and 60. Weight decay and momentum are set to 0.0001 and 0.9, respectively.
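The min-max calibration described above is straightforward to reproduce. The following is a hedged PyTorch-style sketch of the per-layer activation scale and per-kernel weight scale computation; the function and variable names are ours, and the rounding details of the paper's released code may differ.

import torch

def activation_scale(calib_activations, n_bits=8):
    # Symmetric *unsigned* per-layer quantization: post-ReLU activations,
    # so the grid spans [0, max] with 2^n - 1 integer levels.
    return calib_activations.amax().clamp(min=1e-8) / (2 ** n_bits - 1)

def weight_scales(weight, n_bits=8):
    # Symmetric *signed* per-kernel quantization: one scale per output
    # channel; the grid spans [-max, max] with 2^(n-1) - 1 positive levels.
    per_kernel_max = weight.abs().amax(dim=(1, 2, 3))   # conv weight: OIHW
    return per_kernel_max.clamp(min=1e-8) / (2 ** (n_bits - 1) - 1)

def quantize(t, scale):
    return torch.round(t / scale)                       # integer codes

# Usage: q_w = quantize(w, weight_scales(w).view(-1, 1, 1, 1)); the
# activation scale would be tracked over ~2K calibration images and frozen.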
The different designs are implemented in SystemVerilog and synthesized using Synopsys® Design Compiler® and the Virage (now Synopsys) 65nm standard cell library. We use a frequency of 500MHz at the slow and fast corners for setup and hold timing closure, respectively. Area estimates were extracted after place-and-route using Cadence® Innovus™. We assume that the overall hardware overhead related to activation trimming and rounding is negligible with respect to the SA and TC, since (1) the trimming and rounding unit involves a simple hardware scheme; and (2) it operates at a significantly lower processing rate. We validated our multiplier against our PyTorch CUDA implementation with cycle-accurate testbenches to verify calculation integrity. 5.1 Accuracy Results In Table 2, we present our method's results for the 5opt, 3opt, and 2opt configurations, with and without rounding (±R), as described in Section 3.1, and without vSPARQ (-vS). As expected, we observe that (1) better accuracy is achieved as the number of window placement options increases; (2) overall, rounding further reduces quantization noise, which leads to smaller accuracy degradation; and (3) the contribution of vSPARQ is noticeable mainly in configurations with relatively high quantization noise. In addition, we observe a large impact on accuracy in the transition from 2opt to 3opt, since there is a high probability that at least one of the four MSBs is toggled. For example, given the non-zero activations in ResNet-18 on ILSVRC-2012, we measure that bits 7, 6, 5, and 4 are toggled 0.5%, 9.2%, 33.8%, and 44.8% of the time, respectively. Assuming the bit values are statistically independent, the probability of at least one toggled bit is 67%. Notice that the 2opt configuration, which picks the four MSBs in that case, is clearly redundant, since the two topmost bits are toggled only 10% of the time. Computationally, SPARQ may be considered a dynamic 4b-8b PTQ, in which quantization from 8 bits to 4 bits is conducted only occasionally, in the event of two adjacent non-zero activations. The upside of conventional PTQ methods, however, is the reduction in memory footprint, where the dynamic method falls short due to the additional metadata. For example, the 3opt configuration requires an additional 3 bits of metadata per 4-bit activation (2-bit ShiftCtrl and 1-bit MuxCtrl). Still, the memory footprint may be reduced by grouping the metadata for several activations, which we leave for future exploration. In Table 3, we present our results compared with previous related works [1, 5, 6, 31]. We would like to point out that SySMT is similar to the 2opt configuration. The slightly different results are due to the different BatchNorm calibrations and the slightly different 8-bit quantized models. Regarding ResNet-50, SySMT quantizes its weights, whereas SPARQ focuses on quantizing activations. Reducing the bit width: 3 bits and 2 bits. To further challenge SPARQ efficiency, we experiment with 3-bit and 2-bit configurations. The lower budget leads to increased quantization noise even when one of the activations within the pair has a zero value, since the total window sizes are 6 and 4 bits for the 3-bit and 2-bit configurations, respectively. In Table 4, we present SPARQ accuracy results compared with other methods that report sub-4b quantization results. As opposed to Table 2, we observe that the impact of vSPARQ is more significant at lower bit-widths.
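The 67% figure follows directly from the independence assumption; a quick check using the measured per-bit toggle rates quoted above:

$$P(\text{bits }7..4\text{ all zero}) = (1-0.005)(1-0.092)(1-0.338)(1-0.448) \approx 0.33, \qquad 1 - 0.33 = 0.67.$$

The same rates also recover the earlier 90% claim for the two topmost bits: $(1-0.005)(1-0.092) \approx 0.90$.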
5.2 Hardware Evaluation Table 5 summarizes the area overhead of SPARQ, normalized to MAC throughput, for both the SA and TC use cases. The SA and TC baselines are conventional 8b-8b SA and TC PEs, respectively. Memories, such as SRAMs, are not considered in the analysis (considering them would decrease the area overhead percentages). The 2×4b-8b design is presented as a reference implementation for the case of 4b-8b quantized values with throughput equivalent to the design in Figure 2. For the sake of fair comparison, there is a single psum in the 2×4b-8b design. With respect to the SA, the 2×4b-8b PE requires approximately half the area per MAC operation of the 8b-8b PE: the total multiplier area of the 2×4b-8b PE is significantly smaller, although the 2×4b-8b PE employs a 3-input adder. The shift-left logic is the main contributor to the increasing area overhead from 2opt through 5opt. As the number of shift-left options increases, the shift logic becomes more complex and occupies a larger area. Regarding the 6opt (3-bit) and 7opt (2-bit) configurations, even though they require additional window placement options, the overall area decreases, since the area of the multipliers, registers, and multiplexers within the shift-left units is reduced. Also, our 2opt scheme introduces a significantly smaller area overhead than SySMT, because SySMT requires the trimming and rounding hardware to operate at the same high throughput rate as the SA. Regarding the TC, the 2×4b-8b implementation requires half the (normalized) area of the 8b-8b TC baseline PE. As in the SA use case, the 2×4b-8b PE multipliers are smaller; however, this time the 2×4b-8b PE adder tree grows. Interestingly, the relative area of 5opt without vSPARQ (-vS) is only slightly higher than that of the "full" 3opt SPARQ implementation. Given the accuracy differences between the two configurations (Table 2), the 3opt SPARQ operating point presented in this work may not be a good trade-off between accuracy and area. 5.3 Leveraging Activation Sparsity on Top of Sparse Tensor Cores We simulate SPARQ on top of an STC with models pruned using 2:4 structured pruning. As presented in Figure 5, activations are first filtered through the multiplexers according to the non-zero weight coordinates. Then, vSPARQ comes into play, inspecting pairs of activations, as described in Section 3. Since in an STC the trimming and rounding logic must be replicated for each DP unit, we implemented and synthesized the trimming and rounding unit to estimate its area overhead. The unit area, relative to the conventional TC (Figure 4), is 17%, 12%, and 9% for the 5opt, 3opt, and 2opt configurations, respectively. The relative area may be even smaller if the entire STC design is considered (Figure 5). SPARQ is, therefore, beneficial in terms of performance-to-area when attached to an STC. In Table 6, we report the pruned models' FP32 and A8W8 quantized accuracies, and repeat all experiments described thus far. Interestingly, the relative accuracy degradation of the pruned models is slightly higher than that of the unpruned models in Table 3 [17, 40]. Nevertheless, SPARQ still achieves less than 1% relative accuracy degradation with 4-bit 5opt and 3opt, and with 3-bit 6opt. 6 Limitations and Societal Impacts SPARQ has two main limitations: (1) It does not achieve the memory footprint reduction that native 4b-8b quantization methods do, because of the additional metadata that accompanies each value, as discussed in Section 5.1.
The memory footprint may be reduced by giving up vSPARQ or by sharing ShiftCtrl across a number of activations; we leave these research directions for future work. (2) From a hardware perspective, SPARQ requires dedicated hardware support, i.e., it cannot run on today's commodity hardware. In addition, compared with native 4b-8b quantization, our hardware implementation incurs some overhead, as described in Section 5.2. As for societal impacts, quantization methods in general increase the effective amount of available computing resources, since the execution requirements of quantized models are lower. This effective increase in computing power may be directed towards negative uses, such as surveillance and fake profile generation. 7 Conclusion We present SPARQ, a sparsity-aware quantization method that dynamically leverages sparsity at different granularities, from entire 8-bit values down to individual bits. Thanks to the inherent activation sparsity, quantization to n bits occurs only occasionally. When it does occur, bit-level sparsity is leveraged by trimming leading zero bits and picking the most significant consecutive n bits. SPARQ induces minor accuracy degradation and is hardware-friendly. Acknowledgements We thank the anonymous reviewers for their comments and suggestions. We also thank Moran Shkolnik, Mario Shalabi, and Michael Behar for their valuable feedback.
1. How does the proposed SPARQ method differ from other quantization methods in terms of its approach to n-bit quantization? 2. Can you provide a detailed explanation of the hardware implementation of SPARQ on a systolic array (SA) or a tensor core (TC) DP unit? 3. How does the paper evaluate the effectiveness of SPARQ compared to other quantization methods for various image classification models? 4. What are the strengths and weaknesses of the paper regarding its contributions to the field of quantization and hardware implementation?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a sparsity-aware quantization (SPARQ) method, in which n-bit quantization takes place by picking the most significant n bits from the 8-bit value representation. SPARQ is also implemented on top of a systolic array (SA) and a Tensor Core (TC) DP unit. Finally, the paper evaluates the quantization effect of this method on various image classification models, compares it with previous PTQ work, and achieves state-of-the-art results. Review According to the description in the paper, is the hardware implementation evaluated only at the algorithm level? Please describe the 8b-to-4b process of Figure 1 more clearly, in both words and figures. The results shown in Tables 1, 2, and 4 are not clear enough; can the full-precision results be reflected in the tables or text? Compared with other methods, the accuracy of the results presented in this paper is greatly improved, and the described hardware implementation is impressive.
NIPS
Title Extracting Relationships by Multi-Domain Matching Abstract In many biological and medical contexts, we construct a large labeled corpus by aggregating many sources to use in target prediction tasks. Unfortunately, many of the sources may be irrelevant to our target task, so ignoring the structure of the dataset is detrimental. This work proposes a novel approach, the Multiple Domain Matching Network (MDMN), to exploit this structure. MDMN embeds all data into a shared feature space while learning which domains share strong statistical relationships. These relationships are often insightful in their own right, and they allow domains to share strength without interference from irrelevant data. This methodology builds on existing distribution-matching approaches by assuming that source domains are varied and outcomes multi-factorial. Therefore, each domain should only match a relevant subset. Theoretical analysis shows that the proposed approach can have a tighter generalization bound than existing multiple-domain adaptation approaches. Empirically, we show that the proposed methodology handles higher numbers of source domains (up to 21 empirically), and provides state-of-the-art performance on image, text, and multi-channel time series classification, including clinical outcome data in an open-label trial evaluating a novel treatment for Autism Spectrum Disorder. 1 Introduction Deep learning methods have shown unparalleled performance when trained on vast amounts of diverse labeled training data [21], often collected at great cost. In many contexts, especially medical and biological ones, it is prohibitively expensive to collect or label the number of observations necessary to train an accurate deep neural network classifier. However, a number of related sources, each with "moderate" data, may already be available, and these can be combined to construct a large corpus. Naively using the combined source data is often an ineffective strategy; instead, what is needed is unsupervised multiple-domain adaptation. Given labeled data from several source domains (each representing, e.g., one patient in a medical trial, or reviews of one type of product), and unlabeled data from target domains (new patients, or new product categories), we wish to train a classifier that makes accurate predictions about the target domain data at test time. Recent approaches to multiple-domain adaptation involve learning a mapping from each domain into a common feature space, in which observations from the target and source domains have similar distributions [14, 45, 39, 30]. At test time, a target-domain observation is first mapped into this shared feature space and then classified. However, few existing works can model the relationships among different domains, which is important for several reasons. First, even though data in different domains share labels, the underlying causes and symptoms may differ. Patients may develop the same condition for various reasons and be diagnosed while sharing only a subset of symptoms. Extracting these relationships between patients is helpful in practice because it limits the model to only relevant information. Second, as mentioned above, a training corpus may be constructed from only a small number of sources within a larger population. For example, we might collect data from many patients, each with "small" data, and domain adaptation is used to generalize to new patients [3].
Therefore, extracting these relationships is of practical importance. In addition to the practical argument, [32] gives a theoretical proof that adding irrelevant source domains harms performance bounds in multiple-domain adaptation. Therefore, it is necessary to automatically choose a weighting over source domains so that only relevant domains are utilized. Only a few works address such a domain weighting strategy [45]. In this manuscript, we extend the proof techniques of [4, 32] to show that a multiple-domain weighting strategy can have a tighter generalization bound than traditional multiple-domain approaches. Notably, many recently proposed transfer learning strategies are based on minimizing the H-divergence between domains in feature space, which was shown to bound the generalization error in domain adaptation [4]. Compared to the standard L1-divergence, the H-divergence restricts the hypothesis to a given class, which can in theory be estimated more reliably from finite samples. The target error bound based on the H-divergence has the desirable property that it can be estimated by learning a classifier between the source and target domains with finite VC dimension, motivating the Domain Adversarial Neural Network (DANN) [14]. However, neural networks usually have large VC dimension, making the H-divergence bound loose in practice. In this work, we propose to use a 'Wasserstein-like' metric to define domain similarity in the proofs. The 'Wasserstein-like' distance in our work extends the binary output in the H-divergence to a real-valued probability output. Our main contribution is our novel approach to multiple-domain adaptation. A key idea from prior work is to match every source domain's feature-space distribution to that of the target domain [37, 29]. In contrast, we match the distributions (i) among sources and target and (ii) within source domains. It is only necessary and prudent to match one domain to a relevant subset of the others. This makes sense particularly in medical contexts, as nearly all diagnoses address multi-factorial diseases. The Wasserstein distance is chosen to facilitate the mathematical and theoretical treatment of pairwise matching in multiple domains. The underlying idea is also closely related to optimal transport for domain adaptation [7, 8], but addresses multiple-domain matching. The proposed method, MDMN, is visualized in Figure 1(b) and compared with the standard source-to-target matching scheme (Figure 1(a)), showing the matching of source domains. This tweak allows already-similar domains to merge and share statistical strength, while keeping distant clusters of domains separate from one another. At test time, only the domains most relevant to the target are used [5, 32]. In essence, this induces a potentially sparse graph on all domains, which is visualized for 22 patients from one of our experiments in Figure 2. Any neural network architecture can be modified to use MDMN, which can be considered a stand-alone domain-matching module. 2 Method The Multiple Domain Matching Network (MDMN) is based upon the intuition that, in the extracted feature space, inherently similar domains should have similar or identical distributions. By sharing strength within the source domains, MDMN can better handle overfitting within each domain, a common problem in scientific applications. Meanwhile, the relationships between domains can also be learned, which is of interest in addition to classification performance.
In the following, suppose we are given $N$ observations $\{(x_i, y_i, s_i)\}_{i=1}^N$ from $S$ domains, where $y_i$ is the desired label for $x_i$ and $s_i$ is the domain. (In the target domain, the label $y$ is not provided and will instead be predicted.) For brevity, we assume that the source domains are $1, 2, \dots, S-1$, and the $S$th domain is the single target domain. In fact, our approach works analogously for any number of unlabeled target domains. The whole framework, shown in Figure 3, is composed of a feature extractor (or encoder), a domain adapter (Sec. 2.1), and a label classifier (Sec. 2.2). In this work, we instantiate all three as neural networks. The encoder $E$ maps data points $x$ to feature vectors $E(x)$. These features are then used by the label classifier to make predictions for the supervised task. They are also used by the domain adapter, which encourages the extracted features $E(x)$ to be similar across nearby domains. 2.1 Domain Adaptation with Relationship Extraction This section details the structure of the domain adapter. In order to adapt one domain to the others, one approach is to consider a penalty proportional to the distance between each distribution and the weighted mean of the rest. Specifically, let $P_s$ be the distribution over data points $x$ in domain $\mathcal{D}_s$, and $P_{/s} = \frac{1}{S-1}\sum_{s'=1}^{S} w_{ss'} P_{s'}$ the distribution of data from all other domains $\mathcal{D}^{w_s}_{/s}$. Note that the weight $w_s = [w_{s1}, \dots, w_{sS}]$ is domain specific and $w_s \in \mathbb{R}^S$, where $w_s$ lies on the simplex with $\|w_s\|_1 = 1$, $w_{ss'} \ge 0$ for $s' = 1, \dots, S$, and $w_{ss} = 0$; these weights will be learned within the framework. In the following, we will use $\mathcal{D}_s$ to represent its distribution $P_s$ in order to simplify the notation. Then we can encourage all domains to be close together in the feature space by adding the following term to the loss:

$$\mathcal{L}_D(E(x;\theta_E);\theta_D) = \sum_{s=1}^{S} \beta_s\, d(\mathcal{D}_s, \mathcal{D}^{w_s}_{/s}), \quad (1)$$

where $d(\cdot,\cdot)$ is a distance between distributions (domains); here it is used to measure the discrepancy between one domain and a weighted average of the rest. We assume the weight $\beta_s$ equals $\frac{1}{S-1}$ for $s = 1, \dots, S-1$ and $\beta_S = 1$, to balance the penalty between the source and target domains, although this may be chosen as a tuning parameter. $\mathcal{L}_D$ is the total domain adapter loss function. For the rest of this manuscript, we have chosen the Wasserstein distance as $d(\cdot,\cdot)$. This approach is facilitated by the Kantorovich-Rubinstein dual formulation of the Wasserstein-1 distance [2], which is given for distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ as

$$d(\mathcal{D}_1, \mathcal{D}_2) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x\sim P_1}[f(E(x))] - \mathbb{E}_{x\sim P_2}[f(E(x))],$$

where $\|f\|_L \le 1$ denotes that the Lipschitz constant of the function $f(\cdot)$ is at most 1, i.e., $|f(x') - f(x)| \le \|x' - x\|_2$. Here $f(\cdot)$ is any Lipschitz-smooth nonlinear function, which can be approximated by a neural network [2]. When $S$ is reasonably small ($< 100$), it is feasible to include $S$ small neural networks $f_s(\cdot;\theta_D)$ to approximate these distances for each domain. In our implementation, we use shared layers in the domain adapter to enhance computational efficiency, and the output of the domain adapter is $f(\cdot;\theta_D) = [f_1, \dots, f_s, \dots, f_S]$. The domain loss term is then given as

$$\sum_{s=1}^{S} \sup_{\|f_s\|_L \le 1} \beta_s \left( \mathbb{E}_{x\sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x\sim \mathcal{D}^{w_s}_{/s}}[f_s(E(x))] \right). \quad (2)$$

To make the domain penalty in (2) feasible, it is necessary to discuss how the penalty can be included in the optimization flow of neural network training.
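As a concrete illustration of the dual form above, the following is a minimal PyTorch sketch of a shared-trunk critic with one scalar head per domain, and the empirical mini-batch estimate of the loss in Equation (2). The architecture and names are our own simplification, the gradient penalty that enforces the Lipschitz constraint is omitted, and the weights w are assumed to sum to one per row.

import torch
import torch.nn as nn

class DomainAdapter(nn.Module):
    # Shared trunk with S scalar heads f_1..f_S (one critic per domain).
    def __init__(self, feat_dim, n_domains, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.heads = nn.Linear(hidden, n_domains)   # outputs [f_1(z),...,f_S(z)]

    def forward(self, z):
        return self.heads(self.trunk(z))

def domain_loss(f_out, domain_ids, w, beta):
    # Empirical Eq. (2): sum_s beta_s * (E_{D_s}[f_s] - E_{D_/s^w}[f_s]).
    # f_out: (N, S) critic outputs; domain_ids: (N,); w: (S, S) with w[s, s] = 0.
    # Assumes every domain is represented in the mini-batch.
    S = w.shape[0]
    # M[s', s] = mean of f_s over the samples of domain s'
    M = torch.stack([f_out[domain_ids == s_].mean(dim=0) for s_ in range(S)])
    loss = 0.0
    for s in range(S):
        loss = loss + beta[s] * (M[s, s] - (w[s] @ M[:, s]))
    return loss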
To develop this mathematical approach, let $\pi_s$ be the proportion of the data that comes from the $s$th domain; then the penalty can be rewritten as

$$\frac{1}{S}\sum_{s=1}^{S} \beta_s \left( \mathbb{E}_{x\sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x\sim \mathcal{D}^{w_s}_{/s}}[f_s(E(x))] \right) = \mathbb{E}_{s\sim \mathrm{Uniform}(1,\dots,S)}\left[ \mathbb{E}_{x\sim \mathcal{D}_s}\left[ r_s^T f(E(x)) \right] \right] = \mathbb{E}_{s\sim\pi}\left[ \mathbb{E}_{x\sim \mathcal{D}_s}\left[ \tfrac{1}{S\pi_s}\, r_s^T f(E(x)) \right] \right], \quad (3)$$

where $f(E(x))$ is the concatenation of the $f_s(E(x))$, i.e., $f(E(x)) = [f_1(E(x)), \dots, f_S(E(x))]^T$. The vector $r \in \mathbb{R}^S$ is defined as

$$r_{s'} = \begin{cases} -\beta_s w_{ss'}, & s' \ne s \\ \beta_s, & s' = s \end{cases}, \quad s' = 1, \dots, S. \quad (4)$$

The form in (3) is natural to include in an optimization loop because the expectation is empirically approximated by a mini-batch of data. Let $\{(x_i, s_i)\}$, $i = 1, \dots, N$, denote observations and their associated domains $s_i$; then

$$\mathbb{E}_{s\sim\pi}\left[ \mathbb{E}_{x\sim \mathcal{D}_s}\left[ \tfrac{1}{S\pi_s}\, r_s^T f(E(x)) \right] \right] \simeq \frac{1}{SN} \sum_{i=1}^{N} \pi_{s_i}^{-1}\, r_{s_i}^T f(E(x_i;\theta_E);\theta_D). \quad (5)$$

The weight vector $w_s$ for domain $\mathcal{D}_s$ should focus only on relevant domains, and the weights on mismatched domains should be very small. As noted previously, adding uncorrelated domains hurts generalization performance [32]. Our Theorem 3.3 shows that a weighting scheme with these properties decreases the target error bound. Once the function $f_s(\cdot;\theta_D)$ is known, we can estimate $w_s$ by applying a softmax transformation to the expected critic gaps between any two domains. Specifically, the weight $w_s$ used to match $\mathcal{D}_s$ to the other domains is calculated as

$$w_s = \mathrm{softmax}_{/s}(\gamma\, l_s), \quad \text{with } l_{ss'} = \mathbb{E}_{x\sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x\sim \mathcal{D}_{s'}}[f_s(E(x))], \quad (6)$$

where $l_s = [l_{s1}, \dots, l_{ss'}, \dots, l_{sS}]$. The subscript $/s$ means that the value $w_{ss}$ is restricted to 0 and $l_{ss}$ is excluded from the softmax. The scalar $\gamma$ controls how peaked $w_s$ is. Note that setting $w_s$ in (2) to one for the closest domain and 0 otherwise corresponds to the $\gamma \to \infty$ case, and $\gamma \to 0$ corresponds to an unweighted (e.g., conventional) case. It is beneficial to force the domain regularizer to match multiple, but not necessarily all, available domains. Practically, we can either modify $\gamma$ in the softmax or change the Lipschitz constant used to calculate the distance (as was done here). As an example, the learned graph connectivity shown in Figure 2 is constructed by thresholding $\frac{1}{2}(w_{ss'} + w_{s's})$ to determine connectivity between nodes. 2.2 Combining the Loss Terms The proposed method uses the loss in (5) to perform the domain matching. A label classifier is also necessary, defined as a neural network parameterized by $\theta_Y$. The label classifier in Figure 3 is represented as $Y[E(x)]$, where the classifier $Y$ is applied to the extracted feature vector $E(x)$. The label predictor usually contains several fully connected layers with nonlinear activation functions. The cross-entropy loss is used for classification, i.e.,

$$\mathcal{L}_Y(x, y; \theta_Y, \theta_E) = -\sum_{i=1}^{N} \sum_{c=1}^{C} y_{ic} \log Y_c[E(x_i)],$$

where $Y_c$ denotes the $c$th entry of the output. The MSE loss is used for regression. With the label prediction loss $\mathcal{L}_Y$, the complete network objective is

$$\min_{\theta_E, \theta_Y} \max_{\theta_D} \; \mathcal{L}_Y(\theta_Y, \theta_E) + \rho\, \mathcal{L}_D(\theta_D, \theta_E), \quad (7)$$

where $\theta_E$ denotes the parameters in the feature extractor/encoder, $\theta_D$ the parameters in the domain adapter, and $\theta_Y$ those in the label classifier. The pseudo-code for training is given in Algorithm 1. Algorithm 1 Multiple Source Domain Adaptation via WDA. Input: source samples from $\mathcal{D}_s$, $s = 1, \dots, S-1$, and target samples from $\mathcal{D}_S$ (indices $1, \dots, S-1$ are source domains and $S$ is the target domain); iteration counts $k_Y$ and $k_D$ for training the label classifier and the domain discriminator. Output: classifier parameters $\theta_E, \theta_Y, \theta_D$.
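Below is a small numpy sketch of the weight update in Equation (6), computed from mini-batch estimates of the pairwise critic gaps. The temperature symbol and the function names follow our reconstruction above and are illustrative.

import numpy as np

def domain_weights(M, gamma=1.0):
    # Eq. (6): w_s = softmax over s' != s of gamma * l_{ss'},
    # where l_{ss'} = E_{D_s}[f_s] - E_{D_s'}[f_s].
    # M[s', s] holds the mini-batch mean of f_s over samples from domain s'.
    # gamma > 0 controls how peaked each row of weights is.
    S = M.shape[0]
    W = np.zeros((S, S))
    for s in range(S):
        l = M[s, s] - M[:, s]                 # l_{ss'} for all s'
        l[s] = -np.inf                        # exclude l_{ss}; forces w_{ss} = 0
        z = gamma * l
        e = np.exp(z - np.max(z[np.isfinite(z)]))   # stabilized softmax
        W[s] = e / e.sum()
    return W                                  # each row lies on the simplex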
for iter = 1 to max_iter do
  Sample a mini-batch $\{x_s\}$ from $\{\mathcal{D}_s\}_{s=1}^{S-1}$ and $\{x_t\}$ from $\mathcal{D}_S$.
  for iter_Y = 1 to $k_Y$ do
    Compute $l_{ss'} = \mathbb{E}_{x\in \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x\in \mathcal{D}_{s'}}[f_s(E(x))]$ for all $s, s' \in [1, S]$.
    Compute the weight vectors $w_s = \mathrm{softmax}_{/s}(\gamma l_s)$ with $w_{ss} = 0$ for all $s \in [1, S]$, where $l_s = (l_{s1}, \dots, l_{sS})$.
    Compute the domain loss $\mathcal{L}_D(x_s, x_t)$ and the classifier loss $\mathcal{L}_Y(x_s)$.
    Compute $\nabla\theta_Y = \partial\mathcal{L}_Y / \partial\theta_Y$ and $\nabla\theta_E = \partial\mathcal{L}_Y / \partial\theta_E + \rho\, \partial\mathcal{L}_D / \partial\theta_E$.
    Update $\theta_Y \leftarrow \theta_Y - \nabla\theta_Y$ and $\theta_E \leftarrow \theta_E - \nabla\theta_E$.
  end for
  for iter_D = 1 to $k_D$ do
    Update the weight vectors $w_s$ for all $s \in [1, S]$.
    Compute $\mathcal{L}_D(x_s, x_t)$ and $\nabla\theta_D = \partial\mathcal{L}_D / \partial\theta_D$.
    Update $\theta_D \leftarrow \theta_D + \nabla\theta_D$.
  end for
end for

During training, the target domain weight $\beta_S$ in Eq. (1) is always set to one, while the source domain weights are normalized to sum to one. This is because the ultimate goal is to perform well on the target domain. We use the gradient penalty introduced in [18] to implement the Lipschitz constraint. A potential concern is that the feature scale may change and impact the Wasserstein distance; one solution is to include batch normalization to keep the summary statistics of the extracted features constant, although in practice this is not necessary. Adam [20] is used as the optimization method, while the gradient descent steps in Algorithm 1 reflect the basic strategy. 2.3 Complexity Analysis Although the proposed algorithm computes pairwise domain distances, the computational cost in practice is similar to that of a standard DANN model. For the domain loss functions, we share all the bottom layers across domains. This is similar to the setup of a multi-class domain classifier with a softmax output, while in our model the output is a real number. Specifically, the pairwise distance (6) is updated in each mini-batch by averaging samples from the same domain:

$$\hat{l}_{ss'} \approx \frac{1}{n_s} \sum_{x_i \in \mathcal{D}_s} f_s(E(x_i)) - \frac{1}{n_{s'}} \sum_{x_i \in \mathcal{D}_{s'}} f_s(E(x_i)). \quad (8)$$

Because these pairwise calculations happen late in the network, their computational cost is dwarfed by feature generation. We believe that the method will easily scale to hundreds of domains based on its computational and memory scaling. We use exponential smoothing during the updates to improve the quality of the estimates, with $l^{t+1}_{ss'} = 0.9\, l^{t}_{ss'} + 0.1\, \hat{l}^{c}_{ss'}$, where $\hat{l}^{c}_{ss'}$ is the value from the current iteration's mini-batch. The softmax is then applied to the smoothed values to obtain the weights $w_{ss'}$. This procedure is used to update $w_s$, so those parameters are not included in backpropagation. The domain weights and network parameters are updated iteratively, as shown in Algorithm 1. 3 Theoretical Results In this section, we investigate the theorems and derivations used to bound the target error of the method given in Section 2. Specifically, the target error is bounded by the source error and the source-target distance, plus additional terms that are constant for a given data distribution and hypothesis class. The theory is developed based on prior analyses of source-to-target adaptation; the adaptation within source domains can be developed in the same way. Additional details and derivations are available in Supplemental Section A. Let $\mathcal{D}_s$ for $s = 1, \dots, S$ and $\mathcal{D}_T$ represent the source and target domains, respectively. Note that there is a notation change for the target domain, which was denoted as the $S$th domain in the previous section; here it is easier to separate the target domain out. Suppose that there are probabilistic true labeling functions $g_s, g_T : \mathcal{X} \to [0, 1]$ and a probabilistic hypothesis $f : \mathcal{X} \to [0, 1]$, which in our case is a neural network.
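Putting Algorithm 1 together, here is a hedged PyTorch sketch of one training iteration (an encoder/classifier step followed by a critic step). The module names E, C (classifier), F (the DomainAdapter sketched earlier), the loss `ce`, and the helpers `domain_loss` and `domain_weights` are our own illustration of the procedure, not the authors' released code; S is the number of domains (index S-1 is the target, 0-indexed), the gradient penalty is again omitted, and only a single critic step is shown instead of k_D.

import torch

beta = torch.ones(S)                             # Eq. (1): beta_s = 1/(S-1) for
beta[:S - 1] = 1.0 / (S - 1)                     # sources, beta_S = 1 for target
opt_ec = torch.optim.Adam(list(E.parameters()) + list(C.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(F.parameters(), lr=1e-4)
l_smooth = torch.zeros(S, S)                     # smoothed pairwise gaps, Eq. (8)

def train_step(x, y, dom, rho=1.0, gamma=1.0):
    global l_smooth
    z = E(x)
    with torch.no_grad():                        # w_s updated outside backprop
        out = F(z)
        M = torch.stack([out[dom == s].mean(0) for s in range(S)])
        l_smooth = 0.9 * l_smooth + 0.1 * (M.diagonal() - M).T  # EMA of Eq. (8)
        w = torch.tensor(domain_weights(l_smooth.numpy(), gamma),
                         dtype=torch.float32)
    # (1) classifier/encoder step: min L_Y + rho * L_D over theta_E, theta_Y
    src = dom < S - 1                            # labeled source samples only
    loss = ce(C(z)[src], y[src]) + rho * domain_loss(F(z), dom, w, beta)
    opt_ec.zero_grad(); loss.backward(); opt_ec.step()
    # (2) critic step: max L_D over theta_D (minimize its negative)
    d_loss = -domain_loss(F(z.detach()), dom, w, beta)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()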
The output value of the labeling function determines the probability that the sample's label is 0 or 1. $g_s$ and $g_T$ are assumed Lipschitz smooth with parameters $\lambda_s$ and $\lambda_T$, respectively. This differs from the previous derivation [14], which assumes that the hypothesis and labeling functions are deterministic (taking values in $\{0, 1\}$). In the following, the encoder notation $E(\cdot)$ is dropped for simplicity; thus $f(x)$ is actually $f(E(x;\theta_E);\theta_D)$. Since we first focus only on the adaptation from source to target, the output of $f(\cdot)$ in this section is a scalar (the last element of $f(\cdot)$). The same holds for the notation $w_s$, which here denotes the domain similarity between $\mathcal{D}_s$ and the target. Definition 3.1 (Probabilistic Classifier Discrepancy). The probabilistic classifier discrepancy for domain $\mathcal{D}_s$ is defined as

$$\epsilon_s(f, g) = \mathbb{E}_{x\sim \mathcal{D}_s}\left[ |f(x) - g(x)| \right]. \quad (9)$$

Note that if the label hypothesis is limited to $\{0, 1\}$, this is the classification error. In order to construct our main theorem, we use the notation $\|f\|_L \le \lambda$ to denote a $\lambda$-smooth function; mathematical details are given in Definition A.6 in the appendix. Next we define a weighted Wasserstein-like quantity between the sources and the target. Definition 3.2 (Weighted Wasserstein-like quantity). Given $S$ multi-source probability distributions $P_s$, $s = 1, \dots, S$, and $P_T$ for the target domain, the difference between the weighted source domains $\{\mathcal{D}_s\}_{s=1}^S$ and the target domain $\mathcal{D}_T$ is described as

$$\alpha_\lambda\Big(\mathcal{D}_T, \sum_s w_s \mathcal{D}_s\Big) = \max_{f:\mathcal{X}\to[0,1],\ \|f\|_L \le \lambda} \; \mathbb{E}_{x\sim \mathcal{D}_T}[f(x)] - \mathbb{E}_{x\sim \sum_s w_s \mathcal{D}_s}[f(x)]. \quad (10)$$

Note that if the restriction of the function to $[0, 1]$ is removed, this quantity is the Kantorovich-Rubinstein dual form of the Wasserstein-1 distance. As $\lambda \to \infty$, it coincides with the commonly used L1-divergence or variation divergence [4]. Thus, we could derive this theorem with the H-divergence exactly, but we prefer the smoothness constraint to match the Wasserstein distance used in practice. We also define $f^*$ as an optimal hypothesis that achieves the minimum discrepancy $\epsilon^*$, which is given in Appendix A.3. Now we come to the main theorem of this work. Theorem 3.3 (Bound on weighted multi-source discrepancy). For a hypothesis $f : \mathcal{X} \to [0, 1]$,

$$\epsilon_T(f, g_T) \le \sum_{s=1}^{S} w_s\, \epsilon_s(f, g_s) + \alpha_{\lambda_T + \lambda^*}\Big(\sum_{s=1}^{S} w_s \mathcal{D}_s, \mathcal{D}_T\Big) + \epsilon^*. \quad (11)$$

The quantity $\epsilon^*$, given in (27) in the appendix, addresses the fundamental mismatch in the true labeling functions, which is uncontrollable by domain adaptation. Note that a weighted sum of Lipschitz continuous functions is also Lipschitz continuous; $\lambda^*$ is the Lipschitz constant for the weighted domain combination, $\lambda^* = \sum_{s=1}^{S} w_s \lambda_s$, where the labeling function of domain $\mathcal{D}_s$ has Lipschitz constant $\lambda_s$. We note that Theorem 3.3 depends on the weighted sum of the source domains, implying that increasing the weight on an irrelevant source domain may hurt generalization performance, which matches the existing literature. Second, a complex model with high learning capacity will reduce the source error $\epsilon_s(f, g_s)$, but the uncertainty introduced by the model will increase the domain discrepancy measurement $\alpha_{\lambda + \lambda^*}(\{\mathcal{D}_s\}_{s=1}^S, \mathcal{D}_T)$. Compare this to the bound of the multi-source domain adversarial network (MDAN) [45],

$$\epsilon_T(f, g_T) \le \max_s \epsilon_s(f, g_s) + \max_s d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_s, \mathcal{D}_T) + \epsilon^*,$$

where the definition of $d_{\mathcal{H}\Delta\mathcal{H}}$ is given in Appendix Section A.2. Theorem 3.3 reveals that weighting yields a tighter bound, because an irrelevant domain with little weight will not seriously hurt the generalization bound, whereas prior approaches take the max over the least relevant domain. Also, the inner-domain matching helps prevent spurious relationships between irrelevant domains and the target.
Therefore, MDMN can pick out more relevant source domains than the alternative methods evaluated. 4 Related Work There is a long history in domain adaptation of transferring source distribution information to the target distribution or vice versa, and the problem has been approached in a variety of manners. Kernel Mean Matching (KMM) is widely used under the assumption that target data can be represented by a weighted combination of samples in the source domain [37, 19, 12, 29, 40]. Clustering [25] and late fusion [1] approaches have also been evaluated. Distribution matching has been explored with the Maximum Mean Discrepancy [29] and optimal transport [8, 7], which is similar to the motivation behind our domain penalization. With the increasing use of neural networks, weight sharing and transfer have emerged as effective strategies for domain adaptation [15]. With the development of Generative Adversarial Networks (GANs) [17], adversarial domain adaptation has become popular. The Domain Adversarial Neural Network (DANN) is a recently proposed model for feature adaptation rather than simple network weight sharing [14]. Since its publication, the DANN approach has been generalized [39, 43] and extended to multiple domains [45]. In the multiple-domain case, a weighted combination of source domains is used for adaptation. [22] is based on the DANN framework, but uses distributional summary statistics in the adversary. Several other methods use source or target sample generation with GANs for single-source domain adaptation [35, 27, 26, 33], but extensions to multi-source domains are not straightforward. [3] provides a multi-stage multi-source domain adaptation approach. There has also been theoretical analysis of error bounds for multi-source domain adaptation. [9] analyzes distributed weighted combining of multiple source domains. [32] gives a bound on the target loss when only the k-nearest source domains are used, showing that adding training data from uncorrelated source domains hurts the generalization bound. The bound that [4] gives is also on the target risk; it introduces the H-divergence as a measurement of the distance between source and target domains. [5] further analyzes whether source sample quantity can compensate for quality under different methods and different target error measurements. Domain adaptation can be used in a wide variety of applications. [16, 10] use it for natural language processing tasks. [12] performs video concept detection using multi-source domain adaptation with auxiliary classifiers. [15, 14, 1, 3, 39] focus on image domain transfer learning. Multi-source domain adaptation in previous works is usually limited to fewer than five source domains; some scientific applications pose a more challenging situation, adapting from a significantly higher number of source domains [44]. For neural signals, different methods have been employed to transfer among subjects based on hand-crafted EEG features [38, 24]; however, these models need to be trained in several steps, making them less robust. 5 Experiment We tested MDMN by applying it to three classification problems: image recognition, natural-language sentiment detection, and multi-channel time series analysis. The sentiment classification task is presented in the Appendix due to limited space. 5.1 Results on Image Datasets We first test the performance of the proposed MDMN model on the MNIST, MNIST-M, SVHN, and USPS datasets. Visualizations of these datasets are given in Appendix Section C.1.
In each test, one dataset is left out as the target domain while the remaining three are treated as source domains. The feature extractor E consists of two convolutional layers plus two fully connected layers. Both the label predictor and the domain adapter are two-layer MLPs, with ReLU nonlinearities between layers. The baseline method is the concatenation of the feature extractor and label predictor, i.e., a standard CNN, but it has no access to any target domain data during training. While the TCA [34] and SA [13] methods can process raw images, their results are significantly stronger following a feature extraction step; the results for these methods are therefore obtained in two independent steps. First, a convolutional neural network with the same structure as in our proposed approach is used as a baseline; this model is trained on the source domains, and then features are extracted for all domains to use as inputs to TCA and SA. Another issue is the computational complexity of TCA, because this algorithm computes a matrix inverse during inference, which is of complexity O(N³); hence, data was limited for this algorithm. For the adversarial-based algorithms [39, 14, 45] and the MDMN model, the domain classifier architecture is identical: a two-layer MLP with ReLU nonlinearities and a softmax top layer. The classification accuracy is compared in Table 1. The top row shows the baseline result on the target domain with the classifier trained on the three other datasets. The proposed MDMN model outperforms the other baselines on all datasets. Note that some domain-adaptation algorithms actually lower the accuracy, revealing that domain-specific features are being transferred. Another problem encountered is the mismatch between the source and target domains. For instance, when the target domain is the MNIST-M dataset, large weight should be given to MNIST samples during training; however, algorithms like TCA, SA, and DANN weight all source domain datasets equally, making their results worse than MDMN's.

Table 1: Classification accuracy (%) on the image datasets.

Acc. %      MNIST  MNISTM  USPS  SVHN
Baseline     94.6   60.8   89.4  43.7
TCA [34]     78.4   45.2   75.4  39.7
SA [13]      90.8   59.9   86.3  40.2
DAN [28]     97.1   67.0   90.4  51.9
ADDA [39]    89.0   80.3   85.2  43.5
DANN [14]    97.9   68.8   89.3  50.1
MDANs [45]   97.2   68.5   90.1  50.5
MDMN         98.0   83.8   94.5  53.1

A feature-space visualization shows that domain adaptation is happening because the extracted features become more similar between domains; MDMN has the clearest digit-mixing effect, finding digit-label features instead of domain-specific features. A larger figure of the same result is given in Appendix C.1 for enhanced clarity. 5.2 Result on EEG Time Series Two datasets are used to evaluate performance on electroencephalography (EEG) data: the SEED dataset and an Autism Spectrum Disorder (ASD) dataset. The SEED dataset [46] focuses on analyzing emotion using EEG signals. This dataset has 15 subjects. The EEG signal is recorded while each subject watches 15 movie clips, 3 times each, on three different days. Each video clip is annotated with a negative/neutral/positive emotion label. The sampling rate is 1000Hz and a 62-electrode layout is used. In our experiment, we downsample the EEG signal to 200Hz. The test scheme is leave-one-out cross-validation: each time, one subject is held out for testing and the remaining 14 subjects are used for training and validation.
The Autism Spectrum Disorder (ASD) dataset [11] aims at discovering whether there are significant changes in neural activity in an open-label clinical trial evaluating the efficacy of a single infusion of autologous cord blood for the treatment of ASD [11]. The study involves 22 children aged 3 to 7 years undergoing treatment for ASD, with EEG measurements at baseline (T1), 6 months post treatment (T2), and 12 months post treatment (T3). The signal was recorded while a child watched three one-minute videos designed to measure responses to dynamic social and nonsocial stimuli. The data has 121 signal electrodes. The classification task is to predict the treatment stage (T1, T2, or T3), to test the effectiveness of the treatment and analyze which features change in response to treatment; by examining the features, we can track how neural changes correlate with the treatment stages. We also adopt the leave-one-out cross-validation scheme for this dataset: one subject is left out for testing, and the remaining 21 subjects are split into training and validation. Leaving complete subjects out better estimates generalization to a population in these types of neural tasks [42]. The classification accuracy of the different methods is compared in Table 2. In this setting, we choose SyncNet [23] as our baseline model. SyncNet is a neural network with structured filters designed to extract neuroscience-related features. The simplest SyncNet configuration is adopted, containing only one layer of convolutional filters; as in [23], we set the filter number to 10 for both datasets. For the TCA, SA, and ITL methods, the baseline model was trained as before, without a domain adapter, on the source domain data; features extracted from this model were then used for the target domains. MDMN outperforms the other competitors on both EEG datasets. A subject-by-subject plot is shown in Figure 5. Because performance varies strongly across subjects, we visualize performance relative to the baseline; absolute performance is shown in Figure 8 in the appendix. Because the source domains are numerous but each source domain is highly variable, the requirement to find relevant domains is of increased importance on both EEG datasets. For the ASD dataset, DANN and MDANs do not match the performance of MDMN, mainly because they cannot correctly pick out the most related subjects from the source domains; this is also true for TCA, SA, and ITL. Our proposed algorithm MDMN overcomes this problem by computing domain similarity in feature space while performing the feature mapping, and a domain relationship graph by subject is given in Figure 2. Each subject is related to all the others with different weights; the missing edges, like the edges to node 's10', are those with weight less than 0.09. Our algorithm automatically finds the relationships, and the domain adaptation happens with the calculated weights instead of treating all domains equally. 6 Conclusion In this work, we propose the Multiple Domain Matching Network (MDMN), which uses feature matching across different source domains. MDMN is able to use pairwise domain feature similarity to assign a weight to each training domain, which is of key importance when the number of source domains increases, especially in many neuroscience and biological applications. While performing domain adaptation, MDMN can also extract the relationships between domains; the relationship graph itself is of interest in many applications.
Our proposed adversarial training framework further applies this idea to different domain adaptation tasks and shows state-of-the-art performance. Acknowledgements Funding was provided by the Stylli Translational Neuroscience Award, Marcus Foundation, NICHD P50-HD093074, and NIMH 3R01MH099192-05S2.
1. What is the focus of the paper regarding domain adaptation? 2. What are the strengths of the proposed approach, particularly in using a Wasserstein-like distance measure? 3. What are the weaknesses of the paper, especially regarding experimental analyses? 4. How does the reviewer assess the clarity and organization of the paper's content, including the placement of the algorithm in the appendix? 5. Are there any suggestions provided by the reviewer for improving the paper?
Review
Review This paper presents a technique for adapting from multiple sources to a target. The main goal of this paper is to mitigate negative transfer from unrelated domains. The authors use the now popular adversarial neural networks-style framework where the feature layer is trained to minimize the label loss + a loss function to measure how close the domains are in the feature space. The main contribution of the paper is to use a Wasserstein like distance between the source distributions and the target distribution for the second part. The authors further add a weighted importance to each of the source domains to represent the relevance of that domain. The weights for the domains are the output of the softmax over the Wasserstein distances of target domain to each of source domains. The authors also show an error bound over classifier discrepancies using a Wasserstein-like distance measure. The key idea about this bound is that it bounds the error on target domain by a weighted sum of errors on source domains and not with the max -- this implies that reducing the weight over a potentially unrelated source domain can mitigate its impact. Pros: 1. Well written and straightforward to understand. 2. Theorem 3.3 is a nice extension to previous results on multi domain adaptation. Cons: 1. Lack of experimental analyses. E.g. it needs ablation analysis to disentangle the contributions of Wasserstein distribution matching and weighted source matching. Suggestions: 1. Authors should motivate the results in section 3 in a better way in light of the proposed setup. It is not clear how theorem 3.3 relates to the proposed technique. 2. Section 2 will be much clearer if the authors put the algorithm in the main paper instead of the appendix. E.g. there is a circular dependency between source weights w_s and the parameters \theta_D and it is not entirely clear that this is handled iteratively until one looks at the algorithm. Read author response and satisfied.
NIPS
Title Extracting Relationships by Multi-Domain Matching Abstract In many biological and medical contexts, we construct a large labeled corpus by aggregating many sources to use in target prediction tasks. Unfortunately, many of the sources may be irrelevant to our target task, so ignoring the structure of the dataset is detrimental. This work proposes a novel approach, the Multiple Domain Matching Network (MDMN), to exploit this structure. MDMN embeds all data into a shared feature space while learning which domains share strong statistical relationships. These relationships are often insightful in their own right, and they allow domains to share strength without interference from irrelevant data. This methodology builds on existing distribution-matching approaches by assuming that source domains are varied and outcomes multi-factorial. Therefore, each domain should only match a relevant subset. Theoretical analysis shows that the proposed approach can have a tighter generalization bound than existing multiple-domain adaptation approaches. Empirically, we show that the proposed methodology handles higher numbers of source domains (up to 21 empirically), and provides state-of-the-art performance on image, text, and multi-channel time series classification, including clinical outcome data in an open-label trial evaluating a novel treatment for Autism Spectrum Disorder. 1 Introduction Deep learning methods have shown unparalleled performance when trained on vast amounts of diverse labeled training data [21], often collected at great cost. In many contexts, especially medical and biological ones, it is prohibitively expensive to collect or label the number of observations necessary to train an accurate deep neural network classifier. However, a number of related sources, each with “moderate” data, may already be available, and these can be combined to construct a large corpus. Naively using the combined source data is often an ineffective strategy; instead, what is needed is unsupervised multiple-domain adaptation. Given labeled data from several source domains (each representing, e.g., one patient in a medical trial, or reviews of one type of product), and unlabeled data from target domains (new patients, or new product categories), we wish to train a classifier that makes accurate predictions about the target domain data at test time. Recent approaches to multiple-domain adaptation involve learning a mapping from each domain into a common feature space, in which observations from the target and source domains have similar distributions [14, 45, 39, 30]. At test time, a target-domain observation is first mapped into this shared feature space, then classified. However, few of the existing works can model the relationships among different domains, which we note are important for several reasons. First, even though data in different domains share labels, their causes and symptoms may differ. Patients with the same condition may have different underlying causes and be diagnosed while sharing only a subset of symptoms. Extracting these relationships between patients is helpful in practice because it limits the model to only relevant information. Second, as mentioned above, a training corpus may be constructed from only a small number of sources within a larger population. For example, we might collect data from many patients with “small” data, and domain adaptation is used to generalize to new patients [3].
Therefore, extracting these relationships is of practical importance. In addition to the practical argument, [32] gives a theoretical proof that adding irrelevant source domains harms performance bounds on multiple-domain adaptation. It is therefore necessary to automatically choose a weighting over source domains so that only relevant domains are utilized. Only a few works address such a domain weighting strategy [45]. In this manuscript, we extend the proof techniques of [4, 32] to show that a multiple-domain weighting strategy can have a tighter generalization bound than traditional multiple-domain approaches. Notably, many recently proposed transfer learning strategies are based on minimizing the H-divergence between domains in feature space, which was shown to bound the generalization error in domain adaptation [4]. Compared to the standard L1-divergence, the H-divergence limits the hypothesis to a given class, which in theory can be better estimated from finite samples. The target error bound using the H-divergence has the desirable property that it can be estimated by learning a classifier between the source and target domains with finite VC dimension, motivating the Domain Adversarial Neural Network (DANN) [14]. However, neural networks usually have large VC dimension, making the H-divergence bound loose in practice. In this work, we propose to use a ‘Wasserstein-like’ metric to define domain similarity in the proofs. The ‘Wasserstein-like’ distance in our work extends the binary output in the H-divergence to a real-valued probability output. Our main contribution is our novel approach to multiple-domain adaptation. A key idea from prior work is to match every source domain’s feature-space distribution to that of the target domain [37, 29]. In contrast, we match the distributions (i) between the sources and the target and (ii) within the source domains. It is only necessary and prudent to match one domain to a relevant subset of the others. This makes sense particularly in medical contexts, as nearly all diagnoses address multi-factorial diseases. The Wasserstein distance is chosen to facilitate the mathematical and theoretical operations of pairwise matching in multiple domains. The underlying idea is also closely related to optimal transport for domain adaptation [7, 8], but addresses multiple-domain matching. The proposed method, MDMN, is visualized in Figure 1(b) and compared with the standard source-to-target matching scheme (Figure 1(a)), showing the matching among source domains. This tweak allows already-similar domains to merge and share statistical strength, while keeping distant clusters of domains separate from one another. At test time, only the domains most relevant to the target are used [5, 32]. In essence, this induces a potentially sparse graph on all domains, which is visualized for 22 patients from one of our experiments in Figure 2. Any neural network architecture can be modified to use MDMN, which can be considered a stand-alone domain-matching module. 2 Method The Multiple Domain Matching Network (MDMN) is based upon the intuition that, in the extracted feature space, inherently similar domains should have similar or identical distributions. By sharing strength within the source domains, MDMN can better deal with overfitting within each domain, a common problem in scientific applications. Meanwhile, the relationships between domains can also be learned, which is of interest in addition to classification performance.
In the following, suppose we are given N observations $\{(x_i, y_i, s_i)\}_{i=1}^N$ from S domains, where $y_i$ is the desired label for $x_i$ and $s_i$ is the domain. (In the target domain, the label y is not provided and will instead be predicted.) For brevity, we assume the source domains are $1, 2, \cdots, S-1$ and the S-th domain is the single target domain. In fact, our approach works analogously for any number of unlabeled target domains. The whole framework, shown in Figure 3, is composed of a feature extractor (or encoder), a domain adapter (Sec. 2.1), and a label classifier (Sec. 2.2). In this work, we instantiate all three as neural networks. The encoder E maps data points x to feature vectors E(x). These features are then used by the label classifier to make predictions for the supervised task. They are also used by the domain adapter, which encourages the extracted features E(x) to be similar across nearby domains. 2.1 Domain Adaptation with Relationship Extraction This section details the structure of the domain adapter. In order to adapt one domain to the others, one approach is to consider a penalty proportional to the distance between each distribution and a weighted mean of the rest. Specifically, let $\mathcal{P}_s$ be the distribution over data points x in domain $\mathcal{D}_s$, and $\mathcal{P}_{/s}^{w_s} = \sum_{s'=1}^{S} w_{ss'} \mathcal{P}_{s'}$ the distribution of data from all other domains $\mathcal{D}_{/s}^{w_s}$. Note that the weight $w_s = [w_{s1}, \cdots, w_{sS}]$ is domain specific and $w_s \in \mathbb{R}^S$, where $w_s$ lies on the simplex with $\|w_s\|_1 = 1$, $w_{ss'} \ge 0$ for $s' = 1, \ldots, S$, and $w_{ss} = 0$; these weights will be learned within the framework. In the following, we will use $\mathcal{D}_s$ to stand for its distribution $\mathcal{P}_s$ in order to simplify the notation. We can then encourage all domains to be close together in the feature space by adding the following term to the loss:
$$\mathcal{L}_D(E(x;\theta_E);\theta_D) = \sum_{s=1}^{S} \lambda_s\, d(\mathcal{D}_s, \mathcal{D}_{/s}^{w_s}), \qquad (1)$$
where $d(\cdot,\cdot)$ is a distance between distributions (domains). Here it is used to measure the discrepancy between one domain and a weighted average of the rest. We assume the weight $\lambda_s$ equals $\frac{1}{S-1}$ for $s = 1, \cdots, S-1$ and $\lambda_S = 1$ to balance the penalty between the source and target domains, although this may be chosen as a tuning parameter. $\mathcal{L}_D$ is the total domain adapter loss function. For the rest of this manuscript, we have chosen to use the Wasserstein distance as $d(\cdot,\cdot)$. This approach is facilitated by the Kantorovich-Rubinstein dual formulation of the Wasserstein-1 distance [2], which is given for distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ as
$$d(\mathcal{D}_1, \mathcal{D}_2) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x \sim \mathcal{P}_1}[f(E(x))] - \mathbb{E}_{x \sim \mathcal{P}_2}[f(E(x))],$$
where $\|f\|_L \le 1$ denotes that the Lipschitz constant of the function $f(\cdot)$ is at most 1, i.e. $|f(x') - f(x)| \le \|x' - x\|_2$. Here $f(\cdot)$ is any Lipschitz-smooth nonlinear function, which can be approximated by a neural network [2]. When S is reasonably small (< 100), it is feasible to include S small neural networks $f_s(\cdot;\theta_D)$ to approximate these distances, one per domain. In our implementation, we share layers in the domain adapter to enhance computational efficiency, and the output of the domain adapter is $f(\cdot;\theta_D) = [f_1, \cdots, f_s, \cdots, f_S]$. The domain loss term is then given as
$$\sum_{s=1}^{S} \lambda_s \sup_{\|f_s\|_L \le 1} \Big( \mathbb{E}_{x \sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x \sim \mathcal{D}_{/s}^{w_s}}[f_s(E(x))] \Big). \qquad (2)$$
To make the domain penalty in (2) feasible, it is necessary to discuss how the penalty can be included in the optimization flow of neural network training.
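As a concrete illustration, the following PyTorch-style sketch shows the shared-trunk critic and a mini-batch estimate of the penalty in (2). This is a minimal sketch under our own assumptions: the class and function names, the hidden size, and the requirement that every domain appear in the mini-batch are ours, not the authors' released code. The weight matrix w is taken as given here (its computation is described next, in (6)), and the adversarial max over $\theta_D$ with the gradient penalty of [18] is likewise omitted.

import torch
import torch.nn as nn

class DomainCritic(nn.Module):
    # Shared bottom layers with one scalar head per domain: f(.) = [f_1, ..., f_S].
    def __init__(self, feat_dim, n_domains, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.heads = nn.Linear(hidden, n_domains)

    def forward(self, features):                  # features = E(x), shape (batch, feat_dim)
        return self.heads(self.trunk(features))   # shape (batch, S)

def domain_loss(f_out, domain_ids, w, lam):
    # Mini-batch estimate of the penalty in (2); assumes each of the S domains
    # contributes at least one sample to the batch. w is the (S, S) weight matrix
    # from (6), with rows on the simplex and zero diagonal; lam holds the
    # per-domain balance weights lambda_s.
    S = w.shape[0]
    # mu[s, s'] = mean of f_{s'}(E(x)) over the batch samples x drawn from domain s.
    mu = torch.stack([f_out[domain_ids == s].mean(dim=0) for s in range(S)])
    # E_{D_s}[f_s] minus the mixture term sum_{s'} w_{ss'} E_{D_{s'}}[f_s].
    gaps = torch.diagonal(mu) - (w * mu.t()).sum(dim=1)
    return (lam * gaps).sum()

In the full min-max objective given later in (7), the encoder parameters descend this quantity while the critic parameters ascend it, approximating the supremum in the Kantorovich-Rubinstein dual.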
To develop this mathematical approach, let $\pi_s$ be the proportion of the data that comes from the s-th domain; the penalty can then be rewritten as
$$\frac{1}{S}\sum_{s=1}^{S} \lambda_s \Big( \mathbb{E}_{x \sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x \sim \mathcal{D}_{/s}^{w_s}}[f_s(E(x))] \Big) = \mathbb{E}_{s \sim \mathrm{Uniform}(1,\ldots,S)}\big[ \mathbb{E}_{x \sim \mathcal{D}_s}[r_s^T f(E(x))] \big] = \mathbb{E}_{s \sim \pi}\Big[ \mathbb{E}_{x \sim \mathcal{D}_s}\big[ \tfrac{1}{S\pi_s} \times r_s^T f(E(x)) \big] \Big], \qquad (3)$$
where $f(E(x))$ is the concatenation of the $f_s(E(x))$, i.e. $f(E(x)) = [f_1(E(x)), \cdots, f_S(E(x))]^T$. The vector $r_s \in \mathbb{R}^S$ is defined entrywise as
$$(r_s)_{s'} = \begin{cases} -\lambda_s w_{ss'}, & s' \neq s \\ \lambda_s, & s' = s \end{cases}, \quad s' = 1, \cdots, S. \qquad (4)$$
The form in (3) is natural to include in an optimization loop because the expectation can be empirically approximated by a mini-batch of data. Let $\{(x_i, s_i)\}$, $i = 1, \ldots, N$, denote the observations and their associated domains; then
$$\mathbb{E}_{s \sim \pi}\Big[ \mathbb{E}_{x \sim \mathcal{D}_s}\big[ \tfrac{1}{S\pi_s} \times r_s^T f(E(x)) \big] \Big] \simeq \frac{1}{SN} \sum_{i=1}^{N} \pi_{s_i}^{-1} r_{s_i}^T f(E(x_i;\theta_E);\theta_D). \qquad (5)$$
The weight vector $w_s$ for domain $\mathcal{D}_s$ should focus only on relevant domains, and the weights on mismatched domains should be very small. As noted previously, adding uncorrelated domains hurts generalization performance [32]. In our Theorem 3.3, we show that a weighting scheme with these properties decreases the target error bound. Once the functions $f_s(\cdot;\theta_D)$ are known, we can estimate $w_s$ by applying a softmax transformation to the expectation gaps of $f_s$ between any two domains. Specifically, the weight $w_s$ used to match $\mathcal{D}_s$ to the other domains is calculated as
$$w_s = \mathrm{softmax}_{/s}(-\beta\, l_s), \quad \text{with} \quad l_{ss'} = \mathbb{E}_{x \sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x \sim \mathcal{D}_{s'}}[f_s(E(x))], \qquad (6)$$
where $l_s = [l_{s1}, \cdots, l_{ss'}, \cdots, l_{sS}]$. The subscript $/s$ means that the value $w_{ss}$ is restricted to 0 and $l_{ss}$ is excluded from the softmax. The scalar quantity $\beta$ controls how peaked $w_s$ is. Note that setting $w_s$ in (2) to 1 for the closest domain and 0 otherwise would correspond to the $\beta \to \infty$ case, and $\beta \to 0$ corresponds to an unweighted (e.g. conventional) case. It is beneficial to force the domain regularizer to match to multiple, but not necessarily all, available domains. Practically, we can either modify $\beta$ in the softmax or change the Lipschitz constant used to calculate the distance (as was done). As an example, the learned graph connectivity shown in Figure 2 is constructed by thresholding $\frac{1}{2}(w_{ss'} + w_{s's})$ to determine connectivity between nodes. 2.2 Combining the Loss Terms The proposed method uses the loss in (5) to perform the domain matching. A label classifier is also necessary, which is defined as a neural network parameterized by $\theta_Y$. The label classifier in Figure 3 is represented as $Y[E(x)]$, where the classifier Y is applied to the extracted feature vector E(x). The label predictor usually contains several fully connected layers with nonlinear activation functions. The cross-entropy loss is used for classification, i.e.
$$\mathcal{L}_Y(x, y;\theta_Y,\theta_E) = -\sum_{i=1}^{N} \sum_{c=1}^{C} y_{ic} \log Y_c[E(x_i)],$$
where $Y_c$ denotes the c-th entry of the output. The MSE loss is used for regression. With the label prediction loss $\mathcal{L}_Y$, the complete network objective is given by
$$\min_{\theta_E,\theta_Y} \max_{\theta_D} \mathcal{L}_Y(\theta_Y,\theta_E) + \rho\, \mathcal{L}_D(\theta_D,\theta_E), \qquad (7)$$
where $\theta_E$ denotes the parameters of the feature extractor/encoder, $\theta_D$ the parameters of the domain adapter, and $\theta_Y$ those of the label classifier. The pseudocode for training is given in Algorithm 1. Algorithm 1: Multiple Source Domain Adaptation via WDA. Input: source samples from $\mathcal{D}_s$, $s = 1, \cdots, S-1$, and target samples from $\mathcal{D}_S$ (we assume indices $1, \cdots, S-1$ are the source domains and S is the target domain); iteration counts $k_Y$ and $k_D$ for training the label classifier and the domain discriminator. Output: classifier parameters $\theta_E, \theta_Y, \theta_D$.
for iter = 1 to max_iter do
  Sample a mini-batch of $\{x_s\}$ from $\{\mathcal{D}_s\}_{s=1}^{S-1}$ and $\{x_t\}$ from $\mathcal{D}_S$.
  for iter_Y = 1 to $k_Y$ do
    Compute $l_{ss'} = \mathbb{E}_{x \in \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x \in \mathcal{D}_{s'}}[f_s(E(x))]$ for all $s, s' \in [1, S]$.
    Compute the weight vectors $w_s = \mathrm{softmax}_{/s}(-\beta\, l_s)$ with $w_{ss} = 0$ for all $s \in [1, S]$, where $l_s = (l_{s1}, \cdots, l_{sS})$.
    Compute the domain loss $\mathcal{L}_D(x_s, x_t)$ and the classifier loss $\mathcal{L}_Y(x_s)$.
    Compute $\nabla_{\theta_Y} = \frac{\partial \mathcal{L}_Y}{\partial \theta_Y}$ and $\nabla_{\theta_E} = \frac{\partial \mathcal{L}_Y}{\partial \theta_E} + \rho \frac{\partial \mathcal{L}_D}{\partial \theta_E}$.
    Update $\theta_Y \leftarrow \theta_Y - \eta \nabla_{\theta_Y}$ and $\theta_E \leftarrow \theta_E - \eta \nabla_{\theta_E}$, where $\eta$ is the learning rate.
  end for
  for iter_D = 1 to $k_D$ do
    Update the weight vectors $w_s$ for all $s \in [1, S]$.
    Compute $\mathcal{L}_D(x_s, x_t)$ and $\nabla_{\theta_D} = \frac{\partial \mathcal{L}_D}{\partial \theta_D}$.
    Update $\theta_D \leftarrow \theta_D + \eta \nabla_{\theta_D}$.
  end for
end for
During training, the target-domain weight $\lambda_S$ in eq. (1) is always set to one, while the source-domain weights are normalized to sum to one. This is because the ultimate goal is to perform well on the target domain. We use the gradient penalty introduced in [18] to implement the Lipschitz constraint. One concern is that the feature scale may change and impact the Wasserstein distance. A potential solution is to include batch normalization to keep the summary statistics of the extracted features constant; in practice, this is not necessary. Adam [20] is used as the optimization method, while the gradient descent step in Algorithm 1 reflects the basic strategy. 2.3 Complexity Analysis Although the proposed algorithm computes pairwise domain distances, its computational cost in practice is similar to that of the standard DANN model. For the domain loss functions, we share all the bottom layers across domains. This is similar to the setup of a multi-class domain classifier with a softmax output, except that in our model the output is real-valued. Specifically, the pairwise distances in (6) are updated in each mini-batch by averaging samples from the same domain:
$$\hat{l}_{ss'} \approx \frac{1}{n_s} \sum_{x_i \in \mathcal{D}_s} f_s(E(x_i)) - \frac{1}{n_{s'}} \sum_{x_i \in \mathcal{D}_{s'}} f_s(E(x_i)). \qquad (8)$$
Because these pairwise calculations happen late in the network, their computational cost is dwarfed by feature generation. We believe the method will easily scale to hundreds of domains based on its computational and memory scaling. We use exponential smoothing during the updates to improve the quality of the estimates, with $l^{t+1}_{ss'} = 0.9\, l^{t}_{ss'} + 0.1\, \hat{l}_{ss'}$, where $\hat{l}_{ss'}$ is the value from the current iteration's mini-batch. The softmax is then applied to the smoothed values to obtain the weights $w_{ss'}$. This procedure is used to update $w_s$, so these quantities are not included in the backpropagation. The domain weights and network parameters are updated iteratively, as shown in Algorithm 1. 3 Theoretical Results In this section, we present the theorems and derivations used to bound the target error of the method given in Section 2. Specifically, the target error is bounded by the source error and the source-target distance, plus additional terms that are constant for a given data distribution and hypothesis class. The theory builds on prior theories of source-to-target adaptation; the adaptation within source domains can be developed in the same way. Additional details and derivations are available in Supplemental Section A. Let $\mathcal{D}_s$ for $s = 1, \cdots, S$ and $\mathcal{D}_T$ represent the source and target domains, respectively. Note that there is a notation change for the target domain, which was denoted as the S-th domain in the previous section; here it is easier to separate the target domain out. Suppose that there are probabilistic true labeling functions $g_s, g_T : \mathcal{X} \to [0, 1]$ and a probabilistic hypothesis $f : \mathcal{X} \to [0, 1]$, which in our case is a neural network.
The output value of the labeling function gives the probability that the sample's label is 0 or 1. $g_s$ and $g_T$ are assumed Lipschitz smooth with parameters $\gamma_s$ and $\gamma_T$, respectively. This differs from the previous derivation [14], which assumes that the hypothesis and labeling function are deterministic ({0, 1}). In the following, the notation of the encoder E(·) is dropped for simplicity; thus f(x) is actually $f(E(x;\theta_E);\theta_D)$. Since we first focus only on the adaptation from source to target, the output of f(·) in this section is a scalar (the last element of f(·)). The same holds for the notation $w_s$, which here is the domain similarity between $\mathcal{D}_s$ and the target. Definition 3.1 (Probabilistic Classifier Discrepancy). The probabilistic classifier discrepancy for domain $\mathcal{D}_s$ is defined as
$$\epsilon_s(f, g) = \mathbb{E}_{x \sim \mathcal{D}_s}[|f(x) - g(x)|]. \qquad (9)$$
Note that if the label hypothesis is limited to {0, 1}, this is the classification error. In order to construct our main theorem, we use the notation $\|f\|_L \le \gamma$ to denote a $\gamma$-smooth function; mathematical details are given in Definition A.6 in the appendix. Next we define a weighted Wasserstein-like quantity between the sources and the target. Definition 3.2 (Weighted Wasserstein-like quantity). Given S multi-source probability distributions $\mathcal{P}_s$, $s = 1, \cdots, S$, and $\mathcal{P}_T$ for the target domain, the difference between the weighted source domains $\{\mathcal{D}_s\}_{s=1}^{S}$ and the target domain $\mathcal{D}_T$ is described as
$$\alpha_\gamma\Big(\mathcal{D}_T, \sum_s w_s \mathcal{D}_s\Big) = \max_{f:\mathcal{X}\to[0,1],\,\|f\|_L \le \gamma} \mathbb{E}_{x \sim \mathcal{D}_T}[f(x)] - \mathbb{E}_{x \sim \sum_s w_s \mathcal{D}_s}[f(x)]. \qquad (10)$$
Note that if the restriction of the function's range to [0, 1] is removed, this quantity is the Kantorovich-Rubinstein dual form of the Wasserstein-1 distance. As $\gamma \to \infty$, it becomes the commonly used L1-divergence or variation divergence [4]. Thus, we could derive this theorem with the H-divergence exactly, but we prefer to use the smoothness constraint to match the Wasserstein distance used in the method. We also define $f^*$ as an optimal hypothesis that achieves the minimum discrepancy $\epsilon^*$, which is given in Appendix A.3. Now we come to the main theorem of this work. Theorem 3.3 (Bound on weighted multi-source discrepancy). For a hypothesis $f : \mathcal{X} \to [0, 1]$,
$$\epsilon_T(f, g_T) \le \sum_{s=1}^{S} w_s\, \epsilon_s(f, g_s) + \alpha_{\gamma_T + \gamma^*}\Big(\sum_{s=1}^{S} w_s \mathcal{D}_s, \mathcal{D}_T\Big) + \epsilon^*. \qquad (11)$$
The quantity $\epsilon^*$, given in (27) in the appendix, addresses the fundamental mismatch between the true labeling functions, which cannot be controlled by domain adaptation. Note that a weighted sum of Lipschitz continuous functions is also Lipschitz continuous; $\gamma^*$ is the Lipschitz constant of the weighted domain combination, $\gamma^* = \sum_{s=1}^{S} w_s \gamma_s$, where $f_s(\cdot)$ of domain $\mathcal{D}_s$ has Lipschitz constant $\gamma_s$. We note that Theorem 3.3 depends on the weighted sum over the source domains, implying that increasing the weight on an irrelevant source domain may hurt generalization performance, which matches the existing literature. Second, a complex model with high learning capacity will reduce the source error $\epsilon_s(f, g_s)$, but the uncertainty introduced by the model will increase the domain discrepancy measurement $\alpha_{\gamma_T + \gamma^*}(\{\mathcal{D}_s\}_{s=1}^{S}, \mathcal{D}_T)$. Compare this to the multi-source domain adversarial network's (MDAN's) [45] bound, $\epsilon_T(f, g_T) \le \max_s \epsilon_s(f, g_s) + \max_s d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_s, \mathcal{D}_T) + \epsilon^*$, where the definition of $d_{\mathcal{H}\Delta\mathcal{H}}$ is given in Appendix Section A.2. Theorem 3.3 reveals that weighting yields a tighter bound, because an irrelevant domain with little weight will not seriously hurt the generalization bound, whereas prior approaches take the max over the least relevant domain. The inner-domain matching also helps prevent spurious relationships between irrelevant domains and the target.
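To make the advantage of weighting concrete, here is a simple illustration (ours, not from the paper, with hypothetical numbers). Since the weights lie on the simplex,
$$\sum_{s} w_s\, \epsilon_s(f, g_s) \;\le\; \max_s \epsilon_s(f, g_s) \quad \text{for any } w \text{ with } \sum_s w_s = 1,\; w_s \ge 0,$$
so the weighted source-error term never exceeds its max-based counterpart. Suppose S = 2 with a relevant domain at $\epsilon_1 = 0.05$ and an irrelevant one at $\epsilon_2 = 0.5$: with $w = (0.95, 0.05)$, the weighted term is $0.95 \times 0.05 + 0.05 \times 0.5 \approx 0.07$, whereas a max-based bound is pinned at $0.5$.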
Therefore, MDMN can pick out more relevant source domains than the alternative methods evaluated. 4 Related Work Domain adaptation has a long history of transferring source distribution information to the target distribution, or vice versa, and has been approached in a variety of manners. Kernel Mean Matching (KMM) is widely used under the assumption that target data can be represented by a weighted combination of samples in the source domain [37, 19, 12, 29, 40]. Clustering [25] and late fusion [1] approaches have also been evaluated. Distribution matching has been explored with the Maximum Mean Discrepancy [29] and optimal transport [8, 7], which is similar to the motivation behind our domain penalization. With the increasing use of neural networks, weight sharing and transfer have emerged as effective strategies for domain adaptation [15]. With the development of Generative Adversarial Networks (GANs) [17], adversarial domain adaptation has become popular. The Domain Adversarial Neural Network (DANN) [14] performs feature adaptation rather than simple network weight sharing. Since its publication, the DANN approach has been generalized [39, 43] and extended to multiple domains [45]. In the multiple-domain case, a weighted combination of source domains is used for adaptation. [22] is based on the DANN framework, but uses distributional summary statistics in the adversary. Several other methods use source or target sample generation with GANs for single-source domain adaptation [35, 27, 26, 33], but extensions to multi-source domains are not straightforward. [3] provides a multi-stage multi-source domain adaptation. There has also been theoretical analysis of error bounds for multi-source domain adaptation. [9] analyzes the theory of distributed weighted combining of multiple source domains. [32] gives a bound on target loss when using only the k-nearest source domains, showing that adding training data from more uncorrelated source domains hurts the generalization bound. The bound given by [4] is also on the target risk; it introduces the H-divergence as a measurement of the distance between source and target domains. [5] further analyzes whether source sample quantity can compensate for quality under different methods and different target error measurements. Domain adaptation can be used in a wide variety of applications. [16, 10] use it for natural language processing tasks. [12] performs video concept detection using multi-source domain adaptation with auxiliary classifiers. [15, 14, 1, 3, 39] focus on image domain transfer learning. Multi-source domain adaptation in previous works is usually limited to fewer than five source domains. Some scientific applications face a more challenging situation, adapting from a significantly higher number of source domains [44]. For neural signals, different methods have been employed to transfer among subjects based on hand-crafted EEG features [38, 24]; however, these models need to be trained in several steps, making them less robust. 5 Experiment We tested MDMN by applying it to three classification problems: image recognition, natural-language sentiment detection, and multi-channel time series analysis. The sentiment classification task is given in the Appendix due to limited space. 5.1 Results on Image Datasets We first test the performance of the proposed MDMN model on the MNIST, MNISTM, SVHN and USPS datasets. Visualizations of these datasets are given in Appendix Section C.1.
In each test, one dataset is left out as the target domain while the remaining three are treated as source domains. The feature extractor E consists of two convolutional layers plus two fully connected layers. Both the label predictor and the domain adapter are two-layer MLPs. ReLU nonlinearities are used between layers. The baseline method is the concatenation of the feature extractor and the label predictor, i.e., a standard CNN, but it has no access to any target domain data during training. While the TCA [34] and SA [13] methods can process raw images, their results are significantly stronger following a feature extraction step, so their results are obtained in two independent steps. First, a convolutional neural network with the same structure as in our proposed approach is trained on the source domains as a baseline; features are then extracted for all domains and used as inputs to TCA and SA. Another issue is the computational complexity of TCA: this algorithm computes a matrix inverse during inference, which is of complexity O(N^3), so the data was limited for this algorithm. For the adversarial-based algorithms [39, 14, 45] and the MDMN model, the domain classifier architecture is uniform: a two-layer MLP with ReLU nonlinearities and a softmax top layer. The classification accuracy is compared in Table 1. The top row shows the baseline result on the target domain with the classifier trained on the three other datasets. The proposed model MDMN outperforms the other baselines on all datasets. Note that some domain-adaptation algorithms actually lower the accuracy, revealing that domain-specific features are being transferred. Another problem encountered is the mismatch between the source domains and the target domain. For instance, when the target domain is the MNIST-M dataset, large weight should be given to MNIST samples during training. However, algorithms like TCA, SA and DANN weight all source domain datasets equally, making their results worse than MDMN's.

Table 1: Classification accuracy (%) when each dataset is held out as the target domain.

Acc. %        MNIST   MNISTM   USPS   SVHN
Baseline       94.6    60.8    89.4   43.7
TCA [34]       78.4    45.2    75.4   39.7
SA [13]        90.8    59.9    86.3   40.2
DAN [28]       97.1    67.0    90.4   51.9
ADDA [39]      89.0    80.3    85.2   43.5
DANN [14]      97.9    68.8    89.3   50.1
MDANs [45]     97.2    68.5    90.1   50.5
MDMN           98.0    83.8    94.5   53.1

The feature visualizations show that domain adaptation is happening because the extracted features are more similar between domains; MDMN has the clearest digit-mixing effect, finding digit-label features instead of domain-specific features. A larger figure of the same result is given in Appendix C.1 for enhanced clarity. 5.2 Results on EEG Time Series Two datasets are used to evaluate performance on electroencephalography (EEG) data: the SEED dataset and an Autism Spectrum Disorder (ASD) dataset. The SEED dataset [46] focuses on analyzing emotion using EEG signals. This dataset has 15 subjects. The EEG signal is recorded while each subject watches 15 movie clips, 3 times on three different days. Each video clip carries a negative/neutral/positive emotion label. The sampling rate is 1000 Hz and a 62-electrode layout is used. In our experiment, we downsample the EEG signal to 200 Hz. The test scheme is leave-one-out cross-validation: each time, one subject is held out for testing and the remaining 14 subjects are used for training and validation.
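For clarity, this leave-one-subject-out protocol can be sketched as follows. This is a schematic only; the function names, the validation fraction, and the train_and_eval hook are our own placeholders, not the authors' code.

import numpy as np

def leave_one_subject_out(subjects, train_and_eval, val_fraction=0.2, seed=0):
    # subjects: list of per-subject (X, y) pairs; each subject is one domain.
    rng = np.random.RandomState(seed)
    accuracies = []
    for test_idx in range(len(subjects)):
        target = subjects[test_idx]  # held-out subject, used as the unlabeled target
        rest = [s for i, s in enumerate(subjects) if i != test_idx]
        order = rng.permutation(len(rest))  # split the remainder into train/validation
        n_val = max(1, int(val_fraction * len(rest)))
        val = [rest[i] for i in order[:n_val]]
        train = [rest[i] for i in order[n_val:]]
        # train_and_eval is expected to fit the model on the labeled source subjects
        # (plus the target's unlabeled data) and return accuracy on the held-out subject.
        accuracies.append(train_and_eval(train, val, target))
    return float(np.mean(accuracies)), accuracies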
The Autism Spectrum Disorder (ASD) dataset [11] comes from an open-label clinical trial evaluating the efficacy of a single infusion of autologous cord blood for the treatment of ASD, and the aim is to discover whether there are significant changes in neural activity over the course of treatment [11]. The study involves 22 children aged 3 to 7 years undergoing treatment for ASD, with EEG measurements at baseline (T1), 6 months post treatment (T2), and 12 months post treatment (T3). The signal was recorded while a child watched a total of three one-minute videos designed to measure responses to dynamic social and nonsocial stimuli. The data has 121 signal electrodes. The classification task is to predict the treatment stage (T1, T2, or T3) in order to test the effectiveness of the treatment and to analyze which features change in response to it. By examining these features, we can track how neural changes correlate with treatment stage. We again adopt the leave-one-subject-out cross-validation scheme for this dataset: one subject is left out for testing, and the remaining 21 subjects are split into training and validation. Leaving complete subjects out better estimates generalization to a population in these types of neural tasks [42]. The classification accuracy of the different methods is compared in Table 2. In this setting, we choose SyncNet [23] as our baseline model. SyncNet is a neural network with structured filters targeted at extracting neuroscience-related features. We adopt the simplest SyncNet configuration, which contains only one layer of convolutional filters. As in [23], we set the filter number to 10 for both datasets. For the TCA, SA, and ITL methods, the baseline model was trained as before, without a domain adapter, on the source domain data; the features extracted by this model were then used as inputs for adapting to the target domains. MDMN outperforms the other competitors on both EEG datasets. A subject-by-subject plot is shown in Figure 5. Because performance across subjects is highly variable, we visualize only performance relative to the baseline; absolute performance is shown in Figure 8 in the appendix. Because the set of source domains is large but each source domain is highly variable, finding relevant domains is especially important on both EEG datasets. For the ASD dataset, DANN and MDANs do not match the performance of MDMN, mainly because they cannot correctly pick out the most related subjects from the source domains. The same holds for TCA, SA, and ITL. Our proposed algorithm MDMN overcomes this problem by computing domain similarity in feature space while performing feature mapping; a domain relationship graph by subject is given in Figure 2. Each subject is related to all the others with a different weight. The missing edges, such as the edges to node ‘s10’, are those with weight less than 0.09. Our algorithm finds these relationships automatically, and the domain adaptation happens with the calculated weights instead of treating all domains equally. 6 Conclusion In this work, we propose the Multiple Domain Matching Network (MDMN), which uses feature matching across different source domains. MDMN uses pairwise domain feature similarity to assign a weight to each training domain, which is of key importance as the number of source domains increases, especially in many neuroscience and biological applications. While performing domain adaptation, MDMN can also extract the relationships between domains. The relationship graph itself is of interest in many applications.
Our proposed adversarial training framework further applies this idea to different domain adaptation tasks and achieves state-of-the-art performance. Acknowledgements Funding was provided by the Stylli Translational Neuroscience Award, Marcus Foundation, NICHD P50-HD093074, and NIMH 3R01MH099192-05S2.
1. What is the focus of the paper regarding multi-domain adaptation? 2. What is the proposed approach named and how does it differ from traditional methods? 3. How was the algorithm evaluated, and what were the results? 4. How does the reviewer feel about the performance of the method on certain tasks? 5. What are some concerns the reviewer has regarding the scalability of the algorithm? 6. Can the approach be used to outperform the best-known separate baselines of several tasks?
Review
Review This paper addresses the problem of multi-domain adaptation. We are faced with several (closely) related tasks, each with limited resources to train a classifier. The classical approach is to map from one or several source domains to the target domain, disregarding potential relations among the source domains. In this paper, it is proposed to also consider weighted relations between the source domains. The approach is named Multiple Domain Matching Network (MDMN). The paper is well written. I'm not an expert in transfer learning, but I was able to follow the motivation and main ideas of this work. The algorithm is evaluated on an image classification task and two tasks from the medical domain. Each time, N-1 tasks are considered as source domains and the Nth task as target. In image classification, the tasks are MNIST, MNISTM, USPS and SVHN. The proposed method nicely outperforms other domain adaptation methods. I'm not working in computer vision, but it seems to me that the state-of-the-art performance on MNIST is better than 5.4% error. The goal of the medical tasks is to predict emotion or treatment stage given EEG measurements while subjects are watching dedicated movie clips. Again, MDMN outperforms published work. What is your recommendation for when your approach should work best, in comparison to other domain adaptation frameworks? When we have several source domains which are rather heterogeneous? Does your algorithm scale with the number of source domains? I guess that it does not improve on other domain adaptation or transfer learning approaches if you have only one source domain. I also wonder if your approach could be used to outperform the best known separate baselines of several tasks? For instance, the current state-of-the-art on MNIST, ImageNet and FlickR?
NIPS
Title Extracting Relationships by Multi-Domain Matching Abstract In many biological and medical contexts, we construct a large labeled corpus by aggregating many sources to use in target prediction tasks. Unfortunately, many of the sources may be irrelevant to our target task, so ignoring the structure of the dataset is detrimental. This work proposes a novel approach, the Multiple Domain Matching Network (MDMN), to exploit this structure. MDMN embeds all data into a shared feature space while learning which domains share strong statistical relationships. These relationships are often insightful in their own right, and they allow domains to share strength without interference from irrelevant data. This methodology builds on existing distribution-matching approaches by assuming that source domains are varied and outcomes multi-factorial. Therefore, each domain should only match a relevant subset. Theoretical analysis shows that the proposed approach can have a tighter generalization bound than existing multiple-domain adaptation approaches. Empirically, we show that the proposed methodology handles higher numbers of source domains (up to 21 empirically), and provides state-of-the-art performance on image, text, and multi-channel time series classification, including clinical outcome data in an open-label trial evaluating a novel treatment for Autism Spectrum Disorder. 1 Introduction Deep learning methods have shown unparalleled performance when trained on vast amounts of diverse labeled training data [21], often collected at great cost. In many contexts, especially medical and biological ones, it is prohibitively expensive to collect or label the number of observations necessary to train an accurate deep neural network classifier. However, a number of related sources, each with “moderate” data, may already be available, and these can be combined to construct a large corpus. Naively using the combined source data is often an ineffective strategy; instead, what is needed is unsupervised multiple-domain adaptation. Given labeled data from several source domains (each representing, e.g., one patient in a medical trial, or reviews of one type of product), and unlabeled data from target domains (new patients, or new product categories), we wish to train a classifier that makes accurate predictions about the target domain data at test time. Recent approaches to multiple-domain adaptation involve learning a mapping from each domain into a common feature space, in which observations from the target and source domains have similar distributions [14, 45, 39, 30]. At test time, a target-domain observation is first mapped into this shared feature space, then classified. However, few of the existing works can model the relationships among different domains, which we note are important for several reasons. First, even though data in different domains share labels, their causes and symptoms may differ. Patients with the same condition may have different underlying causes and be diagnosed while sharing only a subset of symptoms. Extracting these relationships between patients is helpful in practice because it limits the model to only relevant information. Second, as mentioned above, a training corpus may be constructed from only a small number of sources within a larger population. For example, we might collect data from many patients with “small” data, and domain adaptation is used to generalize to new patients [3].
Therefore, extracting these relationships is of practical importance. In addition to the practical argument, [32] gives a theoretical proof that adding irrelevant source domains harms performance bounds on multiple-domain adaptation. It is therefore necessary to automatically choose a weighting over source domains so that only relevant domains are utilized. Only a few works address such a domain weighting strategy [45]. In this manuscript, we extend the proof techniques of [4, 32] to show that a multiple-domain weighting strategy can have a tighter generalization bound than traditional multiple-domain approaches. Notably, many recently proposed transfer learning strategies are based on minimizing the H-divergence between domains in feature space, which was shown to bound the generalization error in domain adaptation [4]. Compared to the standard L1-divergence, the H-divergence limits the hypothesis to a given class, which in theory can be better estimated from finite samples. The target error bound using the H-divergence has the desirable property that it can be estimated by learning a classifier between the source and target domains with finite VC dimension, motivating the Domain Adversarial Neural Network (DANN) [14]. However, neural networks usually have large VC dimension, making the H-divergence bound loose in practice. In this work, we propose to use a ‘Wasserstein-like’ metric to define domain similarity in the proofs. The ‘Wasserstein-like’ distance in our work extends the binary output in the H-divergence to a real-valued probability output. Our main contribution is our novel approach to multiple-domain adaptation. A key idea from prior work is to match every source domain’s feature-space distribution to that of the target domain [37, 29]. In contrast, we match the distributions (i) between the sources and the target and (ii) within the source domains. It is only necessary and prudent to match one domain to a relevant subset of the others. This makes sense particularly in medical contexts, as nearly all diagnoses address multi-factorial diseases. The Wasserstein distance is chosen to facilitate the mathematical and theoretical operations of pairwise matching in multiple domains. The underlying idea is also closely related to optimal transport for domain adaptation [7, 8], but addresses multiple-domain matching. The proposed method, MDMN, is visualized in Figure 1(b) and compared with the standard source-to-target matching scheme (Figure 1(a)), showing the matching among source domains. This tweak allows already-similar domains to merge and share statistical strength, while keeping distant clusters of domains separate from one another. At test time, only the domains most relevant to the target are used [5, 32]. In essence, this induces a potentially sparse graph on all domains, which is visualized for 22 patients from one of our experiments in Figure 2. Any neural network architecture can be modified to use MDMN, which can be considered a stand-alone domain-matching module. 2 Method The Multiple Domain Matching Network (MDMN) is based upon the intuition that, in the extracted feature space, inherently similar domains should have similar or identical distributions. By sharing strength within the source domains, MDMN can better deal with overfitting within each domain, a common problem in scientific applications. Meanwhile, the relationships between domains can also be learned, which is of interest in addition to classification performance.
In the following, suppose we are given N observations $\{(x_i, y_i, s_i)\}_{i=1}^N$ from S domains, where $y_i$ is the desired label for $x_i$ and $s_i$ is the domain. (In the target domain, the label y is not provided and will instead be predicted.) For brevity, we assume the source domains are $1, 2, \cdots, S-1$ and the S-th domain is the single target domain. In fact, our approach works analogously for any number of unlabeled target domains. The whole framework, shown in Figure 3, is composed of a feature extractor (or encoder), a domain adapter (Sec. 2.1), and a label classifier (Sec. 2.2). In this work, we instantiate all three as neural networks. The encoder E maps data points x to feature vectors E(x). These features are then used by the label classifier to make predictions for the supervised task. They are also used by the domain adapter, which encourages the extracted features E(x) to be similar across nearby domains. 2.1 Domain Adaptation with Relationship Extraction This section details the structure of the domain adapter. In order to adapt one domain to the others, one approach is to consider a penalty proportional to the distance between each distribution and a weighted mean of the rest. Specifically, let $\mathcal{P}_s$ be the distribution over data points x in domain $\mathcal{D}_s$, and $\mathcal{P}_{/s}^{w_s} = \sum_{s'=1}^{S} w_{ss'} \mathcal{P}_{s'}$ the distribution of data from all other domains $\mathcal{D}_{/s}^{w_s}$. Note that the weight $w_s = [w_{s1}, \cdots, w_{sS}]$ is domain specific and $w_s \in \mathbb{R}^S$, where $w_s$ lies on the simplex with $\|w_s\|_1 = 1$, $w_{ss'} \ge 0$ for $s' = 1, \ldots, S$, and $w_{ss} = 0$; these weights will be learned within the framework. In the following, we will use $\mathcal{D}_s$ to stand for its distribution $\mathcal{P}_s$ in order to simplify the notation. We can then encourage all domains to be close together in the feature space by adding the following term to the loss:
$$\mathcal{L}_D(E(x;\theta_E);\theta_D) = \sum_{s=1}^{S} \lambda_s\, d(\mathcal{D}_s, \mathcal{D}_{/s}^{w_s}), \qquad (1)$$
where $d(\cdot,\cdot)$ is a distance between distributions (domains). Here it is used to measure the discrepancy between one domain and a weighted average of the rest. We assume the weight $\lambda_s$ equals $\frac{1}{S-1}$ for $s = 1, \cdots, S-1$ and $\lambda_S = 1$ to balance the penalty between the source and target domains, although this may be chosen as a tuning parameter. $\mathcal{L}_D$ is the total domain adapter loss function. For the rest of this manuscript, we have chosen to use the Wasserstein distance as $d(\cdot,\cdot)$. This approach is facilitated by the Kantorovich-Rubinstein dual formulation of the Wasserstein-1 distance [2], which is given for distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ as
$$d(\mathcal{D}_1, \mathcal{D}_2) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x \sim \mathcal{P}_1}[f(E(x))] - \mathbb{E}_{x \sim \mathcal{P}_2}[f(E(x))],$$
where $\|f\|_L \le 1$ denotes that the Lipschitz constant of the function $f(\cdot)$ is at most 1, i.e. $|f(x') - f(x)| \le \|x' - x\|_2$. Here $f(\cdot)$ is any Lipschitz-smooth nonlinear function, which can be approximated by a neural network [2]. When S is reasonably small (< 100), it is feasible to include S small neural networks $f_s(\cdot;\theta_D)$ to approximate these distances, one per domain. In our implementation, we share layers in the domain adapter to enhance computational efficiency, and the output of the domain adapter is $f(\cdot;\theta_D) = [f_1, \cdots, f_s, \cdots, f_S]$. The domain loss term is then given as
$$\sum_{s=1}^{S} \lambda_s \sup_{\|f_s\|_L \le 1} \Big( \mathbb{E}_{x \sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x \sim \mathcal{D}_{/s}^{w_s}}[f_s(E(x))] \Big). \qquad (2)$$
To make the domain penalty in (2) feasible, it is necessary to discuss how the penalty can be included in the optimization flow of neural network training.
To develop this mathematical approach, let $\pi_s$ be the proportion of the data that comes from the s-th domain; the penalty can then be rewritten as
$$\frac{1}{S}\sum_{s=1}^{S} \lambda_s \Big( \mathbb{E}_{x \sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x \sim \mathcal{D}_{/s}^{w_s}}[f_s(E(x))] \Big) = \mathbb{E}_{s \sim \mathrm{Uniform}(1,\ldots,S)}\big[ \mathbb{E}_{x \sim \mathcal{D}_s}[r_s^T f(E(x))] \big] = \mathbb{E}_{s \sim \pi}\Big[ \mathbb{E}_{x \sim \mathcal{D}_s}\big[ \tfrac{1}{S\pi_s} \times r_s^T f(E(x)) \big] \Big], \qquad (3)$$
where $f(E(x))$ is the concatenation of the $f_s(E(x))$, i.e. $f(E(x)) = [f_1(E(x)), \cdots, f_S(E(x))]^T$. The vector $r_s \in \mathbb{R}^S$ is defined entrywise as
$$(r_s)_{s'} = \begin{cases} -\lambda_s w_{ss'}, & s' \neq s \\ \lambda_s, & s' = s \end{cases}, \quad s' = 1, \cdots, S. \qquad (4)$$
The form in (3) is natural to include in an optimization loop because the expectation can be empirically approximated by a mini-batch of data. Let $\{(x_i, s_i)\}$, $i = 1, \ldots, N$, denote the observations and their associated domains; then
$$\mathbb{E}_{s \sim \pi}\Big[ \mathbb{E}_{x \sim \mathcal{D}_s}\big[ \tfrac{1}{S\pi_s} \times r_s^T f(E(x)) \big] \Big] \simeq \frac{1}{SN} \sum_{i=1}^{N} \pi_{s_i}^{-1} r_{s_i}^T f(E(x_i;\theta_E);\theta_D). \qquad (5)$$
The weight vector $w_s$ for domain $\mathcal{D}_s$ should focus only on relevant domains, and the weights on mismatched domains should be very small. As noted previously, adding uncorrelated domains hurts generalization performance [32]. In our Theorem 3.3, we show that a weighting scheme with these properties decreases the target error bound. Once the functions $f_s(\cdot;\theta_D)$ are known, we can estimate $w_s$ by applying a softmax transformation to the expectation gaps of $f_s$ between any two domains. Specifically, the weight $w_s$ used to match $\mathcal{D}_s$ to the other domains is calculated as
$$w_s = \mathrm{softmax}_{/s}(-\beta\, l_s), \quad \text{with} \quad l_{ss'} = \mathbb{E}_{x \sim \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x \sim \mathcal{D}_{s'}}[f_s(E(x))], \qquad (6)$$
where $l_s = [l_{s1}, \cdots, l_{ss'}, \cdots, l_{sS}]$. The subscript $/s$ means that the value $w_{ss}$ is restricted to 0 and $l_{ss}$ is excluded from the softmax. The scalar quantity $\beta$ controls how peaked $w_s$ is. Note that setting $w_s$ in (2) to 1 for the closest domain and 0 otherwise would correspond to the $\beta \to \infty$ case, and $\beta \to 0$ corresponds to an unweighted (e.g. conventional) case. It is beneficial to force the domain regularizer to match to multiple, but not necessarily all, available domains. Practically, we can either modify $\beta$ in the softmax or change the Lipschitz constant used to calculate the distance (as was done). As an example, the learned graph connectivity shown in Figure 2 is constructed by thresholding $\frac{1}{2}(w_{ss'} + w_{s's})$ to determine connectivity between nodes. 2.2 Combining the Loss Terms The proposed method uses the loss in (5) to perform the domain matching. A label classifier is also necessary, which is defined as a neural network parameterized by $\theta_Y$. The label classifier in Figure 3 is represented as $Y[E(x)]$, where the classifier Y is applied to the extracted feature vector E(x). The label predictor usually contains several fully connected layers with nonlinear activation functions. The cross-entropy loss is used for classification, i.e.
$$\mathcal{L}_Y(x, y;\theta_Y,\theta_E) = -\sum_{i=1}^{N} \sum_{c=1}^{C} y_{ic} \log Y_c[E(x_i)],$$
where $Y_c$ denotes the c-th entry of the output. The MSE loss is used for regression. With the label prediction loss $\mathcal{L}_Y$, the complete network objective is given by
$$\min_{\theta_E,\theta_Y} \max_{\theta_D} \mathcal{L}_Y(\theta_Y,\theta_E) + \rho\, \mathcal{L}_D(\theta_D,\theta_E), \qquad (7)$$
where $\theta_E$ denotes the parameters of the feature extractor/encoder, $\theta_D$ the parameters of the domain adapter, and $\theta_Y$ those of the label classifier. The pseudocode for training is given in Algorithm 1. Algorithm 1: Multiple Source Domain Adaptation via WDA. Input: source samples from $\mathcal{D}_s$, $s = 1, \cdots, S-1$, and target samples from $\mathcal{D}_S$ (we assume indices $1, \cdots, S-1$ are the source domains and S is the target domain); iteration counts $k_Y$ and $k_D$ for training the label classifier and the domain discriminator. Output: classifier parameters $\theta_E, \theta_Y, \theta_D$.
for iter = 1 to max_iter do
  Sample a mini-batch of $\{x_s\}$ from $\{\mathcal{D}_s\}_{s=1}^{S-1}$ and $\{x_t\}$ from $\mathcal{D}_S$.
  for iter_Y = 1 to $k_Y$ do
    Compute $l_{ss'} = \mathbb{E}_{x \in \mathcal{D}_s}[f_s(E(x))] - \mathbb{E}_{x \in \mathcal{D}_{s'}}[f_s(E(x))]$ for all $s, s' \in [1, S]$.
    Compute the weight vectors $w_s = \mathrm{softmax}_{/s}(-\beta\, l_s)$ with $w_{ss} = 0$ for all $s \in [1, S]$, where $l_s = (l_{s1}, \cdots, l_{sS})$.
    Compute the domain loss $\mathcal{L}_D(x_s, x_t)$ and the classifier loss $\mathcal{L}_Y(x_s)$.
    Compute $\nabla_{\theta_Y} = \frac{\partial \mathcal{L}_Y}{\partial \theta_Y}$ and $\nabla_{\theta_E} = \frac{\partial \mathcal{L}_Y}{\partial \theta_E} + \rho \frac{\partial \mathcal{L}_D}{\partial \theta_E}$.
    Update $\theta_Y \leftarrow \theta_Y - \eta \nabla_{\theta_Y}$ and $\theta_E \leftarrow \theta_E - \eta \nabla_{\theta_E}$, where $\eta$ is the learning rate.
  end for
  for iter_D = 1 to $k_D$ do
    Update the weight vectors $w_s$ for all $s \in [1, S]$.
    Compute $\mathcal{L}_D(x_s, x_t)$ and $\nabla_{\theta_D} = \frac{\partial \mathcal{L}_D}{\partial \theta_D}$.
    Update $\theta_D \leftarrow \theta_D + \eta \nabla_{\theta_D}$.
  end for
end for
During training, the target-domain weight $\lambda_S$ in eq. (1) is always set to one, while the source-domain weights are normalized to sum to one. This is because the ultimate goal is to perform well on the target domain. We use the gradient penalty introduced in [18] to implement the Lipschitz constraint. One concern is that the feature scale may change and impact the Wasserstein distance. A potential solution is to include batch normalization to keep the summary statistics of the extracted features constant; in practice, this is not necessary. Adam [20] is used as the optimization method, while the gradient descent step in Algorithm 1 reflects the basic strategy. 2.3 Complexity Analysis Although the proposed algorithm computes pairwise domain distances, its computational cost in practice is similar to that of the standard DANN model. For the domain loss functions, we share all the bottom layers across domains. This is similar to the setup of a multi-class domain classifier with a softmax output, except that in our model the output is real-valued. Specifically, the pairwise distances in (6) are updated in each mini-batch by averaging samples from the same domain:
$$\hat{l}_{ss'} \approx \frac{1}{n_s} \sum_{x_i \in \mathcal{D}_s} f_s(E(x_i)) - \frac{1}{n_{s'}} \sum_{x_i \in \mathcal{D}_{s'}} f_s(E(x_i)). \qquad (8)$$
Because these pairwise calculations happen late in the network, their computational cost is dwarfed by feature generation. We believe the method will easily scale to hundreds of domains based on its computational and memory scaling. We use exponential smoothing during the updates to improve the quality of the estimates, with $l^{t+1}_{ss'} = 0.9\, l^{t}_{ss'} + 0.1\, \hat{l}_{ss'}$, where $\hat{l}_{ss'}$ is the value from the current iteration's mini-batch. The softmax is then applied to the smoothed values to obtain the weights $w_{ss'}$. This procedure is used to update $w_s$, so these quantities are not included in the backpropagation. The domain weights and network parameters are updated iteratively, as shown in Algorithm 1. 3 Theoretical Results In this section, we present the theorems and derivations used to bound the target error of the method given in Section 2. Specifically, the target error is bounded by the source error and the source-target distance, plus additional terms that are constant for a given data distribution and hypothesis class. The theory builds on prior theories of source-to-target adaptation; the adaptation within source domains can be developed in the same way. Additional details and derivations are available in Supplemental Section A. Let $\mathcal{D}_s$ for $s = 1, \cdots, S$ and $\mathcal{D}_T$ represent the source and target domains, respectively. Note that there is a notation change for the target domain, which was denoted as the S-th domain in the previous section; here it is easier to separate the target domain out. Suppose that there are probabilistic true labeling functions $g_s, g_T : \mathcal{X} \to [0, 1]$ and a probabilistic hypothesis $f : \mathcal{X} \to [0, 1]$, which in our case is a neural network.
The output value of the labeling function gives the probability that the sample's label is 0 or 1. $g_s$ and $g_T$ are assumed Lipschitz smooth with parameters $\gamma_s$ and $\gamma_T$, respectively. This differs from the previous derivation [14], which assumes that the hypothesis and labeling function are deterministic ({0, 1}). In the following, the notation of the encoder E(·) is dropped for simplicity; thus f(x) is actually $f(E(x;\theta_E);\theta_D)$. Since we first focus only on the adaptation from source to target, the output of f(·) in this section is a scalar (the last element of f(·)). The same holds for the notation $w_s$, which here is the domain similarity between $\mathcal{D}_s$ and the target. Definition 3.1 (Probabilistic Classifier Discrepancy). The probabilistic classifier discrepancy for domain $\mathcal{D}_s$ is defined as
$$\epsilon_s(f, g) = \mathbb{E}_{x \sim \mathcal{D}_s}[|f(x) - g(x)|]. \qquad (9)$$
Note that if the label hypothesis is limited to {0, 1}, this is the classification error. In order to construct our main theorem, we use the notation $\|f\|_L \le \gamma$ to denote a $\gamma$-smooth function; mathematical details are given in Definition A.6 in the appendix. Next we define a weighted Wasserstein-like quantity between the sources and the target. Definition 3.2 (Weighted Wasserstein-like quantity). Given S multi-source probability distributions $\mathcal{P}_s$, $s = 1, \cdots, S$, and $\mathcal{P}_T$ for the target domain, the difference between the weighted source domains $\{\mathcal{D}_s\}_{s=1}^{S}$ and the target domain $\mathcal{D}_T$ is described as
$$\alpha_\gamma\Big(\mathcal{D}_T, \sum_s w_s \mathcal{D}_s\Big) = \max_{f:\mathcal{X}\to[0,1],\,\|f\|_L \le \gamma} \mathbb{E}_{x \sim \mathcal{D}_T}[f(x)] - \mathbb{E}_{x \sim \sum_s w_s \mathcal{D}_s}[f(x)]. \qquad (10)$$
Note that if the restriction of the function's range to [0, 1] is removed, this quantity is the Kantorovich-Rubinstein dual form of the Wasserstein-1 distance. As $\gamma \to \infty$, it becomes the commonly used L1-divergence or variation divergence [4]. Thus, we could derive this theorem with the H-divergence exactly, but we prefer to use the smoothness constraint to match the Wasserstein distance used in the method. We also define $f^*$ as an optimal hypothesis that achieves the minimum discrepancy $\epsilon^*$, which is given in Appendix A.3. Now we come to the main theorem of this work. Theorem 3.3 (Bound on weighted multi-source discrepancy). For a hypothesis $f : \mathcal{X} \to [0, 1]$,
$$\epsilon_T(f, g_T) \le \sum_{s=1}^{S} w_s\, \epsilon_s(f, g_s) + \alpha_{\gamma_T + \gamma^*}\Big(\sum_{s=1}^{S} w_s \mathcal{D}_s, \mathcal{D}_T\Big) + \epsilon^*. \qquad (11)$$
The quantity $\epsilon^*$, given in (27) in the appendix, addresses the fundamental mismatch between the true labeling functions, which cannot be controlled by domain adaptation. Note that a weighted sum of Lipschitz continuous functions is also Lipschitz continuous; $\gamma^*$ is the Lipschitz constant of the weighted domain combination, $\gamma^* = \sum_{s=1}^{S} w_s \gamma_s$, where $f_s(\cdot)$ of domain $\mathcal{D}_s$ has Lipschitz constant $\gamma_s$. We note that Theorem 3.3 depends on the weighted sum over the source domains, implying that increasing the weight on an irrelevant source domain may hurt generalization performance, which matches the existing literature. Second, a complex model with high learning capacity will reduce the source error $\epsilon_s(f, g_s)$, but the uncertainty introduced by the model will increase the domain discrepancy measurement $\alpha_{\gamma_T + \gamma^*}(\{\mathcal{D}_s\}_{s=1}^{S}, \mathcal{D}_T)$. Compare this to the multi-source domain adversarial network's (MDAN's) [45] bound, $\epsilon_T(f, g_T) \le \max_s \epsilon_s(f, g_s) + \max_s d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_s, \mathcal{D}_T) + \epsilon^*$, where the definition of $d_{\mathcal{H}\Delta\mathcal{H}}$ is given in Appendix Section A.2. Theorem 3.3 reveals that weighting yields a tighter bound, because an irrelevant domain with little weight will not seriously hurt the generalization bound, whereas prior approaches take the max over the least relevant domain. The inner-domain matching also helps prevent spurious relationships between irrelevant domains and the target.
Therefore, MDMN can pick out more relevant source domains than the alternative methods evaluated. 4 Related Work Domain adaptation has a long history of transferring source distribution information to the target distribution, or vice versa, and has been approached in a variety of manners. Kernel Mean Matching (KMM) is widely used under the assumption that target data can be represented by a weighted combination of samples in the source domain [37, 19, 12, 29, 40]. Clustering [25] and late fusion [1] approaches have also been evaluated. Distribution matching has been explored with the Maximum Mean Discrepancy [29] and optimal transport [8, 7], which is similar to the motivation behind our domain penalization. With the increasing use of neural networks, weight sharing and transfer have emerged as effective strategies for domain adaptation [15]. With the development of Generative Adversarial Networks (GANs) [17], adversarial domain adaptation has become popular. The Domain Adversarial Neural Network (DANN) [14] performs feature adaptation rather than simple network weight sharing. Since its publication, the DANN approach has been generalized [39, 43] and extended to multiple domains [45]. In the multiple-domain case, a weighted combination of source domains is used for adaptation. [22] is based on the DANN framework, but uses distributional summary statistics in the adversary. Several other methods use source or target sample generation with GANs for single-source domain adaptation [35, 27, 26, 33], but extensions to multi-source domains are not straightforward. [3] provides a multi-stage multi-source domain adaptation. There has also been theoretical analysis of error bounds for multi-source domain adaptation. [9] analyzes the theory of distributed weighted combining of multiple source domains. [32] gives a bound on target loss when using only the k-nearest source domains, showing that adding training data from more uncorrelated source domains hurts the generalization bound. The bound given by [4] is also on the target risk; it introduces the H-divergence as a measurement of the distance between source and target domains. [5] further analyzes whether source sample quantity can compensate for quality under different methods and different target error measurements. Domain adaptation can be used in a wide variety of applications. [16, 10] use it for natural language processing tasks. [12] performs video concept detection using multi-source domain adaptation with auxiliary classifiers. [15, 14, 1, 3, 39] focus on image domain transfer learning. Multi-source domain adaptation in previous works is usually limited to fewer than five source domains. Some scientific applications face a more challenging situation, adapting from a significantly higher number of source domains [44]. For neural signals, different methods have been employed to transfer among subjects based on hand-crafted EEG features [38, 24]; however, these models need to be trained in several steps, making them less robust. 5 Experiment We tested MDMN by applying it to three classification problems: image recognition, natural-language sentiment detection, and multi-channel time series analysis. The sentiment classification task is given in the Appendix due to limited space. 5.1 Results on Image Datasets We first test the performance of the proposed MDMN model on the MNIST, MNISTM, SVHN and USPS datasets. Visualizations of these datasets are given in Appendix Section C.1.
In each test, one dataset is left out as the target domain while the remaining three are treated as source domains. The feature extractor E consists of two convolutional layers followed by two fully connected layers. Both the label predictor and the domain adapter are two-layer MLPs, with ReLU nonlinearities between layers. The baseline method is the concatenation of the feature extractor and label predictor, i.e., a standard CNN, but it has no access to any target domain data during training. While the TCA [34] and SA [13] methods can process raw images, their results are significantly stronger following a feature extraction step, so their results are obtained in two independent steps. First, a convolutional neural network with the same structure as in our proposed approach is trained on the source domains as a baseline; then features are extracted for all domains and used as inputs to TCA and SA. Another issue is the computational complexity of TCA: the algorithm computes a matrix inverse during inference, which is of complexity O(N³), so the amount of data was limited for this algorithm. For the adversarial-based algorithms [39, 14, 45] and the MDMN model, the domain classifier is uniform across methods: a two-layer MLP with ReLU nonlinearities and a softmax top layer.

Classification accuracies are compared in Table 1. The top row shows the baseline result on the target domain with the classifier trained on the three other datasets. The proposed MDMN model outperforms the other baselines on all datasets. Note that some domain-adaptation algorithms actually lower the accuracy, revealing that domain-specific features are being transferred. Another problem encountered is the mismatch between the source and target domains. For instance, when the target domain is the MNIST-M dataset, one expects large weight to be given to MNIST samples during training. However, algorithms like TCA, SA, and DANN weight all source domain datasets equally, making their results worse than MDMN's.

Table 1: Classification accuracy (%) when each dataset is held out as the target domain.

              MNIST   MNIST-M   USPS   SVHN
Baseline       94.6     60.8    89.4   43.7
TCA [34]       78.4     45.2    75.4   39.7
SA [13]        90.8     59.9    86.3   40.2
DAN [28]       97.1     67.0    90.4   51.9
ADDA [39]      89.0     80.3    85.2   43.5
DANN [14]      97.9     68.8    89.3   50.1
MDANs [45]     97.2     68.5    90.1   50.5
MDMN           98.0     83.8    94.5   53.1

Feature-space visualizations show that domain adaptation is happening because the extracted features become more similar between domains. MDMN has the clearest digit-mixing effect: the model finds digit-label features instead of domain-specific features. A larger figure of the same result is given in Appendix C.1 for enhanced clarity.
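The following PyTorch sketch mirrors the architecture described above for the image experiments. The channel counts, hidden widths, and input size are our assumptions (the paper does not specify them here), so treat this as an illustration rather than the authors' exact configuration.

import torch.nn as nn

class FeatureExtractor(nn.Module):
    # Two convolutional layers plus two fully connected layers, as described.
    def __init__(self, in_ch=3, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),   # assumes 28x28 inputs
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

def two_layer_mlp(in_dim, hidden, out_dim):
    # Used for both the label predictor and the domain classifier.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

E = FeatureExtractor()
label_predictor = two_layer_mlp(128, 64, 10)    # 10 digit classes
domain_classifier = two_layer_mlp(128, 64, 4)   # 4 domains; softmax applied in the loss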
5.2 Results on EEG Time Series

Two datasets are used to evaluate performance on electroencephalography (EEG) data: the SEED dataset and an Autism Spectrum Disorder (ASD) dataset. The SEED dataset [46] focuses on analyzing emotion using EEG signals. This dataset has 15 subjects. The EEG signal is recorded while each subject watches 15 movie clips, three times each on three different days. Each video clip is labeled with a negative/neutral/positive emotion. The sampling rate is 1000 Hz and a 62-electrode layout is used. In our experiment, we downsample the EEG signal to 200 Hz. The test scheme is leave-one-out cross-validation: in each fold, one subject is held out for testing and the remaining 14 subjects are used for training and validation.

The Autism Spectrum Disorder (ASD) dataset [11] aims at discovering whether there are significant changes in neural activity in an open-label clinical trial evaluating the efficacy of a single infusion of autologous cord blood for the treatment of ASD [11]. The study involves 22 children aged 3 to 7 years undergoing treatment for ASD, with EEG measurements at baseline (T1), 6 months post treatment (T2), and 12 months post treatment (T3). The signal was recorded while a child watched three one-minute videos designed to measure responses to dynamic social and nonsocial stimuli. The data has 121 signal electrodes. The classification task is to predict the treatment stage (T1, T2, or T3) to test the effectiveness of the treatment and to analyze which features change in response to treatment. By examining the features, we can track how neural changes correlate with treatment stage. We also adopt the leave-one-out cross-validation scheme for this dataset: one subject is left out for testing, and the remaining 21 subjects are split into training and validation. Leaving complete subjects out better estimates generalization to a population in these types of neural tasks [42].

Classification accuracies for the different methods are compared in Table 2. In this setting, we choose SyncNet [23] as our baseline model. SyncNet is a neural network with structured filters targeted at extracting neuroscience-related features. The simplest SyncNet configuration is adopted, containing only one layer of convolutional filters. As in [23], we set the number of filters to 10 for both datasets. For the TCA, SA, and ITL methods, the baseline model was trained as before, without a domain adapter, on the source domain data; this model was then used to extract features for the target domains.

MDMN outperforms the other competitors on both EEG datasets. A subject-by-subject plot is shown in Figure 5. Because performance across subjects is highly variable, we visualize only performance relative to the baseline; absolute performance is shown in Figure 8 in the appendix. Because the set of source domains is large but each source domain is highly variable, finding relevant domains is of increased importance on both EEG datasets. For the ASD dataset, DANN and MDANs do not match the performance of MDMN, mainly because they cannot correctly pick out the most related subjects from the source domains. This is also true for TCA, SA, and ITL. Our proposed MDMN algorithm overcomes this problem by computing domain similarity in feature space while performing the feature mapping; a domain relationship graph by subject is given in Figure 2. Each subject is related to all the others with different weights. The missing edges, such as the edges to node 's10', are those with weight less than 0.09. Our algorithm finds these relationships automatically, and domain adaptation proceeds with the calculated weights instead of treating all domains equally.
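As a concrete illustration of how a Figure 2-style graph can be produced from learned domain weights, here is a small numpy sketch (ours, with hypothetical weights) that symmetrizes a weight matrix and drops edges below the 0.09 threshold mentioned above.

import numpy as np

def relationship_edges(W, names, thresh=0.09):
    # W[i, j] is the learned similarity between subject i and subject j.
    # Symmetrize, then keep only edges at or above the threshold, mirroring
    # how weak edges (e.g., those to 's10') are dropped in Figure 2.
    W = (W + W.T) / 2.0
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if W[i, j] >= thresh:
                edges.append((names[i], names[j], float(W[i, j])))
    return edges

rng = np.random.default_rng(1)
names = [f"s{i+1}" for i in range(15)]          # e.g., the 15 SEED subjects
W = rng.dirichlet(np.ones(15), size=15)         # hypothetical row-normalized weights
print(relationship_edges(W, names)[:5])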
Our proposed adversarial training framework further applies this idea to different domain adaptation tasks and shows state-of-the-art performance.

Acknowledgements

Funding was provided by the Stylli Translational Neuroscience Award, Marcus Foundation, NICHD P50-HD093074, and NIMH 3R01MH099192-05S2.
1. What is the main contribution of the paper, and how does it differ from previous approaches to multiple-domain adaptation?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and performance on various classification tasks?
3. What are the weaknesses of the paper, especially regarding the assumption made about the relevance of source domains?
4. How does the proposed method handle the issue of varying feature scales in practice?
5. Is there any concern about the algorithm's ability to assign appropriate weights to irrelevant domains, and what happens if it fails to do so?
6. How does the method scale when applied to a larger number of source domains, and can it be extended or applied to fuzzy domains?
Review
Review Title: Extracting Relationships by Multi-Domain Matching

Summary
Assuming that a corpus is compiled from many sources belonging to different domains, of which only a strict subset is suitable for learning to predict in a target domain, this paper proposes a novel approach (called Multiple Domain Matching Network (MDMN)) that aims at learning which domains share strong statistical relationships, and which source domains are best at supporting the target domain prediction task. While many approaches to multiple-domain adaptation aim to match the feature-space distribution of *every* source domain to that of the target space, this paper suggests to not only map the distribution between sources and target, but also *within* source domains. The latter allows for identifying subsets of source domains that share a strong statistical relationship.

Strengths
- Paper provides a theoretical analysis that yields a tighter bound on the weighted multi-source discrepancy.
- Approach yields state-of-the-art performance on image, text and multi-channel time series classification tasks.

Weaknesses
- The tighter bound on multi-source discrepancy depends on the assumption that source domains that are less relevant for the target domain have lower weights. While intuitively this may seem obvious, there is no guarantee that in practice the irrelevant source domains can reliably be identified.
- No commitment that source code may get released.

Questions
- L 96: Is it intended that the sum runs over all domains but s, including the target domain S?
- L120: Why is the Wasserstein distance not affected by a varying feature scale in practice?
- There is a shift in notation in Section 3 where the target domain is not denoted by T while the source domains are denoted by s=1...S. In Section 2, the source domains were defined as s=1,...,S-1 while the single target domain was defined as S.
- Theorem 3.3 shows that weighting yields a tighter bound given that irrelevant domains are assigned small weights. However, what happens if the algorithm fails to assign small weights to irrelevant domains or, in the most adverse case, if the least relevant domains get assigned the highest weights? More generally: for which weight distributions does Theorem 3.3 provide tighter bounds?
- To what number of source domains does the provided method scale? A total of 21 domains may still be a small number if this method were to be applied to other tasks.
- What potential does MDMN have to be extended or applied to fuzzy domains, i.e., where the source data set does not induce a canonical partitioning into domains?

Comments / Editorial Notes
- L 36 is be -> is
- L 219 develop -> development
- L 287 An -> A
- L 320 [add reference]
- L 321 domains is large
- L 342 state-of-the0art -> state-of-the-art

---------------
Thanks to the authors for their detailed responses and addressing my questions. I am looking forward to the release of the code.
NIPS
Title
Label Noise SGD Provably Prefers Flat Global Minimizers

Abstract
In overparametrized models, the noise in stochastic gradient descent (SGD) implicitly regularizes the optimization trajectory and determines which local minimum SGD converges to. Motivated by empirical studies that demonstrate that training with noisy labels improves generalization, we study the implicit regularization effect of SGD with label noise. We show that SGD with label noise converges to a stationary point of a regularized loss L(θ) + λR(θ), where L(θ) is the training loss, λ is an effective regularization parameter depending on the step size, strength of the label noise, and the batch size, and R(θ) is an explicit regularizer that penalizes sharp minimizers. Our analysis uncovers an additional regularization effect of large learning rates beyond the linear scaling rule that penalizes large eigenvalues of the Hessian more than small ones. We also prove extensions to classification with general loss functions, significantly strengthening the prior work of Blanc et al. [3] to global convergence and large learning rates and of HaoChen et al. [12] to general models.

1 Introduction

One of the central questions in modern machine learning theory is the generalization capability of overparametrized models trained by stochastic gradient descent (SGD). Recent work identifies the implicit regularization effect due to the optimization algorithm as one key factor in explaining the generalization of overparameterized models [27, 11, 19, 10]. This implicit regularization is controlled by many properties of the optimization algorithm, including search direction [11], learning rate [20], batch size [26], momentum [21], and dropout [22]. The parameter-dependent noise distribution in SGD is a crucial source of regularization [16, 18]. Blanc et al. [3] initiated the study of the regularization effect of label noise SGD with square loss¹ by characterizing the local stability of global minimizers of the training loss. By identifying a data-dependent regularizer R(θ), Blanc et al. [3] proved that label noise SGD locally diverges from the global minimizer θ* if and only if θ* is not a first-order stationary point of min_θ R(θ) subject to L(θ) = 0. The analysis is only able to demonstrate that with sufficiently small step size η, label noise SGD initialized at θ* locally diverges by a distance of η^0.4 and correspondingly decreases the regularizer by η^0.4. This is among the first results establishing that the noise distribution alters the local stability of stochastic gradient descent. However, the parameter movement of η^0.4 is required to be inversely polynomially small in dimension and condition number and is thus too small to affect the predictions of the model. HaoChen et al. [12], motivated by the local nature of Blanc et al. [3], analyzed label noise SGD in the quadratically-parametrized linear regression model [29, 32, 23]. Under a well-specified sparse linear regression model with isotropic features, HaoChen et al. [12] proved that label noise SGD recovers the sparse ground truth despite overparametrization, which demonstrated a global implicit bias towards sparsity in the quadratically-parametrized linear regression model.

¹ Label noise SGD computes the stochastic gradient by first drawing a sample (x_i, y_i), perturbing y′_i = y_i + ε with ε ∼ {−σ, σ}, and computing the gradient with respect to (x_i, y′_i).

35th Conference on Neural Information Processing Systems (NeurIPS 2021).
This work seeks to identify the global implicit regularization effect of label noise SGD. Our primary result, which supports Blanc et al. [3], proves that label noise SGD converges to a stationary point of L(θ) + λR(θ), where the regularizer R(θ) penalizes sharp regions of the loss landscape. The focus of this paper is on label noise SGD due to its strong regularization effects in both real and synthetic experiments [25, 28, 31]. Furthermore, label noise is used in large-batch training as an additional regularizer [25] when the regularization from standard regularizers (e.g. mini-batch, batch-norm, and dropout) is not sufficient. Label noise SGD is also known to be less sensitive to initialization, as shown in HaoChen et al. [12]. In stark contrast, mini-batch SGD remains stuck when initialized at any poor global minimizer. Our analysis demonstrates a global regularization effect of label noise SGD by proving that it converges to a stationary point of a regularized loss L(θ) + λR(θ), even when initialized at a zero-error global minimum. The learning rate and minibatch size in SGD are known to be important sources of regularization [9]. Our main theorem highlights the importance of learning rate and batch size as the hyperparameters that control the balance between the loss and the regularizer: larger learning rates and smaller batch sizes lead to stronger regularization.

Section 2 reviews the notation and assumptions used throughout the paper. Section 2.4 formally states the main result and Section 3 sketches the proof. Section 4 presents experimental results which support our theory. Finally, Section 6 discusses the implications of this work.

2 Problem Setup and Main Result

Section 2.1 describes our notation and the SGD with label noise algorithm. Section 2.2 introduces the explicit formula for the regularizer R(θ). Sections 2.3 and 2.4 formally state our main result.

2.1 Notation

We focus on the regression setting (see Appendix E for the extension to the classification setting). Let {(x_i, y_i)}_{i∈[n]} be n datapoints with x_i ∈ D and y_i ∈ R. Let f : D × R^d → R and let f_i(θ) = f(x_i, θ) denote the value of f on the datapoint x_i. Define ℓ_i(θ) = ½(f_i(θ) − y_i)² and L(θ) = (1/n) Σ_{i=1}^n ℓ_i(θ). Then we will follow Algorithm 1, which adds fresh additive noise to the labels y_i at every step before computing the gradient:

Algorithm 1: SGD with Label Noise
Input: θ_0, step size η, noise variance σ², batch size B, steps T
for k = 0 to T − 1 do
    Sample a batch B^(k) ⊆ [n] of size B uniformly and label noise ε_i^(k) ∼ {−σ, σ} for i ∈ B^(k).
    Let ℓ̂_i^(k)(θ) = ½(f_i(θ) − y_i − ε_i^(k))² and L̂^(k) = (1/B) Σ_{i∈B^(k)} ℓ̂_i^(k).
    θ_{k+1} ← θ_k − η∇L̂^(k)(θ_k)
end

Note that σ controls the strength of the label noise and will control the strength of the implicit regularization in Theorem 1. Throughout the paper we will use ‖·‖ = ‖·‖₂. We make the following standard assumption on f:

Assumption 1 (Smoothness). We assume that each f_i is ℓ_f-Lipschitz, ∇f_i is ρ_f-Lipschitz, and ∇²f_i is κ_f-Lipschitz with respect to ‖·‖₂ for i = 1, ..., n.

We will define ℓ = ℓ_f² to be an upper bound on ‖(1/n) Σ_i ∇f_i(θ)∇f_i(θ)ᵀ‖₂, which is equal to ‖∇²L(θ)‖₂ at any global minimizer θ. Our results extend to any learning rate η ∈ (0, 2/ℓ); however, they do not extend to the limit as η → 2/ℓ. Because we still want to track the dependence on 1/η, we do not assume η is a fixed constant and instead assume some constant separation:

Assumption 2 (Learning Rate Separation). There exists a constant ν ∈ (0, 1) such that η ≤ (2 − ν)/ℓ.
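To make the pseudocode concrete, here is a minimal PyTorch sketch of Algorithm 1 (ours, not the authors' code); the model, data, and hyperparameter values are arbitrary stand-ins.

import torch

def label_noise_sgd(model, x, y, eta=0.01, sigma=0.5, B=32, T=1000):
    n = len(y)
    opt = torch.optim.SGD(model.parameters(), lr=eta)
    for _ in range(T):
        idx = torch.randint(0, n, (B,))                       # sample a batch of size B
        eps = sigma * (2 * torch.randint(0, 2, (B,)) - 1)     # eps_i ~ uniform{-sigma, +sigma}
        loss = 0.5 * ((model(x[idx]).squeeze(-1) - (y[idx] + eps)) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Toy usage: a small MLP on synthetic 1-D regression data (all assumed).
torch.manual_seed(0)
x = torch.randn(256, 1)
y = torch.sin(3 * x).squeeze(-1)
model = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
label_noise_sgd(model, x, y)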
In addition, we make the following local Kurdyka-Łojasiewicz assumption (KL assumption), which ensures that there are no regions where the loss is very flat. The KL assumption is very general and holds for some δ > 0 for any analytic function defined on a compact domain (see Lemma 17).

Assumption 3 (KL). Let θ* be any global minimizer of L. Then there exist ε_KL > 0, µ > 0, and 0 < δ ≤ 1/2 such that if L(θ) − L(θ*) ≤ ε_KL, then L(θ) − L(θ*) ≤ µ‖∇L(θ)‖^(1+δ).

We assume L(θ*) = 0 for any global minimizer θ*. Note that if L satisfies Assumption 3 for some δ then it also satisfies Assumption 3 for any δ′ < δ. Assumption 3 with δ = 1 is equivalent to the much stronger Polyak-Łojasiewicz condition, which is equivalent to local strong convexity. We will use O, Θ, Ω to hide any polynomial dependence on µ, ℓ_f, ρ_f, κ_f, ν, 1/σ, n, d, and Õ to hide additional polynomial dependence on log 1/η, log B.

2.2 The Implicit Regularizer R(θ)

For L, σ², B, η as defined above, we define the implicit regularizer R(θ), the effective regularization parameter λ, and the regularized loss L̃(θ):

R(θ) = −(1/2η) tr log(I − (η/2)∇²L(θ)),   λ = ησ²/B,   L̃(θ) = L(θ) + λR(θ).   (1)

Here log refers to the matrix logarithm. To better understand the regularizer R(θ), let λ_1, ..., λ_d be the eigenvalues of ∇²L(θ) and let R(λ_i) = −(1/2η) log(1 − ηλ_i/2). Then

R(θ) = Σ_{i=1}^d R(λ_i) = Σ_{i=1}^d (λ_i/4 + ηλ_i²/16 + η²λ_i³/48 + ...).

In the limit as η → 0, R(θ) → (1/4) tr ∇²L(θ), which matches the regularizer in Blanc et al. [3] for infinitesimal learning rate near a global minimizer. However, in addition to the linear scaling rule, which is implicit in our definition of λ, our analysis uncovers an additional regularization effect of large learning rates that penalizes larger eigenvalues more than smaller ones (see Figure 1 and Section 6.1). The goal of this paper is to show that Algorithm 1 converges to a stationary point of the regularized loss L̃ = L + λR. In particular, we will show convergence to an (ε, γ)-stationary point, which is defined in the next section.

2.3 (ε, γ)-Stationary Points

We begin with the standard definition of an approximate stationary point:

Definition 1 (ε-stationary point). θ is an ε-stationary point of f if ‖∇f(θ)‖ ≤ ε.

In stochastic gradient descent it is often necessary to allow λ = ησ²/B to scale with ε to reach an ε-stationary point [8, 15] (e.g., λ may need to be less than ε²). However, for λ = O(ε), any local minimizer θ* is an ε-stationary point of L̃ = L + λR. Therefore, reaching an ε-stationary point of L̃ would be equivalent to finding a local minimizer and would not be evidence for implicit regularization. To address this scaling issue, we consider the rescaled regularized loss:

(1/λ)L̃ = (1/λ)L + R.

Reaching an ε-stationary point of (1/λ)L̃ requires non-trivially taking the regularizer R into account. However, it is not possible for Algorithm 1 to reach an ε-stationary point of (1/λ)L̃ even in the ideal setting when θ is initialized near a global minimizer θ* of L̃. The label noise will cause fluctuations of order √λ around θ* (see Section 3), so ‖∇L‖ will remain around √λ. This causes (1/λ)∇L to become unbounded for λ (and therefore ε) sufficiently small, and thus Algorithm 1 cannot converge to an ε-stationary point. We therefore prove convergence to an (ε, γ)-stationary point:

Definition 2 ((ε, γ)-stationary point). θ is an (ε, γ)-stationary point of f if there exists some θ* such that ‖∇f(θ*)‖ ≤ ε and ‖θ − θ*‖ ≤ γ.

Intuitively, Algorithm 1 converges to an (ε, γ)-stationary point when it converges to a neighborhood of some ε-stationary point θ*.
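To make Equation (1) concrete, here is a small numpy sketch (ours) that computes R(θ) from the eigenvalues of a toy Hessian and checks the small-η limit R(θ) → tr∇²L(θ)/4 noted above; the matrix here is an arbitrary PSD stand-in for ∇²L(θ).

import numpy as np

def implicit_regularizer(H, eta):
    # R(theta) = -(1/(2*eta)) * sum_i log(1 - eta*lam_i/2); requires eta*lam_max < 2.
    lam = np.linalg.eigvalsh(H)
    assert eta * lam.max() < 2, "R(theta) blows up as eta -> 2/lam_max"
    return -np.sum(np.log(1 - eta * lam / 2)) / (2 * eta)

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
H = A @ A.T                                  # stand-in for the Hessian at theta
for eta in (1e-4, 1e-2, 0.9 * 2 / np.linalg.eigvalsh(H).max()):
    print(eta, implicit_regularizer(H, eta), np.trace(H) / 4)
# For small eta the first two numbers agree; near eta = 2/lam_max, R is much larger.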
2.4 Main Result

Having defined an (ε, γ)-stationary point, we can now state our main result:

Theorem 1. Assume that f satisfies Assumption 1, η satisfies Assumption 2, and L satisfies Assumption 3, i.e. L(θ) ≤ µ‖∇L(θ)‖^(1+δ) for L(θ) ≤ ε_KL. Let η, B be chosen such that λ := ησ²/B = Θ̃(min(ε^(2/δ), γ²)), and let T = Θ̃(η⁻¹λ^(−1−δ)) = poly(ε⁻¹, γ⁻¹). Assume that θ is initialized within O(√(λ^(1+δ))) of some θ* satisfying L(θ*) = O(λ^(1+δ)). Then for any ζ ∈ (0, 1), with probability at least 1 − ζ, if {θ_k} follows Algorithm 1 with parameters η, σ, T, there exists k < T such that θ_k is an (ε, γ)-stationary point of (1/λ)L̃.

Theorem 1 guarantees that Algorithm 1 will hit an (ε, γ)-stationary point of (1/λ)L̃ within a number of steps polynomial in ε⁻¹, γ⁻¹. In particular, when δ = 1/2, Theorem 1 guarantees convergence within Õ(ε⁻⁶ + γ⁻³) steps. The condition that θ_0 is close to an approximate global minimizer θ* is not a strong assumption, as recent methods have shown that overparameterized models can easily achieve zero training loss in the kernel regime (see Appendix C). However, in practice these minimizers of the training loss generalize poorly [1]. Theorem 1 shows that Algorithm 1 can then converge to a stationary point of the regularized loss, which has better generalization guarantees (see Section 6.2). Theorem 1 also generalizes the local analysis in Blanc et al. [3] to a global result with weaker assumptions on the learning rate η. For a full comparison with Blanc et al. [3], see Section 3.1.

3 Proof Sketch

The proof of convergence to an (ε, γ)-stationary point of (1/λ)L̃ has two components. In Section 3.1, we pick a reference point θ* and analyze the behavior of Algorithm 1 in a neighborhood of θ*. In Section 3.2, we repeat this local analysis with a sequence of reference points {θ*_m}.

3.1 Local Coupling

Let Φ_k(·) denote k steps of gradient descent on the regularized loss L̃, i.e. Φ_0(θ) = θ and

Φ_{k+1}(θ) = Φ_k(θ) − η∇L̃(Φ_k(θ)),   (2)

where L̃(θ) = L(θ) + λR(θ) is the regularized loss defined in Equation (1). Lemma 1 states that if θ is initialized at an approximate global minimizer θ* and follows Algorithm 1, there is a small mean-zero random process ξ such that θ_k ≈ Φ_k(θ*) + ξ_k:

Lemma 1. Let ι = c log(d/λζ), 𝒳 = √(2λndι)/ν, ℒ = cλ^(1+δ), 𝒟 = c√(ℒι), ℳ = 𝒟/ν, 𝒯 = 1/(c²η𝒳ι), where c is a sufficiently large constant. Assume f satisfies Assumption 1 and η satisfies Assumption 2. Let θ follow Algorithm 1 starting at θ* and assume that L(θ*) ≤ ℒ for some 0 < δ ≤ 1/2. Then there exists a random process {ξ_k} such that for any τ ≤ 𝒯 satisfying max_{k≤τ} ‖Φ_k(θ*) − θ*‖ ≤ 8ℳ, with probability at least 1 − 10dτe^(−ι) we have simultaneously for all k ≤ τ:

‖θ_k − ξ_k − Φ_k(θ*)‖ ≤ 𝒟,   E[ξ_k] = 0,   and   ‖ξ_k‖ ≤ 𝒳.

Note that because ℳ ≥ 𝒟, the error term 𝒟 is at least 8 times smaller than the movement in the direction of the regularized trajectory Φ_τ(θ*), which will allow us to prove convergence to an (ε, γ)-stationary point of (1/λ)L̃ in Section 3.2.
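The regularized trajectory Φ_k of Equation (2) can be simulated directly on toy problems, since ∇L̃ = ∇L + λ∇R only needs derivatives of the Hessian. The PyTorch sketch below (ours; a one-parameter model chosen so that the Hessian depends on θ) runs gradient descent on L̃ using double backprop.

import torch

eta, lam = 0.01, 0.1
x = torch.linspace(-1, 1, 50)
y = 0.5 * x                                            # toy targets (assumed)
theta = torch.tensor(1.0, requires_grad=True)

for k in range(200):
    L = 0.5 * ((theta ** 2 * x - y) ** 2).mean()       # f_i(theta) = theta^2 * x_i
    g = torch.autograd.grad(L, theta, create_graph=True)[0]
    H = torch.autograd.grad(g, theta, create_graph=True)[0]   # 1-D Hessian L''(theta)
    R = -torch.log(1 - eta * H / 2) / (2 * eta)        # Equation (1) in one dimension
    Ltilde = L + lam * R
    gt = torch.autograd.grad(Ltilde, theta)[0]         # requires third derivatives of L
    with torch.no_grad():
        theta -= eta * gt

print(float(theta))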
Toward simplifying the update in Algorithm 1, we define L^(k) to be the true loss without label noise on batch B^(k). The label-noise update L̂^(k)(θ_k) is an unbiased perturbation of the mini-batch update:

∇L̂^(k)(θ_k) = ∇L^(k)(θ_k) − (1/B) Σ_{i∈B^(k)} ε_i^(k) ∇f_i(θ_k).

We decompose the update rule into three parts:

θ_{k+1} = θ_k − η∇L(θ_k) [gradient descent] − η[∇L^(k)(θ_k) − ∇L(θ_k)] [minibatch noise] + (η/B) Σ_{i∈B^(k)} ε_i^(k) ∇f_i(θ_k) [label noise].   (3)

Let m_k = −η[∇L^(k)(θ_k) − ∇L(θ_k)] denote the minibatch noise. Throughout the proof we will show that the minibatch noise is dominated by the label noise. We will also decompose the label noise into two terms. The first, ε*_k, represents the label noise if the gradient were evaluated at θ*, whose distribution does not vary with k. The other term, z_k, represents the change in the noise due to evaluating the gradient at θ_k rather than θ*. More precisely,

ε*_k = (η/B) Σ_{i∈B^(k)} ε_i^(k) ∇f_i(θ*)   and   z_k = (η/B) Σ_{i∈B^(k)} ε_i^(k) [∇f_i(θ_k) − ∇f_i(θ*)].

We define G(θ) = (1/n) Σ_i ∇f_i(θ)∇f_i(θ)ᵀ to be the covariance of the model gradients. Note that ε*_k has covariance ηλG(θ*). To simplify notation in the Taylor expansions, we will use the following shorthand for quantities evaluated at θ*: G = G(θ*), ∇²L = ∇²L(θ*), ∇³L = ∇³L(θ*), ∇R = ∇R(θ*). First we need the following standard decomposition of the Hessian:

Proposition 1. For any θ ∈ R^d we can decompose ∇²L(θ) = G(θ) + E(θ), where E(θ) = (1/n) Σ_{i=1}^n (f_i(θ) − y_i)∇²f_i(θ) satisfies ‖E(θ)‖ ≤ √(2ρ_f L(θ)), with ρ_f defined in Assumption 1.

The matrix G in Proposition 1 is known as the Gauss-Newton term of the Hessian. We can now Taylor expand Algorithm 1 and Equation (2) to first order around θ*:

Φ_{k+1}(θ*) ≈ Φ_k(θ*) − η[∇L + ∇²L(Φ_k(θ*) − θ*)],
θ_{k+1} ≈ θ_k − η[∇L + ∇²L(θ_k − θ*)] + ε*_k.

We define v_k = θ_k − Φ_k(θ*) to be the deviation from the regularized trajectory. Subtracting these two equations gives

v_{k+1} ≈ (I − η∇²L)v_k + ε*_k ≈ (I − ηG)v_k + ε*_k,

where we used Proposition 1 to replace ∇²L with G. Temporarily ignoring the higher-order terms, we define the random process ξ by

ξ_{k+1} = (I − ηG)ξ_k + ε*_k   and   ξ_0 = 0.   (4)

The process ξ is referred to as an Ornstein-Uhlenbeck process, and it encodes the movement of θ to first order around θ*. We defer the proofs of the following properties of ξ to Appendix B:

Proposition 2. For any k ≥ 0, with probability at least 1 − 2de^(−ι), ‖ξ_k‖ ≤ 𝒳. In addition, as k → ∞, E[ξ_k ξ_kᵀ] → λΠ_G(2 − ηG)⁻¹, where Π_G is the projection onto the span of G.

We can now analyze the effect of ξ_k on the second-order Taylor expansion. Let r_k = θ_k − Φ_k(θ*) − ξ_k be the deviation of θ from the regularized trajectory after removing the Ornstein-Uhlenbeck process ξ. Lemma 1 is equivalent to Pr[‖r_τ‖ ≥ 𝒟] ≤ 10τde^(−ι). We will prove by induction that ‖r_k‖ ≤ 𝒟 for all k ≤ t with probability at least 1 − 10tde^(−ι) for all t ≤ τ. The base case follows from r_0 = 0, so assume the result for some t ≥ 0. The remainder of this section is conditioned on the event ‖r_k‖ ≤ 𝒟 for all k ≤ t. O(·) notation will only be used to hide absolute constants that do not change with t and will additionally not hide dependence on the absolute constant c. The following proposition fills in the missing second-order terms in the Taylor expansion of r_k around θ*:

Proposition 3. With probability at least 1 − 2de^(−ι),

r_{k+1} = (I − ηG)r_k − η[½∇³L(ξ_k, ξ_k) − λ∇R] + m_k + z_k + Õ(c^(5/2)ηλ^(1+δ)).

The intuition for the implicit regularizer R(θ) is that, by Propositions 1 and 2, E[ξ_k ξ_kᵀ] → Π_G λ(2 − ηG)⁻¹ ≈ λ(2 − η∇²L)⁻¹. Therefore, when averaged over long timescales,

½E[∇³L(ξ_k, ξ_k)] ≈ (λ/2)∇³L[(2 − η∇²L)⁻¹] = λ∇[−(1/2η) tr log(I − (η/2)∇²L(θ))]|_{θ=θ*} = λ∇R.

The second equality follows from the more general fact that for any matrix function A and any scalar function h that acts independently on each eigenvalue, ∇(tr h(A(θ))) = (∇A(θ))(h′(A(θ))), which follows from the chain rule. The above equality is the special case A(θ) = ∇²L(θ) and h(x) = −(1/η) log(1 − ηx/2), which satisfies h′(x) = 1/(2 − ηx).
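Proposition 2's stationary covariance is easy to check numerically. The numpy sketch below (ours) simulates the recursion (4) with Gaussian noise of the same covariance ηλG as ε*_k, and compares the long-run second moment to λΠ_G(2 − ηG)⁻¹; G is taken full rank here so that Π_G = I.

import numpy as np

rng = np.random.default_rng(0)
d, eta, lam = 4, 0.1, 0.01
A = rng.normal(size=(d, d))
G = A @ A.T / d                          # full-rank stand-in for G(theta*)
C = eta * lam * G                        # covariance of the step noise eps*_k
Lc = np.linalg.cholesky(C)

xi = np.zeros(d)
acc = np.zeros((d, d))
steps = 200000
for k in range(steps):
    xi = (np.eye(d) - eta * G) @ xi + Lc @ rng.normal(size=d)
    acc += np.outer(xi, xi)

empirical = acc / steps
predicted = lam * np.linalg.inv(2 * np.eye(d) - eta * G)   # lambda * (2 - eta*G)^{-1}
print(np.max(np.abs(empirical - predicted)))               # small if Proposition 2 holds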
The remaining details involve concentrating the mean-zero error terms m_k, z_k, showing that E[ξ_k ξ_kᵀ] does concentrate in the directions with large eigenvalues, and showing that the directions with small eigenvalues, in which the covariance does not concentrate, do not contribute much to the error. This yields the following bound:

Proposition 4. With probability at least 1 − 10de^(−ι), ‖r_{t+1}‖ = Õ(λ^(1/2+δ/2)/√c).

The proof of Proposition 4 can be found in Appendix B. Finally, because 𝒟 = Õ(c^(5/2)λ^(1/2+δ/2)), we have ‖r_{t+1}‖ ≤ 𝒟 for sufficiently large c. This completes the induction and the proof of Lemma 1.

Comparison with Blanc et al. [3]. Like Blanc et al. [3], Lemma 1 shows that θ locally follows the trajectory of gradient descent on an implicit regularizer R(θ). However, there are a few crucial differences:

• Because we do not assume we start near a global minimizer where L = 0, we couple to a regularized loss L̃ = L + λR rather than just the regularizer R(θ). In this setting there is an additional correction term to the Hessian (Proposition 1) that requires carefully controlling the value of the loss across reference points to prove convergence to a stationary point.
• The analysis in Blanc et al. [3] requires η, τ to be chosen in terms of the condition number of ∇²L, which can quickly grow during training as ∇²L changes. This makes it impossible to directly repeat the argument. We avoid this by precisely analyzing the error incurred by small eigenvalues, allowing us to prove convergence to an (ε, γ)-stationary point of (1/λ)L̃ for fixed η, λ even if the smallest nonzero eigenvalue of ∇²L converges to 0 during training.
• Unlike Blanc et al. [3], we do not require the learning rate η to be small. Instead, we only require that λ scales with ε, which can be accomplished either by decreasing the learning rate η or by increasing the batch size B. This allows for stronger implicit regularization when η is large (see Section 6.1). In particular, our regularizer R(θ) changes with η and equals the regularizer in Blanc et al. [3] only in the limit η → 0.

3.2 Global Convergence

In order to prove convergence to an (ε, γ)-stationary point of (1/λ)L̃, we define a sequence of reference points {θ*_m} and coupling times {τ_m} and repeatedly use a version of Lemma 1 to describe the long-term behavior of θ. For notational simplicity, given a sequence of coupling times {τ_m}, define T_m = Σ_{k<m} τ_k to be the total number of steps until we have reached the reference point θ*_m. To repeat the local analysis of Lemma 1 with multiple reference points, we need a more general coupling lemma that allows the random process ξ defined in each coupling to continue where the random process in the previous coupling ended. To accomplish this, we define ξ outside the scope of the local coupling lemma:

Definition 3. Given a sequence of reference points {θ*_m} and a sequence of coupling times {τ_m}, we define the random process ξ by ξ_0 = 0 and, for k ∈ [T_m, T_{m+1}),

ε*_k = (η/B) Σ_{i∈B^(k)} ε_i^(k) ∇f_i(θ*_m)   and   ξ_{k+1} = (I − ηG(θ*_m))ξ_k + ε*_k.

Then we can prove the following more general coupling lemma:

Lemma 2. Let 𝒳, ℒ, 𝒟, ℳ, 𝒯 be defined as in Lemma 1. Assume f satisfies Assumption 1 and η satisfies Assumption 2. Let ∆_m = θ_{T_m} − ξ_{T_m} − θ*_m and assume that ‖∆_m‖ ≤ 𝒟 and L(θ*_m) ≤ ℒ for some 0 < δ ≤ 1/2.
Then for any τ_m ≤ 𝒯 satisfying max_{k∈[T_m, T_{m+1})} ‖Φ_{k−T_m}(θ*_m + ∆_m) − θ*_m‖ ≤ 8ℳ, with probability at least 1 − 10dτ_m e^(−ι) we have simultaneously for all k ∈ (T_m, T_{m+1}]:

‖θ_k − ξ_k − Φ_{k−T_m}(θ*_m + ∆_m)‖ ≤ 𝒟,   E[ξ_k] = 0,   and   ‖ξ_k‖ ≤ 𝒳.

Unlike in Lemma 1, we couple to the regularized trajectory starting at θ*_m + ∆_m rather than at θ*_m to avoid accumulating errors (see Figure 2). The proof is otherwise identical to that of Lemma 1. The proof of Theorem 1 follows easily from the following lemma, which states that we decrease the regularized loss L̃ by at least ℱ after every coupling:

Lemma 3. Let ℱ = 𝒟²/(ην𝒯). Let ∆_m = θ_{T_m} − ξ_{T_m} − θ*_m and assume ‖∆_m‖ ≤ 𝒟 and L(θ*_m) ≤ ℒ. Then, if θ_{T_m} is not an (ε, γ)-stationary point, there exists some τ_m < 𝒯 such that if we define θ*_{m+1} = Φ_{τ_m}(θ*_m + ∆_m) and ∆_{m+1} = θ_{T_{m+1}} − ξ_{T_{m+1}} − θ*_{m+1}, then with probability 1 − 10dτ_m e^(−ι):

L̃(θ*_{m+1}) ≤ L̃(θ*_m) − ℱ,   ‖∆_{m+1}‖ ≤ 𝒟,   and   L(θ*_{m+1}) ≤ ℒ.

We defer the proofs of Lemma 2 and Lemma 3 to Appendix B. Theorem 1 now follows directly from repeated applications of Lemma 3:

Proof of Theorem 1. By assumption there exists some θ*_0 such that L(θ*_0) ≤ ℒ and ‖θ_0 − θ*_0‖ ≤ 𝒟. Then, so long as θ_{T_m} is not an (ε, γ)-stationary point, we can inductively apply Lemma 3 to obtain coupling times {τ_m} and reference points {θ*_m} such that for any m ≥ 0, with probability 1 − 10dT_m e^(−ι), we have L̃(θ*_m) ≤ L̃(θ*_0) − mℱ. As L̃(θ*_0) − L̃(θ*_m) = O(λ), this can happen for at most m = O(λ/ℱ) reference points, so at most T = O(λ𝒯/ℱ) = Õ(η⁻¹λ^(−1−δ)) iterations of Algorithm 1. By the choice of ι, this happens with probability 1 − 10dTe^(−ι) ≥ 1 − ζ.

4 Experiments

In order to test the ability of SGD with label noise to escape poor global minimizers and converge to better minimizers, we initialize Algorithm 1 at global minimizers of the training loss which achieve 100% training accuracy yet generalize poorly to the test set. Minibatch SGD would remain fixed at these initializations because both the gradient and the noise in minibatch SGD vanish at any global minimizer of the training loss. We show that SGD with label noise escapes these poor initializations and converges to flatter minimizers that generalize well, which supports Theorem 1. We run experiments with two initializations:

Full Batch Initialization: We run full batch gradient descent with random initialization until convergence to a global minimizer. We call this minimizer the full batch initialization. The final test accuracy of the full batch initialization was 76%.

Adversarial Initialization: Following Liu et al. [21], we generate an adversarial initialization with final test accuracy 48% that achieves zero training loss by first teaching the network to memorize random labels and then training it on the true labels. See Appendix D for full details.

Experiments were run with ResNet18 on CIFAR10 [17] without data augmentation or weight decay. The experiments were conducted with randomized label flipping with probability 0.2 (see Appendix E for the extension of Theorem 1 to classification with label flipping), cross entropy loss, and batch size 256. Because of the difficulty of computing the regularizer R(θ), we approximate it by its lower bound tr∇²L(θ). Figure 3 shows the test accuracy and tr∇²L throughout training. SGD with label noise escapes both zero-training-loss initializations and converges to flatter minimizers that generalize much better, reaching the SGD baseline from the full-batch initialization and getting within 1% of the baseline from the adversarial initialization. The test accuracy in both cases is strongly correlated with tr∇²L, and the strength of the regularization is also strongly correlated with η, which supports Theorem 1.
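Since the experiments above track tr∇²L(θ) as a proxy for R(θ), here is a standard Hutchinson-style estimator of the Hessian trace using Hessian-vector products (our sketch, not the authors' code); it avoids forming ∇²L explicitly.

import torch

def hessian_trace(loss, params, n_samples=10):
    # Hutchinson estimator: E_v[v^T H v] = tr(H) for Rademacher v, with Hv
    # computed by differentiating through the gradient.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    est = 0.0
    for _ in range(n_samples):
        vs = [2 * torch.randint_like(g, 2) - 1 for g in grads]       # Rademacher probes
        Hv = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        est += sum((v * hv).sum() for v, hv in zip(vs, Hv)).item()
    return est / n_samples

# Toy usage on a small network (all assumed).
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
x, y = torch.randn(64, 5), torch.randn(64)
loss = 0.5 * ((net(x).squeeze(-1) - y) ** 2).mean()
print(hessian_trace(loss, list(net.parameters())))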
5 Extensions

5.1 SGD with momentum

We replace the update in Algorithm 1 with heavy ball momentum with parameter β:

θ_{k+1} = θ_k − η∇L̂^(k)(θ_k) + β(θ_k − θ_{k−1}).   (5)

We define

R(θ) = −(1 + β)/(2η) tr log(I − η/(2(1 + β)) ∇²L(θ)),   λ = ησ²/(B(1 − β)),   (6)

and as before L̃(θ) = L(θ) + λR(θ). Let Φ_0(θ) = θ and

Φ_{k+1}(θ) = Φ_k(θ) − η∇L̃(Φ_k(θ)) + β(Φ_k(θ) − Φ_{k−1}(θ))   (7)

represent gradient descent with momentum on L̃. Then we have the following local coupling lemma:

Lemma 4. Let 𝒳 = √(2λndι)/ν, ℒ = cλ^(1+δ), 𝒟 = c√(ℒι), 𝒯 = 1/(c²η𝒳ι),   (8)

where c is a sufficiently large constant. Assume f satisfies Assumption 1 and η ≤ (2 − ν)(1 + β)/ℓ. Let θ follow Algorithm 1 with momentum β starting at θ*, with L(θ*) ≤ ℒ for some 0 < δ ≤ 1/2. Then there exists a random process {ξ_k} such that for any τ ≤ 𝒯 satisfying max_{k≤τ} ‖Φ_k(θ*) − θ*‖ ≤ 8𝒟, with probability at least 1 − 10dτe^(−ι) we have simultaneously for all k ≤ τ:

‖θ_k − ξ_k − Φ_k(θ*)‖ ≤ 𝒟,   E[ξ_k] = 0,   and   ‖ξ_k‖ ≤ 𝒳.   (9)

As in Lemma 1, the error is 8 times smaller than the maximum movement of the regularized trajectory. Note that momentum increases the regularization parameter λ by a factor of 1/(1 − β). For the commonly used momentum parameter β = 0.9, this represents a 10× increase in regularization, which is likely the cause of the improved performance in Figure 4 (β = 0.9) over Figure 3 (β = 0).

5.2 Arbitrary Noise Covariances

The analysis in Section 3.1 is not specific to label noise SGD and can be carried out for arbitrary noise schemes. Let θ follow θ_{k+1} = θ_k − η∇L(θ_k) + ε_k starting at θ_0, where ε_k ∼ N(0, ηλΣ(θ_k)) and Σ^(1/2) is Lipschitz. Given a matrix S, we define the regularizer R_S(θ) = ⟨S, ∇²L(θ)⟩. The matrix S controls the weight of each eigenvalue. As before we define L̃_S(θ) = L(θ) + λR_S(θ) and Φ^S_{k+1}(θ) = Φ^S_k(θ) − η∇L̃_S(Φ_k(θ)) to be the regularized loss and the regularized trajectory, respectively. Then we have the following version of Lemma 1:

Proposition 5. Let θ be initialized at a minimizer θ* of L. Assume ∇²L is Lipschitz, let H = ∇²L(θ*), and assume that Σ(θ*) ⪯ CH for some absolute constant C. Let 𝒳 = √(Cdλι)/ν, 𝒟 = cλ^(3/4)ι, and 𝒯 = 1/(c²η𝒳ι) for a sufficiently large constant c. Then there exists a mean-zero random process ξ such that for any τ ≤ 𝒯 satisfying max_{k<τ} ‖Φ_k(θ*) − θ*‖ ≤ 8𝒟, with probability 1 − 10dτe^(−ι), we have simultaneously for all k ≤ τ:

‖θ_k − ξ_k − Φ^S_k(θ_0)‖ ≤ 𝒟   and   ‖ξ_k‖ ≤ 𝒳,

where S is the unique fixed point of S ← (I − ηH)S(I − ηH) + ηλΣ(θ*) restricted to span(H).

As in Lemma 1, the error is 8 times smaller than the maximum movement of the regularized trajectory. Although Proposition 5 couples to gradient descent on R_S, S is defined in terms of the Hessian and the noise covariance at θ*, and therefore depends on the choice of reference point. Because R_S is changing, we cannot repeat Proposition 5 as in Section 3.2 to prove convergence to a stationary point: there is no fixed potential. Although it is sometimes possible to relate R_S to a fixed potential R, we show in Appendix F.2 that this is not generally possible by providing an example where minibatch SGD perpetually cycles. Exploring the properties of these continuously changing potentials and their connections to generalization is an interesting avenue for future work.
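The fixed point S in Proposition 5 is a discrete Lyapunov equation and can be computed by simple iteration. A minimal numpy sketch (ours), under the assumption that H is positive definite so the map is a strict contraction and the restriction to span(H) is unnecessary:

import numpy as np

def fixed_point_S(H, Sigma, eta, lam, iters=5000):
    # Iterate S <- (I - eta*H) S (I - eta*H) + eta*lam*Sigma until convergence.
    # Contraction requires eta*||H|| < 2; we assume H is positive definite here.
    d = H.shape[0]
    A = np.eye(d) - eta * H
    S = np.zeros((d, d))
    for _ in range(iters):
        S = A @ S @ A + eta * lam * Sigma
    return S

rng = np.random.default_rng(0)
d, eta, lam = 4, 0.1, 0.01
B_ = rng.normal(size=(d, d))
H = B_ @ B_.T / d + 0.1 * np.eye(d)      # positive definite stand-in for the Hessian
Sigma = H.copy()                          # e.g., label noise at a minimizer has Sigma = G = H
S = fixed_point_S(H, Sigma, eta, lam)
A = np.eye(d) - eta * H
print(np.max(np.abs(A @ S @ A + eta * lam * Sigma - S)))   # fixed-point residual

With Σ = H, each eigendirection of the fixed point satisfies s = λ/(2 − ηh), recovering the stationary covariance λ(2 − ηH)⁻¹ of Proposition 2.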
6 Discussion

6.1 Sharpness and the Effect of Large Learning Rates

Various factors can control the strength of the implicit regularization in Theorem 1. Most important is the implicit regularization parameter λ = ησ²/|B|. This supports the hypothesis that large learning rates and small batch sizes are necessary for implicit regularization [9, 26], and agrees with the standard linear scaling rule, which proposes that for constant regularization strength the learning rate η needs to be inversely proportional to the batch size |B|. However, our analysis also uncovers an additional regularization effect of large learning rates. Unlike the regularizer in Blanc et al. [3], the implicit regularizer R(θ) defined in Equation (1) depends on η. It is not possible to directly analyze the behavior of R(θ) as η → 2/λ_1, where λ_1 is the largest eigenvalue of ∇²L, since in this regime R(θ) → ∞ (see Figure 1). If we let η = (2 − ν)/λ_1, then we can better understand the behavior of R(θ) by normalizing it by log(2/ν). This gives²

R(θ)/log(2/ν) = Σ_i R(λ_i)/log(2/ν) = ‖∇²L(θ)‖₂ + O(1/log(2/ν)) → ‖∇²L(θ)‖₂ as ν → 0,

so after normalization, R(θ) becomes a better and better approximation of the spectral norm ‖∇²L(θ)‖₂ as η → 2/λ_1. R(θ) can therefore be seen as interpolating between tr∇²L(θ) when η ≈ 0 and ‖∇²L(θ)‖₂ when η ≈ 2/λ_1. This also suggests that SGD with large learning rates may be more resilient to the edge-of-stability phenomenon observed in Cohen et al. [4], as the implicit regularization works harder to control eigenvalues approaching 2/η.

The sharpness-aware minimization (SAM) algorithm of [7] is also closely related to R(θ). SAM proposes to minimize max_{‖δ‖₂≤ε} L(θ + δ). At a global minimizer of the training loss,

max_{‖δ‖₂≤ε} L(θ* + δ) = max_{‖δ‖₂≤ε} ½δᵀ∇²L(θ*)δ + O(ε³) ≈ (ε²/2)‖∇²L(θ*)‖₂.

The SAM algorithm is therefore explicitly regularizing the spectral norm of ∇²L(θ), which is closely connected to the large-learning-rate regularization effect of R(θ) when η ≈ 2/λ_1.

6.2 Generalization Bounds

The implicit regularizer R(θ) is intimately connected to data-dependent generalization bounds, which measure the Lipschitzness of the network via the network Jacobian. Specifically, Wei and Ma [30] propose the all-layer margin, which bounds the generalization error by

≲ Σ_{l=1}^L C_l/√n · √((1/n) Σ_{i=1}^n 1/m_F(x_i, y_i)²),

where C_l depends only on the norm of the parameters and m_F is the all-layer margin. The norm of the parameters is generally controlled by weight decay regularization, so we focus our discussion on the all-layer margin. Ignoring higher-order secondary terms, Wei and Ma [30, heuristic derivation of Lemma 3.1] showed that for a feed-forward network f(θ; x) = θ_L σ(θ_{L−1} ... σ(θ_1 x)), the all-layer margin satisfies³

1/m_F(x, y) ≲ ‖{∂f/∂θ_l}_{l∈[L]}‖₂ / (output margin of (x, y))   ⟹   generalization error ≲ Σ_{l=1}^L C_l/√n · √R(θ)/(output margin),

as R(θ) is an upper bound on the squared norm of the Jacobian at any global minimizer θ. We emphasize this bound is informal, as we discarded the higher-order terms in controlling the all-layer margin, but it accurately reflects that the regularizer R(θ) lower bounds the all-layer margin m_F up to higher-order terms. Therefore SGD with label noise implicitly regularizes the all-layer margin.
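A quick numerical illustration of the interpolation discussed in Section 6.1 (our sketch, with an arbitrary toy spectrum): for small η the penalty is proportional to the trace, while near the stability limit the largest eigenvalue dominates R(θ), so the regularizer behaves like a spectral-norm penalty.

import numpy as np

lam = np.array([4.0, 1.0, 0.5, 0.1])               # toy Hessian eigenvalues

def R_terms(lam, eta):
    # Per-eigenvalue penalty R(lam_i) = -(1/(2*eta)) * log(1 - eta*lam_i/2).
    return -np.log(1 - eta * lam / 2) / (2 * eta)

# Small learning rate: R(theta) ~ tr(H)/4, all eigenvalues weighted equally.
print(4 * R_terms(lam, 1e-4).sum(), lam.sum())

# Near the stability limit eta = (2 - nu)/lam_1, the top eigenvalue dominates R,
# so the normalized regularizer tracks the spectral norm rather than the trace.
for nu in (1e-1, 1e-3, 1e-6):
    terms = R_terms(lam, (2 - nu) / lam.max())
    print(nu, terms[0] / terms.sum())               # share of the largest eigenvalue -> 1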
Acknowledgments and Disclosure of Funding

AD acknowledges support from an NSF Graduate Research Fellowship. TM acknowledges support of a Google Faculty Award and NSF IIS 2045685. JDL acknowledges support of the ARO under MURI Award W911NF-11-1-0303, the Sloan Research Fellowship, NSF CCF 2002272, and an ONR Young Investigator Award. The experiments in this paper were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University. We would also like to thank Honglin Yuan and Jeff Z. HaoChen for useful discussions throughout various stages of the project.

² Here we assume λ_1 > λ_2. If instead λ_1 = ... = λ_k > λ_{k+1}, this limit will be k‖∇²L(θ)‖₂.

³ The output margin is defined as min_i f_i(θ)y_i. The derivation uses Equation (3.3) and the first-order approximation provided in Wei and Ma [30], together with the chain rule ∂f/∂θ_l = (∂f/∂h_l)(∂h_l/∂θ_l) = (∂f/∂h_l)h_{l−1}ᵀ.
1. What is the main contribution of the paper regarding global convergence?
2. What are the strengths and weaknesses of the paper's theoretical analysis?
3. Do you have any questions or concerns regarding the paper's results and their connection to the title and experiments?
4. How does the paper's theory apply to linear regression, and how can it be interpreted?
5. Can you elaborate on the choice of the learning rate in the theorem and its relation to the considered model?
6. How does the paper's result justify the effect of label noise regularization in escaping poor global minima?
7. Why is minimizing a lower bound of the objective used in the experiment instead of an upper bound?
8. Are there any suggestions for future revisions to improve the paper's clarity and impact?
Summary Of The Paper Review
Summary Of The Paper

This paper studies the global convergence of SGD/GD with label noise. Under suitable assumptions, it shows that if initialized properly and with suitable stepsize/batch size, label noise SGD/GD converges approximately to some stationary point of a regularized loss function. The results reveal a novel implicit regularization effect of label noise that approximately equals adding an explicit regularizer. Notably, this regularization effect is more favorable than the implicit bias of vanilla SGD, as vanilla SGD cannot escape from poor global minima, while adding label noise regularizes a certain Hessian norm and thus helps the iterates to escape and converge to flat/good minima (that achieve both small main loss and small regularization loss). Empirical results are provided to demonstrate this advantage of label noise regularization.

Review

Pros:
- The theoretic results provide intuition to understand the implicit regularization effect of adding label noise, which is shown empirically to be better than the SGD implicit bias in some cases (like initialization from poor minima).
- The introduced (ϵ, γ)-stationary points that characterize a neighborhood of a stationary point seem to be interesting for understanding the properties of a regularized objective.
- Empirical observations about label noise are partly justified by the presented theory.

Cons:
- l.50. Based on my knowledge, the initial learning rate is generally large for large batch SGD (e.g. linear scaling rule). Could you provide references here?
- Definition 2. The order of ϵ vs. γ seems to be important; could you elaborate on this? What prevents us from setting γ ≈ ϵ as happens in linear regression?
- I am trying to interpret Thm 1 for linear regression but find several places are not fully clear. Discussions on how the theorem applies to linear regression could be helpful (e.g., how to set up the parameters in the theorem when the considered model is linear regression).
- I am not sure how Thm 1 helps in terms of justifying the title. The title claims label noise SGD prefers flat global "minimizers", but Thm 1 only shows convergence to an approximate stationary point. Please elaborate.
- I am not sure how Thm 1 helps in terms of justifying the experiments. Indeed θ* could be a poor global minimizer of L so that SGD cannot escape. But it remains unclear to me why θ_k in Thm 1 is a good minimizer.
- I am not sure how Thm 1 helps in terms of justifying the claims on the effect of an initially large learning rate. Note that Thm 1 requires η/B ≲ γ², so why is η considered to be "large" (e.g., l.276)? Are you allowing some annealing learning rate here?
- My next question is also about Thm 1: it seems to me the whole considered iterates are in a neighborhood of θ* (please correct me if not). Then why could Thm 1 justify the "escaping" behavior of label noise SGD in the experiments?
- l.267. In the experiment a lower bound is adopted in order to compute the proposed regularizer. As we are considering a minimization problem, why does it make sense to minimize a lower bound (instead of an upper bound) of the objective?

Small issues:
- l.125. θ* -> θ
- l.138. ϕ -> γ

Overall: My current feeling for this paper is a weak reject, as the claims/title/experiments are not properly justified by the theorem. Authors' feedback is welcome to help me better understand this paper.

Post-rebuttal: after further discussions with the authors, my initial concerns are all solved. Therefore I would like to raise the score and recommend to accept the paper.
A suggestion for future revision is to provide examples illustrating the order of the important quantities in the theorem, e.g., ϵ, γ, and others.
NIPS
Title Label Noise SGD Provably Prefers Flat Global Minimizers Abstract In overparametrized models, the noise in stochastic gradient descent (SGD) implicitly regularizes the optimization trajectory and determines which local minimum SGD converges to. Motivated by empirical studies that demonstrate that training with noisy labels improves generalization, we study the implicit regularization effect of SGD with label noise. We show that SGD with label noise converges to a stationary point of a regularized loss L(θ)+λR(θ), where L(θ) is the training loss, λ is an effective regularization parameter depending on the step size, strength of the label noise, and the batch size, and R(θ) is an explicit regularizer that penalizes sharp minimizers. Our analysis uncovers an additional regularization effect of large learning rates beyond the linear scaling rule that penalizes large eigenvalues of the Hessian more than small ones. We also prove extensions to classification with general loss functions, significantly strengthening the prior work of Blanc et al. [3] to global convergence and large learning rates and of HaoChen et al. [12] to general models. 1 Introduction One of the central questions in modern machine learning theory is the generalization capability of overparametrized models trained by stochastic gradient descent (SGD). Recent work identifies the implicit regularization effect due to the optimization algorithm as one key factor in explaining the generalization of overparameterized models [27, 11, 19, 10]. This implicit regularization is controlled by many properties of the optimization algorithm including search direction [11], learning rate [20], batch size [26], momentum [21] and dropout [22]. The parameter-dependent noise distribution in SGD is a crucial source of regularization [16, 18]. Blanc et al. [3] initiated the study of the regularization effect of label noise SGD with square loss1 by characterizing the local stability of global minimizers of the training loss. By identifying a data-dependent regularizer R(θ), Blanc et al. [3] proved that label noise SGD locally diverges from the global minimizer θ∗ if and only if θ∗ is not a first-order stationary point of minθ R(θ) subject to L(θ) = 0. The analysis is only able to demonstrate that with sufficiently small step size η, label noise SGD initialized at θ∗ locally diverges by a distance of η0.4 and correspondingly decreases the regularizer by η0.4. This is among the first results that establish that the noise distribution alters the local stability of stochastic gradient descent. However, the parameter movement of η0.4 is required to 1Label noise SGD computes the stochastic gradient by first drawing a sample (xi, yi), perturbing y′i = yi+ with ∼ {−σ, σ}, and computing the gradient with respect to (xi, y′i). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). be inversely polynomially small in dimension and condition number and is thus too small to affect the predictions of the model. HaoChen et al. [12], motivated by the local nature of Blanc et al. [3], analyzed label noise SGD in the quadratically-parametrized linear regression model [29, 32, 23]. Under a well-specified sparse linear regression model and with isotropic features, HaoChen et al. [12] proved that label noise SGD recovers the sparse ground-truth despite overparametrization, which demonstrated a global implicit bias towards sparsity in the quadratically-parametrized linear regression model. 
This work seeks to identify the global implicit regularization effect of label noise SGD. Our primary result, which supports Blanc et al. [3], proves that label noise SGD converges to a stationary point of L(θ) + λR(θ), where the regularizer R(θ) penalizes sharp regions of the loss landscape. The focus of this paper is on label noise SGD due to its strong regularization effects in both real and synthetic experiments [25, 28, 31]. Furthermore, label noise is used in large-batch training as an additional regularizer [25] when the regularization from standard regularizers (e.g. mini-batch, batch-norm, and dropout) is not sufficient. Label noise SGD is also known to be less sensitive to initialization, as shown in HaoChen et al. [12]. In stark contrast, mini-batch SGD remains stuck when initialized at any poor global minimizer. Our analysis demonstrates a global regularization effect of label noise SGD by proving it converges to a stationary point of a regularized loss L(θ) + λR(θ), even when initialized at a zero error global minimum. The learning rate and minibatch size in SGD are known to be important sources of regularization [9]. Our main theorem highlights the importance of learning rate and batch size as the hyperparameters that control the balance between the loss and the regularizer – larger learning rates and smaller batch sizes lead to stronger regularization. Section 2 reviews the notation and assumptions used throughout the paper. Section 2.4 formally states the main result and Section 3 sketches the proof. Section 4 presents experimental results which support our theory. Finally, Section 6 discusses the implications of this work. 2 Problem Setup and Main Result Section 2.1 describes our notation and the SGD with label noise algorithm. Section 2.2 introduces the explicit formula for the regularizer R(θ). Sections 2.3 and 2.4 formally state our main result. 2.1 Notation We focus on the regression setting (see Appendix E for the extension to the classification setting). Let {(xi, yi)}i∈[n] be n datapoints with xi ∈ D and yi ∈ R. Let f : D×Rd → R and let fi(θ) = f(xi, θ) denote the value of f on the datapoint xi. Define `i(θ) = 12 (fi(θ)− yi) 2 and L(θ) = 1n ∑n i=1 `i(θ). Then we will follow Algorithm 1 which adds fresh additive noise to the labels yi at every step before computing the gradient: Algorithm 1: SGD with Label Noise Input: θ0, step size η, noise variance σ2, batch size B, steps T for k = 0 to T − 1 do Sample batch B(k) ⊂ [n]B uniformly and label noise (k)i ∼ {−σ, σ} for i ∈ B(k). Let ˆ̀(k)i (θ) = 1 2 ( fi(θ)− yi − (k)i )2 and L̂(k) = 1B ∑ i∈B(k) ˆ̀(k) i . θk+1 ← θk − η∇L̂(k)(θk) end Note that σ controls the strength of the label noise and will control the strength of the implicit regularization in Theorem 1. Throughout the paper we will use ‖ · ‖ = ‖ · ‖2. We make the following standard assumption on f : Assumption 1 (Smoothness). We assume that each fi is `f -Lipschitz,∇fi is ρf -Lipschitz, and∇2fi is κf -Lipschitz with respect to ‖ · ‖2 for i = 1, . . . , n. We will define ` = `2f to be an upper bound on ‖ 1n ∑ i∇fi(θ)∇fi(θ)T ‖2, which is equal to ‖∇2L(θ)‖2 at any global minimizer θ. Our results extend to any learning rate η ∈ (0, 2` ). However, they do not extend to the limit as η → 2` . Because we still want to track the dependence on 1η , we do not assume η is a fixed constant and instead assume some constant separation: Assumption 2 (Learning Rate Separation). There exists a constant ν ∈ (0, 1) such that η ≤ 2−ν` . 
In addition, we make the following local Kurdyka-Łojasiewicz assumption (KL assumption) which ensures that there are no regions where the loss is very flat. The KL assumption is very general and holds for some δ > 0 for any analytic function defined on a compact domain (see Lemma 17). Assumption 3 (KL). Let θ∗ be any global minimizer of L. Then there exist KL > 0, µ > 0 and 0 < δ ≤ 1/2 such that if L(θ)− L(θ∗) ≤ KL, then L(θ)− L(θ∗) ≤ µ‖∇L(θ)‖1+δ . We assume L(θ∗) = 0 for any global minimizer θ∗. Note that if L satisfies Assumption 3 for some δ then it also satisfies Assumption 3 for any δ′ < δ. Assumption 3 with δ = 1 is equivalent to the much stronger Polyak-Łojasiewicz condition which is equivalent to local strong convexity. We will use O,Θ,Ω to hide any polynomial dependence on µ, `f , ρf , κf , ν, 1/σ, n, d and Õ to hide additional polynomial dependence on log 1/η, logB. 2.2 The Implicit Regularizer R(θ) For L, σ2, B, η as defined above, we define the implicit regularizer R(θ), the effective regularization parameter λ, and the regularized loss L̃(θ): R(θ) = − 1 2η tr log ( 1− η 2 ∇2L(θ) ) , λ = ησ2 B , L̃(θ) = L(θ) + λR(θ). (1) Here log refers to the matrix logarithm. To better understand the regularizer R(θ), let λ1, . . . , λd be the eigenvalues of ∇2L(θ) and let R(λi) = − 12η log(1− ηλi 2 ). Then, R(θ) = d∑ i=1 R(λi) = d∑ i=1 ( λi 4 + ηλ2i 16 + η2λ3i 48 + . . . ) . In the limit as η → 0, R(θ) → 14 tr∇2L(θ), which matches the regularizer in Blanc et al. [3] for infinitesimal learning rate near a global minimizer. However, in additional to the linear scaling rule, which is implicit in our definition of λ, our analysis uncovers an additional regularization effect of large learning rates that penalizes larger eigenvalues more than smaller ones (see Figure 1 and Section 6.1). The goal of this paper is to show that Algorithm 1 converges to a stationary point of the regularized loss L̃ = L + λR. In particular, we will show convergence to an ( , γ)-stationary point, which is defined in the next section. 2.3 ( , γ)-Stationary Points We begin with the standard definition of an approximate stationary point: Definition 1 ( -stationary point). θ is an -stationary point of f if ‖∇f(θ)‖ ≤ . In stochastic gradient descent it is often necessary to allow λ = ησ 2 B to scale with to reach an -stationary point [8, 15] (e.g., λ may need to be less than 2). However, for λ = O( ), any local minimizer θ∗ is an -stationary point of L̃ = L+ λR. Therefore, reaching a -stationary point of L̃ would be equivalent to finding a local minimizer and would not be evidence for implicit regularization. To address this scaling issue, we consider the rescaled regularized loss: 1 λ L̃ = 1 λ L+R. Reaching an -stationary point of 1λ L̃ requires non-trivially taking the regularizer R into account. However, it is not possible for Algorithm 1 to reach an -stationary point of 1λ L̃ even in the ideal setting when θ is initialized near a global minimizer θ∗ of L̃. The label noise will cause fluctuations of order √ λ around θ∗ (see section 3) so ‖∇L‖ will remain around √ λ. This causes 1λ∇L to become unbounded for λ (and therefore ) sufficiently small, and thus Algorithm 1 cannot converge to an -stationary point. We therefore prove convergence to an ( , γ)-stationary point: Definition 2 (( , γ)-stationary point). θ is an ( , γ)-stationary point of f if there exists some θ∗ such that ‖∇f(θ∗)‖ ≤ and ‖θ − θ∗‖ ≤ γ. 
Intuitively, Algorithm 1 converges to an $(\epsilon, \gamma)$-stationary point when it converges to a neighborhood of some $\epsilon$-stationary point $\theta^*$.

2.4 Main Result

Having defined an $(\epsilon, \gamma)$-stationary point we can now state our main result:

Theorem 1. Assume that $f$ satisfies Assumption 1, $\eta$ satisfies Assumption 2, and $L$ satisfies Assumption 3, i.e. $L(\theta) \le \mu\|\nabla L(\theta)\|^{1+\delta}$ for $L(\theta) \le \epsilon_{KL}$. Let $\eta, B$ be chosen such that $\lambda := \frac{\eta\sigma^2}{B} = \tilde\Theta(\min(\epsilon^{2/\delta}, \gamma^2))$, and let $T = \tilde\Theta(\eta^{-1}\lambda^{-1-\delta}) = \mathrm{poly}(\epsilon^{-1}, \gamma^{-1})$. Assume that $\theta$ is initialized within $O(\sqrt{\lambda^{1+\delta}})$ of some $\theta^*$ satisfying $L(\theta^*) = O(\lambda^{1+\delta})$. Then for any $\zeta \in (0, 1)$, with probability at least $1 - \zeta$, if $\{\theta_k\}$ follows Algorithm 1 with parameters $\eta, \sigma, T$, there exists $k < T$ such that $\theta_k$ is an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde{L}$.

Theorem 1 guarantees that Algorithm 1 will hit an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde{L}$ within a number of steps that is polynomial in $\epsilon^{-1}, \gamma^{-1}$. In particular, when $\delta = \frac{1}{2}$, Theorem 1 guarantees convergence within $\tilde{O}(\epsilon^{-6} + \gamma^{-3})$ steps. The condition that $\theta_0$ is close to an approximate global minimizer $\theta^*$ is not a strong assumption, as recent methods have shown that overparameterized models can easily achieve zero training loss in the kernel regime (see Appendix C). However, in practice these minimizers of the training loss generalize poorly [1]. Theorem 1 shows that Algorithm 1 can then converge to a stationary point of the regularized loss, which has better generalization guarantees (see Section 6.2). Theorem 1 also generalizes the local analysis in Blanc et al. [3] to a global result with weaker assumptions on the learning rate $\eta$. For a full comparison with Blanc et al. [3], see Section 3.1.

3 Proof Sketch

The proof of convergence to an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde{L}$ has two components. In Section 3.1, we pick a reference point $\theta^*$ and analyze the behavior of Algorithm 1 in a neighborhood of $\theta^*$. In Section 3.2, we repeat this local analysis with a sequence of reference points $\{\theta^*_m\}$.

3.1 Local Coupling

Let $\Phi_k(\cdot)$ denote $k$ steps of gradient descent on the regularized loss $\tilde{L}$, i.e. $\Phi_0(\theta) = \theta$ and
$$\Phi_{k+1}(\theta) = \Phi_k(\theta) - \eta\nabla\tilde{L}(\Phi_k(\theta)), \tag{2}$$
where $\tilde{L}(\theta) = L(\theta) + \lambda R(\theta)$ is the regularized loss defined in Equation (1). Lemma 1 states that if $\theta$ is initialized at an approximate global minimizer $\theta^*$ and follows Algorithm 1, there is a small mean zero random process $\xi$ such that $\theta_k \approx \Phi_k(\theta^*) + \xi_k$:

Lemma 1. Let
$$\iota = c\log\frac{d}{\lambda\zeta}, \quad \mathscr{X} = \frac{\sqrt{2\lambda n d\,\iota}}{\nu}, \quad \mathscr{L} = c\lambda^{1+\delta}, \quad \mathscr{D} = c\sqrt{\mathscr{L}\iota}, \quad \mathscr{M} = \frac{\mathscr{D}}{\nu}, \quad \mathscr{T} = \frac{1}{c^2\eta\mathscr{X}\iota},$$
where $c$ is a sufficiently large constant. Assume $f$ satisfies Assumption 1 and $\eta$ satisfies Assumption 2. Let $\theta$ follow Algorithm 1 starting at $\theta^*$ and assume that $L(\theta^*) \le \mathscr{L}$ for some $0 < \delta \le 1/2$. Then there exists a random process $\{\xi_k\}$ such that for any $\tau \le \mathscr{T}$ satisfying $\max_{k \le \tau}\|\Phi_k(\theta^*) - \theta^*\| \le 8\mathscr{M}$, with probability at least $1 - 10d\tau e^{-\iota}$ we have simultaneously for all $k \le \tau$,
$$\|\theta_k - \xi_k - \Phi_k(\theta^*)\| \le \mathscr{D}, \qquad \mathbb{E}[\xi_k] = 0, \qquad \|\xi_k\| \le \mathscr{X}.$$

Note that because $\mathscr{M} \ge \mathscr{D}$, the error term $\mathscr{D}$ is at least 8 times smaller than the movement in the direction of the regularized trajectory $\Phi_\tau(\theta^*)$, which will allow us to prove convergence to an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde{L}$ in Section 3.2.

Toward simplifying the update in Algorithm 1, we define $L^{(k)}$ to be the true loss without label noise on batch $\mathcal{B}^{(k)}$. The label-noise update $\hat{L}^{(k)}(\theta_k)$ is an unbiased perturbation of the mini-batch update:
$$\nabla\hat{L}^{(k)}(\theta_k) = \nabla L^{(k)}(\theta_k) - \frac{1}{B}\sum_{i\in\mathcal{B}^{(k)}} \epsilon_i^{(k)}\nabla f_i(\theta_k).$$
We decompose the update rule into three parts:
$$\theta_{k+1} = \underbrace{\theta_k - \eta\nabla L(\theta_k)}_{\text{gradient descent}} \underbrace{-\,\eta\big[\nabla L^{(k)}(\theta_k) - \nabla L(\theta_k)\big]}_{\text{minibatch noise}} + \underbrace{\frac{\eta}{B}\sum_{i\in\mathcal{B}^{(k)}} \epsilon_i^{(k)}\nabla f_i(\theta_k)}_{\text{label noise}}. \tag{3}$$
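The label-noise term in Equation (3) is mean zero, and at a fixed parameter $\theta$ its covariance is $\eta\lambda G(\theta)$ with $G(\theta) = \frac{1}{n}\sum_i \nabla f_i(\theta)\nabla f_i(\theta)^\top$ (this is stated below for $\theta = \theta^*$). A quick empirical check, with random vectors standing in for the gradients $\nabla f_i$ purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, B, eta, sigma = 50, 3, 10, 0.1, 0.5
grads = rng.standard_normal((n, d))            # stand-ins for the gradients of f_i
G = grads.T @ grads / n                        # G = (1/n) sum_i grad_i grad_i^T
lam = eta * sigma**2 / B                       # lambda = eta * sigma^2 / B

N = 100_000
samples = np.zeros((N, d))
for t in range(N):
    batch = rng.choice(n, size=B, replace=False)
    eps = sigma * rng.choice([-1.0, 1.0], size=B)
    samples[t] = eta / B * eps @ grads[batch]  # label-noise term of Eq. (3)

emp_cov = samples.T @ samples / N
print(np.abs(emp_cov - eta * lam * G).max())   # ~ 0: covariance is eta*lambda*G
```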
Let $m_k = -\eta[\nabla L^{(k)}(\theta_k) - \nabla L(\theta_k)]$ denote the minibatch noise. Throughout the proof we will show that the minibatch noise is dominated by the label noise. We will also decompose the label noise into two terms. The first, $\epsilon^*_k$, represents the label noise if the gradient were evaluated at $\theta^*$; its distribution does not vary with $k$. The other term, $z_k$, represents the change in the noise due to evaluating the gradient at $\theta_k$ rather than $\theta^*$. More precisely, we have
$$\epsilon^*_k = \frac{\eta}{B}\sum_{i\in\mathcal{B}^{(k)}}\epsilon_i^{(k)}\nabla f_i(\theta^*) \quad\text{and}\quad z_k = \frac{\eta}{B}\sum_{i\in\mathcal{B}^{(k)}}\epsilon_i^{(k)}\big[\nabla f_i(\theta_k) - \nabla f_i(\theta^*)\big].$$
We define $G(\theta) = \frac{1}{n}\sum_i \nabla f_i(\theta)\nabla f_i(\theta)^T$ to be the covariance of the model gradients. Note that $\epsilon^*_k$ has covariance $\eta\lambda G(\theta^*)$. To simplify notation in the Taylor expansions, we will use the following shorthand to refer to various quantities evaluated at $\theta^*$: $G = G(\theta^*)$, $\nabla^2 L = \nabla^2 L(\theta^*)$, $\nabla^3 L = \nabla^3 L(\theta^*)$, $\nabla R = \nabla R(\theta^*)$.

First we need the following standard decomposition of the Hessian:

Proposition 1. For any $\theta \in \mathbb{R}^d$ we can decompose $\nabla^2 L(\theta) = G(\theta) + E(\theta)$, where $E(\theta) = \frac{1}{n}\sum_{i=1}^n (f_i(\theta) - y_i)\nabla^2 f_i(\theta)$ satisfies $\|E(\theta)\| \le \sqrt{2\rho_f L(\theta)}$, with $\rho_f$ defined in Assumption 1.

The matrix $G$ in Proposition 1 is known as the Gauss-Newton term of the Hessian. We can now Taylor expand Algorithm 1 and Equation (2) to first order around $\theta^*$:
$$\Phi_{k+1}(\theta^*) \approx \Phi_k(\theta^*) - \eta\big[\nabla L + \nabla^2 L\,(\Phi_k(\theta^*) - \theta^*)\big], \qquad \theta_{k+1} \approx \theta_k - \eta\big[\nabla L + \nabla^2 L\,(\theta_k - \theta^*)\big] + \epsilon^*_k.$$
We define $v_k = \theta_k - \Phi_k(\theta^*)$ to be the deviation from the regularized trajectory. Subtracting these two equations gives
$$v_{k+1} \approx (I - \eta\nabla^2 L)v_k + \epsilon^*_k \approx (I - \eta G)v_k + \epsilon^*_k,$$
where we used Proposition 1 to replace $\nabla^2 L$ with $G$. Temporarily ignoring the higher order terms, we define the random process $\xi$ by
$$\xi_{k+1} = (I - \eta G)\xi_k + \epsilon^*_k \quad\text{and}\quad \xi_0 = 0. \tag{4}$$
The process $\xi$ is an Ornstein-Uhlenbeck process, and it encodes the movement of $\theta$ to first order around $\theta^*$. We defer the proofs of the following properties of $\xi$ to Appendix B:

Proposition 2. For any $k \ge 0$, with probability at least $1 - 2de^{-\iota}$, $\|\xi_k\| \le \mathscr{X}$. In addition, as $k \to \infty$, $\mathbb{E}[\xi_k\xi_k^T] \to \lambda\Pi_G(2 - \eta G)^{-1}$, where $\Pi_G$ is the projection onto the span of $G$.

We can now analyze the effect of $\xi_k$ on the second order Taylor expansion. Let $r_k = \theta_k - \Phi_k(\theta^*) - \xi_k$ be the deviation of $\theta$ from the regularized trajectory after removing the Ornstein-Uhlenbeck process $\xi$. Lemma 1 is equivalent to $\Pr[\|r_\tau\| \ge \mathscr{D}] \le 10\tau de^{-\iota}$. We will prove by induction that $\|r_k\| \le \mathscr{D}$ for all $k \le t$ with probability at least $1 - 10tde^{-\iota}$, for all $t \le \tau$. The base case follows from $r_0 = 0$, so assume the result for some $t \ge 0$. The remainder of this section is conditioned on the event $\|r_k\| \le \mathscr{D}$ for all $k \le t$. $O(\cdot)$ notation will only be used to hide absolute constants that do not change with $t$, and will additionally not hide dependence on the absolute constant $c$. The following proposition fills in the missing second order terms in the Taylor expansion of $r_k$ around $\theta^*$:

Proposition 3. With probability at least $1 - 2de^{-\iota}$,
$$r_{k+1} = (I - \eta G)r_k - \eta\Big[\frac{1}{2}\nabla^3 L(\xi_k, \xi_k) - \lambda\nabla R\Big] + m_k + z_k + \tilde{O}\big(c^{5/2}\eta\lambda^{1+\delta}\big).$$

The intuition for the implicit regularizer $R(\theta)$ is that, by Propositions 1 and 2, $\mathbb{E}[\xi_k\xi_k^T] \to \Pi_G\lambda(2 - \eta G)^{-1} \approx \lambda(2 - \eta\nabla^2 L)^{-1}$. Therefore, when averaged over long timescales,
$$\frac{1}{2}\mathbb{E}[\nabla^3 L(\xi_k, \xi_k)] \approx \frac{\lambda}{2}\nabla^3 L\big[(2 - \eta\nabla^2 L)^{-1}\big] = \lambda\nabla\Big[-\frac{1}{2\eta}\operatorname{tr}\log\Big(1 - \frac{\eta}{2}\nabla^2 L(\theta)\Big)\Big]\Big|_{\theta=\theta^*} = \lambda\nabla R.$$
The second equality follows from the more general fact that for any matrix function $A$ and any scalar function $h$ that acts independently on each eigenvalue, $\nabla(\operatorname{tr} h(A(\theta))) = (\nabla A(\theta))(h'(A(\theta)))$, which follows from the chain rule. The above equality is the special case $A(\theta) = \nabla^2 L(\theta)$ and $h(x) = -\frac{1}{\eta}\log\big(1 - \frac{\eta x}{2}\big)$, which satisfies $h'(x) = \frac{1}{2 - \eta x}$.
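The stationary covariance in Proposition 2 can also be checked numerically. The sketch below simulates the recursion (4), drawing Gaussian noise with the covariance $\eta\lambda G(\theta^*)$ in place of the exact label-noise distribution; this Gaussian surrogate is a simplifying assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, eta, lam = 4, 8, 0.1, 1e-3
J = rng.standard_normal((n, d))
G = J.T @ J / n                                   # full rank a.s., so Pi_G = I here
assert eta < 2 / np.linalg.eigvalsh(G).max()
Lc = np.linalg.cholesky(eta * lam * G)            # Cov(eps*_k) = eta * lambda * G

xi, I = np.zeros(d), np.eye(d)
acc, burn, T = np.zeros((d, d)), 10_000, 300_000
for k in range(burn + T):
    xi = (I - eta * G) @ xi + Lc @ rng.standard_normal(d)   # recursion (4)
    if k >= burn:
        acc += np.outer(xi, xi)

empirical = acc / T
predicted = lam * np.linalg.inv(2 * I - eta * G)  # lambda * Pi_G * (2 - eta*G)^{-1}
print(np.abs(empirical - predicted).max())        # small relative to lambda
```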
The remaining details involve concentrating the mean zero error terms $m_k, z_k$ and showing that $\mathbb{E}[\xi_k\xi_k^T]$ does concentrate in the directions with large eigenvalues, and that the directions with small eigenvalues, in which the covariance does not concentrate, do not contribute much to the error. This yields the following bound:

Proposition 4. With probability at least $1 - 10de^{-\iota}$, $\|r_{t+1}\| = \tilde{O}\big(\frac{\lambda^{1/2+\delta/2}}{\sqrt{c}}\big)$.

The proof of Proposition 4 can be found in Appendix B. Finally, because $\mathscr{D} = \tilde{O}(c^{5/2}\lambda^{1/2+\delta/2})$, $\|r_{t+1}\| \le \mathscr{D}$ for sufficiently large $c$. This completes the induction and the proof of Lemma 1.

Comparison with Blanc et al. [3]. Like Blanc et al. [3], Lemma 1 shows that $\theta$ locally follows the trajectory of gradient descent on an implicit regularizer $R(\theta)$. However, there are a few crucial differences:

• Because we do not assume we start near a global minimizer where $L = 0$, we couple to a regularized loss $\tilde{L} = L + \lambda R$ rather than just the regularizer $R(\theta)$. In this setting there is an additional correction term to the Hessian (Proposition 1) that requires carefully controlling the value of the loss across reference points to prove convergence to a stationary point.

• The analysis in Blanc et al. [3] requires $\eta, \tau$ to be chosen in terms of the condition number of $\nabla^2 L$, which can quickly grow during training as $\nabla^2 L$ changes. This makes it impossible to directly repeat the argument. We avoid this by precisely analyzing the error incurred by small eigenvalues, allowing us to prove convergence to an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde{L}$ for fixed $\eta, \lambda$, even if the smallest nonzero eigenvalue of $\nabla^2 L$ converges to 0 during training.

• Unlike in Blanc et al. [3], we do not require the learning rate $\eta$ to be small. Instead, we only require that $\lambda$ scales with $\epsilon$, which can be accomplished either by decreasing the learning rate $\eta$ or increasing the batch size $B$. This allows for stronger implicit regularization in the setting when $\eta$ is large (see Section 6.1). In particular, our regularizer $R(\theta)$ changes with $\eta$ and is only equal to the regularizer in Blanc et al. [3] in the limit $\eta \to 0$.

3.2 Global Convergence

In order to prove convergence to an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde{L}$, we will define a sequence of reference points $\{\theta^*_m\}$ and coupling times $\{\tau_m\}$ and repeatedly use a version of Lemma 1 to describe the long term behavior of $\theta$. For notational simplicity, given a sequence of coupling times $\{\tau_m\}$, define $T_m = \sum_{k<m}\tau_k$ to be the total number of steps until we have reached the reference point $\theta^*_m$. To be able to repeat the local analysis in Lemma 1 with multiple reference points, we need a more general coupling lemma that allows the random process $\xi$ defined in each coupling to continue where the random process in the previous coupling ended. To accomplish this, we define $\xi$ outside the scope of the local coupling lemma:

Definition 3. Given a sequence of reference points $\{\theta^*_m\}$ and a sequence of coupling times $\{\tau_m\}$, we define the random process $\xi$ by $\xi_0 = 0$ and, for $k \in [T_m, T_{m+1})$,
$$\epsilon^*_k = \frac{\eta}{B}\sum_{i\in\mathcal{B}^{(k)}}\epsilon_i^{(k)}\nabla f_i(\theta^*_m) \quad\text{and}\quad \xi_{k+1} = (I - \eta G(\theta^*_m))\xi_k + \epsilon^*_k.$$

Then we can prove the following more general coupling lemma:

Lemma 2. Let $\mathscr{X}, \mathscr{L}, \mathscr{D}, \mathscr{M}, \mathscr{T}$ be defined as in Lemma 1. Assume $f$ satisfies Assumption 1 and $\eta$ satisfies Assumption 2. Let $\Delta_m = \theta_{T_m} - \xi_{T_m} - \theta^*_m$ and assume that $\|\Delta_m\| \le \mathscr{D}$ and $L(\theta^*_m) \le \mathscr{L}$ for some $0 < \delta \le 1/2$. Then for any $\tau_m \le \mathscr{T}$ satisfying $\max_{k\in[T_m, T_{m+1})}\|\Phi_{k-T_m}(\theta^*_m + \Delta_m) - \theta^*_m\| \le 8\mathscr{M}$, with probability at least $1 - 10d\tau_m e^{-\iota}$ we have simultaneously for all $k \in (T_m, T_{m+1}]$,
$$\|\theta_k - \xi_k - \Phi_{k-T_m}(\theta^*_m + \Delta_m)\| \le \mathscr{D}, \qquad \mathbb{E}[\xi_k] = 0, \qquad \|\xi_k\| \le \mathscr{X}.$$
Unlike in Lemma 1, we couple to the regularized trajectory starting at $\theta^*_m + \Delta_m$ rather than at $\theta^*_m$ to avoid accumulating errors (see Figure 2). The proof is otherwise identical to that of Lemma 1. The proof of Theorem 1 easily follows from the following lemma, which states that we decrease the regularized loss $\tilde{L}$ by at least $\mathscr{F}$ after every coupling:

Lemma 3. Let $\mathscr{F} = \frac{\mathscr{D}^2}{\eta\nu\mathscr{T}}$. Let $\Delta_m = \theta_{T_m} - \xi_{T_m} - \theta^*_m$ and assume $\|\Delta_m\| \le \mathscr{D}$ and $L(\theta^*_m) \le \mathscr{L}$. Then if $\theta_{T_m}$ is not an $(\epsilon, \gamma)$-stationary point, there exists some $\tau_m < \mathscr{T}$ such that if we define $\theta^*_{m+1} = \Phi_{\tau_m}(\theta^*_m + \Delta_m)$ and $\Delta_{m+1} = \theta_{T_{m+1}} - \xi_{T_{m+1}} - \theta^*_{m+1}$, then with probability $1 - 10d\tau_m e^{-\iota}$,
$$\tilde{L}(\theta^*_{m+1}) \le \tilde{L}(\theta^*_m) - \mathscr{F}, \qquad \|\Delta_{m+1}\| \le \mathscr{D}, \qquad L(\theta^*_{m+1}) \le \mathscr{L}.$$

We defer the proofs of Lemma 2 and Lemma 3 to Appendix B. Theorem 1 now follows directly from repeated applications of Lemma 3:

Proof of Theorem 1. By assumption there exists some $\theta^*_0$ such that $L(\theta^*_0) \le \mathscr{L}$ and $\|\theta_0 - \theta^*_0\| \le \mathscr{D}$. Then so long as $\theta_{T_m}$ is not an $(\epsilon, \gamma)$-stationary point, we can inductively apply Lemma 3 to get the existence of coupling times $\{\tau_m\}$ and reference points $\{\theta^*_m\}$ such that for any $m \ge 0$, with probability $1 - 10dT_m e^{-\iota}$ we have $\tilde{L}(\theta^*_m) \le \tilde{L}(\theta^*_0) - m\mathscr{F}$. As $\tilde{L}(\theta^*_0) - \tilde{L}(\theta^*_m) = O(\lambda)$, this can happen for at most $m = O\big(\frac{\lambda}{\mathscr{F}}\big)$ reference points, so at most $T = O\big(\frac{\lambda\mathscr{T}}{\mathscr{F}}\big) = \tilde{O}\big(\eta^{-1}\lambda^{-1-\delta}\big)$ iterations of Algorithm 1. By the choice of $\iota$, this happens with probability $1 - 10dTe^{-\iota} \ge 1 - \zeta$.

4 Experiments

In order to test the ability of SGD with label noise to escape poor global minimizers and converge to better minimizers, we initialize Algorithm 1 at global minimizers of the training loss which achieve 100% training accuracy yet generalize poorly to the test set. Minibatch SGD would remain fixed at these initializations because both the gradient and the noise in minibatch SGD vanish at any global minimizer of the training loss. We show that SGD with label noise escapes these poor initializations and converges to flatter minimizers that generalize well, which supports Theorem 1. We run experiments with two initializations:

Full Batch Initialization: We run full batch gradient descent with random initialization until convergence to a global minimizer. We call this minimizer the full batch initialization. The final test accuracy of the full batch initialization was 76%.

Adversarial Initialization: Following Liu et al. [21], we generate an adversarial initialization with final test accuracy 48% that achieves zero training loss by first teaching the network to memorize random labels and then training it on the true labels. See Appendix D for full details.

Experiments were run with ResNet18 on CIFAR10 [17] without data augmentation or weight decay. The experiments were conducted with randomized label flipping with probability 0.2 (see Appendix E for the extension of Theorem 1 to classification with label flipping), cross entropy loss, and batch size 256. Because of the difficulty of computing the regularizer $R(\theta)$ exactly, we approximate it by its lower bound $\operatorname{tr}\nabla^2 L(\theta)$. Figure 3 shows the test accuracy and $\operatorname{tr}\nabla^2 L$ throughout training. SGD with label noise escapes both zero training loss initializations and converges to flatter minimizers that generalize much better, reaching the SGD baseline from the full batch initialization and getting within 1% of the baseline from the adversarial initialization. The test accuracy in both cases is strongly correlated with $\operatorname{tr}\nabla^2 L$, and the strength of the regularization is also strongly correlated with $\eta$, which supports Theorem 1.
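In practice, $\operatorname{tr}\nabla^2 L(\theta)$ for a network is itself estimated stochastically. A standard choice is Hutchinson's estimator, $\operatorname{tr}(H) = \mathbb{E}[v^\top H v]$ for Rademacher $v$; the sketch below assumes only a generic Hessian-vector product oracle `hvp` and is an illustration, not the authors' implementation.

```python
import numpy as np

def hutchinson_trace(hvp, d, num_samples=100, seed=0):
    """Estimate tr(H) from a Hessian-vector product oracle hvp(v) = H v."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=d)   # Rademacher probe vector
        total += v @ hvp(v)                   # unbiased: E[v^T H v] = tr(H)
    return total / num_samples

# Sanity check on an explicit matrix; for a neural network, hvp would be
# implemented with double backpropagation instead.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
H = A @ A.T
print(hutchinson_trace(lambda v: H @ v, 20, num_samples=2000), np.trace(H))
```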
5 Extensions

5.1 SGD with Momentum

We replace the update in Algorithm 1 with heavy ball momentum with parameter $\beta$:
$$\theta_{k+1} = \theta_k - \eta\nabla\hat{L}^{(k)}(\theta_k) + \beta(\theta_k - \theta_{k-1}). \tag{5}$$
We define
$$R(\theta) = -\frac{1+\beta}{2\eta}\operatorname{tr}\log\Big(1 - \frac{\eta}{2(1+\beta)}\nabla^2 L(\theta)\Big), \qquad \lambda = \frac{\eta\sigma^2}{B(1-\beta)}, \tag{6}$$
and as before $\tilde{L}(\theta) = L(\theta) + \lambda R(\theta)$. Let $\Phi_0(\theta) = \theta$ and
$$\Phi_{k+1}(\theta) = \Phi_k(\theta) - \eta\nabla\tilde{L}(\Phi_k(\theta)) + \beta(\Phi_k(\theta) - \Phi_{k-1}(\theta)) \tag{7}$$
represent gradient descent with momentum on $\tilde{L}$. Then we have the following local coupling lemma:

Lemma 4. Let
$$\mathscr{X} = \frac{\sqrt{2\lambda n^2\iota}}{\nu}, \quad \mathscr{L} = c\lambda^{1+\delta}, \quad \mathscr{D} = c\sqrt{\mathscr{L}\iota}, \quad \mathscr{T} = \frac{1}{c^2\eta\mathscr{X}\iota}, \tag{8}$$
where $c$ is a sufficiently large constant. Assume $f$ satisfies Assumption 1 and $\eta \le \frac{(2-\nu)(1+\beta)}{\ell}$. Let $\theta$ follow Algorithm 1 with momentum $\beta$ starting at $\theta^*$ with $L(\theta^*) \le \mathscr{L}$ for some $0 < \delta \le 1/2$. Then there exists a random process $\{\xi_k\}$ such that for any $\tau \le \mathscr{T}$ satisfying $\max_{k\le\tau}\|\Phi_k(\theta^*) - \theta^*\| \le 8\mathscr{D}$, with probability at least $1 - 10d\tau e^{-\iota}$ we have simultaneously for all $k \le \tau$,
$$\|\theta_k - \xi_k - \Phi_k(\theta^*)\| \le \mathscr{D}, \qquad \mathbb{E}[\xi_k] = 0, \qquad \|\xi_k\| \le \mathscr{X}. \tag{9}$$
As in Lemma 1, the error is 8 times smaller than the maximum movement of the regularized trajectory. Note that momentum increases the regularization parameter $\lambda$ by a factor of $\frac{1}{1-\beta}$. For the commonly used momentum parameter $\beta = 0.9$, this represents a 10× increase in regularization, which is likely the cause of the improved performance in Figure 4 ($\beta = 0.9$) over Figure 3 ($\beta = 0$).

5.2 Arbitrary Noise Covariances

The analysis in Section 3.1 is not specific to label noise SGD and can be carried out for arbitrary noise schemes. Let $\theta$ follow $\theta_{k+1} = \theta_k - \eta\nabla L(\theta_k) + \epsilon_k$ starting at $\theta_0$, where $\epsilon_k \sim N(0, \eta\lambda\Sigma(\theta_k))$ and $\Sigma^{1/2}$ is Lipschitz. Given a matrix $S$ we define the regularizer $R_S(\theta) = \langle S, \nabla^2 L(\theta)\rangle$. The matrix $S$ controls the weight of each eigenvalue. As before we can define $\tilde{L}_S(\theta) = L(\theta) + \lambda R_S(\theta)$ and $\Phi^S_{k+1}(\theta) = \Phi^S_k(\theta) - \eta\nabla\tilde{L}_S(\Phi^S_k(\theta))$ to be the regularized loss and the regularized trajectory, respectively. Then we have the following version of Lemma 1:

Proposition 5. Let $\theta$ be initialized at a minimizer $\theta^*$ of $L$. Assume $\nabla^2 L$ is Lipschitz, let $H = \nabla^2 L(\theta^*)$, and assume that $\Sigma(\theta^*) \preceq CH$ for some absolute constant $C$. Let $\mathscr{X} = \frac{\sqrt{Cd\lambda\iota}}{\nu}$, $\mathscr{D} = c\lambda^{3/4}\iota$, and $\mathscr{T} = \frac{1}{c^2\eta\mathscr{X}\iota}$ for a sufficiently large constant $c$. Then there exists a mean zero random process $\xi$ such that for any $\tau \le \mathscr{T}$ satisfying $\max_{k<\tau}\|\Phi^S_k(\theta^*) - \theta^*\| \le 8\mathscr{D}$, with probability $1 - 10d\tau e^{-\iota}$ we have simultaneously for all $k \le \tau$:
$$\|\theta_k - \xi_k - \Phi^S_k(\theta_0)\| \le \mathscr{D} \quad\text{and}\quad \|\xi_k\| \le \mathscr{X},$$
where $S$ is the unique fixed point of $S \leftarrow (I - \eta H)S(I - \eta H) + \eta\lambda\Sigma(\theta^*)$ restricted to $\mathrm{span}(H)$.

As in Lemma 1, the error is 8 times smaller than the maximum movement of the regularized trajectory. Although Proposition 5 couples to gradient descent on $R_S$, $S$ is defined in terms of the Hessian and the noise covariance at $\theta^*$ and therefore depends on the choice of reference point. Because $R_S$ changes with the reference point, we cannot repeat Proposition 5 as in Section 3.2 to prove convergence to a stationary point, because there is no fixed potential. Although it is sometimes possible to relate $R_S$ to a fixed potential $R$, we show in Appendix F.2 that this is not generally possible by providing an example where minibatch SGD perpetually cycles. Exploring the properties of these continuously changing potentials and their connections to generalization is an interesting avenue for future work.
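The fixed point $S$ in Proposition 5 solves a discrete Lyapunov-type equation and, when $\eta < 2/\|H\|$, can be found by simply iterating the map until convergence. The sketch below uses a synthetic full-rank $H$ and $\Sigma$, so the restriction to $\mathrm{span}(H)$ is vacuous; all inputs are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
d, eta, lam = 4, 0.1, 1e-3
A = rng.standard_normal((d, d))
H = A @ A.T / d                                   # stand-in for the Hessian at theta*
Sigma = np.eye(d)                                 # stand-in noise covariance Sigma(theta*)
assert eta < 2 / np.linalg.eigvalsh(H).max()      # the map is then a contraction

I = np.eye(d)
S = np.zeros((d, d))
for _ in range(20_000):
    S = (I - eta * H) @ S @ (I - eta * H) + eta * lam * Sigma

# Verify S is (numerically) a fixed point of S <- (I - eta H) S (I - eta H) + eta lam Sigma.
resid = S - ((I - eta * H) @ S @ (I - eta * H) + eta * lam * Sigma)
print(np.abs(resid).max())
```

With $\Sigma = H$, the same iteration converges to $S = \lambda(2 - \eta H)^{-1}$, matching the stationary covariance of the label-noise case in Proposition 2.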
6 Discussion

6.1 Sharpness and the Effect of Large Learning Rates

Various factors can control the strength of the implicit regularization in Theorem 1. Most important is the implicit regularization parameter $\lambda = \frac{\eta\sigma^2}{B}$. This supports the hypothesis that large learning rates and small batch sizes are necessary for implicit regularization [9, 26], and agrees with the standard linear scaling rule, which proposes that for constant regularization strength, the learning rate $\eta$ needs to be inversely proportional to the batch size $B$.

However, our analysis also uncovers an additional regularization effect of large learning rates. Unlike the regularizer in Blanc et al. [3], the implicit regularizer $R(\theta)$ defined in Equation (1) depends on $\eta$. It is not possible to directly analyze the behavior of $R(\theta)$ as $\eta \to 2/\lambda_1$, where $\lambda_1$ is the largest eigenvalue of $\nabla^2 L$, since in this regime $R(\theta) \to \infty$ (see Figure 1). If we let $\eta = \frac{2-\nu}{\lambda_1}$, then we can better understand the behavior of $R(\theta)$ by normalizing it by $\log 2/\nu$. This gives²
$$\frac{R(\theta)}{\log 2/\nu} = \frac{\sum_i R(\lambda_i)}{\log 2/\nu} = \|\nabla^2 L(\theta)\|_2 + O\Big(\frac{1}{\log 2/\nu}\Big) \xrightarrow{\nu\to 0} \|\nabla^2 L(\theta)\|_2,$$
so after normalization, $R(\theta)$ becomes a better and better approximation of the spectral norm $\|\nabla^2 L(\theta)\|_2$ as $\eta \to 2/\lambda_1$. $R(\theta)$ can therefore be seen as interpolating between $\operatorname{tr}\nabla^2 L(\theta)$, when $\eta \approx 0$, and $\|\nabla^2 L(\theta)\|_2$, when $\eta \approx 2/\lambda_1$. This also suggests that SGD with large learning rates may be more resilient to the edge of stability phenomenon observed in Cohen et al. [4], as the implicit regularization works harder to control eigenvalues approaching $2/\eta$.

The sharpness-aware minimization (SAM) algorithm of [7] is also closely related to $R(\theta)$. SAM proposes to minimize $\max_{\|\delta\|_2\le\epsilon} L(\theta + \delta)$. At a global minimizer of the training loss,
$$\max_{\|\delta\|_2\le\epsilon} L(\theta^* + \delta) = \max_{\|\delta\|_2\le\epsilon} \frac{1}{2}\delta^\top\nabla^2 L(\theta^*)\delta + O(\epsilon^3) \approx \frac{\epsilon^2}{2}\|\nabla^2 L(\theta^*)\|_2.$$
The SAM algorithm is therefore explicitly regularizing the spectral norm of $\nabla^2 L(\theta)$, which is closely connected to the large learning rate regularization effect of $R(\theta)$ when $\eta \approx 2/\lambda_1$.

6.2 Generalization Bounds

The implicit regularizer $R(\theta)$ is intimately connected to data-dependent generalization bounds, which measure the Lipschitzness of the network via the network Jacobian. Specifically, Wei and Ma [30] propose the all-layer margin $m_F$, which bounds the generalization error by
$$\text{generalization error} \lesssim \sum_{l=1}^L \frac{C_l}{\sqrt{n}}\sqrt{\frac{1}{n}\sum_{i=1}^n \frac{1}{m_F(x_i, y_i)^2}},$$
where $C_l$ depends only on the norm of the parameters. The norm of the parameters is generally controlled by weight decay regularization, so we focus our discussion on the all-layer margin. Ignoring higher-order secondary terms, Wei and Ma [30, heuristic derivation of Lemma 3.1] showed that for a feed-forward network $f(\theta; x) = \theta_L\sigma(\theta_{L-1}\cdots\sigma(\theta_1 x))$, the all-layer margin satisfies³
$$\frac{1}{m_F(x, y)} \lesssim \frac{\big\|\{\frac{\partial f}{\partial\theta_l}\}_{l\in[L]}\big\|_2}{\text{output margin of }(x, y)} \implies \text{generalization error} \lesssim \sum_{l=1}^L \frac{C_l}{\sqrt{n}}\cdot\frac{\sqrt{R(\theta)}}{\text{output margin}},$$
as $R(\theta)$ is an upper bound on the squared norm of the Jacobian at any global minimizer $\theta$. We emphasize this bound is informal, as we discarded the higher-order terms in controlling the all-layer margin, but it accurately reflects that the regularizer $R(\theta)$ lower bounds the all-layer margin $m_F$ up to higher-order terms. Therefore SGD with label noise implicitly regularizes the all-layer margin.

Acknowledgments and Disclosure of Funding

AD acknowledges support from an NSF Graduate Research Fellowship. TM acknowledges the support of a Google Faculty Award and NSF IIS 2045685. JDL acknowledges support of the ARO under MURI Award W911NF-11-1-0303, the Sloan Research Fellowship, NSF CCF 2002272, and an ONR Young Investigator Award.
The experiments in this paper were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University. We would also like to thank Honglin Yuan and Jeff Z. HaoChen for useful discussions throughout various stages of the project.

²Here we assume $\lambda_1 > \lambda_2$. If instead $\lambda_1 = \dots = \lambda_k > \lambda_{k+1}$, this limit will be $k\|\nabla^2 L(\theta)\|_2$.
³The output margin is defined as $\min_i f_i(\theta)y_i$. The derivation uses Equation (3.3) and the first-order approximation provided by Wei and Ma [30], together with the chain rule $\frac{\partial f}{\partial\theta_l} = \frac{\partial f}{\partial h_l}\frac{\partial h_l}{\partial\theta_l} = \frac{\partial f}{\partial h_l}h_{l-1}^\top$.
1. What is the focus of the paper regarding SGD's behavior with label noise?
2. What are the strengths of the theoretical analysis provided in the paper?
3. Do you have any concerns or questions about the link between the analysis and denoising score matching (SM)?
4. How does the distribution or a property of the noise affect the analysis or results?
5. Can you explain the significance and impact of the analysis in simpler terms?
6. What are some minor clarifications that could be added to improve understanding?
Summary Of The Paper Review
Summary Of The Paper

The paper studies the behavior of SGD with label noise for over-parametrized models. The label noise setting is linked to an implicit regularization scheme, which is helpful both for improving the search for "better" optima and for understanding the optimization landscape. The study focuses on the analysis of "flat" regions. Theoretical analysis is provided, and empirical studies on ResNet18 and CIFAR10 are shown.

Review

The paper studies an interesting idea about the optimization landscape under perturbation from noise. Specifically, the work characterizes the behavior of optimization in regions where the gradient is small, termed "flat" regions. The notion of flatness is described via $\epsilon$, and the neighborhood of the "flat" region, within norm $\gamma$ in parameter space, is also considered. The idea is to link the SGD optimization procedure on data with label noise to an implicitly regularized learning objective, and to study the conditions under which the algorithm reaches the "flat" region or its neighborhood. It is a concrete study that provides fruitful theoretical insights on the particular analysis. I believe this can be useful for the community to better understand the SGD procedure in various settings. However, there are some questions to be further addressed, as well as related concerns (please see below).

In the classification setting, label noise as class-conditional noise is coherent with the setting. However, in the regression setting in the main paper, "label noise" may not be the best term. As this setting is related to the denoising score matching (SM) scheme, how does the analysis relate to denoising SM? Algorithm 1 presents noise drawn uniformly from $\{-\sigma, \sigma\}$ for the labels, which is confusing. Moreover, in the regression setting, what is the distribution of the noise, and how does the distribution or a property of the noise (e.g., its variance) affect the analysis or the results (for instance, in terms of the strength of the implicit regularization)? Linking to the behavior of denoising SM may help the community further understand SGD with the type of regularization presented, as well as connect the important ideas across learning procedures. In Theorem 1, the analysis shows that the optimization reaches the $(\epsilon, \gamma)$-stationary region. Is such a region always better than a sharp local optimum?

I am not an expert in the area of this type of analysis. Overall, the idea and analysis are interesting. The proof was not carefully checked. The major significance and impact of the analysis are not clear enough; for people who may not be familiar with the particular analysis methodology, it is not easy to understand and appreciate the setting. It is also unclear what happens next once SGD reaches the $(\epsilon, \gamma)$-stationary region. Adding more explanations would help readers understand the full SGD procedure better.

Some minor clarifications: In Section 2.2, it would be useful to provide a more explicit link between the trace-log regularization scheme and the noisy label setting to support the implicit regularizer claim. In Definition 2, more intuition could be provided for better understanding. It is an interesting metric to consider for the convergence analysis, but it would be easier to have both the technical claims and interpretable explanations.
1. What is the focus of the paper regarding stochastic gradient descent with noisy labels? 2. What are the strengths of the proposed approach, particularly in terms of its convergence properties? 3. Do you have any concerns or questions about the assumptions made in the paper, such as the initialization of SGD near an approximate global minimizer? 4. How does the result of this work compare to prior studies on implicit regularization, such as [1]? 5. Are there any limitations or potential improvements to the method presented in the paper?
Summary Of The Paper Review
Summary Of The Paper
This work investigates the implicit regularization effect of stochastic gradient descent (SGD) with noisy labels. The authors show that SGD with label noise converges to a stationary point of a regularized loss function. The regularization weight can be controlled with the step size, the batch size, and the noise level of the labels. The authors assume that the loss function to be minimized satisfies standard regularity conditions, e.g., Lipschitzness and smoothness, that the step size is not too large (relative to the Lipschitz constant), and that the loss is not too flat around any global minimizer. Assuming that SGD is initialized close to an approximate global minimizer of the loss $L$ satisfying the above properties, the authors show that SGD with noisy labels will converge to an approximate stationary point of the regularized loss $\tilde L$ (they give the expression of the regularized loss as a function of the step size, batch size, and noise level). Since the authors do not assume the step size to tend to 0 as in [1], they find that the regularization term interpolates between penalizing all eigenvalues of $\nabla^2 L$ equally (which happens for small learning rates) and penalizing larger eigenvalues more (which happens for large learning rates).
Review
I think the result presented in this work is interesting and a good step towards better understanding of the implicit regularization effect of SGD. Moreover, I found the interplay between step size and regularization shown in this work interesting (in [1] the step size had to be very small). The paper is well-written, easy to follow in general, and the results are technically non-trivial and interesting. I believe that this work meets the standards of NeurIPS and I recommend acceptance. In Theorem 1 it is assumed that SGD is initialized close to some approximate minimizer of $L$. Since, at a high level, it seems that initializing SGD at the global minimizer of $L$ makes it harder for it to escape, is this assumption really necessary? Since we have that if the KL condition holds for some $\delta$ it also holds for any $\delta' < \delta$, is $\delta$ supposed to be in $(0, 1/2]$ in Assumption 3? Why not simply have $\delta \in (0, 1]$?
[1]: G. Blanc, N. Gupta, G. Valiant, and P. Valiant. Implicit regularization for deep neural networks driven by an Ornstein–Uhlenbeck-like process.
NIPS
Title Label Noise SGD Provably Prefers Flat Global Minimizers
Abstract In overparametrized models, the noise in stochastic gradient descent (SGD) implicitly regularizes the optimization trajectory and determines which local minimum SGD converges to. Motivated by empirical studies that demonstrate that training with noisy labels improves generalization, we study the implicit regularization effect of SGD with label noise. We show that SGD with label noise converges to a stationary point of a regularized loss $L(\theta) + \lambda R(\theta)$, where $L(\theta)$ is the training loss, $\lambda$ is an effective regularization parameter depending on the step size, strength of the label noise, and the batch size, and $R(\theta)$ is an explicit regularizer that penalizes sharp minimizers. Our analysis uncovers an additional regularization effect of large learning rates beyond the linear scaling rule that penalizes large eigenvalues of the Hessian more than small ones. We also prove extensions to classification with general loss functions, significantly strengthening the prior work of Blanc et al. [3] to global convergence and large learning rates and of HaoChen et al. [12] to general models.
1 Introduction
One of the central questions in modern machine learning theory is the generalization capability of overparametrized models trained by stochastic gradient descent (SGD). Recent work identifies the implicit regularization effect due to the optimization algorithm as one key factor in explaining the generalization of overparameterized models [27, 11, 19, 10]. This implicit regularization is controlled by many properties of the optimization algorithm including search direction [11], learning rate [20], batch size [26], momentum [21] and dropout [22]. The parameter-dependent noise distribution in SGD is a crucial source of regularization [16, 18]. Blanc et al. [3] initiated the study of the regularization effect of label noise SGD with square loss¹ by characterizing the local stability of global minimizers of the training loss. By identifying a data-dependent regularizer $R(\theta)$, Blanc et al. [3] proved that label noise SGD locally diverges from the global minimizer $\theta^*$ if and only if $\theta^*$ is not a first-order stationary point of $\min_\theta R(\theta)$ subject to $L(\theta) = 0$. The analysis is only able to demonstrate that with sufficiently small step size $\eta$, label noise SGD initialized at $\theta^*$ locally diverges by a distance of $\eta^{0.4}$ and correspondingly decreases the regularizer by $\eta^{0.4}$. This is among the first results that establish that the noise distribution alters the local stability of stochastic gradient descent. However, the parameter movement of $\eta^{0.4}$ is required to be inversely polynomially small in dimension and condition number and is thus too small to affect the predictions of the model. HaoChen et al. [12], motivated by the local nature of Blanc et al. [3], analyzed label noise SGD in the quadratically-parametrized linear regression model [29, 32, 23]. Under a well-specified sparse linear regression model and with isotropic features, HaoChen et al. [12] proved that label noise SGD recovers the sparse ground truth despite overparametrization, which demonstrated a global implicit bias towards sparsity in the quadratically-parametrized linear regression model.
¹Label noise SGD computes the stochastic gradient by first drawing a sample $(x_i, y_i)$, perturbing $y_i' = y_i + \epsilon$ with $\epsilon \sim \{-\sigma, \sigma\}$, and computing the gradient with respect to $(x_i, y_i')$.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
This work seeks to identify the global implicit regularization effect of label noise SGD. Our primary result, which supports Blanc et al. [3], proves that label noise SGD converges to a stationary point of $L(\theta) + \lambda R(\theta)$, where the regularizer $R(\theta)$ penalizes sharp regions of the loss landscape. The focus of this paper is on label noise SGD due to its strong regularization effects in both real and synthetic experiments [25, 28, 31]. Furthermore, label noise is used in large-batch training as an additional regularizer [25] when the regularization from standard regularizers (e.g. mini-batch, batch-norm, and dropout) is not sufficient. Label noise SGD is also known to be less sensitive to initialization, as shown in HaoChen et al. [12]. In stark contrast, mini-batch SGD remains stuck when initialized at any poor global minimizer. Our analysis demonstrates a global regularization effect of label noise SGD by proving it converges to a stationary point of a regularized loss $L(\theta) + \lambda R(\theta)$, even when initialized at a zero-error global minimum.
The learning rate and minibatch size in SGD are known to be important sources of regularization [9]. Our main theorem highlights the importance of learning rate and batch size as the hyperparameters that control the balance between the loss and the regularizer – larger learning rates and smaller batch sizes lead to stronger regularization.
Section 2 reviews the notation and assumptions used throughout the paper. Section 2.4 formally states the main result and Section 3 sketches the proof. Section 4 presents experimental results which support our theory. Finally, Section 6 discusses the implications of this work.
2 Problem Setup and Main Result
Section 2.1 describes our notation and the SGD with label noise algorithm. Section 2.2 introduces the explicit formula for the regularizer $R(\theta)$. Sections 2.3 and 2.4 formally state our main result.
2.1 Notation
We focus on the regression setting (see Appendix E for the extension to the classification setting). Let $\{(x_i, y_i)\}_{i\in[n]}$ be $n$ datapoints with $x_i \in D$ and $y_i \in \mathbb{R}$. Let $f : D \times \mathbb{R}^d \to \mathbb{R}$ and let $f_i(\theta) = f(x_i, \theta)$ denote the value of $f$ on the datapoint $x_i$. Define $\ell_i(\theta) = \frac{1}{2}(f_i(\theta) - y_i)^2$ and $L(\theta) = \frac{1}{n}\sum_{i=1}^n \ell_i(\theta)$. Then we will follow Algorithm 1, which adds fresh additive noise to the labels $y_i$ at every step before computing the gradient:
Algorithm 1: SGD with Label Noise
Input: $\theta_0$, step size $\eta$, noise variance $\sigma^2$, batch size $B$, steps $T$
for $k = 0$ to $T - 1$ do
    Sample batch $\mathcal{B}^{(k)} \subset [n]^B$ uniformly and label noise $\epsilon_i^{(k)} \sim \{-\sigma, \sigma\}$ for $i \in \mathcal{B}^{(k)}$.
    Let $\hat\ell_i^{(k)}(\theta) = \frac{1}{2}\big(f_i(\theta) - y_i - \epsilon_i^{(k)}\big)^2$ and $\hat L^{(k)} = \frac{1}{B}\sum_{i\in\mathcal{B}^{(k)}} \hat\ell_i^{(k)}$.
    $\theta_{k+1} \leftarrow \theta_k - \eta\nabla\hat L^{(k)}(\theta_k)$
end
Note that $\sigma$ controls the strength of the label noise and will control the strength of the implicit regularization in Theorem 1.
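To make the update concrete, the following is a minimal NumPy sketch of Algorithm 1. The linear model, synthetic data, and hyperparameter values are illustrative stand-ins (not from the paper), and any differentiable $f_i$ could be substituted.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: f_i(theta) = <x_i, theta> stands in for a generic model (assumption).
    n, d = 32, 64
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d)

    def grad_loss_i(theta, i, label):
        # Gradient of 0.5 * (f_i(theta) - label)^2 for the linear stand-in model.
        return (X[i] @ theta - label) * X[i]

    def label_noise_sgd(theta0, eta=0.05, sigma=0.5, B=8, T=5000):
        theta = theta0.copy()
        for _ in range(T):
            batch = rng.integers(0, n, size=B)           # batch B^(k)
            eps = rng.choice([-sigma, sigma], size=B)    # fresh label noise each step
            g = np.mean([grad_loss_i(theta, i, y[i] + e)
                         for i, e in zip(batch, eps)], axis=0)
            theta -= eta * g                             # theta_{k+1} = theta_k - eta * grad
        return theta

    theta_hat = label_noise_sgd(rng.normal(size=d))
    print("training loss:", 0.5 * np.mean((X @ theta_hat - y) ** 2))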
Throughout the paper we will use $\|\cdot\| = \|\cdot\|_2$. We make the following standard assumption on $f$:
Assumption 1 (Smoothness). We assume that each $f_i$ is $\ell_f$-Lipschitz, $\nabla f_i$ is $\rho_f$-Lipschitz, and $\nabla^2 f_i$ is $\kappa_f$-Lipschitz with respect to $\|\cdot\|_2$ for $i = 1, \ldots, n$.
We will define $\ell = \ell_f^2$ to be an upper bound on $\big\|\frac{1}{n}\sum_i \nabla f_i(\theta)\nabla f_i(\theta)^\top\big\|_2$, which is equal to $\|\nabla^2 L(\theta)\|_2$ at any global minimizer $\theta$. Our results extend to any learning rate $\eta \in (0, \frac{2}{\ell})$. However, they do not extend to the limit as $\eta \to \frac{2}{\ell}$. Because we still want to track the dependence on $\frac{1}{\eta}$, we do not assume $\eta$ is a fixed constant and instead assume some constant separation:
Assumption 2 (Learning Rate Separation). There exists a constant $\nu \in (0, 1)$ such that $\eta \le \frac{2-\nu}{\ell}$.
In addition, we make the following local Kurdyka–Łojasiewicz assumption (KL assumption), which ensures that there are no regions where the loss is very flat. The KL assumption is very general and holds for some $\delta > 0$ for any analytic function defined on a compact domain (see Lemma 17).
Assumption 3 (KL). Let $\theta^*$ be any global minimizer of $L$. Then there exist $\epsilon_{\mathrm{KL}} > 0$, $\mu > 0$ and $0 < \delta \le 1/2$ such that if $L(\theta) - L(\theta^*) \le \epsilon_{\mathrm{KL}}$, then $L(\theta) - L(\theta^*) \le \mu\|\nabla L(\theta)\|^{1+\delta}$.
We assume $L(\theta^*) = 0$ for any global minimizer $\theta^*$. Note that if $L$ satisfies Assumption 3 for some $\delta$ then it also satisfies Assumption 3 for any $\delta' < \delta$. Assumption 3 with $\delta = 1$ is equivalent to the much stronger Polyak–Łojasiewicz condition, which is equivalent to local strong convexity. We will use $O, \Theta, \Omega$ to hide any polynomial dependence on $\mu, \ell_f, \rho_f, \kappa_f, \nu, 1/\sigma, n, d$ and $\tilde O$ to hide additional polynomial dependence on $\log 1/\eta, \log B$.
2.2 The Implicit Regularizer $R(\theta)$
For $L, \sigma^2, B, \eta$ as defined above, we define the implicit regularizer $R(\theta)$, the effective regularization parameter $\lambda$, and the regularized loss $\tilde L(\theta)$:
$$R(\theta) = -\frac{1}{2\eta}\operatorname{tr}\log\Big(1 - \frac{\eta}{2}\nabla^2 L(\theta)\Big), \qquad \lambda = \frac{\eta\sigma^2}{B}, \qquad \tilde L(\theta) = L(\theta) + \lambda R(\theta). \tag{1}$$
Here $\log$ refers to the matrix logarithm. To better understand the regularizer $R(\theta)$, let $\lambda_1, \ldots, \lambda_d$ be the eigenvalues of $\nabla^2 L(\theta)$ and let $R(\lambda_i) = -\frac{1}{2\eta}\log\big(1 - \frac{\eta\lambda_i}{2}\big)$. Then,
$$R(\theta) = \sum_{i=1}^d R(\lambda_i) = \sum_{i=1}^d \Big(\frac{\lambda_i}{4} + \frac{\eta\lambda_i^2}{16} + \frac{\eta^2\lambda_i^3}{48} + \ldots\Big).$$
In the limit as $\eta \to 0$, $R(\theta) \to \frac{1}{4}\operatorname{tr}\nabla^2 L(\theta)$, which matches the regularizer in Blanc et al. [3] for infinitesimal learning rate near a global minimizer. However, in addition to the linear scaling rule, which is implicit in our definition of $\lambda$, our analysis uncovers an additional regularization effect of large learning rates that penalizes larger eigenvalues more than smaller ones (see Figure 1 and Section 6.1). The goal of this paper is to show that Algorithm 1 converges to a stationary point of the regularized loss $\tilde L = L + \lambda R$. In particular, we will show convergence to an $(\epsilon, \gamma)$-stationary point, which is defined in the next section.
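Because $R(\theta)$ depends on $\nabla^2 L(\theta)$ only through its eigenvalues, Equation (1) is straightforward to evaluate numerically once the Hessian is available. A small sketch follows, in which a random positive semi-definite matrix stands in for $\nabla^2 L(\theta)$ (an assumption for illustration only).

    import numpy as np

    def implicit_regularizer(H, eta):
        # R(theta) = -(1 / (2 eta)) tr log(I - (eta / 2) H), computed via eigenvalues.
        lam = np.linalg.eigvalsh(H)
        assert eta * lam.max() < 2, "R(theta) is only finite for eta < 2 / lambda_max"
        return -np.sum(np.log(1 - eta * lam / 2)) / (2 * eta)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(10, 10))
    H = A @ A.T / 10                      # PSD stand-in for the Hessian of L

    lam_max = np.linalg.eigvalsh(H).max()
    for eta in [1e-4, 0.5 / lam_max, 1.9 / lam_max]:
        print(f"eta={eta:.4f}  R={implicit_regularizer(H, eta):.4f}")
    print("tr(H)/4 =", np.trace(H) / 4)   # the eta -> 0 limit of R(theta)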
2.3 $(\epsilon, \gamma)$-Stationary Points
We begin with the standard definition of an approximate stationary point:
Definition 1 ($\epsilon$-stationary point). $\theta$ is an $\epsilon$-stationary point of $f$ if $\|\nabla f(\theta)\| \le \epsilon$.
In stochastic gradient descent it is often necessary to allow $\lambda = \frac{\eta\sigma^2}{B}$ to scale with $\epsilon$ to reach an $\epsilon$-stationary point [8, 15] (e.g., $\lambda$ may need to be less than $\epsilon^2$). However, for $\lambda = O(\epsilon)$, any local minimizer $\theta^*$ is an $\epsilon$-stationary point of $\tilde L = L + \lambda R$. Therefore, reaching an $\epsilon$-stationary point of $\tilde L$ would be equivalent to finding a local minimizer and would not be evidence for implicit regularization. To address this scaling issue, we consider the rescaled regularized loss:
$$\frac{1}{\lambda}\tilde L = \frac{1}{\lambda}L + R.$$
Reaching an $\epsilon$-stationary point of $\frac{1}{\lambda}\tilde L$ requires non-trivially taking the regularizer $R$ into account. However, it is not possible for Algorithm 1 to reach an $\epsilon$-stationary point of $\frac{1}{\lambda}\tilde L$ even in the ideal setting when $\theta$ is initialized near a global minimizer $\theta^*$ of $\tilde L$. The label noise will cause fluctuations of order $\sqrt\lambda$ around $\theta^*$ (see Section 3), so $\|\nabla L\|$ will remain around $\sqrt\lambda$. This causes $\frac{1}{\lambda}\nabla L$ to become unbounded for $\lambda$ (and therefore $\epsilon$) sufficiently small, and thus Algorithm 1 cannot converge to an $\epsilon$-stationary point. We therefore prove convergence to an $(\epsilon, \gamma)$-stationary point:
Definition 2 ($(\epsilon, \gamma)$-stationary point). $\theta$ is an $(\epsilon, \gamma)$-stationary point of $f$ if there exists some $\theta^*$ such that $\|\nabla f(\theta^*)\| \le \epsilon$ and $\|\theta - \theta^*\| \le \gamma$.
Intuitively, Algorithm 1 converges to an $(\epsilon, \gamma)$-stationary point when it converges to a neighborhood of some $\epsilon$-stationary point $\theta^*$.
2.4 Main Result
Having defined an $(\epsilon, \gamma)$-stationary point we can now state our main result:
Theorem 1. Assume that $f$ satisfies Assumption 1, $\eta$ satisfies Assumption 2, and $L$ satisfies Assumption 3, i.e. $L(\theta) \le \mu\|\nabla L(\theta)\|^{1+\delta}$ for $L(\theta) \le \epsilon_{\mathrm{KL}}$. Let $\eta, B$ be chosen such that $\lambda := \frac{\eta\sigma^2}{B} = \tilde\Theta(\min(\epsilon^{2/\delta}, \gamma^2))$, and let $T = \tilde\Theta(\eta^{-1}\lambda^{-1-\delta}) = \mathrm{poly}(\epsilon^{-1}, \gamma^{-1})$. Assume that $\theta$ is initialized within $O(\sqrt{\lambda^{1+\delta}})$ of some $\theta^*$ satisfying $L(\theta^*) = O(\lambda^{1+\delta})$. Then for any $\zeta \in (0, 1)$, with probability at least $1 - \zeta$, if $\{\theta_k\}$ follows Algorithm 1 with parameters $\eta, \sigma, T$, there exists $k < T$ such that $\theta_k$ is an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde L$.
Theorem 1 guarantees that Algorithm 1 will hit an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde L$ within a polynomial number of steps in $\epsilon^{-1}, \gamma^{-1}$. In particular, when $\delta = \frac{1}{2}$, Theorem 1 guarantees convergence within $\tilde O(\epsilon^{-6} + \gamma^{-3})$ steps. The condition that $\theta_0$ is close to an approximate global minimizer $\theta^*$ is not a strong assumption, as recent methods have shown that overparameterized models can easily achieve zero training loss in the kernel regime (see Appendix C). However, in practice these minimizers of the training loss generalize poorly [1]. Theorem 1 shows that Algorithm 1 can then converge to a stationary point of the regularized loss, which has better generalization guarantees (see Section 6.2). Theorem 1 also generalizes the local analysis in Blanc et al. [3] to a global result with weaker assumptions on the learning rate $\eta$. For a full comparison with Blanc et al. [3], see Section 3.1.
3 Proof Sketch
The proof of convergence to an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde L$ has two components. In Section 3.1, we pick a reference point $\theta^*$ and analyze the behavior of Algorithm 1 in a neighborhood of $\theta^*$. In Section 3.2, we repeat this local analysis with a sequence of reference points $\{\theta_m^*\}$.
3.1 Local Coupling
Let $\Phi_k(\cdot)$ denote $k$ steps of gradient descent on the regularized loss $\tilde L$, i.e. $\Phi_0(\theta) = \theta$ and
$$\Phi_{k+1}(\theta) = \Phi_k(\theta) - \eta\nabla\tilde L(\Phi_k(\theta)), \tag{2}$$
where $\tilde L(\theta) = L(\theta) + \lambda R(\theta)$ is the regularized loss defined in Equation (1). Lemma 1 states that if $\theta$ is initialized at an approximate global minimizer $\theta^*$ and follows Algorithm 1, there is a small mean zero random process $\xi$ such that $\theta_k \approx \Phi_k(\theta^*) + \xi_k$:
Lemma 1. Let
$$\iota = c\log\frac{d}{\lambda\zeta}, \quad \mathcal{X} = \frac{\sqrt{2\lambda nd\iota}}{\nu}, \quad \mathcal{L} = c\lambda^{1+\delta}, \quad \mathcal{D} = c\sqrt{\mathcal{L}\iota}, \quad \mathcal{M} = \frac{\mathcal{D}}{\nu}, \quad \mathcal{T} = \frac{1}{c^2\eta\mathcal{X}\iota},$$
where $c$ is a sufficiently large constant. Assume $f$ satisfies Assumption 1 and $\eta$ satisfies Assumption 2. Let $\theta$ follow Algorithm 1 starting at $\theta^*$ and assume that $L(\theta^*) \le \mathcal{L}$ for some $0 < \delta \le 1/2$. Then there exists a random process $\{\xi_k\}$ such that for any $\tau \le \mathcal{T}$ satisfying $\max_{k\le\tau}\|\Phi_k(\theta^*) - \theta^*\| \le 8\mathcal{M}$, with probability at least $1 - 10d\tau e^{-\iota}$ we have simultaneously for all $k \le \tau$,
$$\|\theta_k - \xi_k - \Phi_k(\theta^*)\| \le \mathcal{D}, \qquad \mathbb{E}[\xi_k] = 0, \qquad \|\xi_k\| \le \mathcal{X}.$$
Note that because $\mathcal{M} \ge \mathcal{D}$, the error term $\mathcal{D}$ is at least 8 times smaller than the movement in the direction of the regularized trajectory $\Phi_\tau(\theta^*)$, which will allow us to prove convergence to an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde L$ in Section 3.2.
Toward simplifying the update in Algorithm 1, we define $L^{(k)}$ to be the true loss without label noise on batch $\mathcal{B}^{(k)}$. The label-noise update $\hat L^{(k)}(\theta_k)$ is an unbiased perturbation of the mini-batch update: $\nabla\hat L^{(k)}(\theta_k) = \nabla L^{(k)}(\theta_k) - \frac{1}{B}\sum_{i\in\mathcal{B}^{(k)}} \epsilon_i^{(k)}\nabla f_i(\theta_k)$. We decompose the update rule into three parts:
$$\theta_{k+1} = \underbrace{\theta_k - \eta\nabla L(\theta_k)}_{\text{gradient descent}} - \underbrace{\eta[\nabla L^{(k)}(\theta_k) - \nabla L(\theta_k)]}_{\text{minibatch noise}} + \underbrace{\frac{\eta}{B}\sum_{i\in\mathcal{B}^{(k)}} \epsilon_i^{(k)}\nabla f_i(\theta_k)}_{\text{label noise}}. \tag{3}$$
Let $m_k = -\eta[\nabla L^{(k)}(\theta_k) - \nabla L(\theta_k)]$ denote the minibatch noise. Throughout the proof we will show that the minibatch noise is dominated by the label noise. We will also decompose the label noise into two terms. The first, $\epsilon_k^*$, will represent the label noise if the gradient were evaluated at $\theta^*$, whose distribution does not vary with $k$. The other term, $z_k$, represents the change in the noise due to evaluating the gradient at $\theta_k$ rather than $\theta^*$. More precisely, we have
$$\epsilon_k^* = \frac{\eta}{B}\sum_{i\in\mathcal{B}^{(k)}} \epsilon_i^{(k)}\nabla f_i(\theta^*) \qquad\text{and}\qquad z_k = \frac{\eta}{B}\sum_{i\in\mathcal{B}^{(k)}} \epsilon_i^{(k)}\big[\nabla f_i(\theta_k) - \nabla f_i(\theta^*)\big].$$
We define $G(\theta) = \frac{1}{n}\sum_i \nabla f_i(\theta)\nabla f_i(\theta)^\top$ to be the covariance of the model gradients. Note that $\epsilon_k^*$ has covariance $\eta\lambda G(\theta^*)$. To simplify notation in the Taylor expansions, we will use the following shorthand to refer to various quantities evaluated at $\theta^*$: $G = G(\theta^*)$, $\nabla^2 L = \nabla^2 L(\theta^*)$, $\nabla^3 L = \nabla^3 L(\theta^*)$, $\nabla R = \nabla R(\theta^*)$. First we need the following standard decomposition of the Hessian:
Proposition 1. For any $\theta \in \mathbb{R}^d$ we can decompose $\nabla^2 L(\theta) = G(\theta) + E(\theta)$, where $E(\theta) = \frac{1}{n}\sum_{i=1}^n (f_i(\theta) - y_i)\nabla^2 f_i(\theta)$ satisfies $\|E(\theta)\| \le \sqrt{2\rho_f L(\theta)}$, where $\rho_f$ is defined in Assumption 1.
The matrix $G$ in Proposition 1 is known as the Gauss–Newton term of the Hessian. We can now Taylor expand Algorithm 1 and Equation (2) to first order around $\theta^*$:
$$\Phi_{k+1}(\theta^*) \approx \Phi_k(\theta^*) - \eta\big[\nabla L + \nabla^2 L(\Phi_k(\theta^*) - \theta^*)\big], \qquad \theta_{k+1} \approx \theta_k - \eta\big[\nabla L + \nabla^2 L(\theta_k - \theta^*)\big] + \epsilon_k^*.$$
We define $v_k = \theta_k - \Phi_k(\theta^*)$ to be the deviation from the regularized trajectory. Then subtracting these two equations gives $v_{k+1} \approx (I - \eta\nabla^2 L)v_k + \epsilon_k^* \approx (I - \eta G)v_k + \epsilon_k^*$, where we used Proposition 1 to replace $\nabla^2 L$ with $G$. Temporarily ignoring the higher order terms, we define the random process $\xi$ by
$$\xi_{k+1} = (I - \eta G)\xi_k + \epsilon_k^* \qquad\text{and}\qquad \xi_0 = 0. \tag{4}$$
The process $\xi$ is referred to as an Ornstein–Uhlenbeck process and it encodes the movement of $\theta$ to first order around $\theta^*$. We defer the proofs of the following properties of $\xi$ to Appendix B:
Proposition 2. For any $k \ge 0$, with probability at least $1 - 2de^{-\iota}$, $\|\xi_k\| \le \mathcal{X}$. In addition, as $k \to \infty$, $\mathbb{E}[\xi_k\xi_k^\top] \to \lambda\Pi_G(2 - \eta G)^{-1}$, where $\Pi_G$ is the projection onto the span of $G$.
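The stationary covariance in Proposition 2 can be checked by directly simulating the recursion in Equation (4). In this sketch a random low-rank matrix stands in for the Gauss–Newton matrix $G$, and the agreement is only up to Monte Carlo error.

    import numpy as np

    rng = np.random.default_rng(1)
    d, eta, lam, steps = 8, 0.1, 1e-2, 50000

    U = rng.normal(size=(d, 3))
    G = U @ U.T                                   # low-rank PSD stand-in for G

    # epsilon*_k has covariance eta * lam * G (Section 3.1).
    noise_sqrt = np.linalg.cholesky(eta * lam * G + 1e-12 * np.eye(d))

    xi, cov = np.zeros(d), np.zeros((d, d))
    A = np.eye(d) - eta * G
    for _ in range(steps):
        xi = A @ xi + noise_sqrt @ rng.normal(size=d)   # Equation (4)
        cov += np.outer(xi, xi) / steps

    Pi = U @ np.linalg.pinv(U.T @ U) @ U.T        # projection onto span(G)
    limit = lam * Pi @ np.linalg.inv(2 * np.eye(d) - eta * G)
    print("max deviation from lam * Pi_G (2 - eta G)^{-1}:",
          np.abs(cov - limit).max())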
We can now analyze the effect of $\xi_k$ on the second order Taylor expansion. Let $r_k = \theta_k - \Phi_k(\theta^*) - \xi_k$ be the deviation of $\theta$ from the regularized trajectory after removing the Ornstein–Uhlenbeck process $\xi$. Lemma 1 is equivalent to $\Pr[\|r_\tau\| \ge \mathcal{D}] \le 10\tau de^{-\iota}$. We will prove by induction that $\|r_k\| \le \mathcal{D}$ for all $k \le t$ with probability at least $1 - 10tde^{-\iota}$ for all $t \le \tau$. The base case follows from $r_0 = 0$, so assume the result for some $t \ge 0$. The remainder of this section will be conditioned on the event $\|r_k\| \le \mathcal{D}$ for all $k \le t$. $O(\cdot)$ notation will only be used to hide absolute constants that do not change with $t$ and will additionally not hide dependence on the absolute constant $c$. The following proposition fills in the missing second order terms in the Taylor expansion around $\theta^*$ of $r_k$:
Proposition 3. With probability at least $1 - 2de^{-\iota}$,
$$r_{k+1} = (I - \eta G)r_k - \eta\Big[\frac{1}{2}\nabla^3 L(\xi_k, \xi_k) - \lambda\nabla R\Big] + m_k + z_k + \tilde O\big(c^{5/2}\eta\lambda^{1+\delta}\big).$$
The intuition for the implicit regularizer $R(\theta)$ is that by Propositions 1 and 2, $\mathbb{E}[\xi_k\xi_k^\top] \to \Pi_G\lambda(2 - \eta G)^{-1} \approx \lambda(2 - \eta\nabla^2 L)^{-1}$. Therefore, when averaged over long timescales,
$$\frac{1}{2}\mathbb{E}[\nabla^3 L(\xi_k, \xi_k)] \approx \frac{\lambda}{2}\nabla^3 L\big[(2 - \eta\nabla^2 L)^{-1}\big] = \lambda\nabla\Big[-\frac{1}{2\eta}\operatorname{tr}\log\Big(1 - \frac{\eta}{2}\nabla^2 L(\theta)\Big)\Big]\Big|_{\theta=\theta^*} = \lambda\nabla R.$$
The second equality follows from the more general equality that for any matrix function $A$ and any scalar function $h$ that acts independently on each eigenvalue, $\nabla(\operatorname{tr} h(A(\theta))) = (\nabla A(\theta))(h'(A(\theta)))$, which follows from the chain rule. The above equality is the special case when $A(\theta) = \nabla^2 L(\theta)$ and $h(x) = -\frac{1}{\eta}\log\big(1 - \frac{\eta}{2}x\big)$, which satisfies $h'(x) = \frac{1}{2 - \eta x}$.
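In coordinates, the identity says $\frac{d}{dt}\operatorname{tr} h(A(t)) = \operatorname{tr}\big(h'(A(t))\frac{dA}{dt}\big)$ for symmetric $A$, which can be sanity-checked against finite differences. The matrix-valued function $A(t)$ below is an arbitrary smooth stand-in, not the Hessian of any actual loss.

    import numpy as np

    eta = 0.1
    h = lambda x: -np.log(1 - eta * x / 2) / eta     # eigenvalue-wise h from the text
    hp = lambda x: 1 / (2 - eta * x)                 # its derivative h'

    def A(t):
        # Smooth symmetric matrix-valued stand-in A(theta), with scalar theta = t.
        return np.array([[1.0 + t, 0.3 * t],
                         [0.3 * t, 0.5 + t ** 2]])

    def tr_h_A(t):
        return np.sum(h(np.linalg.eigvalsh(A(t))))

    t0, step = 0.2, 1e-6
    fd = (tr_h_A(t0 + step) - tr_h_A(t0 - step)) / (2 * step)   # finite difference

    w, V = np.linalg.eigh(A(t0))
    hp_A = V @ np.diag(hp(w)) @ V.T                  # h'(A) via the eigendecomposition
    dA = (A(t0 + step) - A(t0 - step)) / (2 * step)
    analytic = np.trace(hp_A @ dA)                   # tr(h'(A) dA/dt)
    print(fd, analytic)                              # agree to ~1e-6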
The remaining details involve concentrating the mean zero error terms $m_k, z_k$ and showing that $\mathbb{E}[\xi_k\xi_k^\top]$ does concentrate in the directions with large eigenvalues and that the directions with small eigenvalues, in which the covariance does not concentrate, do not contribute much to the error. This yields the following bound:
Proposition 4. With probability at least $1 - 10de^{-\iota}$, $\|r_{t+1}\| = \tilde O\big(\frac{\lambda^{1/2+\delta/2}}{\sqrt c}\big)$.
The proof of Proposition 4 can be found in Appendix B. Finally, because $\mathcal{D} = \tilde O(c^{5/2}\lambda^{1/2+\delta/2})$, $\|r_{t+1}\| \le \mathcal{D}$ for sufficiently large $c$. This completes the induction and the proof of Lemma 1.
Comparison with Blanc et al. [3] Like Blanc et al. [3], Lemma 1 shows that $\theta$ locally follows the trajectory of gradient descent on an implicit regularizer $R(\theta)$. However, there are a few crucial differences:
• Because we do not assume we start near a global minimizer where $L = 0$, we couple to a regularized loss $\tilde L = L + \lambda R$ rather than just the regularizer $R(\theta)$. In this setting there is an additional correction term to the Hessian (Proposition 1) that requires carefully controlling the value of the loss across reference points to prove convergence to a stationary point.
• The analysis in Blanc et al. [3] requires $\eta, \tau$ to be chosen in terms of the condition number of $\nabla^2 L$, which can quickly grow during training as $\nabla^2 L$ is changing. This makes it impossible to directly repeat the argument. We avoid this by precisely analyzing the error incurred by small eigenvalues, allowing us to prove convergence to an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde L$ for fixed $\eta, \lambda$ even if the smallest nonzero eigenvalue of $\nabla^2 L$ converges to 0 during training.
• Unlike in Blanc et al. [3], we do not require the learning rate $\eta$ to be small. Instead, we only require that $\lambda$ scales with $\epsilon$, which can be accomplished either by decreasing the learning rate $\eta$ or increasing the batch size $B$. This allows for stronger implicit regularization in the setting when $\eta$ is large (see Section 6.1). In particular, our regularizer $R(\theta)$ changes with $\eta$ and is only equal to the regularizer in Blanc et al. [3] in the limit $\eta \to 0$.
3.2 Global Convergence
In order to prove convergence to an $(\epsilon, \gamma)$-stationary point of $\frac{1}{\lambda}\tilde L$, we will define a sequence of reference points $\theta_m^*$ and coupling times $\{\tau_m\}$ and repeatedly use a version of Lemma 1 to describe the long term behavior of $\theta$. For notational simplicity, given a sequence of coupling times $\{\tau_m\}$, define $T_m = \sum_{k<m}\tau_k$ to be the total number of steps until we have reached the reference point $\theta_m^*$. To be able to repeat the local analysis in Lemma 1 with multiple reference points, we need a more general coupling lemma that allows the random process $\xi$ defined in each coupling to continue where the random process in the previous coupling ended. To accomplish this, we define $\xi$ outside the scope of the local coupling lemma:
Definition 3. Given a sequence of reference points $\{\theta_m^*\}$ and a sequence of coupling times $\{\tau_m\}$, we define the random process $\xi$ by $\xi_0 = 0$ and, for $k \in [T_m, T_{m+1})$,
$$\epsilon_k^* = \frac{\eta}{B}\sum_{i\in\mathcal{B}^{(k)}} \epsilon_i^{(k)}\nabla f_i(\theta_m^*) \qquad\text{and}\qquad \xi_{k+1} = (I - \eta G(\theta_m^*))\xi_k + \epsilon_k^*.$$
Then we can prove the following more general coupling lemma:
Lemma 2. Let $\mathcal{X}, \mathcal{L}, \mathcal{D}, \mathcal{M}, \mathcal{T}$ be defined as in Lemma 1. Assume $f$ satisfies Assumption 1 and $\eta$ satisfies Assumption 2. Let $\Delta_m = \theta_{T_m} - \xi_{T_m} - \theta_m^*$ and assume that $\|\Delta_m\| \le \mathcal{D}$ and $L(\theta_m^*) \le \mathcal{L}$ for some $0 < \delta \le 1/2$. Then for any $\tau_m \le \mathcal{T}$ satisfying $\max_{k\in[T_m, T_{m+1})}\|\Phi_{k-T_m}(\theta_m^* + \Delta_m) - \theta_m^*\| \le 8\mathcal{M}$, with probability at least $1 - 10d\tau_m e^{-\iota}$ we have simultaneously for all $k \in (T_m, T_{m+1}]$,
$$\|\theta_k - \xi_k - \Phi_{k-T_m}(\theta_m^* + \Delta_m)\| \le \mathcal{D}, \qquad \mathbb{E}[\xi_k] = 0, \qquad \|\xi_k\| \le \mathcal{X}.$$
Unlike in Lemma 1, we couple to the regularized trajectory starting at $\theta_m^* + \Delta_m$ rather than at $\theta_m^*$ to avoid accumulating errors (see Figure 2). The proof is otherwise identical to that of Lemma 1. The proof of Theorem 1 easily follows from the following lemma, which states that we decrease the regularized loss $\tilde L$ by at least $\mathcal{F}$ after every coupling:
Lemma 3. Let $\mathcal{F} = \frac{\mathcal{D}^2}{\eta\nu\mathcal{T}}$. Let $\Delta_m = \theta_{T_m} - \xi_{T_m} - \theta_m^*$ and assume $\|\Delta_m\| \le \mathcal{D}$ and $L(\theta_m^*) \le \mathcal{L}$. Then if $\theta_{T_m}$ is not an $(\epsilon, \gamma)$-stationary point, there exists some $\tau_m < \mathcal{T}$ such that if we define $\theta_{m+1}^* = \Phi_{\tau_m}(\theta_m^* + \Delta_m)$ and $\Delta_{m+1} = \theta_{T_{m+1}} - \xi_{T_{m+1}} - \theta_{m+1}^*$, then with probability $1 - 10d\tau_m e^{-\iota}$,
$$\tilde L(\theta_{m+1}^*) \le \tilde L(\theta_m^*) - \mathcal{F}, \qquad \|\Delta_{m+1}\| \le \mathcal{D}, \qquad L(\theta_{m+1}^*) \le \mathcal{L}.$$
We defer the proofs of Lemma 2 and Lemma 3 to Appendix B. Theorem 1 now follows directly from repeated applications of Lemma 3:
Proof of Theorem 1. By assumption there exists some $\theta_0^*$ such that $L(\theta_0^*) \le \mathcal{L}$ and $\|\theta_0 - \theta_0^*\| \le \mathcal{D}$. Then, so long as $\theta_{T_m}$ is not an $(\epsilon, \gamma)$-stationary point, we can inductively apply Lemma 3 to get the existence of coupling times $\{\tau_m\}$ and reference points $\{\theta_m^*\}$ such that for any $m \ge 0$, with probability $1 - 10dT_m e^{-\iota}$ we have $\tilde L(\theta_m^*) \le \tilde L(\theta_0^*) - m\mathcal{F}$. As $\tilde L(\theta_0^*) - \tilde L(\theta_m^*) = O(\lambda)$, this can happen for at most $m = O\big(\frac{\lambda}{\mathcal{F}}\big)$ reference points, so at most $T = O\big(\frac{\lambda\mathcal{T}}{\mathcal{F}}\big) = \tilde O(\eta^{-1}\lambda^{-1-\delta})$ iterations of Algorithm 1. By the choice of $\iota$, this happens with probability $1 - 10dTe^{-\iota} \ge 1 - \zeta$.
4 Experiments
In order to test the ability of SGD with label noise to escape poor global minimizers and converge to better minimizers, we initialize Algorithm 1 at global minimizers of the training loss which achieve 100% training accuracy yet generalize poorly to the test set. Minibatch SGD would remain fixed at these initializations because both the gradient and the noise in minibatch SGD vanish at any global minimizer of the training loss. We show that SGD with label noise escapes these poor initializations and converges to flatter minimizers that generalize well, which supports Theorem 1. We run experiments with two initializations:
Full Batch Initialization: We run full batch gradient descent with random initialization until convergence to a global minimizer. We call this minimizer the full batch initialization. The final test accuracy of the full batch initialization was 76%.
Adversarial Initialization: Following Liu et al. [21], we generate an adversarial initialization with final test accuracy 48% that achieves zero training loss by first teaching the network to memorize random labels and then training it on the true labels. See Appendix D for full details.
Experiments were run with ResNet18 on CIFAR10 [17] without data augmentation or weight decay. The experiments were conducted with randomized label flipping with probability 0.2 (see Appendix E for the extension of Theorem 1 to classification with label flipping), cross entropy loss, and batch size 256. Because of the difficulty in computing the regularizer $R(\theta)$, we approximate it by its lower bound $\operatorname{tr}\nabla^2 L(\theta)$. Figure 3 shows the test accuracy and $\operatorname{tr}\nabla^2 L$ throughout training. SGD with label noise escapes both zero training loss initializations and converges to flatter minimizers that generalize much better, reaching the SGD baseline from the full batch initialization and getting within 1% of the baseline from the adversarial initialization.
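For a network the size of ResNet18, $\operatorname{tr}\nabla^2 L$ is itself typically estimated stochastically rather than computed exactly. A minimal Hutchinson-style estimator in PyTorch is sketched below on a toy linear model; the model, data, and sample count are illustrative assumptions, and the paper does not specify how the trace was computed.

    import torch

    torch.manual_seed(0)
    X = torch.randn(64, 10)
    y = torch.randn(64)
    theta = torch.randn(10, requires_grad=True)       # toy linear model (placeholder)

    def loss_fn(theta):
        return 0.5 * ((X @ theta - y) ** 2).mean()

    def hutchinson_trace(loss_fn, theta, n_samples=100):
        # tr(H) ~ E_v[v^T H v] for Rademacher v, via Hessian-vector products.
        (g,) = torch.autograd.grad(loss_fn(theta), theta, create_graph=True)
        est = 0.0
        for _ in range(n_samples):
            v = torch.randint_like(theta, 0, 2) * 2 - 1          # +/- 1 entries
            (hv,) = torch.autograd.grad(g @ v, theta, retain_graph=True)
            est = est + (v * hv).sum() / n_samples
        return est

    print("estimate:", hutchinson_trace(loss_fn, theta).item())
    print("exact   :", ((X * X).sum() / X.shape[0]).item())      # tr(X^T X / n)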
The test accuracy in both cases is strongly correlated with $\operatorname{tr}\nabla^2 L$. The strength of the regularization is also strongly correlated with $\eta$, which supports Theorem 1.
5 Extensions
5.1 SGD with momentum
We replace the update in Algorithm 1 with heavy ball momentum with parameter $\beta$:
$$\theta_{k+1} = \theta_k - \eta\nabla\hat L^{(k)}(\theta_k) + \beta(\theta_k - \theta_{k-1}). \tag{5}$$
We define
$$R(\theta) = -\frac{1+\beta}{2\eta}\operatorname{tr}\log\Big(1 - \frac{\eta}{2(1+\beta)}\nabla^2 L(\theta)\Big), \qquad \lambda = \frac{\eta\sigma^2}{B(1-\beta)}, \tag{6}$$
and as before $\tilde L(\theta) = L(\theta) + \lambda R(\theta)$. Let $\Phi_0(\theta) = \theta$ and
$$\Phi_{k+1}(\theta) = \Phi_k(\theta) - \eta\nabla\tilde L(\Phi_k(\theta)) + \beta(\Phi_k(\theta) - \Phi_{k-1}(\theta)) \tag{7}$$
represent gradient descent with momentum on $\tilde L$. Then we have the following local coupling lemma:
Lemma 4. Let
$$\mathcal{X} = \frac{\sqrt{2\lambda n^2\iota}}{\nu}, \quad \mathcal{L} = c\lambda^{1+\delta}, \quad \mathcal{D} = c\sqrt{\mathcal{L}\iota}, \quad \mathcal{T} = \frac{1}{c^2\eta\mathcal{X}\iota}, \tag{8}$$
where $c$ is a sufficiently large constant. Assume $f$ satisfies Assumption 1 and $\eta \le \frac{(2-\nu)(1+\beta)}{\ell}$. Let $\theta$ follow Algorithm 1 with momentum $\beta$ starting at $\theta^*$ and $L(\theta^*) \le \mathcal{L}$ for some $0 < \delta \le 1/2$. Then there exists a random process $\{\xi_k\}$ such that for any $\tau \le \mathcal{T}$ satisfying $\max_{k\le\tau}\|\Phi_k(\theta^*) - \theta^*\| \le 8\mathcal{D}$, with probability at least $1 - 10d\tau e^{-\iota}$ we have simultaneously for all $k \le \tau$,
$$\|\theta_k - \xi_k - \Phi_k(\theta^*)\| \le \mathcal{D}, \qquad \mathbb{E}[\xi_k] = 0, \qquad \|\xi_k\| \le \mathcal{X}. \tag{9}$$
As in Lemma 1, the error is 8 times smaller than the maximum movement of the regularized trajectory. Note that momentum increases the regularization parameter $\lambda$ by $\frac{1}{1-\beta}$. For the commonly used momentum parameter $\beta = 0.9$, this represents a 10× increase in regularization, which is likely the cause of the improved performance in Figure 4 ($\beta = 0.9$) over Figure 3 ($\beta = 0$).
5.2 Arbitrary Noise Covariances
The analysis in Section 3.1 is not specific to label noise SGD and can be carried out for arbitrary noise schemes. Let $\theta$ follow $\theta_{k+1} = \theta_k - \eta\nabla L(\theta_k) + \epsilon_k$ starting at $\theta_0$, where $\epsilon_k \sim N(0, \eta\lambda\Sigma(\theta_k))$ and $\Sigma^{1/2}$ is Lipschitz. Given a matrix $S$ we define the regularizer $R_S(\theta) = \langle S, \nabla^2 L(\theta)\rangle$. The matrix $S$ controls the weight of each eigenvalue. As before we can define $\tilde L_S(\theta) = L(\theta) + \lambda R_S(\theta)$ and $\Phi^S_{k+1}(\theta) = \Phi^S_k(\theta) - \eta\nabla\tilde L_S(\Phi^S_k(\theta))$ to be the regularized loss and the regularized trajectory, respectively. Then we have the following version of Lemma 1:
Proposition 5. Let $\theta$ be initialized at a minimizer $\theta^*$ of $L$. Assume $\nabla^2 L$ is Lipschitz, let $H = \nabla^2 L(\theta^*)$, and assume that $\Sigma(\theta^*) \preceq CH$ for some absolute constant $C$. Let $\mathcal{X} = \frac{\sqrt{Cd\lambda\iota}}{\nu}$, $\mathcal{D} = c\lambda^{3/4}\iota$, and $\mathcal{T} = \frac{1}{c^2\eta\mathcal{X}\iota}$ for a sufficiently large constant $c$. Then there exists a mean zero random process $\xi$ such that for any $\tau \le \mathcal{T}$ satisfying $\max_{k<\tau}\|\Phi^S_k(\theta^*) - \theta^*\| \le 8\mathcal{D}$, with probability $1 - 10d\tau e^{-\iota}$ we have simultaneously for all $k \le \tau$:
$$\|\theta_k - \xi_k - \Phi^S_k(\theta_0)\| \le \mathcal{D} \qquad\text{and}\qquad \|\xi_k\| \le \mathcal{X},$$
where $S$ is the unique fixed point of $S \leftarrow (I - \eta H)S(I - \eta H) + \eta\lambda\Sigma(\theta^*)$ restricted to $\operatorname{span}(H)$.
As in Lemma 1, the error is 8 times smaller than the maximum movement of the regularized trajectory. Although Proposition 5 couples to gradient descent on $R_S$, $S$ is defined in terms of the Hessian and the noise covariance at $\theta^*$ and therefore depends on the choice of reference point. Because $R_S$ is changing, we cannot repeat Proposition 5 as in Section 3.2 to prove convergence to a stationary point because there is no fixed potential. Although it is sometimes possible to relate $R_S$ to a fixed potential $R$, we show in Appendix F.2 that this is not generally possible by providing an example where minibatch SGD perpetually cycles. Exploring the properties of these continuously changing potentials and their connections to generalization is an interesting avenue for future work.
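When $H$ is positive definite and $\eta\|H\|_2 < 2$, the map defining $S$ is a contraction, so the fixed point can be found by simple iteration. In the sketch below the Hessian and noise covariance are random stand-ins chosen only to satisfy those conditions.

    import numpy as np

    rng = np.random.default_rng(2)
    d, eta, lam = 6, 0.1, 1e-3

    M = rng.normal(size=(d, d))
    H = M @ M.T / d + 0.1 * np.eye(d)      # positive definite Hessian stand-in
    Sigma = np.eye(d)                      # noise covariance stand-in

    A = np.eye(d) - eta * H
    S = np.zeros((d, d))
    for _ in range(20000):                 # iterate S <- A S A + eta * lam * Sigma
        S = A @ S @ A + eta * lam * Sigma

    print("fixed-point residual:",
          np.abs(A @ S @ A + eta * lam * Sigma - S).max())
    print("induced regularizer R_S = <S, H> =", np.trace(S @ H))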
6 Discussion
6.1 Sharpness and the Effect of Large Learning Rates
Various factors can control the strength of the implicit regularization in Theorem 1. Most important is the implicit regularization parameter $\lambda = \frac{\eta\sigma^2}{|B|}$. This supports the hypothesis that large learning rates and small batch sizes are necessary for implicit regularization [9, 26], and agrees with the standard linear scaling rule, which proposes that to maintain constant regularization strength, the learning rate $\eta$ should be scaled proportionally to the batch size $|B|$. However, our analysis also uncovers an additional regularization effect of large learning rates. Unlike the regularizer in Blanc et al. [3], the implicit regularizer $R(\theta)$ defined in Equation (1) depends on $\eta$. It is not possible to directly analyze the behavior of $R(\theta)$ as $\eta \to 2/\lambda_1$, where $\lambda_1$ is the largest eigenvalue of $\nabla^2 L$, as in this regime $R(\theta) \to \infty$ (see Figure 1). If we let $\eta = \frac{2-\nu}{\lambda_1}$, then we can better understand the behavior of $R(\theta)$ by normalizing it by $\log 2/\nu$. This gives²
$$\frac{R(\theta)}{\log 2/\nu} = \sum_i \frac{R(\lambda_i)}{\log 2/\nu} = \|\nabla^2 L(\theta)\|_2 + O\Big(\frac{1}{\log 2/\nu}\Big) \xrightarrow{\nu\to 0} \|\nabla^2 L(\theta)\|_2,$$
so after normalization, $R(\theta)$ becomes a better and better approximation of the spectral norm $\|\nabla^2 L(\theta)\|_2$ as $\eta \to 2/\lambda_1$. $R(\theta)$ can therefore be seen as interpolating between $\operatorname{tr}\nabla^2 L(\theta)$, when $\eta \approx 0$, and $\|\nabla^2 L(\theta)\|_2$ when $\eta \approx 2/\lambda_1$. This also suggests that SGD with large learning rates may be more resilient to the edge of stability phenomenon observed in Cohen et al. [4], as the implicit regularization works harder to control eigenvalues approaching $2/\eta$. The sharpness-aware minimization algorithm (SAM) of [7] is also closely related to $R(\theta)$. SAM proposes to minimize $\max_{\|\delta\|_2\le\epsilon} L(\theta + \delta)$. At a global minimizer of the training loss,
$$\max_{\|\delta\|_2\le\epsilon} L(\theta^* + \delta) = \max_{\|\delta\|_2\le\epsilon} \frac{1}{2}\delta^\top\nabla^2 L(\theta^*)\delta + O(\epsilon^3) \approx \frac{\epsilon^2}{2}\|\nabla^2 L(\theta^*)\|_2.$$
The SAM algorithm is therefore explicitly regularizing the spectral norm of $\nabla^2 L(\theta)$, which is closely connected to the large learning rate regularization effect of $R(\theta)$ when $\eta \approx 2/\lambda_1$.
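This interpolation is easy to observe numerically: for $\eta \approx 0$ every eigenvalue contributes $\lambda_i/4$ to $R(\theta)$, while as $\nu \to 0$ the $\lambda_1$ term dominates the sum. A small sketch follows, where the Hessian is a random PSD stand-in.

    import numpy as np

    def R_terms(lams, eta):
        # Per-eigenvalue contributions R(lambda_i) = -(1/(2 eta)) log(1 - eta lambda_i / 2).
        return -np.log(1 - eta * lams / 2) / (2 * eta)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 20))
    lams = np.linalg.eigvalsh(A @ A.T / 20)      # eigenvalues of a PSD Hessian stand-in
    lam1 = lams.max()

    print("tr/4 =", lams.sum() / 4, "  lambda_1 =", lam1)
    for nu in [1.99, 1.0, 0.1, 1e-3, 1e-6]:
        eta = (2 - nu) / lam1
        terms = R_terms(lams, eta)
        print(f"nu={nu:g}: R={terms.sum():.2f}, "
              f"share of lambda_1 term={terms.max() / terms.sum():.3f}")
    # At nu ~ 2 (eta ~ 0), R ~ tr/4 and all eigenvalues are weighted equally;
    # as nu -> 0, the largest eigenvalue increasingly dominates, as in Section 6.1.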
6.2 Generalization Bounds
The implicit regularizer $R(\theta)$ is intimately connected to data-dependent generalization bounds, which measure the Lipschitzness of the network via the network Jacobian. Specifically, Wei and Ma [30] propose the all-layer margin, which bounds the generalization error $\lesssim \sum_{l=1}^{L}\frac{C_l}{\sqrt n}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{1}{m_F(x_i,y_i)^2}}$, where $C_l$ depends only on the norm of the parameters and $m_F$ is the all-layer margin. The norm of the parameters is generally controlled by weight decay regularization, so we focus our discussion on the all-layer margin. Ignoring higher-order secondary terms, Wei and Ma [30, Heuristic derivation of Lemma 3.1] showed for a feed-forward network $f(\theta; x) = \theta_L\sigma(\theta_{L-1}\cdots\sigma(\theta_1 x))$, the all-layer margin satisfies³
$$\frac{1}{m_F(x,y)} \lesssim \frac{\big\|\{\tfrac{\partial f}{\partial\theta_l}\}_{l\in[L]}\big\|_2}{\text{output margin of }(x,y)} \implies \text{generalization error} \lesssim \sum_{l=1}^{L}\frac{C_l}{\sqrt n}\cdot\frac{\sqrt{R(\theta)}}{\text{output margin}},$$
as $R(\theta)$ is an upper bound on the squared norm of the Jacobian at any global minimizer $\theta$. We emphasize that this bound is informal, as we discarded the higher-order terms in controlling the all-layer margin, but it accurately reflects that the regularizer $R(\theta)$ lower bounds the all-layer margin $m_F$ up to higher-order terms. Therefore SGD with label noise implicitly regularizes the all-layer margin.
Acknowledgments and Disclosure of Funding
AD acknowledges support from an NSF Graduate Research Fellowship. TM acknowledges support of a Google Faculty Award and NSF IIS 2045685. JDL acknowledges support of the ARO under MURI Award W911NF-11-1-0303, the Sloan Research Fellowship, NSF CCF 2002272, and an ONR Young Investigator Award.
The experiments in this paper were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University. We would also like to thank Honglin Yuan and Jeff Z. HaoChen for useful discussions throughout various stages of the project.
²Here we assume $\lambda_1 > \lambda_2$. If instead $\lambda_1 = \ldots = \lambda_k > \lambda_{k+1}$, this limit will be $k\|\nabla^2 L(\theta)\|_2$.
³The output margin is defined as $\min_i f_i(\theta)y_i$. The derivation uses Equation (3.3) and the first-order approximation provided in Wei and Ma [30], together with the chain rule $\frac{\partial f}{\partial\theta_l} = \frac{\partial f}{\partial h_l}\frac{\partial h_l}{\partial\theta_l} = \frac{\partial f}{\partial h_l}h_{l-1}^\top$.
1. What is the focus of the reviewed paper, and how does it build upon prior works in the field? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its theoretical foundations and assumptions? 3. How does the reviewer assess the novelty and limitations of the paper's contributions? 4. Are there any concerns or questions regarding the paper's methodology, results, or conclusions? 5. How does the reviewer evaluate the overall quality and impact of the paper, and what suggestions do they provide for improvement?
Summary Of The Paper Review
Summary Of The Paper
Post-rebuttal
I thank the authors for their clarifications. We had a long discussion about the technical details, and some aspects became more clear to me, but it did not lead to a resolution of the concern that the theory is somewhat weak. I remain concerned about the restrictive nature of the assumptions regarding the quality of the local minimizer. The authors argue that existing theory guarantees us convergence to such minimizers, but there seems to be a mismatch between the requirements of this work and the proved statements in the cited papers.
######
This paper studies the question of minimizing a smooth loss function with noise injection inside labels. The authors show that SGD with label noise converges to a stationary point of a certain regularized loss function, where the regularization depends on the amount of noise, batch size, and the stepsize. The paper improves the results of prior work: namely, it does not need the small stepsizes of the work by Blanc et al., and the model is much more general compared to the paper of HaoChen et al. I find the topic of the work to be quite interesting and worthy of investigation, although it is hard for me to call this paper very insightful. The results have a lot of novelty but they are also somewhat incremental and many limitations are still present in this paper. I particularly criticize the following aspects of the results: restrictive assumptions, the requirement for local initialization (close to a global solution), and the decreasing effect of regularization when the target stationarity is small.
Review
My main concern about this work is that it does move towards a good answer but the results are still under restrictive assumptions. First of all, in contrast to standard guarantees on convergence of SGD, the theory additionally requires Lipschitzness of stochastic functions and Hessians in addition to the standard assumption on Lipschitz gradients, which are all not applicable to non-smooth deep networks. I understand that at least some assumptions are required and I could see this as a necessary requirement to keep the theory somewhat simple, but the function also needs to be KL for global minima and Theorem 1 assumes that initialization is close to a global minimum. This essentially eliminates the possibility of encountering a local minimum by assumptions rather than by analysis, and I find it unrealistic to apply the obtained results to practical scenarios such as training neural networks. I understand the authors' argument that it is fine to assume local initialization since overparameterization helps to achieve almost zero loss, but the resulting local theory is not that interesting when combined with the local KL assumption. It is also disappointing to see that one needs to have a smaller lambda for a smaller epsilon, which, as far as I can see, also requires decreasing the stepsize. Ideally I'd hope to see a result that allows one to leverage overparameterization and achieve guarantees for non-decreasing regularization. Otherwise, it is not even clear why we should want to eliminate the assumption on small stepsizes of Blanc et al. I still give the paper a weak accept because I think that the results may lead us towards understanding generalization and the factors that contribute to it. However, I cannot recommend a higher score as so many limitations are currently present.
Minor
I have difficulty with understanding the right column in Figure 3.
The figure caption reads "the right column displays their correlation" while the y-label is "Test accuracy". Could the authors clarify what exactly is shown there?
NIPS
Title PlanGAN: Model-based Planning With Sparse Rewards and Multiple Goals
Abstract Learning with sparse rewards remains a significant challenge in reinforcement learning (RL), especially when the aim is to train a policy capable of achieving multiple different goals. To date, the most successful approaches for dealing with multi-goal, sparse reward environments have been model-free RL algorithms. In this work we propose PlanGAN, a model-based algorithm specifically designed for solving multi-goal tasks in environments with sparse rewards. Our method builds on the fact that any trajectory of experience collected by an agent contains useful information about how to achieve the goals observed during that trajectory. We use this to train an ensemble of conditional generative models (GANs) to generate plausible trajectories that lead the agent from its current state towards a specified goal. We then combine these imagined trajectories into a novel planning algorithm in order to achieve the desired goal as efficiently as possible. The performance of PlanGAN has been tested on a number of robotic navigation/manipulation tasks in comparison with a range of model-free reinforcement learning baselines, including Hindsight Experience Replay. Our studies indicate that PlanGAN can achieve comparable performance whilst being around 4-8 times more sample efficient.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
1 Introduction
One of the primary appeals of reinforcement learning (RL) is that it provides a framework for the autonomous learning of complex behaviours without the need for human supervision. In recent years RL has had significant success in areas such as playing video games [1, 2], board games [3, 4] and robotic control tasks [5, 6, 7]. Despite this, progress in applying RL to more practically useful environments has been somewhat limited. One of the main problems is that RL algorithms generally require a well-shaped, dense reward function in order to make learning progress. Often a reward function that fully captures the desired behaviour of an agent is not readily available and has to be engineered manually for each task, requiring a lot of time and domain-specific knowledge. This defeats the point of designing an agent that is capable of learning autonomously. A more general approach is to learn with sparse rewards, where an agent only receives a reward once a task has been completed. This is much easier to specify and is applicable to a wide range of problems, however training becomes significantly more challenging since the agent only receives infrequent feedback at the end of every rollout. This becomes especially challenging in the case of goal-conditioned RL [8, 9], where the aim is to train a policy that can achieve a variety of different goals within the environment. Much of RL's success has come with model-free approaches, where the policy is learned directly from the reward signal obtained by interacting with the environment. However recently there has been a lot of interest in applying model-based approaches to the same kind of problems [7, 10, 11]. One of the main drawbacks of model-free RL algorithms is that they tend to be very sample inefficient, requiring a huge number of interactions with the environment in order to make learning progress. On the other hand, model-based methods make use of a learned model to plan their actions without directly interacting with the environment.
Learning a model allows these methods to make use of a lot more information that is present in the observed transitions than just the scalar reward signal, and so generally this leads to a significant improvement in sample efficiency. This efficiency can sometimes come at the cost of worse asymptotic performance due to errors in the model introducing a bias towards non-optimal actions, although current state-of-the-art approaches [7, 10] are able to achieve comparable performance to some of the best model-free approaches [12, 13]. However, as with most RL algorithms, model-based approaches generally need a dense reward signal to work well. We are not aware of a model-based approach specifically designed to work in the sparse-reward, multi-goal setting. To date, the most successful general-purpose RL algorithm for dealing with sparse rewards and multiple goals is Hindsight Experience Replay (HER) [8], a model-free algorithm. HER works by taking advantage of the fact that, when learning a goal-conditioned policy with an off-policy RL algorithm, observed transitions from a trajectory can be re-used as examples for attempting to achieve any goal. In particular, by re-labelling transitions with goals achieved at a later point during the same trajectory HER trains the goal-conditioned policy on examples that actually led to success — hence obtaining a much stronger learning signal. In this paper we present PlanGAN, a model-based algorithm that can naturally be applied to sparse-reward environments with multiple goals. The core of our method builds upon the same principle that underlies HER — namely that any goal observed during a given trajectory can be used as an example of how to achieve that goal from states that occurred earlier on in that same trajectory. However, unlike HER, we do not directly learn a goal-conditioned policy/value function but rather train an ensemble of Generative Adversarial Networks (GANs) [14] which learn to generate plausible future trajectories conditioned on achieving a particular goal. We combine these imagined trajectories into a novel planning algorithm that can reach those goals in an efficient manner. We test PlanGAN on a number of robotic manipulation and navigation tasks and show that it can achieve similar levels of performance to leading model-free methods (including Hindsight Experience Replay) but with substantially improved sample efficiency. The primary contribution of this paper is to introduce the first model-based method which is explicitly designed for multi-goal, sparse reward environments, leading to a significant improvement in sample efficiency.
2 Related Work
A number of model-based approaches have utilised explicit planning algorithms, but have mostly been applied to single tasks with relatively dense rewards. Nagabandi et al. [15] use iterative random shooting within a deterministic neural network dynamics model in order to solve a number of continuous control tasks. Hafner et al. [16] learn a latent representation from images and then plan within this latent space using CEM. Nagabandi et al. [17] use a similar planning algorithm (MPPI) [18] within an ensemble of learned models in order to perform dexterous manipulation tasks. Other methods have had success with a hybrid approach, combining elements of model-based and model-free RL, and as in this work often use ensembles of models in order to improve robustness.
STEVE [19] uses rollouts produced by an ensemble of models and Q-functions in order to obtain a robust estimate for the Q-learning target. Model-Ensemble TRPO [20] uses an ensemble of models as a simulator for running a model-free RL algorithm (trust-region policy optimisation) whilst maintaining some level of uncertainty for when the model's predictions are valid. I2A [21] learns to interpret imagined trajectories generated by a model to augment the model-free training of a policy/value function. Temporal Difference Models (TDMs) [22] try to link model-based and model-free RL in the context of time-dependent, goal-conditioned value functions. Here, the model is itself the goal-conditioned value function, and is learned with model-free, off-policy RL. However, they require a meaningful distance metric between states to be defined and so do not work with fully sparse rewards. Nasiriany et al. [23] combine TDMs as an implicit model with a planning algorithm that allows them to plan over multiple abstract sub-goals. They apply this to solve long-horizon, goal-conditioned tasks directly from images. Azizzadenesheli et al. [24] use a Wasserstein GAN with spectral normalisation to learn a predictive model that they use with Monte-Carlo Tree Search to solve ATARI games. Although they do not find particularly strong results overall, they show that they are able to learn an extremely accurate model with stable training of the GAN even in a non-stationary environment. A significant difference with our work is that they train a GAN that takes an action and a state and predicts the next state, whereas we train the GANs to imagine full trajectories (also their focus is on image-based environments). GANs have also been used for curriculum learning in goal-conditioned RL [25], where a generator was trained to propose goals at an appropriate level of difficulty for the current agent to achieve. In terms of learning with sparse rewards, a number of approaches have had success by providing the agent with intrinsic rewards in order to aid with exploration [26, 27, 28]. However, in the multi-goal setting a majority of the most successful approaches have built upon Hindsight Experience Replay (HER) [8]. Zhao & Tresp [29] improve HER's performance on certain robotics environments by more frequently resampling trajectories where the objects have higher energy. Fang et al. [30] propose an adaptive mechanism to select failed experiences based on a combination of the diversity of the achieved goals and their proximity to the desired goals. Liu et al. [31] propose a complementary re-labelling scheme in the context of a competitive exploration game between two agents in order to supplement HER. He et al. [32] introduce a method that combines HER with maximum entropy RL. Taking a different approach (but still closely related to HER), Ghosh et al. [33] introduce a method that learns goal-conditioned policies without explicitly using reinforcement learning. They use supervised behavioural cloning (a form of imitation learning) to train a policy to reach the goals that have been observed on the trajectories the agent itself has generated. Whilst simpler than HER, it does not use a model and does not claim to significantly improve upon HER's sample efficiency.
3 Preliminaries
3.1 Goal-Conditioned Reinforcement Learning
We consider the problem of an agent interacting within an environment in order to learn how to achieve any given goal $g$ from a set of possible goals $G$.
We assume that the environment is fully observable and can be described by: a set of states, $S$; a set of possible actions, $A$; a distribution of initial states, $p(s_0)$; and a transition function $P(s_{t+1}\mid s_t, a_t)$ ($s_t, s_{t+1} \in S$, $a_t \in A$). In the standard reinforcement setting we have a reward function, $R(s_t, a_t, s_{t+1})$. In the goal-conditioned setting the reward also depends on the goal that the agent is trying to achieve, i.e. $R(s_t, a_t, s_{t+1}, g)$. Assuming that goals are sampled from some distribution $p(G)$, the aim of goal-conditioned RL is to learn a policy, $\pi(s_t, g)$, that maximises the expected discounted sum of future rewards:
$$\mathbb{E}_{\substack{s_0\sim p(s_0),\; g\sim p(G)\\ a_t\sim\pi(s_t, g)\\ s_{t+1}\sim P(s_{t+1}\mid s_t, a_t)}}\Bigg[\sum_{t=0}^{\infty}\gamma^t R(s_t, a_t, s_{t+1}, g)\Bigg], \tag{1}$$
where $\gamma \in [0, 1]$ is a discount factor assigning larger weights to more immediate rewards. We consider the special case where the reward function is sparse and given by an indicator function that only depends on the next state and the goal:
$$R(s_t, a_t, s_{t+1}, g) = \mathbb{1}(s_{t+1}, g) = \begin{cases} 1, & \text{if } s_{t+1} \text{ achieves } g,\\ 0, & \text{otherwise,} \end{cases} \tag{2}$$
i.e. we have some criteria that tells us whether any given state $s$ achieves any given goal $g$, and only provide a reward when this is satisfied.
3.2 Hindsight Experience Replay (HER)
In complex environments it is extremely unlikely that the specified goal $g$ will ever be achieved by chance. As such, standard RL algorithms struggle in sparse-reward, multi-goal environments because they receive very little learning signal from which they can improve their policy. The key insight of HER is that trajectories that don't achieve the specified goal still contain useful information about how to achieve other goals — namely those that are observed later on during the same trajectory. By using an off-policy RL algorithm such as DQN [34] or DDPG [35] it is possible to re-label samples that were collected by the policy whilst attempting to achieve a goal $g$ with an alternative goal $g'$, and subsequently re-compute the reward. For example, if $(s_t, a_t, r_t, s_{t+1}, g)$ is sampled from a replay buffer of past experience, $g$ can be replaced with another goal $g'$ that occurs later in the trajectory, and then a reward for this new goal can be recomputed: $r_t' = R(s_t, a_t, s_{t+1}, g')$. This new transition can still be used in training an off-policy RL algorithm since the original goal only influences the agent's action, but not the dynamics of the environment. By re-labelling transitions this way HER can significantly speed up the learning of a goal-conditioned policy since it increases the frequency with which the transitions seen in training actually lead to the specified goals being achieved.
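The "future" relabelling strategy described above is only a few lines of code. The following sketch assumes a toy buffer format of (state, action, next state, achieved goal) tuples and a 1-D state space, none of which come from the paper.

    import random

    def sparse_reward(state, goal, tol=0.05):
        # Equation (2): reward 1 only if the state achieves the goal.
        return 1.0 if abs(state - goal) <= tol else 0.0

    def her_relabel(trajectory):
        # trajectory: list of (s, a, s_next, achieved_goal) tuples.
        relabelled = []
        for t, (s, a, s_next, _) in enumerate(trajectory):
            future = random.randint(t, len(trajectory) - 1)
            g_new = trajectory[future][3]              # goal achieved later on
            r = sparse_reward(s_next, g_new)           # recompute the sparse reward
            relabelled.append((s, a, r, s_next, g_new))
        return relabelled

    traj = [(0.0, +1, 0.3, 0.3), (0.3, +1, 0.7, 0.7), (0.7, +1, 1.0, 1.0)]
    print(her_relabel(traj))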
4 Methods
The key insight of our method is that the same principle underlying HER — i.e. that any observed trajectory contains useful information about how to achieve the goals observed during that trajectory — has the potential to be used more efficiently as part of a model-based algorithm. In particular, instead of re-labelling transitions and re-computing rewards, we propose to make more complete use of the information contained within the observed transitions by training a generative model that can generate plausible transitions leading from the current state towards a desired goal. That is, we use experience gathered by the agent to train a goal-conditioned model that can generate future trajectories (states and actions) that move the agent towards any goal that we specify. These imagined trajectories do not necessarily need to be optimal in the sense of moving directly towards the goal, since the second key component of our method involves feeding these proposed trajectories into a planning algorithm that decides which action to take in order to achieve the goal in as few steps as possible. Whilst in principle a number of generative models could be used for this purpose, in this work we choose to use GANs [14], since they can easily deal with high-dimensional inputs and do not explicitly impose any restrictions on the form of the distribution produced by the generator. Specifically, we choose to use WGANs (Wasserstein GANs) [36] with spectral normalisation [37], as recent work has shown that these can be trained in a stable manner even when the underlying training data is non-stationary [24].
4.1 Training the GAN(s)
The aim of the first major component of our method is to train a generative model that can take in the current state $s_t$ along with a desired goal $g$ and produce an imagined action $a_t$ and next state $s_{t+1}$ that moves the agent towards achieving $g$. We approach this by training an ensemble of $N$ conditional GANs, each consisting of a generator $G_{\phi_i}$ and a discriminator $D_{\theta_i}$, where $\{\theta_i\}_{i=1}^N$, $\{\phi_i\}_{i=1}^N$ are the parameters of the neural networks that represent these functions. The generators take in the current state $s_t$, a noise vector $z$ and the target goal $g$ in order to produce an imagined action $a_t$ and next state $s_{t+1}$. The discriminators take in $s_t, a_t, s_{t+1}$ and $g$ and aim to distinguish whether or not this is a transition from a real trajectory that eventually reaches goal $g$ or an example created by the generator. We also consider a variation where concurrently we train an ensemble of $N_m$ deterministic one-step predictive models of the environment. The aim of these predictive models is to take a state-action pair $(s_t, a_t)$ and predict the difference between the next state and the current state, $s_{t+1} - s_t$, as in [15]. We denote these models as $f_{\beta_j}$, where $\{\beta_j\}_{j=1}^{N_m}$ are the parameters of the neural networks representing these functions. These predictive models can be used to provide an L2 regularisation term in the generator loss that encourages the generated actions and next states to be consistent with the predictions of the one-step models — although this is not necessary to make the method work (we study the effect of using predictive models this way in Section 5). The whole setup is shown schematically in Figure 1. The loss for the $i$th generator is as follows:
$$\mathcal{L}^{(i)}_{\text{generator}} = \mathbb{E}_{\substack{z\sim p(z),\; s_t, g\sim\mathcal{R}\\ s_{t+1}, a_t\sim G_{\phi_i}(z, s_t, g)}}\Bigg[D_{\theta_i}(s_t, g, s_{t+1}, a_t) + \lambda\frac{1}{N_m}\sum_{j=1}^{N_m}\big((s_{t+1} - s_t) - f_{\beta_j}(s_t, a_t)\big)^2\Bigg], \tag{3}$$
where $\mathcal{R}$ is a replay buffer of real experienced trajectories, $z \sim p(z)$ is a noise vector where each component is sampled independently from the standard normal $N(0, 1)$, and $\lambda$ is a parameter that weights how strongly we penalise deviations in the generated action/next state from the average predictions made by the one-step models. The loss for the $i$th discriminator is:
$$\mathcal{L}^{(i)}_{\text{discriminator}} = \mathbb{E}_{s_t, a_t, s_{t+1}, g\sim\mathcal{R}}\big[D_{\theta_i}(s_t, g, s_{t+1}, a_t)\big] - \mathbb{E}_{\substack{z\sim p(z),\; s_t, g\sim\mathcal{R}\\ s_{t+1}, a_t\sim G_{\phi_i}(z, s_t, g)}}\big[D_{\theta_i}(s_t, g, s_{t+1}, a_t)\big]. \tag{4}$$
The replay buffer $\mathcal{R}$ is populated initially by random trajectories, however we find it helpful to filter (i.e. not store) trajectories where the final achieved goal is identical to the initial achieved goal, since these provide nothing useful for the GANs to learn from.
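As a sketch of how Equations (3) and (4) translate into code, the following PyTorch snippet computes both losses for a single batch, with the sign conventions exactly as written in the equations. All architectures, dimensions, and the one-step ensemble size are placeholder assumptions rather than the paper's actual configuration (which also uses spectral normalisation).

    import torch
    import torch.nn as nn

    s_dim, a_dim, g_dim, z_dim = 10, 4, 3, 8

    G = nn.Sequential(nn.Linear(s_dim + g_dim + z_dim, 64), nn.ReLU(),
                      nn.Linear(64, a_dim + s_dim))        # outputs (action, next state)
    D = nn.Sequential(nn.Linear(s_dim + g_dim + s_dim + a_dim, 64), nn.ReLU(),
                      nn.Linear(64, 1))                    # WGAN critic
    f_models = [nn.Sequential(nn.Linear(s_dim + a_dim, 64), nn.ReLU(),
                              nn.Linear(64, s_dim)) for _ in range(3)]

    def generator_loss(s, g, lam=1.0):
        z = torch.randn(s.shape[0], z_dim)
        out = G(torch.cat([s, g, z], dim=1))
        a, s_next = out[:, :a_dim], out[:, a_dim:]
        critic = D(torch.cat([s, g, s_next, a], dim=1)).mean()
        # L2 consistency with the one-step models' predictions of s_next - s.
        preds = torch.stack([f(torch.cat([s, a], dim=1)) for f in f_models])
        consistency = ((s_next - s).unsqueeze(0) - preds).pow(2).mean()
        return critic + lam * consistency                  # Equation (3)

    def discriminator_loss(s, g, s_next, a):
        real = D(torch.cat([s, g, s_next, a], dim=1)).mean()
        z = torch.randn(s.shape[0], z_dim)
        out = G(torch.cat([s, g, z], dim=1)).detach()
        fake = D(torch.cat([s, g, out[:, a_dim:], out[:, :a_dim]], dim=1)).mean()
        return real - fake                                 # Equation (4)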
After some initial training, further trajectories generated by the planner (described in the next section) are also added to $\mathcal{R}$ whilst training continues, allowing for continuous, open-ended improvement. Note that this makes the data distribution we are trying to emulate non-stationary, as new self-collected data is constantly being added. The sampled goals from the replay buffer are always taken as goals achieved at a randomly chosen time step that occurs later within the same trajectory. The basic building block is a generator that takes a state, goal and noise vector and produces an action and next state. However, during training we actually generate trajectories consisting of $\tau$ time steps. That is, we take the generated state from the previous step and use this as input to the generator to produce a new action/next state pair, and repeat. The generator is then trained by backpropagating through these unrolled trajectories. In more detail, we sample batches of real trajectories made up of $\tau$ transitions from the buffer: $(s_0, a_0, g_0, s_1, a_1, g_1, \ldots, s_{\tau-1}, a_{\tau-1}, g_{\tau-1}, s_\tau)$, where each goal $g_i$ is an achieved goal at a later time along that same trajectory (we found that choosing a different goal at each time step worked better than just a single goal for the whole trajectory). We then use the generator to generate a trajectory $(\hat s_0 = s_0, \hat a_0, g_0, \hat s_1, \hat a_1, g_1, \ldots, \hat s_{\tau-1}, \hat a_{\tau-1}, g_{\tau-1}, \hat s_\tau)$, where $\hat s_t, \hat a_{t-1} = G_\phi(z_t, \hat s_{t-1}, g_{t-1})$. Batches of these real and imagined trajectories are then used to calculate the expectations in the losses shown in Equations 3 and 4. Training end-to-end on sequences of transitions imposes more constraints on the generator, requiring full trajectories to be difficult for the discriminator to distinguish rather than just individual transitions, and is crucial for good performance. Each GAN and one-step model in the ensemble has a different random initialisation and is trained on different batches of data sampled from the same replay buffer. As discussed in the context of using an ensemble of one-step models for model-based RL [17], this is enough to give the models significant diversity. We study the benefits of using an ensemble over a single GAN in Section 5.
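The $\tau$-step unrolling might look as follows in code (continuing the placeholder generator from the previous sketch; the shapes are assumptions):

    import torch

    def unroll_generator(G, s0, goals, z_dim, a_dim):
        # goals: tensor of shape (tau, batch, g_dim), one later-achieved goal per step.
        s_hat, traj = s0, []
        for g_t in goals:
            z = torch.randn(s_hat.shape[0], z_dim)
            out = G(torch.cat([s_hat, g_t, z], dim=1))
            a_hat, s_hat = out[:, :a_dim], out[:, a_dim:]   # feed s_hat back in next step
            traj.append((s_hat, a_hat))
        return traj  # backpropagating a loss on traj trains through all tau steps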
A good score here should reflect the fact that we want the next action to be moving us towards g as quickly as possible whilst also ensuring that the goal can be retained at later time steps. For example, we would not want to score too highly an action that moved an object close to the desired goal with very high velocity, such that it would overshoot and not remain there at later time steps. To obtain such a score we duplicate each of the Y initial seed actions and next states C times. Each next state {s_{t+1}^{y,k}} (y = 1..Y, k = 1..C) is then used as the starting point for a trajectory of length T. These hypothetical trajectories are all generated using a different randomly chosen GAN at each time step, so for example s_{t+w}^{y,c} is generated from a random generator in the ensemble conditioned on (s_{t+w−1}^{y,c}, g). Once we have generated these trajectories, we give each of them a score based on the fraction of time they spend achieving the goal. This means that trajectories that reach the goal quickly are scored highly, but only if they are able to remain there. Trajectories that do not reach the goal within T steps are given a score of zero. We can then score each of the initial seed actions {a_t^y}_{y=1}^Y based on the average score of all the imagined trajectories that started with that action. These scores are normalised and denoted as n_y, and we define weights w_y = e^{α n_y}, where α > 0 is a hyperparameter. The final action returned by the planner is either the action with the maximum score or an exponentially weighted average of the initially proposed actions, a_t = (Σ_{y=1}^Y w_y a_t^y) / (Σ_{y'=1}^Y w_{y'}).

The rationale for using a different random generator at each step of every hypothetical trajectory is that we will be giving higher scores to initial actions that all of the GANs agree can spend a lot of time achieving the goal. This improves the robustness of the predictions and protects against errors in terms of unrealistic imagined future trajectories generated by any single GAN.
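A minimal sketch of one planner step follows, under the same assumptions as the earlier snippets plus a hypothetical achieves(s, g) indicator implementing the goal test; note that the paper does not pin down how the scores are normalised, so dividing by their sum here is just one plausible choice:

    import random
    import torch

    def plan_action(gans, s_t, goal, Y, C, T, alpha, z_dim, achieves):
        # Propose Y seed actions; roll out C imagined trajectories per seed,
        # choosing a random GAN at every step; score each trajectory by the
        # fraction of time it spends achieving the goal; return the
        # exponentially weighted average of the seed actions.
        seed_actions, scores = [], []
        for _ in range(Y):
            z = torch.randn(1, z_dim)
            a, s1 = random.choice(gans)(z, s_t, goal)
            seed_actions.append(a)
            traj_scores = []
            for _ in range(C):
                s, time_at_goal = s1, 0
                for _ in range(T):
                    z = torch.randn(1, z_dim)
                    _, s = random.choice(gans)(z, s, goal)   # fresh random GAN each step
                    time_at_goal += int(achieves(s, goal))
                traj_scores.append(time_at_goal / T)         # zero if the goal is never reached
            scores.append(sum(traj_scores) / C)              # average over the C rollouts
        scores = torch.tensor(scores)
        n = scores / (scores.sum() + 1e-8)                   # assumed normalisation scheme
        w = torch.exp(alpha * n)
        return sum(wi * ai for wi, ai in zip(w, seed_actions)) / w.sum()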
Algorithm 1: PlanGAN

    initialise: generators {G_{φ_m}} (m = 1..M), discriminators {D_{θ_m}} (m = 1..M), one-step models {f_{β_k}} (k = 1..K), replay buffer R, environment Env
    begin
        for j = 1 : J do
            append a random trajectory (s_0, a_0, g_0, ..., a_{T−1}, s_T, g_T) to R
        for y = 1 : Y do
            train()
        for e = 1 : E do
            sample goal g from environment
            (s_0, a_0, g_0, ..., s_T, g_T) = planner(g)
            append (s_0, a_0, g_0, ..., s_T, g_T) to R
            for p = 1 : P do
                train()

    procedure train()
        for m = 1 : M do
            sample a batch of B_g trajectories {(s_0, a_0, ĝ_0, ..., ĝ_{τ−1}, s_τ)} (b = 1..B_g) from R
            use G_{φ_m} to generate a batch of B_g imagined trajectories, starting from the real s_0 values and conditioning on the same goals ĝ_0, ..., ĝ_{τ−1} as in the real trajectories
            train G_{φ_m}, D_{θ_m} with Equations 3 and 4
        for k = 1 : K do
            sample a batch of B_m transitions (s_t, a_t, s_{t+1}) from R
            train f_{β_k} to minimise E[ ‖f_{β_k}(s_t, a_t) − (s_{t+1} − s_t)‖²₂ ]

    procedure planner(g)
        s_0, g_0 ← Env.reset(); Trajectory = (s_0, g_0)
        for t = 0 : T − 1 do
            InitAcs = {}; Scores = {}
            for y = 1 : Y do
                i ∼ Uniform(1, ..., M); z = [z_k] (k = 1..d), z_k ∼ N(0, 1)
                ŝ_{t+1}^y, â_t^y = G_{φ_i}(z, s_t, g); InitAcs.append(â_t^y)
                ImaginedTrajs = {}
                for c = 1 : C do
                    s_{t+1}^{y,c} = s_{t+1}^y
                    for t′ = t + 1 : t + T do
                        i ∼ Uniform(1, ..., M); z = [z_k] (k = 1..d), z_k ∼ N(0, 1)
                        ŝ_{t′+1}^{y,c}, â_{t′}^{y,c} = G_{φ_i}(z, ŝ_{t′}^{y,c}, g)
                    ImaginedTrajs.append(ŝ_{t+1}^{y,c}, ..., ŝ_{t+T}^{y,c})
                score[y] = (1 / (T + 1)) Σ_{t′=t}^{t+T} 1(ŝ_{t′}^{y,c}, g), averaged over the C imagined trajectories
                Scores.append(score[y])
            scores = Normalise(Scores)
            a_t = Σ_{y=1}^Y e^{α · scores[y]} â_t^y / Σ_{y′=1}^Y e^{α · scores[y′]}
            s_{t+1}, g_{t+1} = Env.step(a_t)
            Trajectory.append(a_t, s_{t+1}, g_{t+1})
        return Trajectory

5 Experiments

We perform experiments in four continuous environments built in MuJoCo [39] — Four Rooms Navigation, Three-link Reacher, Fetch Push and Fetch Pick And Place (see Figure 3). Full details about these environments, along with the hyperparameters used for the experiments, can be found in the Appendix. We evaluate the performance in terms of the percentage of goals that the agent is able to reach vs. the number of time steps it has interacted with the real environment.¹

¹ Videos of results are available here: https://sites.google.com/view/plangan/home

Figure 3: Environments that we evaluate PlanGAN on. (a) Four rooms navigation. (b) Reacher (Three Links). (c) Fetch Push. (d) Fetch Pick And Place.

5.1 Comparisons

We have compared the performance of our algorithm against a number of leading model-free methods designed for multi-goal, sparse reward environments (Figure 4). The most natural baseline to compare with is HER (using DDPG as the core RL algorithm [8]), as this is based on a similar underlying principle to PlanGAN. We also include DDPG without HER to demonstrate how standard model-free RL methods struggle with these tasks. For both of these we use the implementations found in OpenAI Baselines [40]. We also include comparisons with two recently proposed modifications to HER, "Curriculum-Guided HER" [30] (CHER) and "Soft HER" [32] (SHER).² We also include a model-based baseline (PETS) [41], which also makes use of ensembles of models but which is not designed specifically with multi-goal, sparse reward tasks in mind. Note that it is computationally prohibitive to run this method for as long as the model-free methods; however, we run it for at least as many steps as we run PlanGAN. Finally, for the Fetch Push and Fetch Pick And Place environments, we include comparisons with a recent method, "Simulated Locomotion Demonstrations" (SLD) [42], which requires an object to be defined. SLD uses the fact that with a simulator objects can move by themselves, so a separate object policy can be learned where the object moves itself to the desired goal. SLD leverages this object policy to guide the learning of the full robot policy. This gives it a significant advantage over PlanGAN, as it makes use of separately learned self-generated demonstrations to guide the training; however, we see that PlanGAN still achieves significantly better data efficiency.

² Using the official implementations found here and here respectively.

All plots are based on running each experiment using 5 different random seeds, with the solid line representing the median performance and the shaded area representing one standard deviation around the mean. We also include a line showing the average asymptotic performance of HER (as this is the most directly comparable method). Note that the environment interactions recorded on the training curves for PlanGAN do include both the initial random trajectories and any trajectories that are not stored in the buffer (when the final goal is identical to the initial goal). In all of the tasks considered we find that PlanGAN is significantly more sample efficient than any of the other model-free methods, requiring 4–8 times less data to reach the same performance as HER. This is comparable to the sample efficiency gains reported in [15] for a model-based approach to dense reward tasks over leading model-free methods.
It also substantially outperforms the model-based baseline (PETS), which is not designed for sparse reward, multi-goal environments.

5.2 Ablation studies

In this section we study how various decisions we have made affect PlanGAN's performance by performing ablation studies on the two more complicated environments considered (Fetch Push and Fetch Pick And Place). Firstly, we study whether the planner is a crucial component of our set-up. The first panel in Figure 1 in the Appendix shows a comparison of the full PlanGAN with a couple of variations that more directly use the actions proposed by the GANs. Both of these lead to significantly lower success rates, suggesting that the planner we use is crucial. We then consider how the number of GANs in the ensemble affects PlanGAN's performance. The second panel in Figure 1 (Appendix) shows results for ensembles made up of 1, 3 and 5 GANs respectively. Whilst less significant than the inclusion of the planner, we find that using only a single GAN leads to slower and significantly less stable training. We also see that the larger ensemble (5 GANs) outperforms the smaller ensemble (3 GANs), but the difference in performance is relatively small. Finally, we consider running the algorithm with λ = 0, i.e. without any regularisation from the one-step predictive models. We see that the one-step model regularisation provides only a very minor improvement, suggesting that it is not a crucial component of PlanGAN.

6 Conclusions

We proposed PlanGAN, a model-based method for solving multi-goal environments with sparse rewards. We showed how to train a generative model to generate plausible future trajectories that lead from a given state towards a desired goal, and how these can be used within a planning algorithm to achieve these goals efficiently. We demonstrated that this approach leads to a substantial increase in sample efficiency when compared to leading model-free RL methods that can cope with sparse rewards and multiple goals.

In the future we would like to extend this work so that it can be applied to more complex environments. One of the main limitations of the current approach is the planner. When the number of time steps required to complete a task becomes large, the planner becomes computationally expensive, since at each step we have to simulate a large number of future steps out until completion. We also need these trajectories to be at least reasonably accurate over a large number of time steps, as imagined future trajectories that do not reach the desired goal are given a score of zero. If no imagined trajectories reach the goal then the planner is unable to meaningfully choose an action. Future work that may more efficiently deal with longer-horizon tasks could involve combining the GAN training with a model-free goal-conditioned value function (creating a hybrid method, similar to STEVE [19] and Dreamer [7]) which could learn to assign a value to the actions proposed by the GANs, removing the need for a planner entirely.

Statement of Broader Impact

Since our work involves foundational research in the field of model-based reinforcement learning, it is unlikely to have any large, immediate impacts on society. Nevertheless, in the longer term the impact of reinforcement learning agents capable of learning to autonomously make decisions could be huge.
In principle one could discuss a huge range of potential impacts over different time frames, but we choose to focus on some potential medium-term impacts of robots that can learn autonomously from sparse rewards. Robots are pervasive in the modern world and are used in a wide range of manufacturing industries for carrying out tedious, repetitive work. Introducing robots that are capable of autonomously learning a set of skills from easy-to-specify reward functions has the potential to vastly increase the scope of tasks that they can be used for. In particular, it removes the requirement for their behaviours to be carefully engineered in a manual fashion for every possible scenario they might encounter. This has the potential to allow many tasks that currently can only be carried out by human workers to become fully or partially automated. Whilst this could provide a huge economic boost to some manufacturing companies, it is important that this benefit is weighed against the potential negative impacts (both social and economic) that losing these manufacturing jobs could have — particularly if large-scale changes were to occur in a short period of time. We feel that this is an important question for both economists/policy advisors and researchers working in the field to think about.

Acknowledgements / Funding Disclosure

This work was partially funded by Catapult (High Value Manufacturing) within Warwick Manufacturing Group.
1. What is the focus and contribution of the paper on model-based goal-conditioned reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its performance and data efficiency?
3. What are the weaknesses of the paper regarding its lack of theoretical analysis and missing model-based baselines?
4. How does the reviewer assess the significance of the proposed method compared to prior works?
5. Are there any questions or concerns regarding the paper's content that the reviewer has?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

The paper presents a model-based goal-conditioned reinforcement learning method. In contrast to the commonly used model-free method, HER, the proposed method uses a goal-conditioned predictive model of dynamics p(s',a|s,g) and a shooting-based planning method. The model is trained on off-policy data from the replay buffer. It is shown that this method reaches the asymptotic performance of the model-free HER, while being several times more data-efficient.

--- Decision ---

The paper presents an approach for model-based goal-conditioned reinforcement learning which works well in practice and is more data-efficient than the model-free state-of-the-art. While there are unresolved questions about theoretical interpretation and comparison to prior work, I believe the paper is likely to be impactful. I recommend acceptance.

--- Update ---

The rebuttal does not require a response. I urge the authors to perform the crucial comparison to a model-based method such as PETS.

Strengths

The paper proposes an interesting model-based method for an important problem of goal-conditioned reinforcement learning. The method empirically performs well, and is more data-efficient than existing model-free methods, suggesting possible wide adoption.

Weaknesses

There are two main weaknesses of the paper: lacking theoretical analysis and missing model-based baselines. First, the paper contains no theoretical analysis. It is thus unknown whether the method is expected to converge to the optimal solution. Intuitively, since the learned distribution of trajectories is fit to the replay buffer, the samples will be biased in a certain way. As the planning averages the samples, it is not guaranteed to converge to the optimal plan. Furthermore, it is unclear what objective the model is optimizing, as well as what objective the planning is optimizing. This will prevent future researchers from improving the method, as it is unclear what the basic principles of designing a model-based goal-conditioned method are. The paper contains an extensive comparison to prior model-free work such as HER or DDPG. However, there is no comparison to any model-based method. Some appropriate comparisons would be Chua'18 or Nasiriany'20.

Chua'18: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models.
NIPS
Title: PlanGAN: Model-based Planning With Sparse Rewards and Multiple Goals

Abstract: Learning with sparse rewards remains a significant challenge in reinforcement learning (RL), especially when the aim is to train a policy capable of achieving multiple different goals. To date, the most successful approaches for dealing with multi-goal, sparse reward environments have been model-free RL algorithms. In this work we propose PlanGAN, a model-based algorithm specifically designed for solving multi-goal tasks in environments with sparse rewards. Our method builds on the fact that any trajectory of experience collected by an agent contains useful information about how to achieve the goals observed during that trajectory. We use this to train an ensemble of conditional generative models (GANs) to generate plausible trajectories that lead the agent from its current state towards a specified goal. We then combine these imagined trajectories into a novel planning algorithm in order to achieve the desired goal as efficiently as possible. The performance of PlanGAN has been tested on a number of robotic navigation/manipulation tasks in comparison with a range of model-free reinforcement learning baselines, including Hindsight Experience Replay. Our studies indicate that PlanGAN can achieve comparable performance whilst being around 4-8 times more sample efficient.

1 Introduction

One of the primary appeals of reinforcement learning (RL) is that it provides a framework for the autonomous learning of complex behaviours without the need for human supervision. In recent years RL has had significant success in areas such as playing video games [1, 2], board games [3, 4] and robotic control tasks [5, 6, 7]. Despite this, progress in applying RL to more practically useful environments has been somewhat limited. One of the main problems is that RL algorithms generally require a well-shaped, dense reward function in order to make learning progress. Often a reward function that fully captures the desired behaviour of an agent is not readily available and has to be engineered manually for each task, requiring a lot of time and domain-specific knowledge. This defeats the point of designing an agent that is capable of learning autonomously. A more general approach is to learn with sparse rewards, where an agent only receives a reward once a task has been completed. This is much easier to specify and is applicable to a wide range of problems; however, training becomes significantly more challenging since the agent only receives infrequent feedback at the end of every rollout. This becomes especially challenging in the case of goal-conditioned RL [8, 9], where the aim is to train a policy that can achieve a variety of different goals within the environment.

Much of RL's success has come with model-free approaches, where the policy is learned directly from the reward signal obtained by interacting with the environment. However, recently there has been a lot of interest in applying model-based approaches to the same kind of problems [7, 10, 11]. One of the main drawbacks of model-free RL algorithms is that they tend to be very sample inefficient, requiring a huge number of interactions with the environment in order to make learning progress. On the other hand, model-based methods make use of a learned model to plan their actions without directly interacting with the environment.
Learning a model allows these methods to make use of much more of the information present in the observed transitions than just the scalar reward signal, and so generally this leads to a significant improvement in sample efficiency. This efficiency can sometimes come at the cost of worse asymptotic performance due to errors in the model introducing a bias towards non-optimal actions, although current state-of-the-art approaches [7, 10] are able to achieve comparable performance to some of the best model-free approaches [12, 13]. However, as with most RL algorithms, model-based approaches generally need a dense reward signal to work well. We are not aware of a model-based approach specifically designed to work in the sparse-reward, multi-goal setting.

To date, the most successful general-purpose RL algorithm for dealing with sparse rewards and multiple goals is Hindsight Experience Replay (HER) [8], a model-free algorithm. HER works by taking advantage of the fact that, when learning a goal-conditioned policy with an off-policy RL algorithm, observed transitions from a trajectory can be re-used as examples for attempting to achieve any goal. In particular, by re-labelling transitions with goals achieved at a later point during the same trajectory, HER trains the goal-conditioned policy on examples that actually led to success — hence obtaining a much stronger learning signal.

In this paper we present PlanGAN, a model-based algorithm that can naturally be applied to sparse-reward environments with multiple goals. The core of our method builds upon the same principle that underlies HER — namely that any goal observed during a given trajectory can be used as an example of how to achieve that goal from states that occurred earlier on in that same trajectory. However, unlike HER, we do not directly learn a goal-conditioned policy/value function but rather train an ensemble of Generative Adversarial Networks (GANs) [14] which learn to generate plausible future trajectories conditioned on achieving a particular goal. We combine these imagined trajectories into a novel planning algorithm that can reach those goals in an efficient manner. We test PlanGAN on a number of robotic manipulation and navigation tasks and show that it can achieve similar levels of performance to leading model-free methods (including Hindsight Experience Replay) but with substantially improved sample efficiency. The primary contribution of this paper is to introduce the first model-based method which is explicitly designed for multi-goal, sparse reward environments, leading to a significant improvement in sample efficiency.

2 Related Work

A number of model-based approaches have utilised explicit planning algorithms, but have mostly been applied to single tasks with relatively dense rewards. Nagabandi et al. [15] use iterative random shooting within a deterministic neural network dynamics model in order to solve a number of continuous control tasks. Hafner et al. [16] learn a latent representation from images and then plan within this latent space using CEM. Nagabandi et al. [17] use a similar planning algorithm (MPPI) [18] within an ensemble of learned models in order to perform dexterous manipulation tasks. Other methods have had success with a hybrid approach, combining elements of model-based and model-free RL, and as in this work often use ensembles of models in order to improve robustness.
STEVE [19] uses rollouts produced by an ensemble of models and Q-functions in order to obtain a robust estimate for the Q-learning target. Model-Ensemble TRPO [20] uses an ensemble of models as a simulator for running a model-free RL algorithm (trust-region policy optimisation) whilst maintaining some level of uncertainty about when the model's predictions are valid. I2A [21] learns to interpret imagined trajectories generated by a model to augment the model-free training of a policy/value function. Temporal Difference Models (TDMs) [22] try to link model-based and model-free RL in the context of time-dependent, goal-conditioned value functions. Here, the model is itself the goal-conditioned value function, and is learned with model-free, off-policy RL. However, they require a meaningful distance metric between states to be defined and so do not work with fully sparse rewards. Nasiriany et al. [23] combine TDMs as an implicit model with a planning algorithm that allows them to plan over multiple abstract sub-goals. They apply this to solve long-horizon, goal-conditioned tasks directly from images. Azizzadenesheli et al. [24] use a Wasserstein GAN with spectral normalisation to learn a predictive model that they use with Monte-Carlo Tree Search to solve ATARI games. Although they do not find particularly strong results overall, they show that they are able to learn an extremely accurate model with stable training of the GAN even in a non-stationary environment. A significant difference with our work is that they train a GAN that takes an action and a state and predicts the next state, whereas we train the GANs to imagine full trajectories (also, their focus is on image-based environments). GANs have also been used for curriculum learning in goal-conditioned RL [25], where a generator was trained to propose goals at an appropriate level of difficulty for the current agent to achieve.

In terms of learning with sparse rewards, a number of approaches have had success by providing the agent with intrinsic rewards in order to aid with exploration [26, 27, 28]. However, in the multi-goal setting a majority of the most successful approaches have built upon Hindsight Experience Replay (HER) [8]. Zhao & Tresp [29] improve HER's performance on certain robotics environments by more frequently resampling trajectories where the objects have higher energy. Fang et al. [30] propose an adaptive mechanism to select failed experiences based on a combination of the diversity of the achieved goals and their proximity to the desired goals. Liu et al. [31] propose a complementary re-labelling scheme in the context of a competitive exploration game between two agents in order to supplement HER. He et al. [32] introduce a method that combines HER with maximum entropy RL. Taking a different approach (but still closely related to HER), Ghosh et al. [33] introduce a method that learns goal-conditioned policies without explicitly using reinforcement learning. They use supervised behavioural cloning (a form of imitation learning) to train a policy to reach the goals that have been observed on the trajectories the agent itself has generated. Whilst simpler than HER, it does not use a model and does not claim to significantly improve upon HER's sample efficiency.

3 Preliminaries

3.1 Goal-Conditioned Reinforcement Learning

We consider the problem of an agent interacting within an environment in order to learn how to achieve any given goal g from a set of possible goals G.
We assume that the environment is fully observable and can be described by: a set of states, S; a set of possible actions, A; a distribution of initial states, p(s_0); and a transition function P(s_{t+1} | s_t, a_t) (s_t, s_{t+1} ∈ S, a_t ∈ A). In the standard reinforcement learning setting we have a reward function, R(s_t, a_t, s_{t+1}). In the goal-conditioned setting the reward also depends on the goal that the agent is trying to achieve, i.e. R(s_t, a_t, s_{t+1}, g). Assuming that goals are sampled from some distribution p(G), the aim of goal-conditioned RL is to learn a policy, π(s_t, g), that maximises the expected discounted sum of future rewards:

$$\mathbb{E}_{\substack{s_0 \sim p(s_0),\; g \sim p(\mathcal{G}) \\ a_t \sim \pi(s_t, g),\; s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)}} \left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}, g) \right] \tag{1}$$

where γ ∈ [0, 1] is a discount factor assigning larger weights to more immediate rewards. We consider the special case where the reward function is sparse and given by an indicator function that only depends on the next state and the goal:

$$R(s_t, a_t, s_{t+1}, g) = \mathbb{1}(s_{t+1}, g) = \begin{cases} 1 & \text{if } s_{t+1} \text{ achieves } g, \\ 0 & \text{otherwise} \end{cases} \tag{2}$$

i.e. we have some criterion that tells us whether any given state s achieves any given goal g, and we only provide a reward when this is satisfied.

3.2 Hindsight Experience Replay (HER)

In complex environments it is extremely unlikely that the specified goal g will ever be achieved by chance. As such, standard RL algorithms struggle in sparse-reward, multi-goal environments because they receive very little learning signal from which they can improve their policy. The key insight of HER is that trajectories that don't achieve the specified goal still contain useful information about how to achieve other goals — namely those that are observed later on during the same trajectory. By using an off-policy RL algorithm such as DQN [34] or DDPG [35], it is possible to re-label samples that were collected by the policy whilst attempting to achieve a goal g with an alternative goal g′, and subsequently re-compute the reward. For example, if (s_t, a_t, r_t, s_{t+1}, g) is sampled from a replay buffer of past experience, g can be replaced with another goal g′ that occurs later in the trajectory, and then a reward for this new goal can be recomputed: r′_t = R(s_t, a_t, s_{t+1}, g′). This new transition can still be used in training an off-policy RL algorithm since the original goal only influences the agent's action, but not the dynamics of the environment. By re-labelling transitions this way, HER can significantly speed up the learning of a goal-conditioned policy since it increases the frequency with which the transitions seen in training actually lead to the specified goals being achieved.

4 Methods

The key insight of our method is that the same principle underlying HER — i.e. that any observed trajectory contains useful information about how to achieve the goals observed during that trajectory — has the potential to be used more efficiently as part of a model-based algorithm. In particular, instead of re-labelling transitions and re-computing rewards, we propose to make more complete use of the information contained within the observed transitions by training a generative model that can generate plausible transitions leading from the current state towards a desired goal. That is, we use experience gathered by the agent to train a goal-conditioned model that can generate future trajectories (states and actions) that move the agent towards any goal that we specify.
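For reference, the HER relabelling step of Section 3.2 that our method builds on (and replaces) can be sketched as follows; the tuple layout and the achieves(s, g) test implementing Equation 2 are illustrative assumptions:

    import random

    def her_relabel(transitions, achieved_goals, achieves):
        # transitions[t] = (s_t, a_t, r_t, s_{t+1}, g); achieved_goals[t] is the
        # goal achieved at step t. Replace g with a goal achieved later in the
        # same trajectory and recompute the sparse reward of Equation 2.
        t = random.randrange(len(transitions) - 1)
        s_t, a_t, _, s_next, _ = transitions[t]
        g_new = achieved_goals[random.randrange(t + 1, len(achieved_goals))]
        r_new = 1.0 if achieves(s_next, g_new) else 0.0
        return (s_t, a_t, r_new, s_next, g_new)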
1. What is the main contribution of the paper, and how does it improve sample efficiency in multiple goal environments?
2. What are the strengths of the proposed approach, and how does it differ from previous methods?
3. What are the weaknesses of the paper, particularly regarding computational efficiency and choice of planner?
4. How does the reviewer assess the novelty and impact of the paper's idea, and what potential applications does it have beyond multiple goal setups?
5. Are there any concerns or questions regarding the training of GANs and their use inside the planner?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

The paper introduces a method that combines a GAN model with planning to dramatically improve sample efficiency in multiple goal environments. Section 1 presents the motivations and the proposed approach. Section 2 gives an overview of related work. Section 3 gives more details on the target task for this method (multiple goal RL environments), and gives a bit more detail on a successful method in this field (HER) which the proposed approach builds upon. Section 4 describes the proposed method, how exactly the GANs are trained, and how they are used inside a planner. Section 5 presents experiments which convincingly show that the proposed approach is much more data efficient than previous methods.

Strengths

The paper is straightforward and easy to read. The motivations are clear, the proposed method is precisely described, and the experiments convincingly and plausibly show the greater data efficiency of the proposed approach.

Weaknesses

While the proposed approach seems to be much more data efficient, it might not be more compute efficient. For example, the PlanGAN curves in Figure 4 stop much earlier in environment interactions than other methods, suggesting that it was prohibitive to run for as many steps as previous methods. Many of the choices that were made seem somewhat arbitrary and are not backed by empirical evidence or convincing intuition. For example, in the planner the GAN used to generate imaginary rollouts is swapped at every time step. Also, during GAN training the goal is chosen at random amongst goals that have been subsequently achieved; many other choices could have been made. The planner, consisting of Q suggested seed actions followed by C imaginary rollouts for each action, is a bit naive and limited, because the rollouts have to be as long as the time horizon for reaching the goal. It would have been interesting to use a more sophisticated planner that can act on a smaller horizon with a value function, in the style of MCTS/MuZero. Finally, combining a GAN model and planning is in my opinion a very interesting idea, but I am not sure why the authors chose to only apply it to the multiple goal setup. I think applying it to less specialised environments would have been much more interesting and impactful. For all these reasons (a bit too narrow a focus on multi-goal, lack of grounding to prior work results for the claim about data efficiency, a naive planner, some arbitrary choices), I think the paper is not as impactful as it could have been, although the idea being explored is quite interesting, hence my reserved rating.
NIPS
Title PlanGAN: Model-based Planning With Sparse Rewards and Multiple Goals Abstract Learning with sparse rewards remains a significant challenge in reinforcement learning (RL), especially when the aim is to train a policy capable of achieving multiple different goals. To date, the most successful approaches for dealing with multi-goal, sparse reward environments have been model-free RL algorithms. In this work we propose PlanGAN, a model-based algorithm specifically designed for solving multi-goal tasks in environments with sparse rewards. Our method builds on the fact that any trajectory of experience collected by an agent contains useful information about how to achieve the goals observed during that trajectory. We use this to train an ensemble of conditional generative models (GANs) to generate plausible trajectories that lead the agent from its current state towards a specified goal. We then combine these imagined trajectories into a novel planning algorithm in order to achieve the desired goal as efficiently as possible. The performance of PlanGAN has been tested on a number of robotic navigation/manipulation tasks in comparison with a range of model-free reinforcement learning baselines, including Hindsight Experience Replay. Our studies indicate that PlanGAN can achieve comparable performance whilst being around 4-8 times more sample efficient. 1 Introduction One of the primary appeals of reinforcement learning (RL) is that it provides a framework for the autonomous learning of complex behaviours without the need for human supervision. In recent years RL has had significant success in areas such as playing video games [1, 2], board games [3, 4] and robotic control tasks [5, 6, 7]. Despite this, progress in applying RL to more practically useful environments has been somewhat limited. One of the main problems is that RL algorithms generally require a well-shaped, dense reward function in order to make learning progress. Often a reward function that fully captures the desired behaviour of an agent is not readily available and has to be engineered manually for each task, requiring a lot of time and domain-specific knowledge. This defeats the point of designing an agent that is capable of learning autonomously. A more general approach is to learn with sparse rewards, where an agent only receives a reward once a task has been completed. This is much easier to specify and is applicable to a wide range of problems, however training becomes significantly more challenging since the agent only receives infrequent feedback at the end of every rollout. This becomes especially challenging in the case of goal-conditioned RL [8, 9], where the aim is to train a policy that can achieve a variety of different goals within the environment. Much of RL’s success has come with model-free approaches, where the policy is learned directly from the reward signal obtained by interacting with the environment. However recently there has been a lot of interest in applying model-based approaches to the same kind of problems [7, 10, 11]. One 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. of the main drawbacks of model-free RL algorithms is that they tend to be very sample inefficient, requiring a huge number of interactions with the environment in order to make learning progress. On the other hand, model-based methods make use of a learned model to plan their actions without directly interacting with the environment. 
Learning a model allows these methods to make use of a lot more information that is present in the observed transitions than just the scalar reward signal, and so generally this leads to a significant improvement in sample efficiency. This efficiency can sometimes come at the cost of worse asymptotic performance due to errors in the model introducing a bias towards non-optimal actions, although current state of the art approaches [7, 10] are able to achieve comparable performance to some of the best model-free approaches [12, 13]. However, as with most RL algorithms, model-based approaches generally need a dense reward signal to work well. We are not aware of a model-based approach specifically designed to work in the sparse-reward, multi-goal setting. To date, the most successful general-purpose RL algorithm for dealing with sparse rewards and multiple goals is Hindsight Experience Replay (HER) [8], a model-free algorithm. HER works by taking advantage of the fact that, when learning a goal-conditioned policy with an off-policy RL algorithm, observed transitions from a trajectory can be re-used as examples for attempting to achieve any goal. In particular, by re-labelling transitions with goals achieved at a later point during the same trajectory HER trains the goal-conditioned policy on examples that actually led to success — hence obtaining a much stronger learning signal. In this paper we present PlanGAN, a model-based algorithm that can naturally be applied to sparsereward environments with multiple goals. The core of our method builds upon the same principle that underlies HER — namely that any goal observed during a given trajectory can be used as an example of how to achieve that goal from states that occurred earlier on in that same trajectory. However, unlike HER, we do not directly learn a goal-conditioned policy/value function but rather train an ensemble of Generative Adversarial Networks (GANs) [14] which learn to generate plausible future trajectories conditioned on achieving a particular goal. We combine these imagined trajectories into a novel planning algorithm that can reach those goals in an efficient manner. We test PlanGAN on a number of robotic manipulation and navigation tasks and show that it can achieve similar levels of performance to leading model-free methods (including Hindsight Experience Replay) but with substantially improved sample efficiency. The primary contribution of this paper is to introduce the first model-based method which is explicitly designed for multi-goal, sparse reward environments, leading to a significant improvement in sample efficiency. 2 Related Work A number of model-based approaches have utilised explicit planning algorithms, but have mostly been applied to single tasks with relatively dense rewards. Nagabandi et al. [15] use iterative random shooting within a deterministic neural network dynamics model in order to solve a number of continuous control tasks. Hafner et al. [16] learn a latent representation from images and then plan within this latent space using CEM. Nagabandi et al. [17] use a similar planning algorithm (MPPI) [18] within an ensemble of learned models in order to perform dexterous manipulation tasks. Other methods have had success with a hybrid approach, combining elements of model-based and model-free RL, and as in this work often use ensembles of models in order to improve robustness. 
STEVE [19] uses rollouts produced by an ensemble of models and Q-functions in order to obtain a robust estimate for the Q-learning target. Model-Ensemble TRPO [20] uses an ensemble of models as a simulator for running a model-free RL algorithm (trust-region policy optimisation) whilst maintaining some level of uncertainty for when the model's predictions are valid. I2A [21] learns to interpret imagined trajectories generated by a model to augment the model-free training of a policy/value function. Temporal Difference Models (TDMs) [22] try to link model-based and model-free RL in the context of time-dependent, goal-conditioned value functions. Here, the model is itself the goal-conditioned value function, and is learned with model-free, off-policy RL. However, they require a meaningful distance metric between states to be defined and so do not work with fully sparse rewards. Nasiriany et al. [23] combine TDMs as an implicit model with a planning algorithm that allows them to plan over multiple abstract sub-goals. They apply this to solve long-horizon, goal-conditioned tasks directly from images. Azizzadenesheli et al. [24] use a Wasserstein GAN with spectral normalisation to learn a predictive model that they use with Monte-Carlo Tree Search to solve ATARI games. Although they do not find particularly strong results overall, they show that they are able to learn an extremely accurate model with stable training of the GAN even in a non-stationary environment. A significant difference from our work is that they train a GAN that takes an action and a state and predicts the next state, whereas we train the GANs to imagine full trajectories (also their focus is on image-based environments). GANs have also been used for curriculum learning in goal-conditioned RL [25], where a generator was trained to propose goals at an appropriate level of difficulty for the current agent to achieve.

In terms of learning with sparse rewards, a number of approaches have had success by providing the agent with intrinsic rewards in order to aid with exploration [26, 27, 28]. However, in the multi-goal setting a majority of the most successful approaches have built upon Hindsight Experience Replay (HER) [8]. Zhao & Tresp [29] improve HER's performance on certain robotics environments by more frequently resampling trajectories where the objects have higher energy. Fang et al. [30] propose an adaptive mechanism to select failed experiences based on a combination of the diversity of the achieved goals and their proximity to the desired goals. Liu et al. [31] propose a complementary re-labelling scheme in the context of a competitive exploration game between two agents in order to supplement HER. He et al. [32] introduce a method that combines HER with maximum entropy RL. Taking a different approach (but still closely related to HER), Ghosh et al. [33] introduce a method that learns goal-conditioned policies without explicitly using reinforcement learning. They use supervised behavioural cloning (a form of imitation learning) to train a policy to reach the goals that have been observed on the trajectories the agent itself has generated. Whilst their method is simpler than HER, it does not use a model and does not claim to significantly improve upon HER's sample efficiency.

3 Preliminaries

3.1 Goal-Conditioned Reinforcement Learning

We consider the problem of an agent interacting within an environment in order to learn how to achieve any given goal g from a set of possible goals G.
We assume that the environment is fully observable and can be described by: a set of states, S; a set of possible actions, A; a distribution of initial states, p(s_0); and a transition function P(s_{t+1} | s_t, a_t) (s_t, s_{t+1} ∈ S, a_t ∈ A). In the standard reinforcement learning setting we have a reward function, R(s_t, a_t, s_{t+1}). In the goal-conditioned setting the reward also depends on the goal that the agent is trying to achieve, i.e. R(s_t, a_t, s_{t+1}, g). Assuming that goals are sampled from some distribution p(G), the aim of goal-conditioned RL is to learn a policy, π(s_t, g), that maximises the expected discounted sum of future rewards:

\[
\mathbb{E}_{\substack{s_0 \sim p(s_0),\; g \sim p(G) \\ a_t \sim \pi(s_t, g),\; s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)}} \left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}, g) \right] \tag{1}
\]

where γ ∈ [0, 1] is a discount factor assigning larger weights to more immediate rewards. We consider the special case where the reward function is sparse and given by an indicator function that only depends on the next state and the goal:

\[
R(s_t, a_t, s_{t+1}, g) = \mathbb{1}(s_{t+1}, g) = \begin{cases} 1, & \text{if } s_{t+1} \text{ achieves } g, \\ 0, & \text{otherwise} \end{cases} \tag{2}
\]

i.e. we have some criterion that tells us whether any given state s achieves any given goal g, and only provide a reward when this is satisfied.

3.2 Hindsight Experience Replay (HER)

In complex environments it is extremely unlikely that the specified goal g will ever be achieved by chance. As such, standard RL algorithms struggle in sparse-reward, multi-goal environments because they receive very little learning signal from which they can improve their policy. The key insight of HER is that trajectories that don't achieve the specified goal still contain useful information about how to achieve other goals — namely, those that are observed later on during the same trajectory. By using an off-policy RL algorithm such as DQN [34] or DDPG [35], it is possible to re-label samples that were collected by the policy whilst attempting to achieve a goal g with an alternative goal g′, and subsequently re-compute the reward. For example, if (s_t, a_t, r_t, s_{t+1}, g) is sampled from a replay buffer of past experience, g can be replaced with another goal g′ that occurs later in the trajectory, and then a reward for this new goal can be recomputed: r′_t = R(s_t, a_t, s_{t+1}, g′). This new transition can still be used in training an off-policy RL algorithm since the original goal only influences the agent's action, but not the dynamics of the environment. By re-labelling transitions this way HER can significantly speed up the learning of a goal-conditioned policy since it increases the frequency with which the transitions seen in training actually lead to the specified goals being achieved.

4 Methods

The key insight of our method is that the same principle underlying HER — i.e. that any observed trajectory contains useful information about how to achieve the goals observed during that trajectory — has the potential to be used more efficiently as part of a model-based algorithm. In particular, instead of re-labelling transitions and re-computing rewards, we propose to make more complete use of the information contained within the observed transitions by training a generative model that can generate plausible transitions leading from the current state towards a desired goal. That is, we use experience gathered by the agent to train a goal-conditioned model that can generate future trajectories (states and actions) that move the agent towards any goal that we specify. These imagined trajectories do not necessarily need to be optimal in the sense of moving directly towards the goal, since the second key component of our method involves feeding these proposed trajectories into a planning algorithm that decides which action to take in order to achieve the goal in as few steps as possible. Whilst in principle a number of generative models could be used for this purpose, in this work we choose to use GANs [14], since they can easily deal with high-dimensional inputs and do not explicitly impose any restrictions on the form of the distribution produced by the generator. Specifically, we choose to use Wasserstein GANs (WGANs) [36] with spectral normalisation [37], as recent work has shown that these can be trained in a stable manner even when the underlying training data is non-stationary [24].
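To make the relabelling principle of Section 3.2 concrete (the same principle supplies the goal-labelled training data for our GANs), the following is a minimal sketch of hindsight relabelling with the sparse indicator reward of Equation (2). The transition-tuple layout, the `achieved_goal` field, and the tolerance-based `achieves` predicate are illustrative assumptions, not the exact implementation used in our experiments.

```python
import random

def achieves(state, goal, tol=0.05):
    # Illustrative success criterion for Equation (2): the achieved-goal
    # component of the state lies within `tol` of the desired goal.
    return all(abs(s - g) <= tol for s, g in zip(state["achieved_goal"], goal))

def sparse_reward(next_state, goal):
    # R(s_t, a_t, s_{t+1}, g) = 1 if s_{t+1} achieves g, else 0 (Equation (2)).
    return 1.0 if achieves(next_state, goal) else 0.0

def relabel_trajectory(trajectory):
    """Hindsight relabelling (Section 3.2): replace each transition's goal
    with a goal actually achieved at a later step of the same trajectory,
    then recompute the sparse reward. `trajectory` is a list of
    (state, action, next_state, goal) tuples; states are assumed to expose
    an 'achieved_goal' entry."""
    relabelled = []
    for t, (s, a, s_next, g) in enumerate(trajectory):
        future = random.randint(t, len(trajectory) - 1)   # "future" strategy
        g_new = trajectory[future][2]["achieved_goal"]
        relabelled.append((s, a, sparse_reward(s_next, g_new), s_next, g_new))
    return relabelled
```

HER feeds such relabelled transitions to an off-policy learner; PlanGAN instead uses the same later-achieved goals as the conditioning labels for generative training, as described next.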
4.1 Training the GAN(s)

The aim of the first major component of our method is to train a generative model that can take in the current state s_t along with a desired goal g and produce an imagined action a_t and next state s_{t+1} that moves the agent towards achieving g. We approach this by training an ensemble of N conditional GANs, each consisting of a generator G_{φ_i} and a discriminator D_{θ_i}, where {θ_i}_{i=1}^{N}, {φ_i}_{i=1}^{N} are the parameters of the neural networks that represent these functions. The generators take in the current state s_t, a noise vector z and the target goal g in order to produce an imagined action a_t and next state s_{t+1}. The discriminators take in s_t, a_t, s_{t+1} and g and aim to distinguish whether or not this is a transition from a real trajectory that eventually reaches goal g or an example created by the generator.

We also consider a variation where we concurrently train an ensemble of N_m deterministic one-step predictive models of the environment. The aim of these predictive models is to take a state-action pair (s_t, a_t) and predict the difference between the next state and the current state, s_{t+1} − s_t, as in [15]. We denote these models as f_{β_j}, where {β_j}_{j=1}^{N_m} are the parameters of the neural networks representing these functions. These predictive models can be used to provide an L2 regularisation term in the generator loss that encourages the generated actions and next states to be consistent with the predictions of the one-step models — although this is not necessary to make the method work (we study the effect of using predictive models this way in Section 5). The whole setup is shown schematically in Figure 1.

The loss for the ith generator is as follows:

\[
\mathcal{L}^{(i)}_{\mathrm{generator}} = \mathbb{E}_{\substack{z \sim p(z),\; s_t, g \sim \mathcal{R} \\ s_{t+1}, a_t \sim G_{\phi_i}(z, s_t, g)}} \left[ D_{\theta_i}(s_t, g, s_{t+1}, a_t) + \lambda \frac{1}{N_m} \sum_{j=1}^{N_m} \big( (s_{t+1} - s_t) - f_{\beta_j}(s_t, a_t) \big)^2 \right] \tag{3}
\]

where R is a replay buffer of real experienced trajectories, z ∼ p(z) is a noise vector where each component is sampled independently from the standard normal N(0, 1), and λ is a parameter that weights how strongly we penalise deviations in the generated action/next state from the average predictions made by the one-step models. The loss for the ith discriminator is:

\[
\mathcal{L}^{(i)}_{\mathrm{discriminator}} = \mathbb{E}_{s_t, a_t, s_{t+1}, g \sim \mathcal{R}} \left[ D_{\theta_i}(s_t, g, s_{t+1}, a_t) \right] - \mathbb{E}_{\substack{z \sim p(z),\; s_t, g \sim \mathcal{R} \\ s_{t+1}, a_t \sim G_{\phi_i}(z, s_t, g)}} \left[ D_{\theta_i}(s_t, g, s_{t+1}, a_t) \right] \tag{4}
\]

The replay buffer R is populated initially by random trajectories; however, we find it helpful to filter (i.e. not store) trajectories where the final achieved goal is identical to the initial achieved goal, since these provide nothing useful for the GANs to learn from.
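A minimal PyTorch-style sketch of the per-member losses in Equations (3) and (4) is given below. The latent dimension `Z_DIM`, the call signatures `G(z, s, g)` and `D(s, g, s_next, a)`, and the batched tensor shapes are assumptions made for illustration only.

```python
import torch

Z_DIM = 16  # assumed latent dimension

def generator_loss(G, D, one_step_models, s_t, g, lam):
    # Equation (3): critic score on generated transitions, plus an optional
    # L2 penalty keeping them consistent with the one-step models f_beta.
    z = torch.randn(s_t.shape[0], Z_DIM)          # z ~ N(0, I)
    a_t, s_next = G(z, s_t, g)                    # imagined action / next state
    loss = D(s_t, g, s_next, a_t).mean()
    if one_step_models:
        deltas = [f(s_t, a_t) for f in one_step_models]  # predicted s_{t+1} - s_t
        penalty = torch.stack(
            [((s_next - s_t) - d).pow(2).mean() for d in deltas]).mean()
        loss = loss + lam * penalty
    return loss

def discriminator_loss(G, D, s_t, a_t, s_next, g):
    # Equation (4): score on real transitions minus score on generated ones.
    z = torch.randn(s_t.shape[0], Z_DIM)
    a_fake, s_fake = G(z, s_t, g)
    return D(s_t, g, s_next, a_t).mean() - D(s_t, g, s_fake, a_fake).mean()
```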
After some initial training, further trajectories generated by the planner (described in the next section) are also added to R whilst training continues, allowing for continuous, open-ended improvement. Note that this makes the data distribution we are trying to emulate non-stationary, as new self-collected data is constantly being added. The sampled goals from the replay buffer are always taken as goals achieved at a randomly chosen time step that occurs later within the same trajectory.

The basic building block is a generator that takes a state, goal and noise vector and produces an action and next state. However, during training we actually generate trajectories consisting of τ time steps. That is, we take the generated state from the previous step and use this as input to the generator to produce a new action/next state pair, and repeat. The generator is then trained by backpropagating through these unrolled trajectories. In more detail, we sample batches of real trajectories made up of τ transitions from the buffer: (s_0, a_0, g_0, s_1, a_1, g_1, ..., s_{τ−1}, a_{τ−1}, g_{τ−1}, s_τ), where each goal g_i is an achieved goal at a later time along that same trajectory (we found that choosing a different goal at each time step worked better than just a single goal for the whole trajectory). We then use the generator to generate a trajectory (ŝ_0 = s_0, â_0, g_0, ŝ_1, â_1, g_1, ..., ŝ_{τ−1}, â_{τ−1}, g_{τ−1}, ŝ_τ), where ŝ_t, â_{t−1} = G_φ(z_t, ŝ_{t−1}, g_{t−1}). Batches of these real and imagined trajectories are then used to calculate the expectations in the losses shown in Equations (3) and (4). Training end-to-end on sequences of transitions imposes more constraints on the generator, requiring full trajectories to be difficult for the discriminator to distinguish rather than just individual transitions, and is crucial for good performance.

Each GAN and one-step model in the ensemble has a different random initialisation and is trained on different batches of data sampled from the same replay buffer. As discussed in the context of using an ensemble of one-step models for model-based RL [17], this is enough to give the models significant diversity. We study the benefits of using an ensemble over a single GAN in Section 5.
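The τ-step unrolling described above can be sketched as follows; the latent dimension and the generator call signature are the same assumptions as in the earlier loss sketch.

```python
import torch

def unroll_generator(G, s0, goals, z_dim=16):
    """Unrolled trajectory generation (Section 4.1): starting from a real
    s_0, repeatedly feed the generator its own predicted state, conditioning
    at step t on the relabelled goal g_t taken from the corresponding real
    trajectory. Gradients flow through the whole unrolled sequence, so the
    generator is trained on full trajectories rather than single steps."""
    states, actions = [s0], []
    s = s0
    for g_t in goals:                       # goals = (g_0, ..., g_{tau-1})
        z = torch.randn(s.shape[0], z_dim)
        a, s = G(z, s, g_t)                 # \hat{a}_t, \hat{s}_{t+1} = G(z_t, \hat{s}_t, g_t)
        actions.append(a)
        states.append(s)
    return states, actions                  # fed into the losses of Eqs. (3)-(4)
```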
4.2 Planning to achieve a goal

Once we have an ensemble of GANs that has been trained on some amount of real data, we use these to plan the actions to take in the environment to achieve a given goal, g. Our planner's basic structure shares similarities with a number of other model-predictive control based approaches [15, 16, 17, 38]: use a model to generate a number of imaginary future trajectories, score them, use these scores to choose the next action, and repeat this whole procedure at the next step. The novelty in our approach lies in the fact that our trajectories are generated using GANs, the way that we score the trajectories, and how we make use of an ensemble of models.

To plan the next action to take from the current state s_t towards a desired goal g, we first sample a set of Y initial actions and next states, {a^y_t, s^y_{t+1}}_{y=1}^{Y}. For each y, a^y_t and s^y_{t+1} are generated from a random generator in the ensemble, conditioned on s_t and g, i.e. a^y_t, s^y_{t+1} = G_{φ_i}(z, s_t, g), where i ∼ Uniform{1, ..., N}. Our aim is then to give each of these initially proposed actions a score which captures how effective they are in terms of moving towards the final goal g.

A good score here should reflect the fact that we want the next action to be moving us towards g as quickly as possible whilst also ensuring that the goal can be retained at later time steps. For example, we would not want to score too highly an action that moved an object close to the desired goal with very high velocity, such that it would overshoot and not remain there at later time steps. To obtain such a score we duplicate each of the Y initial seed actions and next states C times. Each next state s^{y,c}_{t+1} (y = 1, ..., Y; c = 1, ..., C) is then used as the starting point for a trajectory of length T. These hypothetical trajectories are all generated using a different randomly chosen GAN at each time step, so for example s^{y,c}_{t+w} is generated from a random generator in the ensemble conditioned on (s^{y,c}_{t+w−1}, g). Once we have generated these trajectories, we give each of them a score based on the fraction of time they spend achieving the goal. This means that trajectories that reach the goal quickly are scored highly, but only if they are able to remain there. Trajectories that do not reach the goal within T steps are given a score of zero. We can then score each of the initial seed actions {a^y_t}_{y=1}^{Y} based on the average score of all the imagined trajectories that started with that action. These scores are normalised and denoted as n_y, and we define weights w_y = e^{α n_y}, where α > 0 is a hyperparameter. The final action returned by the planner is either the action with the maximum score or an exponentially weighted average of the initially proposed actions:

\[
a_t = \frac{\sum_{y=1}^{Y} w_y a^y_t}{\sum_{y'=1}^{Y} w_{y'}}
\]

The rationale for using a different random generator at each step of every hypothetical trajectory is that we will be giving higher scores to initial actions that all of the GANs agree can spend a lot of time achieving the goal. This improves the robustness of the predictions and protects against errors in terms of unrealistic imagined future trajectories generated by any single GAN. The full procedure is summarised in Algorithm 1.

Algorithm 1: PlanGAN
initialise: generators {G_{φ_m}}_{m=1}^{M}, discriminators {D_{θ_m}}_{m=1}^{M}, one-step models {f_{β_k}}_{k=1}^{K}, replay buffer R, environment Env
begin
    for j = 1 : J do
        Append random trajectory (s_0, a_0, g_0, ..., a_{T−1}, s_T, g_T) to R
    for y = 1 : Y do
        train()
    for e = 1 : E do
        Sample goal g from environment
        (s_0, a_0, g_0, ..., s_T, g_T) = planner(g)
        Append (s_0, a_0, g_0, ..., s_T, g_T) to R
        for p = 1 : P do
            train()

procedure train()
    for m = 1 : M do
        Sample batch of B_g trajectories from R: {(s_0, a_0, ĝ_0, ..., ĝ_{τ−1}, s_τ)}_{b=1}^{B_g}
        Use G_{φ_m} to generate a batch of B_g imagined trajectories, starting from the real s_0 values and conditioning on the same goals ĝ_0, ..., ĝ_{τ−1} as in the real trajectories
        Train G_{φ_m}, D_{θ_m} with Equations (3) and (4)
    for k = 1 : K do
        Sample batch of B_m transitions from R: (s_t, a_t, s_{t+1})
        Train f_{β_k} to minimise E[ ||f_{β_k}(s_t, a_t) − (s_{t+1} − s_t)||_2^2 ]

procedure planner(g)
    s_0, g_0 ← Env.reset(); Trajectory = (s_0, g_0)
    for t = 0 : T − 1 do
        InitAcs = {}; Scores = {}
        for y = 1 : Y do
            i ∼ Uniform(1, ..., M)
            z = [z_k]_{k=1}^{d}, z_k ∼ N(0, 1)
            ŝ^y_{t+1}, â^y_t = G_{φ_i}(z, s_t, g)
            InitAcs.append(â^y_t)
            ImaginedTrajs = {}
            for c = 1 : C do
                ŝ^{y,c}_{t+1} = ŝ^y_{t+1}
                for t′ = t+1 : t+T do
                    i ∼ Uniform(1, ..., M)
                    z = [z_k]_{k=1}^{d}, z_k ∼ N(0, 1)
                    ŝ^{y,c}_{t′+1}, â^{y,c}_{t′} = G_{φ_i}(z, ŝ^{y,c}_{t′}, g)
                ImaginedTrajs.append(ŝ^{y,c}_{t+1}, ..., ŝ^{y,c}_{t+T})
                score[y] = (1/(T+1)) Σ_{t′=t}^{t+T} 1(ŝ^{y,c}_{t′}, g)
            Scores.append(score[y])
        Scores = Normalise(Scores)
        a_t = Σ_{y=1}^{Y} e^{α Scores[y]} â^y_t / Σ_{y′=1}^{Y} e^{α Scores[y′]}
        s_{t+1}, g_{t+1} = Env.step(a_t)
        Trajectory.append(a_t, s_{t+1}, g_{t+1})
    return Trajectory
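A minimal sketch of a single planner step follows. The `achieves` predicate, the generator call signature, the latent dimension, and the standardisation used as the score normalisation are all assumptions for illustration (the paper leaves the normalisation unspecified).

```python
import random
import torch

def plan_action(generators, achieves, s_t, g, Y=50, C=5, T=20, alpha=5.0, z_dim=16):
    """One planner step (Section 4.2 / Algorithm 1): sample Y candidate first
    actions from randomly chosen ensemble members, roll out C imagined
    trajectories of length T per candidate (new random GAN at every step),
    score each candidate by the average fraction of time its trajectories
    spend achieving g, and return an exponentially weighted average action."""
    seed_actions, scores = [], []
    for _ in range(Y):
        z = torch.randn(1, z_dim)
        a0, s1 = random.choice(generators)(z, s_t, g)  # candidate first action
        seed_actions.append(a0)
        total = 0.0
        for _ in range(C):
            s, hits = s1, 0.0
            for _ in range(T):
                if achieves(s, g):
                    hits += 1.0
                z = torch.randn(1, z_dim)
                _, s = random.choice(generators)(z, s, g)  # new random GAN each step
            total += hits / T                 # fraction of time spent at the goal
        scores.append(total / C)
    scores = torch.tensor(scores)
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # assumed normalisation
    w = torch.exp(alpha * scores)
    return sum(w_y * a for w_y, a in zip(w, seed_actions)) / w.sum()
```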
5 Experiments

We perform experiments in four continuous environments built in MuJoCo [39] — Four Rooms Navigation, Three-link Reacher, Fetch Push and Fetch Pick And Place (see Figure 3). Full details about these environments, along with the hyperparameters used for the experiments, can be found in the Appendix. We evaluate the performance in terms of the percentage of goals that the agent is able to reach vs. the number of time steps it has interacted with the real environment.¹

Figure 3: Environments that we evaluate PlanGAN on. (a) Four rooms navigation. (b) Reacher (Three Links). (c) Fetch Push. (d) Fetch Pick And Place.

5.1 Comparisons

We have compared the performance of our algorithm against a number of leading model-free methods designed for multi-goal, sparse reward environments (Figure 4). The most natural baseline to compare with is HER (using DDPG as the core RL algorithm [8]), as this is based on a similar underlying principle to PlanGAN. We also include DDPG without HER to demonstrate how standard model-free RL methods struggle with these tasks. For both of these we use the implementations found in OpenAI Baselines [40]. We also include comparisons with two recently proposed modifications to HER, "Curriculum-Guided HER" [30] (CHER) and "Soft HER" [32]² (SHER). We also include a model-based baseline (PETS) [41], which also makes use of ensembles of models but which is not designed specifically with multi-goal, sparse reward tasks in mind. Note that it is computationally prohibitive to run this method for as long as the model-free methods; however, we run it for at least as many steps as we run PlanGAN. Finally, for the Fetch Push and Fetch Pick And Place environments, we include comparisons with a recent method, "Simulated Locomotion Demonstrations" (SLD) [42], which requires an object to be defined. SLD uses the fact that with a simulator objects can move by themselves, so a separate object policy can be learned where the object moves itself to the desired goal. SLD leverages this object policy to guide the learning of the full robot policy. This gives it a significant advantage over PlanGAN, as it makes use of separately learned self-generated demonstrations to guide the training; however, we see that PlanGAN still achieves significantly better data efficiency.

¹Videos of results are available here: https://sites.google.com/view/plangan/home
²Using the official implementations found here and here, respectively.

All plots are based on running each experiment using 5 different random seeds, with the solid line representing the median performance and the shaded area representing one standard deviation around the mean. We also include a line showing the average asymptotic performance of HER (as this is the most directly comparable method). Note that the environment interactions recorded on the training curves for PlanGAN do include both the initial random trajectories as well as any trajectories that are not stored in the buffer (when the final goal is identical to the initial goal). In all of the tasks considered we find that PlanGAN is significantly more sample efficient than any of the other model-free methods, requiring between 4-8 times less data to reach the same performance as HER. This is comparable to the sample efficiency gains reported in [15] for a model-based approach to dense reward tasks over leading model-free methods.
It also substantially outperforms the model-based baseline (PETS), which is not designed for sparse-reward, multi-goal environments.

5.2 Ablation studies

In this section we study how various decisions we have made affect PlanGAN's performance by performing ablation studies on the two more complicated environments considered (Fetch Push and Fetch Pick And Place). Firstly, we study whether the planner is a crucial component of our set-up. The first panel in Figure 1 in the Appendix shows a comparison of the full PlanGAN with a couple of variations that more directly use the actions proposed by the GANs. Both of these lead to significantly lower success rates, suggesting that the planner we use is crucial. We then consider how the number of GANs in the ensemble affects PlanGAN's performance. The second panel in Figure 1 (Appendix) shows results for ensembles made up of 1, 3 and 5 GANs respectively. Whilst less significant than the inclusion of the planner, we find that using only a single GAN leads to slower and significantly less stable training. We also see that the larger ensemble (5 GANs) outperforms the smaller ensemble (3 GANs), but the difference in performance is relatively small. Finally, we consider running the algorithm with λ = 0, i.e. without any regularisation from the one-step predictive models. We see that the one-step model regularisation provides only a very minor improvement, suggesting that it is not a crucial component of PlanGAN.

6 Conclusions

We proposed PlanGAN, a model-based method for solving multi-goal environments with sparse rewards. We showed how to train a generative model to generate plausible future trajectories that lead from a given state towards a desired goal, and how these can be used within a planning algorithm to achieve these goals efficiently. We demonstrated that this approach leads to a substantial increase in sample efficiency when compared to leading model-free RL methods that can cope with sparse rewards and multiple goals. In the future we would like to extend this work so that it can be applied to more complex environments. One of the main limitations of the current approach is the planner. When the number of time steps required to complete a task becomes large, the planner becomes computationally expensive, since at each step we have to simulate a large number of future steps out until completion. We also need these trajectories to be at least reasonably accurate over a large number of time steps, as imagined future trajectories that do not reach the desired goal are given a score of zero. If no imagined trajectories reach the goal then the planner is unable to meaningfully choose an action. Future work which may more efficiently deal with longer-horizon tasks could involve combining the GAN training with a model-free goal-conditioned value function (creating a hybrid method, similar to STEVE [19] and Dreamer [7]) which could learn to assign a value to the actions proposed by the GANs, removing the need for a planner entirely.

Statement of Broader Impact

Since our work involves foundational research in the field of model-based reinforcement learning it is unlikely to have any large, immediate impacts on society. Nevertheless, in the longer term the impact of reinforcement learning agents capable of learning to autonomously make decisions could be huge.
In principle one could discuss a huge range of potential impacts over different time frames, but we choose to focus on some potential medium-term impacts of robots that can learn autonomously from sparse rewards. Robots are pervasive in the modern world and are used in a wide range of manufacturing industries for carrying out tedious, repetitive work. Introducing robots that are capable of autonomously learning a set of skills from easy-to-specify reward functions has the potential to vastly increase the scope of possible tasks that they can be used for. In particular, it removes the requirement for their behaviours to be carefully engineered in a manual fashion for every possible scenario they might encounter. This has the potential to allow many tasks that currently can only be carried out by human workers to become fully or partially automated. Whilst this could provide a huge economic boost to some manufacturing companies, it is important that this benefit is weighed against the potential negative impacts (both social and economic) that losing these manufacturing jobs could have — particularly if large-scale changes were to occur in a short period of time. We feel that this is an important question for both economists/policy advisors as well as researchers working in the field to think about.

Acknowledgements / Funding Disclosure

This work was partially funded by Catapult (High Value Manufacturing) within Warwick Manufacturing Group.
1. What is the focus and contribution of the paper regarding goal-directed tasks?
2. What are the strengths of the proposed approach, particularly in terms of its performance compared to other methods?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works and its applicability to various domains?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The authors are interested in solving goal-directed tasks with sparse rewards. The authors propose to train a goal-conditioned dynamics model that can easily make next-state predictions. Specifically, they train an ensemble of goal-conditioned GANs to generate the next state and action conditioned on a goal and current state. These one-step GAN models use their own predictions to generate trajectories. The GAN is trained to discriminate between generated and real trajectories, where the goal is relabelled with some achieved goal. To choose actions, the authors then sample a number of actions and trajectories given the current state, and choose the actions that lead to trajectories that have the highest score. (The score is the fraction of states that achieve some goal.) The authors demonstrate across a variety of robot tasks that the method performs significantly better than model-free methods such as HER, curriculum HER, and soft HER.

Strengths
The empirical results of the paper are very strong and consistently outperform model-free methods on sparse reward tasks. These impressive results are likely to be of interest to researchers studying goal-conditioned reinforcement learning and model-based learning.

Weaknesses
While the empirical results are strong, the paper would be strengthened with a comparison to a model-based method that is not goal-conditioned. In fact, given that the main claim at the end of the introduction is that their method is "explicitly designed for multi-goal, sparse reward" and therefore "[leads] to a significant improvement in sample efficiency," this seems like a very important comparison to run. The paper would also be strengthened by testing the method on some more difficult domains, such as the "hand" environments that have higher intrinsic dimension, and where model-free methods generally excel.
NIPS
Title PlanGAN: Model-based Planning With Sparse Rewards and Multiple Goals Abstract Learning with sparse rewards remains a significant challenge in reinforcement learning (RL), especially when the aim is to train a policy capable of achieving multiple different goals. To date, the most successful approaches for dealing with multi-goal, sparse reward environments have been model-free RL algorithms. In this work we propose PlanGAN, a model-based algorithm specifically designed for solving multi-goal tasks in environments with sparse rewards. Our method builds on the fact that any trajectory of experience collected by an agent contains useful information about how to achieve the goals observed during that trajectory. We use this to train an ensemble of conditional generative models (GANs) to generate plausible trajectories that lead the agent from its current state towards a specified goal. We then combine these imagined trajectories into a novel planning algorithm in order to achieve the desired goal as efficiently as possible. The performance of PlanGAN has been tested on a number of robotic navigation/manipulation tasks in comparison with a range of model-free reinforcement learning baselines, including Hindsight Experience Replay. Our studies indicate that PlanGAN can achieve comparable performance whilst being around 4-8 times more sample efficient. 1 Introduction One of the primary appeals of reinforcement learning (RL) is that it provides a framework for the autonomous learning of complex behaviours without the need for human supervision. In recent years RL has had significant success in areas such as playing video games [1, 2], board games [3, 4] and robotic control tasks [5, 6, 7]. Despite this, progress in applying RL to more practically useful environments has been somewhat limited. One of the main problems is that RL algorithms generally require a well-shaped, dense reward function in order to make learning progress. Often a reward function that fully captures the desired behaviour of an agent is not readily available and has to be engineered manually for each task, requiring a lot of time and domain-specific knowledge. This defeats the point of designing an agent that is capable of learning autonomously. A more general approach is to learn with sparse rewards, where an agent only receives a reward once a task has been completed. This is much easier to specify and is applicable to a wide range of problems, however training becomes significantly more challenging since the agent only receives infrequent feedback at the end of every rollout. This becomes especially challenging in the case of goal-conditioned RL [8, 9], where the aim is to train a policy that can achieve a variety of different goals within the environment. Much of RL’s success has come with model-free approaches, where the policy is learned directly from the reward signal obtained by interacting with the environment. However recently there has been a lot of interest in applying model-based approaches to the same kind of problems [7, 10, 11]. One 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. of the main drawbacks of model-free RL algorithms is that they tend to be very sample inefficient, requiring a huge number of interactions with the environment in order to make learning progress. On the other hand, model-based methods make use of a learned model to plan their actions without directly interacting with the environment. 
Learning a model allows these methods to make use of a lot more information that is present in the observed transitions than just the scalar reward signal, and so generally this leads to a significant improvement in sample efficiency. This efficiency can sometimes come at the cost of worse asymptotic performance due to errors in the model introducing a bias towards non-optimal actions, although current state of the art approaches [7, 10] are able to achieve comparable performance to some of the best model-free approaches [12, 13]. However, as with most RL algorithms, model-based approaches generally need a dense reward signal to work well. We are not aware of a model-based approach specifically designed to work in the sparse-reward, multi-goal setting. To date, the most successful general-purpose RL algorithm for dealing with sparse rewards and multiple goals is Hindsight Experience Replay (HER) [8], a model-free algorithm. HER works by taking advantage of the fact that, when learning a goal-conditioned policy with an off-policy RL algorithm, observed transitions from a trajectory can be re-used as examples for attempting to achieve any goal. In particular, by re-labelling transitions with goals achieved at a later point during the same trajectory HER trains the goal-conditioned policy on examples that actually led to success — hence obtaining a much stronger learning signal. In this paper we present PlanGAN, a model-based algorithm that can naturally be applied to sparsereward environments with multiple goals. The core of our method builds upon the same principle that underlies HER — namely that any goal observed during a given trajectory can be used as an example of how to achieve that goal from states that occurred earlier on in that same trajectory. However, unlike HER, we do not directly learn a goal-conditioned policy/value function but rather train an ensemble of Generative Adversarial Networks (GANs) [14] which learn to generate plausible future trajectories conditioned on achieving a particular goal. We combine these imagined trajectories into a novel planning algorithm that can reach those goals in an efficient manner. We test PlanGAN on a number of robotic manipulation and navigation tasks and show that it can achieve similar levels of performance to leading model-free methods (including Hindsight Experience Replay) but with substantially improved sample efficiency. The primary contribution of this paper is to introduce the first model-based method which is explicitly designed for multi-goal, sparse reward environments, leading to a significant improvement in sample efficiency. 2 Related Work A number of model-based approaches have utilised explicit planning algorithms, but have mostly been applied to single tasks with relatively dense rewards. Nagabandi et al. [15] use iterative random shooting within a deterministic neural network dynamics model in order to solve a number of continuous control tasks. Hafner et al. [16] learn a latent representation from images and then plan within this latent space using CEM. Nagabandi et al. [17] use a similar planning algorithm (MPPI) [18] within an ensemble of learned models in order to perform dexterous manipulation tasks. Other methods have had success with a hybrid approach, combining elements of model-based and model-free RL, and as in this work often use ensembles of models in order to improve robustness. 
STEVE [19] uses rollouts produced by an ensemble of models and Q-functions in order to obtain a robust estimate for the Q-learning target. Model-Ensemble TRPO [20] uses an ensemble of models as a simulator for running a model-free RL algorithm (trust-region policy optimisation) whilst maintaining some level of uncertainty for when the model’s predictions are valid. I2A [21] learns to interpret imagined trajectories generated by a model to augment the model-free training of a policy/value function. Temporal Difference Models (TDMs) [22] try to link model-based and model-free RL in the context of time-dependent, goal-conditioned value functions. Here, the model is itself the goal-conditioned value function, and is learned with model-free, off-policy RL. However, they require a meaningful distance metric between states to be defined and so do not work with fully sparse rewards. Nasiriany et al. [23] combine TDMs as an implicit model with a planning algorithm that allows them to plan over multiple abstract sub-goals. They apply this to solve long-horizon, goal-conditioned tasks directly from images. Azizzadenesheli et al. [24] use a Wasserstein GAN with spectral normalisation to learn a predictive model that they use with Monte-Carlo Tree Search to solve ATARI games. Although they do not find particularly strong results overall, they show that they are able to learn an extremely accurate model with stable training of the GAN even in a non-stationary environment. A significant difference with our work is that they train a GAN that takes an action and a state and predicts the next state, whereas we train the GANs to imagine full trajectories (also their focus is on image-based environments). GANs have also been used for curriculum learning in goal-conditioned RL [25], where a generator was trained to propose goals at an appropriate level of difficulty for the current agent to achieve. In terms of learning with sparse rewards, a number of approaches have had success by providing the agent with intrinsic rewards in order to aid with exploration [26, 27, 28]. However, in the multi-goal setting a majority of the most successful approaches have built upon Hindsight Experience Replay (HER) [8]. Zhao & Tresp [29] improve HER’s performance on certain robotics environments by more frequently resampling trajectories where the objects have higher energy. Fang et al. [30] propose an adaptive mechanism to select failed experiences based on a combination of the diversity of the achieved goals and their proximity to the desired goals. Liu et al. [31] propose a complementary re-labelling scheme in the context of a competitive exploration game between two agents in order to supplement HER. He at al. [32] introduce a method that combines HER with maximum entropy RL. Taking a different approach (but still closely related to HER), Ghosh et al. [33] introduce a method that learns goal-conditioned policies without explicitly using reinforcement learning. They use supervised behavioural cloning (a form of imitation learning) to train a policy to reach the goals that have been observed on the trajectories the agent itself has generated. Whilst simpler than HER, it does not use a model and does not claim to significantly improve upon HER’s sample efficiency. 3 Preliminaries 3.1 Goal-Conditioned Reinforcement Learning We consider the problem of an agent interacting within an environment in order to learn how to achieve any given goal g from a set of possible goals G. 
We assume that the environment is fully observable and can be described by: a set of states, S; a set of possible actions, A; a distribution of initial states, p(s0); and a transition function P (st+1|st, at) (st, st+1 ∈ S, at ∈ A). In the standard reinforcement setting we have a reward function, R(st, at, st+1). In the goal-conditioned setting the reward also depends on the goal that the agent is trying to achieve, i.e. R(st, at, st+1, g). Assuming that goals are sampled from some distribution p(G), the aim of goal-conditioned RL is to learn a policy, π(st, g), that maximises the expected discounted sum of future rewards: Es0∼p(s0) g∼p(G) at∼π(st,g) st+1∼P (st+1|st,at) [ ∞∑ t=0 γtR(st, at, st+1, g) ] (1) where γ ∈ [0, 1] is a discount factor assigning larger weights to more immediate rewards. We consider the special case where the reward function is sparse and given by an indicator function that only depends on the next state and the goal: R(st, at, st+1, g) = 1(st+1, g) = { 1, if st+1 achieves g, 0, otherwise (2) i.e. we have some criteria that tells us whether any given state s achieves any given goal g, and only provide a reward when this is satisfied. 3.2 Hindsight Experience Replay (HER) In complex environments it is extremely unlikely that the specified goal g will ever be achieved by chance. As such, standard RL algorithms struggle in sparse-reward, multi-goal environments because they receive very little learning signal from which they can improve their policy. The key insight of HER is that trajectories that don’t achieve the specified goal still contain useful information about how to achieve other goals — namely those that are observed later on during the same trajectory. By using an off-policy RL algorithm such as DQN [34] or DDPG [35] it is possible to re-label samples that were collected by the policy whilst attempting to achieve a goal g with an alternative goal g′, and subsequently re-compute the reward. For example, if (st, at, rt, st+1, g) is sampled from a replay buffer of past experience, g can be replaced with another goal g′ that occurs later in the trajectory, and then a reward for this new goal can be recomputed: r′t = R(st, at, st+1, g ′). This new transition can still be used in training an off-policy RL algorithm since the original goal only influences the agent’s action, but not the dynamics of the environment. By re-labelling transitions this way HER can significantly speed up the learning of a goal-conditioned policy since it increases the frequency with which the transitions seen in training actually lead to the specified goals being achieved. 4 Methods The key insight of our method is that the same principle underlying HER — i.e. that any observed trajectory contains useful information about how to achieve the goals observed during that trajectory — has the potential to be used more efficiently as part of a model-based algorithm. In particular, instead of re-labelling transitions and re-computing rewards, we propose to make more complete use of the information contained within the observed transitions by training a generative model that can generate plausible transitions leading from the current state towards a desired goal. That is, we use experience gathered by the agent to train a goal-conditioned model that can generate future trajectories (states and actions) that move the agent towards any goal that we specify. 
These imagined trajectories do not necessarily need to be optimal in the sense of moving directly towards the goal, since the second key component of our method involves feeding these proposed trajectories into a planning algorithm that decides which action to take in order to achieve the goal in as few steps as possible. Whilst in principle a number of generative models could be used for this purpose, in this work we choose to use GANs [14], since they can easily deal with high-dimensional inputs and do not explicitly impose any restrictions on the form of the distribution produced by the generator. Specifically, we choose to use WGANs (Wasserstein GANs) [36] with spectral normalisation [37], as recent work has shown that these can be trained in a stable manner even when the underlying training data is non-stationary [24]. 4.1 Training the GAN(s) The aim of the first major component of our method is to train a generative model that can take in the current state st along with a desired goal g and produce an imagined action at and next state st+1 that moves the agent towards achieving g. We approach this by training an ensemble of N conditional-GANs, each consisting of a generator Gφi and a discriminator Dθi where {θi}Ni=1, {φi}Ni=1 are the parameters of the neural networks that represent these functions. The generators take in the current state st, a noise vector z and the target goal g in order to produce an imagined action at and next state st+1. The discriminators take in st, at, st+1 and g and aim to distinguish whether or not this is a transition from a real trajectory that eventually reaches goal g or an example created by the generator. We also consider a variation where concurrently we train an ensemble of Nm deterministic one-step predictive models of the environment. The aim of these predictive models is to take a state-action pair (st, at) and predict the difference between the next state and the current state, st+1 − st, as in [15]. We denote these models as fβj , where {βj} Nm j=1 represent the parameters neural networks representing these functions. These predictive models can be used to provide an L2 regularisation term in the generator loss that encourages the generated actions and next states to be consistent with the predictions of the one-step models — although this is not necessary to make the method work (we study the effect of using predictive models this way in Section 5). The whole setup is shown schematically in Figure 1. The loss for the ith generator is as follows: L(i)generator = Ez∼p(z) st,g∼R st+1,at∼Gφi (z,st,g) Dθi(st, g, st+1, at) + λ 1Nm Nm∑ j=1 ((st+1 − st)− fβj (st, at))2 (3) where R is a replay buffer of real experienced trajectories, z ∼ p(z) is a noise vector where each component is sampled independently from the standard normal N (0, 1) and λ is a parameter that weights how strongly we penalise deviations in the generated action/next state from the average predictions made by one-step models. The loss for the ith discriminator is: L(i)discriminator = E st,at,st+1,g∼R [Dθi(st, g, st+1, at)]− Ez∼p(z)st,g∼R st+1,at∼Gφi (z,st,g) [Dθi(st, g, st+1, at)] (4) The replay buffer R is populated initially by random trajectories, however we find it helpful to filter (i.e. not store) trajectories where the final achieved goal is identical to the initial achieved goal, since these provide nothing useful for the GANs to learn from. 
After some initial training further trajectories generated by the planner (described in the next section) are also added toR whilst training continues, allowing for continuous, open-ended improvement. Note that this makes the data distribution we are trying to emulate non-stationary as new self-collected data is constantly being added. The sampled goals from the replay buffer are always taken as goals achieved at a randomly chosen time step that occurs later within the same trajectory. The basic building block is a generator that takes a state, goal and noise vector and produces an action and next state. However, during training we actually generate trajectories consisting of τ time steps. That is, we take the generated state from the previous step and use this as input to the generator to produce a new action/next state pair, and repeat. The generator is then trained by backpropagating through these unrolled trajectories. In more detail, we sample batches of real trajectories made up of τ transitions from the buffer: (s0, a0, g0, s1, a1, g1, . . . , sτ−1, aτ−1, gτ−1, sτ ), where each goal gi is an achieved goal at a later time along that same trajectory (we found that choosing a different goal at each time step worked better than just a single goal for the whole trajectory). We then use the generator to generate a trajectory (ŝ0 = s0, â0, g0, ŝ1, â1, g1, . . . , ŝτ−1, âτ−1, gτ−1, ŝτ ), where ŝt, ât−1 = Gφ(zt, ŝt−1, gt−1). Batches of these real and imagined trajectories are then used to calculate the expectations in the losses shown in Equations 3 and 4. Training end-to-end on sequences of transitions imposes more constraints on the generator, requiring full trajectories to be difficult for the discriminator to distinguish rather than just individual transitions, and is crucial for good performance. Each GAN and one-step model in the ensemble has a different random initialisation and is trained on different batches of data sampled from the same replay buffer. As discussed in the context of using an ensemble of one-step models for model-based RL [17], this is enough to give the models significant diversity. We study the benefits of using an ensemble over a single GAN in the Section 5. 4.2 Planning to achieve a goal Once we have an ensemble of GANs that has been trained on some amount of real data, we use these to plan the actions to take in the environment to achieve a given goal, g. Our planner’s basic structure shares similarities with a number of other model-predictive control based approaches [15, 16, 17, 38] — make use of a model to generate a number of imaginary future trajectories, score them, use these scores to choose the next action, and repeat this whole procedure at the next step. The novelty in our approach is in the fact that our trajectories are generated using GANs, the way that we score the trajectories and how we make use of an ensemble of models. To plan the next action to take from the current state st towards a desired goal g, we first sample a set of Y initial actions and next states, {ayt , s y t+1}Yy=1. For each y, a y t and s y t+1 are generated from a random generator in the ensemble, conditioned on st, g, i.e. a y t , s y t+1 = Gφi(st, g, z), where i ∼ Uniform{1, . . . , N}. Our aim is then to give each of these initially proposed actions a score which captures how effective they are in terms of moving towards the final goal g. 
A good score here should reflect the fact that we want the next action to be moving us towards g as quickly as possible whilst also ensuring that the goal can be retained at later time steps. For example, we would not want to score too highly an action that moved an object close to the desired goal with very high velocity such that it would overshoot and not remain there at later time steps. To obtain such a score we duplicate each of the Y initial seed actions and next states C times. Each next state {sy,kt+1}Y Cy=1,k=1 is then used as the starting point for a trajectory of length T . These hypothetical trajectories are all generated using a different randomly chosen GAN at each timestep, so for example sy,ct+w is generated from a random generator in the ensemble conditioned on (sy,ct+w−1, g). Once we have generated these trajectories, we give each of them a score based on the fraction of time they spend achieving the goal. This means that trajectories that reach the goal quickly are scored highly, but only if they are able to remain there. Trajectories that do not reach the goal within T steps are given a score of zero. We can then score each of the initial seed actions {ayt }Yy=1 based on the average score of all the imagined trajectories that started with that action. These scores are normalised and denoted as ny, and we define weights wy = eαny , where α > 0 is a hyperparameter. The final action returned by the planner is either the action with the maximum score or an exponentially weighted average of the initially proposed actions, at = ∑Y y=1 wyay∑Y y′=1 wy′ . The rationale for using a different random generator at each step of every hypothetical trajectory is that we will be giving higher scores to initial actions that all of the GANs agree can spend a lot of time achieving the goal. This improves the robustness of the predictions and protects against errors in terms of unrealistic imagined future trajectories generated by any single GAN. 5 Experiments We perform experiments in four continuous environments built in MuJoCo [39] — Four Rooms Navigation, Three-link Reacher, Fetch Push and Fetch Pick And Place (see Figure 3). Full details about these environments, along with the hyper parameters used for the experiments, can be found in the Appendix. We evaluate the performance in terms of the percentage of goals that the agent is able to reach vs. the number of time steps it has interacted with the real environment1. 5.1 Comparisons We have compared the performance of our algorithm against a number of leading model-free methods designed for multi-goal, sparse reward environments (Figure 4). The most natural baseline to compare 1Videos of results are available here: https://sites.google.com/view/plangan/home Algorithm 1: PlanGAN initialise: generators {Gφm}Mm=1, discriminators {Dθm}Mm=1, one-step models {fβk}Kk=1, replay bufferR, environment Env begin for j = 1 : J do Append random trajectory (s0, a0, g0, . . . , aT−1, sT , gT ) toR for y = 1 : Y do train() for e = 1 : E do Sample goal g from environment (s0, a0, g0, . . . , sT , gT ) = planner(g) Append (s0, a0, g0, . . . , sT , gT ) toR for p = 1 : P do train() procedure train() for m = 1 :M do Sample batch of Bg trajectories fromR: (s0, a0, ĝ0, . . . , ĝτ−1, sτ ) Bg b=1 Use Gφm to generate a batch of Bg imagined trajectories, starting from the real s0 values and conditioning on the same goals ĝ0, . . . 
, ĝτ−1 as in the real trajectories Train Gφm , Dθm with equations 3 and 4 for k = 1 : K do Sample batch of Bm transitions fromR: (st, at, st+1) Train fβk to minimise: E [ ||fβj (st, at)− (st+1 − st)||22 ] procedure planner(g) s0, g0 ←− Env.reset(); Trajectory = (s0, g0) for t = 0 : T − 1 do InitAcs = {}; Scores = {} for y = 1 : Y do i ∼ Uniform(1, . . . ,M) z = [zk] d k=1, zk ∼ N (0, 1) ŝyt+1, â y t = Gφi(z, st, g) InitAcs.append(âyt ) ImaginedTrajs = {} for c = 1 : C do sy,ct+1 = s y t+1 for t′ = t+ 1 : t+ T do i ∼ Uniform(1, . . . ,M) z = [zk] d k=1, zk ∼ N (0, 1) ŝy,ct′+1, â y,d t′ = Gφi(z, ŝ y,c t′ , g) ImaginedTrajs.append(ŝy,ct+1, . . . , ŝ y,c t+T ) score[y] = 1T+1 ∑t+T t′=t 1(ŝ y,c t′ , g) scores.append(score[y]) scores = Normalise(scores) at = ∑Y y=1 e α scores[y]âyt∑Y y′=1 e α scores[y′] st+1, gt+1 = Env.step(at) Trajectory.append(at, st+1, gt+1) return Trajectory Figure 3: Environments that we evaluate PlanGAN on. (a) Four rooms navigation. (b) Reacher (Three Links). (c) Fetch Push. (d) Fetch Pick And Place. with is HER (using DDPG as the core RL algorithm [8]), as this is based on a similar underlying principle to PlanGAN. We also include DDPG without HER to demonstrate how standard model-free RL methods struggle with these tasks. For both of these we use the implementations found in OpenAI Baselines [40]. We also include comparisons with two recently proposed modifications to HER, “Curriculum-Guided HER" [30] (CHER) and “Soft HER" [32]2 (SHER). We also include a model-based baseline (PETS)[41], which also makes use of ensembles of models but which is not designed specifically with multi-goal, sparse reward tasks in mind. Note that it is computationally prohibitive to run this method for as long as the model-free methods, however we run it for at least as many steps as we run PlanGAN. Finally, for the Fetch Push and Fetch Pick And Place environments, 2using the official implementations found here and here respectively we include comparisons with a recent method “Simulated Locomotion Demonstrations" (SLD) [42], which requires an object to be defined. SLD uses the fact that with a simulator objects can move by themselves, so a separate object policy can be learned where the object moves itself to the desired goal. SLD leverages this object policy to guide the learning of the full robot policy. This gives it a significant advantage over PlanGAN as it makes use of separately learned self-generated demonstrations to guide the training, however we see that PlanGAN still achieves significantly better data efficiency. All plots are based on running each experiment using 5 different random seeds, with the solid line representing the median performance and the shaded area representing one standard deviation around the mean. We also include a line showing the average asymptotic performance of HER (as this is the most directly comparable method). Note that the environment interactions recorded on the training curves for PlanGAN do include both the initial random trajectories as well as any trajectories that are not stored in the buffer (when the final goal is identical to the initial goal). In all of the tasks considered we find that PlanGAN is significantly more sample efficient than any of the other methods model-free methods, requiring between 4-8 times less data to reach the same performance as HER. This is comparable to the sample efficiency gains reported in [15] for a model-based approach to dense reward tasks over leading model-free methods. 
It also substantially outperforms the model-based baseline (PETS) which is not designed for sparse reward, multi-goal environments. 5.2 Ablation studies In this section we study how various decisions we have made affect PlanGAN’s performance by performing ablation studies on the two more complicated environments considered (Fetch Push and Fetch Pick And Place). Firstly, we study whether the planner is a crucial component of our set-up. The first panel in Figure 1 in the Appendix shows a comparison of the full PlanGAN with a couple of variations that more directly use the actions proposed by the GANs. Both of these lead to significantly lower success rates, suggesting that the planner we use is crucial. We then consider how the number of GANs in the ensemble effects PlanGAN’s performance. The second panel in Figure 1 (Appendix) shows results for ensembles made up of 1, 3 and 5 GANs respectively. Whilst less significant than the inclusion of the planner, we find that using only a single GAN leads to slower and significantly less stable training. We also see that the larger ensemble (5 GANs) outperforms the smaller ensemble (3 GANs), but the difference in performance is relatively small. Finally, we consider running the algorithm with λ = 0, i.e. without any regularisation from the one-step predictive model. We see that the one-step model regularisation provides only a very minor improvement, suggesting that it is not a crucial component of PlanGAN. 6 Conclusions We proposed PlanGAN, a model-based method for solving multi-goal environments with sparse rewards. We showed how to train a generative model in order to generate plausible future trajectories that lead from a given state towards a desired goal, and how these can be used within a planning algorithm to achieve these goals efficiently. We demonstrated that this approach leads to a substantial increase in sample efficiency when compared to leading model-free RL methods that can cope with sparse rewards and multiple goals. In the future we would like to extend this work so that it can be applied to more complex environments. One of the main limitations with the current approach is the planner. When the number of time steps required to complete a task becomes large the planner becomes computationally expensive, since at each step we have to simulate a large number of future steps out until completion. We also need these trajectories to be at least reasonably accurate over a large number of time steps, as imagined future trajectories that do not reach the desired goal are given a score of zero. If no imagined trajectories reach the goal then the planner is unable to meaningfully choose an action. Future work which may more efficiently deal with longer horizon tasks could involve combining the GAN training with a model-free goal-conditioned value function (creating a hybrid method, similar to STEVE [19] and Dreamer [7]) which could learn to give a value to the actions proposed by the GANs, removing the need for a planner entirely. Statement of Broader Impact Since our work involves foundational research in the field of model-based reinforcement learning it is unlikely to have any large, immediate impacts on society. Nevertheless, in the longer term the impact of reinforcement learning agents capable of learning to autonomously make decisions could be huge. 
In principle one could discuss a huge range of potential impacts over different time frames, but we choose to focus on some potential medium-term impacts of robots that can learn autonomously from sparse rewards. Robots are pervasive in the modern world and are used in a wide range of manufacturing industries for carrying out tedious, repetitive work. Introducing robots that are capable of autonomously learning a set of skills from easy-to-specify reward functions has the potential to vastly increase the scope of tasks that they can be used for. In particular, it removes the requirement for their behaviours to be carefully engineered in a manual fashion for every possible scenario they might encounter. This has the potential to allow many tasks that currently can only be carried out by human workers to become fully or partially automated. Whilst this could provide a huge economic boost to some manufacturing companies, it is important that this benefit is weighed against the potential negative impacts (both social and economic) that losing these manufacturing jobs could have, particularly if large-scale changes were to occur in a short period of time. We feel that this is an important question for both economists/policy advisors and researchers working in the field to think about.

Acknowledgements / Funding Disclosure
This work was partially funded by Catapult (High Value Manufacturing) within Warwick Manufacturing Group.
1. What is the main contribution of the paper in the field of goal-conditioned Reinforcement Learning?
2. What are the strengths of the proposed approach, particularly in its integration with deep generative models?
3. What are the weaknesses of the paper regarding the experimental domain and the potential advantages of the proposed method?
4. How does the reviewer assess the significance and impact of the improvement achieved by the proposed approach?
5. Are there any suggestions or recommendations for the authors to improve their work, such as studying and comparing their method more thoroughly?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes to use GANs to solve goal-conditioned Reinforcement Learning tasks. Conditioning on the goal, their GAN can predict the current action and the next state based on the current state. They train an ensemble of several GANs on the relabeled trajectories. They then do model-based planning with the stochastic generative models and achieve 4-8 times better sample efficiency.
--- Update ---
The author rebuttal addressed my issue partially. I appreciate the demonstration that shows HER's performance degrades w.r.t. num_updates. I agree that it's unnecessary to compare with those variants I mentioned. Still, the paper could be more solid if the authors studied their method more thoroughly and made fairer comparisons, which would help us understand each component's benefit better. I maintain my score.
Strengths
This work integrates recent achievements in deep generative models into goal-conditioned reinforcement learning. They train the goal-conditioned model in a hindsight way to solve the sparse reward problem, which combines the advantages of planning methods and HER and achieves high sample efficiency. The improvement is very significant and will inspire future work.
Weaknesses
As mentioned in the conclusion, the experimental domains are not very challenging. Those domains have a simple forward model, and the planning horizon is typically very short. The proposed model-based approach may take advantage of that.
NIPS
Title
DiffGCN: Graph Convolutional Networks via Differential Operators and Algebraic Multigrid Pooling

Abstract
Graph Convolutional Networks (GCNs) have been shown to be effective in handling unordered data like point clouds and meshes. In this work we propose novel approaches for graph convolution, pooling and unpooling, inspired by finite differences and algebraic multigrid frameworks. We form a parameterized convolution kernel based on discretized differential operators, leveraging the graph mass, gradient and Laplacian. This way, the parameterization does not depend on the graph structure, only on the meaning of the network convolutions as differential operators. To allow hierarchical representations of the input, we propose pooling and unpooling operations that are based on algebraic multigrid methods, which are mainly used to solve partial differential equations on unstructured grids. To motivate and explain our method, we compare it to standard convolutional neural networks, and show their similarities and relations in the case of a regular grid. Our proposed method is demonstrated in various experiments like classification and part segmentation, achieving results on par with or better than the state of the art. We also analyze the computational cost of our method compared to other GCNs.

1 Introduction
The emergence of deep learning and Convolutional Neural Networks (CNNs) [1, 2, 3] in recent years has had a great impact on the computer vision and graphics communities [4, 5, 6, 7]. Over the past years, multiple works have used standard CNNs to perform 3D-related tasks on unordered data (e.g., point clouds and meshes), one of which is PointNet [8, 9], which operates directly on point clouds. Along with these works, another massively growing field is Graph Convolutional Networks (GCNs) [10], or Geometric Deep Learning, which suggests using graph convolutions for tasks related to three-dimensional inputs, arising from either spectral theory [11, 12, 13] or spatial convolution [14, 15, 16, 17]. This makes the processing of unstructured data like point clouds, graphs and meshes more natural, by operating directly on the underlying structure of the data. In this work we aim to bridge the gap between ordered and unordered deep learning architectures, and to build on the foundations of standard CNNs for unordered data. To this end, we leverage the similarity between standard CNNs and partial differential equations (PDEs) [18], and propose a new approach for defining convolution operators on graphs, based on discretizations of differential operators on unstructured grids. Specifically, we define a 3D convolution kernel which is based on discretized differential operators. We consider the mass (self-feature), gradient and Laplacian of the graph, and discretize them using a simple version of finite differences, similarly to the way that standard graph Laplacians are defined. Such differential operators form a subspace which spans standard convolution kernels on structured grids. Leveraging such operators for unstructured grids leads to an abstract parameterization of the convolution operation, which is independent of the specific graph geometry. Our second contribution involves unstructured pooling and unpooling operators, which together with the convolution, are among the main building blocks of CNNs.
To this end, and further motivated by the PDE interpretation of CNNs, we utilize multigrid methods, which are among the most efficient numerical solvers for PDEs. Such methods use a hierarchy of smaller and smaller grids to represent the PDE on various scales. Specifically, algebraic multigrid (AMG) approaches [19, 20] are mostly used to solve PDEs on unstructured grids, by forming the same hierarchy of problems using coarsening and upsampling operators. Using these building blocks of AMG, we propose novel pooling and unpooling operations for GCNs. Our operators are based on the Galerkin coarsening operator of aggregation-based AMG [21, 22], performing pure aggregation for pooling and smoothed aggregation for the unpooling operator. The advantages of having pooling capability, as seen in both traditional CNNs and GCNs [4, 23, 5, 6], are the enlargement of the receptive field of the neurons and the reduced computational cost (in terms of floating point operations), allowing for wider and deeper networks. In what follows, we elaborate on existing unordered-data methods in Section 2, and present our method in Section 3. We discuss the similarity between traditional CNNs and our proposed GCN, and motivate the use of differential operators as a parameterization of a convolution kernel, in Section 3.3. Furthermore, we compare the computational cost of our method with that of other message-passing, spatially based GCNs in Section 3.5. To validate our model, we perform experiments on point cloud classification and segmentation tasks on various datasets in Section 4. Finally, we study the importance and contribution of the different terms in the parameterization to the performance of our method in Section 4.3.

2 Related work
Unordered data come in many forms and structures, from meshes and point clouds that describe 3D objects to social network graphs. For 3D-related data, a natural choice would be to voxelize the support of the data, as in [24]. Clearly, such an approach comes at a high computational cost, while causing degradation of the data. Other methods suggest operating directly on the data, whether it is a point cloud [8, 9, 25] or a graph [13, 15, 11, 12, 26]. Recent works like [15, 17] assumed a system of local coordinates centered around each vertex. These methods propose to assign weights to geometric neighborhoods around the vertices, in addition to the filter weights. Masci et al. [15] proposed assigning fixed Gaussian mixture weights to those neighborhoods, and [17] goes a step further and learns the parameters of the Gaussians. These methods incur high computational costs, due to the computation of exponential terms (particularly at inference time) as well as the overhead of additional learnable parameters. Later, it was shown in [26] that adopting GCNs for point-cloud-related tasks can be highly beneficial, since the learned features of the graph vertices in different layers of the network can induce dynamic graphs which reveal their underlying correlations. We follow the trend of employing GCNs for point-cloud-related tasks like shape classification and segmentation. Specifically, we choose to work with spatial GCNs, since they are most similar to standard structured CNNs. However, compared to other works like DGCNN [27] and MPNN [16], which can be interpreted as non-directed discretized gradient operators, we introduce directed gradients, as well as the addition of the Laplacian term of the graph.
The Laplacian is the key ingredient in spectral-based methods like [13, 12], but, to the best of our knowledge, it has not been used in spatial GCNs where the vertices have a geometric meaning. Unlike traditional structured CNNs, where the pooling and unpooling operations are trivial, these operations are more debatable in unordered methods, due to the lack of an ordering or a metric between points. Works like PointNet++ [9] proposed using a furthest-point-sampling technique in order to choose the remaining points in coarsened versions of the inputs. Other works proposed utilizing the ℓ2 norm of the features to determine which elements of the graph are to be removed in subsequent layers of the network [6, 28]. Recent works like DiffPool [29] proposed learning a dense assignment matrix to produce a coarsened version of an initial graph. However, learning a dense matrix has quadratic computational cost in the number of vertices and does not scale well to large-scale point clouds. Also, DiffPool is constrained to fixed graph sizes, while our method is agnostic to the size of the input. We focus on the employment of AMG as a concept to define our pooling and unpooling operations. We use classical aggregation AMG [22], which is suitable for unstructured grids, similarly to works that incorporated geometric multigrid concepts into structured-grid CNNs [5, 30]. On a different note, the recent work [31] showed another connection between AMG and GCNs, and proposed using GCNs for learning sparse AMG prolongation matrices to solve weighted diffusion problems on unstructured grids.

3 Method
We propose to parameterize the graph convolutional kernel according to discretized differential operators defined on a graph; therefore, we call our convolution DiffGCN. To have a complete set of neural network building blocks, we also propose AMG-inspired pooling and unpooling operators, to enlarge the receptive fields of the neurons and to allow for wider and deeper networks.

3.1 Convolution kernels via differential operators
To define the convolution kernels in the simplest manner, we use finite differences, which is a simple and widely used approach for the numerical discretization of differential operators. Alternatives, such as finite element or finite volume schemes, may also be suitable for the task, but are more complicated to implement in existing deep learning frameworks. Using finite differences, the first and second order derivatives are approximated as

\frac{\partial f(x)}{\partial x} \approx \frac{f(x+h) - f(x-h)}{2h}, \qquad \frac{\partial^2 f(x)}{\partial x^2} \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.   (1)

We harness these simple operators to estimate the gradient and Laplacian of the unstructured feature maps defined on a graph. Given an undirected graph G = (V, E), where V, E denote the vertices and edges of the graph, respectively, we propose the following formulation of the convolution kernel:

\mathrm{conv}(G, \Theta) \approx \theta_1 I + \theta_2 \frac{\partial}{\partial x} + \theta_3 \frac{\partial^2}{\partial x^2} + \theta_4 \frac{\partial}{\partial y} + \theta_5 \frac{\partial^2}{\partial y^2} + \theta_6 \frac{\partial}{\partial z} + \theta_7 \frac{\partial^2}{\partial z^2}.   (2)

This gives a 7-point convolution kernel which consists of the mass, gradient and Laplacian of the signal defined over the graph.
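As a quick numerical sanity check of the central-difference stencils in Eq. (1) before formulating them on a graph, the following short Python snippet recovers the first and second derivatives of f(x) = x^2; both are exact for quadratics up to floating-point rounding.

def d1(f, x, h=1e-3):
    # central first difference, Eq. (1)
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-3):
    # central second difference, Eq. (1)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

f = lambda x: x ** 2
print(d1(f, 3.0))  # 6.0: matches f'(x) = 2x exactly for quadratics
print(d2(f, 3.0))  # ~2.0: matches f''(x) = 2 up to rounding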
We now formulate the operators in (2) mathematically. We first define that the features of the GCN are located at the vertices v_i ∈ V of the graph, similarly to a nodal discretization. For each node we have c_{in} features (input channels). We start with the definition of the gradient (in x, y, z), which according to (1) is defined at the middle of an edge e_{ij} ∈ E connecting the pair v_i and v_j. Since the edge direction may not be aligned with a specific axis, we project the derivative along the edge onto the axes x, y, z. For example,

(\partial_x^G f)_{ij} = \frac{\partial f}{\partial x}(e_{ij}) = \frac{f_{v_i} - f_{v_j}}{\mathrm{dist}(v_i, v_j)} \, (x(v_i) - x(v_j)),   (3)

where x(v_i) is the x-coordinate of vertex v_i, f_{v_i} ∈ R^{c_{in}} is the feature vector of size c_{in} defined on vertex v_i, and dist(v_i, v_j) is the Euclidean distance between v_i and v_j. Given this approach, we define the gradient matrix of the graph by stacking the projected differential operators for each of the axes x, y, z:

\nabla^G = \begin{pmatrix} \partial_x^G \\ \partial_y^G \\ \partial_z^G \end{pmatrix} : |V| \cdot c_{in} \to 3 \cdot |E| \cdot c_{in}.   (4)

This gradient operates on the vertex space, and its output size is 3 times the edge space of the graph, for the x, y, z directions. To gather the gradients back to the vertex space, we form an edge-averaging operator

A : 3 \cdot |E| \cdot c_{in} \to 3 \cdot |V| \cdot c_{in}, \qquad (Af)_i = \frac{1}{|N_e(v_i)|} \sum_{j \in N_e(v_i)} f_{e_{ij}},   (5)

where N_e(v_i) = \{ j : e_{ij} \in E \} is the set of edges associated with vertex v_i, and the function f in (5) is a feature map tensor defined on the edges of the graph. The three different derivatives in (4) are treated as three different features for each edge. In a similar fashion, the Laplacian of the graph with respect to each axis x, y, z is computed in two steps. The first step is the gradient in equation (4). Then, we apply the following transposed first-order derivative operators to obtain the second derivatives back on the vertices:

\begin{pmatrix} (\partial_x^G)^T & 0 & 0 \\ 0 & (\partial_y^G)^T & 0 \\ 0 & 0 & (\partial_z^G)^T \end{pmatrix} : 3 \cdot |E| \cdot c_{in} \to 3 \cdot |V| \cdot c_{in}.   (6)

The transposed gradient is often used to discretize the divergence operator, and the resulting Laplacian is the divergence of the gradient. Here, however, we do not sum the derivatives (the Laplacian is the sum of all second derivatives), so that the second derivative in each axis ends up as a separate feature on a vertex and is weighted separately in our convolution operator in (2). This construction is similar to the way graph Laplacians and finite element Laplacians are defined on graphs or unstructured grids.

Implementation using PyTorch-Geometric
To obtain this functionality while using common GCN-designated software [32] and concepts [14, 16], we define a directed auxiliary graph, denoted by G' = (V', E'), where in addition to the original set of vertices V we have new dummy vertices representing the mid-edge locations e_{ij} ∈ E. Then, we define the connectivity of G' such that each vertex v_i ∈ V has a direct connection to the mid-edge location e_{ij}, as in Fig. 2. More explicitly:

V' = V \cup E, \qquad E' = \{ (v_i, e_{ij}), (v_j, e_{ij}) \mid e_{ij} \in E \}.   (7)

We also use the transposed graph, which is an edge-flipped version of G', also demonstrated in Fig. 2. Given these two graphs, we are able to obtain the gradient and Laplacian terms of the signal defined over the graph via the mean aggregation of a message passing scheme [16, 14], where we perform two stages of the latter. First, we use the auxiliary graph G' to send the following message for each (v_i, e_{ij}) ∈ E':

\mathrm{msgGrad}(v_i \to e_{ij}, f_{v_i}) = \frac{f_{v_i}}{2 \cdot \mathrm{dist}(v_i, e_{ij})} \left( \begin{pmatrix} x(v_i) \\ y(v_i) \\ z(v_i) \end{pmatrix} - \begin{pmatrix} x(e_{ij}) \\ y(e_{ij}) \\ z(e_{ij}) \end{pmatrix} \right) \in \mathbb{R}^{3 \cdot c_{in}}.   (8)

Here, each mid-edge vertex receives two messages, and due to the subtraction of vertex locations in the message, followed by the sum aggregation

g_{e_{ij}} = \mathrm{msgGrad}(v_i \to e_{ij}, f_{v_i}) + \mathrm{msgGrad}(v_j \to e_{ij}, f_{v_j}),   (9)

the discretized gradient in (3)-(4) is obtained on the edges. Following this, we return two messages over the transposed graph G', returning both the gradient and the Laplacian of the graph on the original vertex space V.
The first part of the message returns the gradient terms from the edges to the vertices, simply by sending the identity message followed by mean aggregation:

\mathrm{Grad}^G(v_i) = \frac{1}{|N_i|} \sum_{j \in N_i} g_{e_{ij}}.   (10)

This realizes Eq. (5). The second part of the message differentiates the edge gradients to obtain the Laplacian back on the original vertices:

\mathrm{msgEdgeLap}(e_{ij} \to v_i, g_{e_{ij}}) = \frac{g_{e_{ij}}}{2 \cdot \mathrm{dist}(v_i, e_{ij})} \left( \begin{pmatrix} x(e_{ij}) \\ y(e_{ij}) \\ z(e_{ij}) \end{pmatrix} - \begin{pmatrix} x(v_i) \\ y(v_i) \\ z(v_i) \end{pmatrix} \right) \in \mathbb{R}^{3 \cdot c_{in}}.   (11)

Then, we obtain the features described in Eq. (6) by performing mean aggregation:

\mathrm{Lap}^G(v_i) = \frac{1}{|N_i|} \sum_{j \in N_i} \mathrm{msgEdgeLap}(e_{ij} \to v_i, g_{e_{ij}}).   (12)

Finally, we concatenate the mass, gradient and Laplacian to obtain the differential operator features

\hat{f}_{v_i} = f_{v_i} \oplus \mathrm{Grad}^G(v_i) \oplus \mathrm{Lap}^G(v_i) \in \mathbb{R}^{7 \cdot c_{in}},   (13)

where ⊕ denotes channel-wise concatenation, and apply a multi-layer perceptron (MLP), i.e., a c_{out} × 7·c_{in} point-wise convolution followed by batch normalization and ReLU, to the features in Eq. (13). The implementation above follows the mathematical formulation step by step, but it requires explicitly constructing the auxiliary graph in Fig. 2. An equivalent and more efficient way to implement our method, which is only implicitly based on those auxiliary graphs, is to construct a single message that contains 6·c_{in} features by combining Eq. (9) and (11), followed by mean aggregation as in Eq. (12) and concatenation of the self-feature, resulting in a feature \hat{f}_{v_i} ∈ R^{7·c_{in}}.

3.2 Algebraic multigrid pooling and unpooling
An effective pooling operation is important to faithfully represent coarsened versions of the graph. We propose to use AMG methods [22, 20], namely the Galerkin coarsening, which we explain now. In AMG methods, the coarse graph vertices are typically chosen either as a subset of the fine graph vertices (dubbed “C-points”) or as clusters of vertices called aggregates. We use the latter approach, and apply Graclus clustering [33] to form the aggregates. Let \{C_J\}_{J=1}^{|V_{coarse}|} be the aggregates, each corresponding to a vertex in the coarse graph. Then, we define the restriction (pooling) operator

R_{J,i} = \begin{cases} 1 & i \in C_J \\ 0 & \text{otherwise.} \end{cases}   (14)

Given a feature matrix X and an adjacency matrix A, their coarsened counterparts are defined via the Galerkin coarsening:

X_{coarse} = R^T X, \qquad A_{coarse} = R^T A R \in \mathbb{R}^{|V_{coarse}| \times |V_{coarse}|}.   (15)

To perform the unpooling operation, also called prolongation, we may use the transpose of the restriction operator (14). However, when unpooling with an aggregation matrix, we get piece-wise constant feature maps, which are undesired. To obtain a smoother unpooling operator, we propose to allow the prolongation of soft clustering via smoothed aggregation [22] as follows:

P = (I - D^{-1} L) R^T \in \mathbb{R}^{|V| \times |V_{coarse}|},   (16)

where I, D, L are the identity, degree and Laplacian matrices of the layer, respectively. To unpool from a coarsened version of the graph, we apply the corresponding prolongation operator at each level, until we reach the initial problem resolution.
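To make Eqs. (3)-(13) concrete, here is a minimal dense Python sketch of the single-message variant of the DiffGCN convolution described above. It is an illustrative simplification, not the authors' PyTorch-Geometric implementation: the kNN-list graph representation and array shapes are assumptions, sparse message passing is replaced by explicit loops, and constant scale factors (which the learned MLP weights can absorb) are dropped.

import numpy as np

def diffgcn_conv(pos, feat, nbrs, W, b=None):
    """pos: (V, 3) vertex coordinates; feat: (V, c_in) vertex features;
    nbrs: list of neighbor index arrays (e.g. a kNN graph);
    W: (7 * c_in, c_out) weights of the point-wise MLP."""
    V, c = feat.shape
    grad = np.zeros((V, 3, c))   # per-axis first derivatives, Eqs. (3)-(5)
    lap = np.zeros((V, 3, c))    # per-axis second derivatives, Eq. (6)
    for i in range(V):
        for j in nbrs[i]:
            dvec = pos[i] - pos[j]
            dist = np.linalg.norm(dvec) + 1e-12
            # directed edge gradient, projected onto the x, y, z axes (Eq. 3)
            g = np.outer(dvec / dist, feat[i] - feat[j])          # (3, c)
            grad[i] += g
            # differentiate the edge gradient back to the vertex (Eqs. 11-12);
            # the mid-edge point sits at (pos[i] + pos[j]) / 2
            lap[i] += g * (-dvec / (2.0 * dist))[:, None]
        k = max(len(nbrs[i]), 1)
        grad[i] /= k                                              # mean aggregation
        lap[i] /= k
    # mass + gradient + Laplacian features, Eq. (13), then the point-wise MLP
    f_hat = np.concatenate([feat, grad.reshape(V, -1), lap.reshape(V, -1)], axis=1)
    out = f_hat @ W + (0.0 if b is None else b)
    return np.maximum(out, 0.0)                                   # ReLU (batch norm omitted)

A pooling step as in Eq. (15) would then reduce feat over each Graclus aggregate via the binary matrix R before the next block.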
3.3 Similarity between DiffGCN and standard CNN operators for structured grids
A standard CNN is based on learning the weights of convolutional filters. The work [18] showed that the 2D convolution kernel can be represented as a linear combination of finite-difference differential operators. These classical differential operators are obtained using our definitions in Eq. (3)-(6) in the case of a structured regular graph. In 2D (without the z axis), Eq. (2) results in a 5-point stencil, represented as

\theta_1 \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} + \theta_2 \begin{pmatrix} 0 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} + \theta_3 \begin{pmatrix} 0 & 0 & 0 \\ 1 & -2 & 1 \\ 0 & 0 & 0 \end{pmatrix} + \theta_4 \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} + \theta_5 \begin{pmatrix} 0 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 1 & 0 \end{pmatrix}.   (17)

The Laplacian, together with the mass term, allows the network to obtain low-pass filters, which are highly important to average out noise and to prevent aliasing when downsampling the feature maps. Gradient-based methods like [26] can only approximate the Laplacian term via multiple convolutions, leading to redundant computations. Furthermore, the work of [34] showed that the popular 3 × 3 convolution kernel can be replaced by this 5-point stencil without losing much accuracy. When extending this to 3D, the common 3 × 3 × 3 kernel includes 27 weights, while the lighter version in (2) ends up as a star-shaped stencil using only 7 weights, a significant reduction from 27. We refer the interested reader to [35, 36, 37, 38, 39, 40] for a more rigorous study of the connection between ODEs, PDEs and CNNs.

3.4 DiffGCN architectures
We show the architectures used in this work in Fig. 1. We define a DiffGCN block which consists of two DiffGCN convolutions with a shortcut connection, as in ResNet [23], for better convergence and stability. Pooling is performed before the first convolution in each block, besides the first opening layer. We use concatenating skip-connections to fuse feature maps from shallow and deep layers. Before this concatenation, unpooling is performed to resize the point cloud to its original dimensions.

3.5 Computational cost of DiffGCN
Typically, spatial GCNs like [14, 16, 26, 15] apply the convolution K times per vertex, where K is the neighborhood size. More explicitly, a typical convolution can be written as

x'_i = \square_{j \in N_i} \, h_\Theta(f(x_i, x_j)),   (18)

where N_i is the set of neighbors of vertex v_i ∈ V, \square is a permutation-invariant aggregation operator like max or sum, and h_\Theta is an MLP [8] parameterized by the set of weights Θ. f is a function that depends on a vertex and its neighbors; for instance, in DGCNN [26], f(x_i, x_j) = x_i ⊕ (x_i − x_j). By design, our convolution operation first gathers the required differential terms, and then feeds their channel-wise concatenation through an MLP. That is, our convolution can be written as

x'_i = h_\Theta(\square_{j \in N_i} \, g(x_i, x_j)),   (19)

where g is a function that constructs the desired differential operator terms. Thus, we reduce the cost of the feed-forward pass of our convolution by a factor of K, which decreases the number of FLOPs required in our convolution. In other words, the MLP operation in our convolution is independent of the number of neighbors K, since we aggregate the neighborhood features prior to the MLP. If s is the stencil size (e.g., DGCNN [26] uses s = 2, while ours is s = 7), N is the input size, and c_{in}, c_{out} are the numbers of input and output channels, respectively, then the number of floating point operations of a method defined via Eq. (18) is O(s × N × K × c_{in} × c_{out}), while the cost of our method in Eq. (19) reduces to O(s × N × c_{in} × c_{out}). In Table 1 we report the required FLOPs and latency for various convolutions with a 1,024-point input and c_{in} = 64, c_{out} = 128. For VoxNet [24] we use a 3 × 3 × 3 kernel and a 12 × 12 × 12 input. For PointCNN, DGCNN and ours, we set the neighborhood size K = 10.
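The asymptotic counts above can be checked with simple arithmetic on the dominant MLP cost (neighborhood gathering and other overheads are ignored, so these are illustrative numbers rather than the measured figures of Table 1), using N = 1,024 points, K = 10 neighbors, c_in = 64 and c_out = 128.

def flops_per_neighbor_mlp(s, N, K, c_in, c_out):
    # Eq. (18): the MLP runs on each of the K neighbor messages
    return s * N * K * c_in * c_out

def flops_diffgcn(s, N, c_in, c_out):
    # Eq. (19): neighbors are aggregated first, so the MLP runs once per vertex
    return s * N * c_in * c_out

N, K, c_in, c_out = 1024, 10, 64, 128
print(flops_per_neighbor_mlp(2, N, K, c_in, c_out))  # DGCNN-style (s = 2): ~1.7e8
print(flops_diffgcn(7, N, c_in, c_out))              # DiffGCN (s = 7):     ~5.9e7

Even with the larger 7-point stencil, aggregating before the MLP roughly halves the dominant cost in this setting, and the gap widens with K.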
4 Experiments
To demonstrate the effectiveness of our framework, we conducted three experiments on three different datasets: classification (ModelNet40 [41]), part segmentation (ShapeNet Parts [42]) and semantic segmentation (S3DIS [43]). We also report an ablation study to obtain a deeper understanding of our framework. In all the experiments, we start from a point cloud, and at each DiffGCN block we construct a K-nearest-neighbor graph according to the features of the points. As noted in [14], spectral methods generally lead to inferior results compared to spatial methods, and we therefore omit them from our experimental evaluations. We implement our work using the PyTorch [44] and PyTorch Geometric [32] libraries. We use the networks shown in Fig. 1. For the semantic segmentation task on S3DIS we do not use a spatial transformer. Throughout all the experiments we use the ADAM optimizer [45] with an initial learning rate of 0.001. We run our experiments on an NVIDIA Titan RTX with a batch size of 20. Our loss function is the cross-entropy loss for classification and the focal loss [46] for the segmentation tasks.

4.1 Classification results
For the classification task we use the ModelNet-40 dataset [41], which includes 12,311 CAD meshes across 40 different categories, split into 9,843 meshes for training and 2,468 for testing. Our training scheme is similar to the one proposed in PointNet [8]: we rescale each mesh to a unit cube, and then sample 1,024 random points from each mesh at each epoch. We also use random scaling between 0.8 and 1.2 and add random rotations to the generated point cloud. We report our results with K = 20, with and without pooling. The results of our method are summarized in Table 2. We obtained higher accuracy than [26, 51, 52], which also use GCNs for this task. We suggest that the difference stems mainly from the addition of the Laplacian term to our convolution and the contribution of the pooling module. Note that the work HGNN [53], which is based on hyper-graphs, uses features of size 4,096 (and not only 3), extracted from MVCNN [54] and GVCNN [55]; therefore, we do not include it in Table 2.

4.2 Segmentation results
We test our method on two different segmentation datasets: ShapeNet part segmentation [42] and the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) [43]. We use the lower network in Fig. 1, with K = 20, 10, 5 in each of the DiffGCN blocks, respectively. For the ShapeNet part segmentation dataset, our objective is to classify each point in a point cloud into its correct part category. There are 16,881 3D shapes across 16 different categories, with a total of 50 part annotation classes, where each shape is annotated with 2-6 parts. We sample 2,048 points from each shape and use the training, validation and testing split of [56]. The results are reported in Table 3. Our method achieves the highest mIoU out of all the considered networks. The Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) contains 3D scans of 272 rooms from 6 different areas. Each point is annotated with one of 13 semantic classes. We adopt the pre-processing steps of splitting each room into 1m × 1m blocks, with 4,096 random points at the training phase and all points at test time, where each point is represented by a 9D vector (XYZ, RGB, normalized spatial coordinates). We follow the training, validation and testing split from [8]. We follow the 6-fold protocol for training and testing as in [43], and report the results of this experiment in Table 4. We obtained higher accuracy than the popular point-based network PointNet [8] as well as the graph-based network DGCNN [26]. Note that [25] uses different pre-processing steps.
Namely, the blocks were of 1.3m × 1.3m, where the added 0.3m on each dimension is used for location context and is not part of the objective at each block. In addition, we compare our work with a recent work, DCM-Net [58], which differs from our method in its approach of combining geodesic and Euclidean data, decoupling the data by utilizing parallel networks.

4.3 Ablation study
We measure the contribution of each component of our model, as well as different combinations of them, on classification with ModelNet40. Our results show that, as expected, using each component on its own (e.g., the mass term only) reduces accuracy; however, combining the different terms increases accuracy. We found that using the mass and Laplacian terms is more beneficial than the mass and gradient terms. This shows the representation power of the Laplacian operator, which is widely used in classical computer graphics and vision [60, 61, 62], in addition to spectral-based GCNs, which are parameterized by polynomials of the graph Laplacian [13, 12, 63]. In addition, we experimented with different numbers of neighbors, with and without pooling, observing a slight reduction in performance, but with fewer FLOPs and lower memory requirements. We note that the pooling operations lead to better performance since they enlarge the receptive fields of the neurons.

5 Conclusion
We presented a novel graph convolution kernel based on discretized differential operators, which together with our AMG pooling and unpooling operators forms the most important components of a CNN. Our GCN shows on-par or better performance compared with current state-of-the-art GCNs. We also drew an analogy between standard structured CNNs and our method, and showed the reduced cost of ours compared to other GCNs.

Broader Impact
The method we propose can be used for additional tasks in which the data have a geometric meaning. For instance, data sourced from geographic information systems (GIS) can be used for the prediction of election results [64]. Thus, it may have an impact on other fields. In addition, our method is lighter than other GCNs, which can be beneficial for power and time consumption. We are not aware of an ethical problem or negative societal consequences.

Acknowledgments and Disclosure of Funding
The research reported in this paper was supported by the Israel Innovation Authority through the Avatar consortium, and by grant no. 2018209 from the United States - Israel Binational Science Foundation (BSF), Jerusalem, Israel. ME is supported by a Kreitman High-tech scholarship.
1. What is the main contribution of the paper in the field of graph convolutional networks?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and performance?
3. What are the weaknesses of the paper, especially regarding the clarity of the derivations and the claims of efficiency?
4. How does the reviewer assess the significance of the work and its potential impact on the development of graph neural networks?
5. Are there any suggestions or recommendations for improving the paper's content or experimental results?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This work proposes a new formulation of the operators used in graph convolutional networks. The formulation is based on ideas from PDEs. The authors introduce formulations for graph convolution, graph pooling, and graph unpooling, and experiment with the proposed model on 3D classification and segmentation benchmarks. The experimental results show the model achieves results comparable with other compared methods.
Strengths
+ The idea of using differential operators in graph CNNs seems new.
+ The experimental results are comparable to other state-of-the-art methods.
Weaknesses
- The derivation of the formulation is very hard to follow. It is unclear what the inputs and outputs of the operators are. Equations seem disconnected, and symbols appear out of nowhere. I recommend some polishing of the derivations.
- One important claim about the DiffGCN ops is their efficiency. However, only FLOP counts are compared. To make the case for efficiency, the authors could add a comparison of the wall-time cost of the different methods.
NIPS
1. What is the focus and contribution of the paper on graph neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its performance compared to other methods? 3. Are there any concerns or weaknesses in the paper's proposal, such as limitations or potential drawbacks of the new method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper presents a novel graph convolution kernel based on discretized differential operators defined on a graph, along with pooling and unpooling operators, for graph neural networks.

Strengths
- The differential operators defined on graphs can be useful for many applications of GNNs.
- The performance of the proposed method is superior to the state of the art.

Weaknesses
-
1. What is the main contribution of the paper regarding parameterization of spatial graph convolution? 2. What are the strengths of the paper, particularly in its experimental validation and ablation study? 3. What are the weaknesses of the paper, such as the pooling and unpooling operations, and how does it compare to state-of-the-art methods in segmentation benchmarks? 4. Do you have any questions about the definition of the convolution kernel using gradient and Laplacian in Equation 2? 5. Are there any concerns regarding the ablation study being shown only on one benchmark?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a parameterization of a spatial graph convolution based on differential operators such as the gradient and Laplacian. The Laplacian and gradient are constructed on the feature graph, similarly to discretized graph Laplacians. Judging by the ablation study, including the Laplacian operator in the convolution kernel definition leads to superior performance, on multiple benchmarks, over baseline spatial GCNs that do not incorporate it.

Strengths
The paper is overall well written and tackles the important problem of defining a 3D convolution kernel, which is very relevant to the NeurIPS community (3D deep learning). The experimental validation is convincing, as the method outperforms baseline spatial GCNs on multiple benchmarks. The ablation study is appreciated.

Weaknesses
- The pooling and unpooling operations, based on Galerkin coarsening, are a very straightforward extension and also not at all beneficial, as evident in Tables 2, 3, and 4, where including pooling either reduces overall accuracy (Tables 3 and 4) or at best increases overall performance by 0.4 (Table 2).
- The proposed method is really not close to the state of the art on the segmentation benchmark (Table 4), as claimed in the abstract (57 against 64).
- I did not find any interpretation of, or related work for, the Eq. 2 definition that uses the gradient and Laplacian to define the convolution kernel.
- The ablation study is shown only on one benchmark. The authors should state/show whether similar conclusions are drawn from other benchmarks too.
1. What is the main contribution of the paper regarding pooling and unpooling operations for 3D point clouds? 2. What are the strengths of the proposed approach, particularly in terms of the use of differential operators and algebraic multigrid? 3. What are the weaknesses of the paper, especially regarding comparisons with other works and the novelty of the ideas presented? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the experimental results and comparisons with other methods?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This work presents pooling and unpooling operations for 3D point clouds by formulating a graph convolution kernel based on gradients and the Laplacian of the unstructured feature maps defined on a graph (see Eq. 2). The method then defines "double message passing" between vertices via the connecting edges (Section 3.1), and later adopts algebraic multigrid for pooling and unpooling (Section 3.2): - It adopts the Graclus clustering [31] to form aggregates (clusters of vertices in the graph) and define the pooling operator, and - It adopts the smoothed aggregation [20] to define the unpooling operator. In the end, the paper presents three experiments: classification using ModelNet40, part segmentation using ShapeNet Parts, and semantic segmentation using S3DIS, as well as an ablation study. Overall, the approach helps reduce the number of network parameters needed in the graph convolutions, thus reducing the computational cost. Strengths The two interesting parts of this work are (i) the part on algebraic multigrid, which looks novel but is too short, and (ii) the idea of using differential operators to formulate the graph convolution. Positive results were shown in the experiments: - For shape classification on ModelNet40, DiffGCN is found to perform better than DGCNN and LDGCNN, PointCNN, KCNet, etc. - For part segmentation on ShapeNet Parts, DiffGCN is found to perform better than DGCNN and LDGCNN, PointCNN, KCNet, etc. Weaknesses The ideas of Laplacian coordinates and differential pooling have been explored in existing works on graph neural networks, e.g., in the works on spectral-based convolutional GNNs (see [1] below). So, technically, in Section 3.3, can you provide a comparison not only with standard CNNs but also with recent graph neural network models, and state the novelty of this work? The idea of adopting AMG is novel, and the only related work that I am aware of is [29], "Learning Algebraic Multigrid Using Graph Neural Networks," which was recently published at ICML 2020 (please update the reference). However, judging from Section 3.3, the contribution of adopting AMG is not very strong. This work could be stronger if it explored AMG in greater depth. More importantly, since this work already cites [29], can you provide a more explicit comparison with [29] at the end of the related-work section (Section 2)? What is the technical difference between the two? Is there any advantage of this work over [29]? Also, I suppose this submission could be treated as roughly concurrent with [29], since ICML 2020 was held only recently. For shape classification on ModelNet40 and part segmentation using ShapeNet Parts, how about other spectral-based methods? Also, DiffPool and RS-CNN? For semantic segmentation on S3DIS, many existing methods perform much better, e.g., JSENet (67.7), KPConv (67.1), PointWeb (66.7), etc.
NIPS
Title Stochastic Three-Composite Convex Minimization Abstract We propose a stochastic optimization method for the minimization of the sum of three convex functions, one of which has a Lipschitz continuous gradient as well as restricted strong convexity. Our approach is most suitable in the setting where it is computationally advantageous to process the smooth term in the decomposition with its stochastic gradient estimate and the other two functions separately with their proximal operators, such as doubly regularized empirical risk minimization problems. We prove a convergence characterization of the proposed algorithm in expectation under the standard assumptions for the stochastic gradient estimate of the smooth term. Our method operates in the primal space and can be considered as a stochastic extension of the three-operator splitting method. Numerical evidence supports the effectiveness of our method in real-world problems. 1 Introduction We propose a stochastic optimization method for the three-composite minimization problem: minimize_{x∈Rd} f(x) + g(x) + h(x), (1) where f : Rd → R and g : Rd → R are proper, lower semicontinuous convex functions that admit tractable proximal operators, and h : Rd → R is a smooth function with restricted strong convexity. We assume that we have access to unbiased, stochastic estimates of the gradient of h in the sequel, which is key to scaling up optimization and to addressing streaming settings where data arrive in time. Template (1) covers a large number of applications in machine learning, statistics, and signal processing by appropriately choosing the individual terms. Operator splitting methods are powerful in this setting, since they reduce the complex problem (1) to smaller subproblems. These algorithms are easy to implement, and they typically exhibit state-of-the-art performance. To our knowledge, there is no operator splitting framework that can currently tackle template (1) using the stochastic gradient of h and the proximal operators of f and g separately, which is critical to the scalability of the methods. This paper specifically bridges this gap. Our basic framework is closely related to the deterministic three-operator splitting method proposed in [11], but we avoid the computation of the gradient ∇h and instead work with its unbiased estimates. We provide rigorous convergence guarantees for our approach and provide guidance in selecting the learning rate under different scenarios. Road map. Section 2 introduces the basic optimization background. Section 3 then presents the main algorithm and provides its convergence characterization. Section 4 places our contributions in the light of the existing work. Numerical evidence that illustrates our theory appears in Section 5. We relegate the technical proofs to the supplementary material. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 2 Notation and background This section recalls a few basic notions from convex analysis and probability theory, and presents the notation used in the rest of the paper. Throughout, Γ0(Rd) denotes the set of all proper, lower semicontinuous convex functions from Rd to ]−∞,+∞], and 〈· | ·〉 is the standard scalar product on Rd with its associated norm ‖·‖. Subdifferential. The subdifferential of f ∈ Γ0(Rd) at a point x ∈ Rd is defined as ∂f(x) = {u ∈ Rd | f(y) − f(x) ≥ 〈y − x | u〉, ∀y ∈ Rd}. We denote the domain of ∂f by dom(∂f) = {x ∈ Rd | ∂f(x) ≠ ∅}.
If ∂f(x) is a singleton, then f is differentiable at x, and ∂f(x) = {∇f(x)}. Indicator function. Given a nonempty subset C of Rd, the indicator function of C is given by ιC(x) = 0 if x ∈ C, and +∞ if x ∉ C. (2) Proximal operator. The proximal operator of a function f ∈ Γ0(Rd) is defined as prox_f(x) = argmin_{z∈Rd} { f(z) + (1/2)‖z − x‖² }. (3) Roughly speaking, the proximal operator is tractable when the computation of (3) is cheap. If f is the indicator function of a nonempty, closed convex subset C, its proximal operator is the projection operator onto C. Lipschitz continuous gradient. A function f ∈ Γ0(Rd) has a Lipschitz continuous gradient with Lipschitz constant L > 0 (or simply an L-Lipschitz gradient) if ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖, ∀x, y ∈ Rd. Strong convexity. A function f ∈ Γ0(Rd) is called strongly convex with parameter µ > 0 (or simply µ-strongly convex) if 〈p − q | x − y〉 ≥ µ‖x − y‖², ∀x, y ∈ dom(∂f), ∀p ∈ ∂f(x), ∀q ∈ ∂f(y). Solution set. We denote optimum points of (1) by x⋆, and the solution set by X⋆: x⋆ ∈ X⋆ = {x ∈ Rd | 0 ∈ ∇h(x) + ∂g(x) + ∂f(x)}. Throughout this paper, we assume that X⋆ is not empty. Restricted strong convexity. A function f ∈ Γ0(Rd) has restricted strong convexity with respect to a point x⋆ in a set M ⊂ dom(∂f), with parameter µ > 0, if 〈p − q | x − x⋆〉 ≥ µ‖x − x⋆‖², ∀x ∈ M, ∀p ∈ ∂f(x), ∀q ∈ ∂f(x⋆). Let (Ω, F, P) be a probability space. An Rd-valued random variable is a measurable function x : Ω → Rd, where Rd is endowed with the Borel σ-algebra. We denote by σ(x) the σ-field generated by x. The expectation of a random variable x is denoted by E[x]. The conditional expectation of x given a σ-field A ⊂ F is denoted by E[x|A]. Given a random variable y : Ω → Rd, the conditional expectation of x given y is denoted by E[x|y]. See [17] for more details on probability theory. An Rd-valued random process is a sequence (xn)n∈N of Rd-valued random variables. 3 Stochastic three-composite minimization algorithm and its analysis We present the stochastic three-composite minimization method (S3CM) in Algorithm 1, for solving the three-composite template (1). Our approach combines the stochastic gradient of h, denoted by r, and the proximal operators of f and g in essentially the same structure as the three-operator splitting method [11, Algorithm 2]. Our technique is a nontrivial combination of the algorithmic framework of [11] with stochastic analysis. Algorithm 1 Stochastic three-composite minimization algorithm (S3CM) Input: an initial point xf,0, a sequence of learning rates (γn)n∈N, and a sequence of squared-integrable Rd-valued stochastic gradient estimates (rn)n∈N. Initialization: xg,0 = prox_{γ0 g}(xf,0); ug,0 = γ0⁻¹(xf,0 − xg,0). Main loop: for n = 0, 1, 2, . . . do xg,n+1 = prox_{γn g}(xf,n + γn ug,n); ug,n+1 = γn⁻¹(xf,n − xg,n+1) + ug,n; xf,n+1 = prox_{γn+1 f}(xg,n+1 − γn+1 ug,n+1 − γn+1 rn+1); end for. Output: xg,n as an approximation of an optimal solution x⋆. Theorem 1 Assume that h is µh-strongly convex and has an L-Lipschitz continuous gradient. Further assume that g is µg-strongly convex, where we allow µg = 0. Consider the following update rule for the learning rate: γn+1 = [ −γn²µhη + √( (γn²µhη)² + (1 + 2γnµg)γn² ) ] / (1 + 2γnµg), for some γ0 > 0 and η ∈ ]0, 1[. Define Fn = σ(xf,k)0≤k≤n, and suppose that the following conditions hold for every n ∈ N: 1. E[rn+1|Fn] = ∇h(xg,n+1) almost surely; 2. there exist c ∈ [0,+∞[ and t ∈ R such that Σ_{k=0}^{n} E[‖rk − ∇h(xg,k)‖²] ≤ c nᵗ. Then, the iterates of S3CM satisfy E[‖xg,n − x⋆‖²] = O(1/n²) + O(1/n^{2−t}). (4)
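Since Algorithm 1 is fully specified above, a minimal NumPy sketch of the S3CM loop may help fix ideas. The function names (prox_f, prox_g, grad_est) and the fixed iteration budget are our own illustrative choices, and the sequence gammas is assumed to contain at least n_iters + 1 values.

import numpy as np

def s3cm(x_f0, prox_f, prox_g, grad_est, gammas, n_iters):
    """Stochastic three-composite minimization (S3CM, Algorithm 1).

    prox_f(v, gamma), prox_g(v, gamma): proximal operators of f and g.
    grad_est(x): unbiased stochastic estimate of grad h at x.
    gammas: learning-rate sequence (gamma_0, gamma_1, ...).
    """
    x_g = prox_g(x_f0, gammas[0])
    u_g = (x_f0 - x_g) / gammas[0]
    x_f = x_f0
    for n in range(n_iters):
        g_n, g_next = gammas[n], gammas[n + 1]
        x_g = prox_g(x_f + g_n * u_g, g_n)          # x_{g,n+1}
        u_g = (x_f - x_g) / g_n + u_g               # u_{g,n+1}
        r = grad_est(x_g)                           # r_{n+1}, evaluated at x_{g,n+1}
        x_f = prox_f(x_g - g_next * (u_g + r), g_next)
    return x_g                                      # approximation of x*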
Remark 1 The variance condition on the stochastic gradient estimates in the theorems above is satisfied when E[‖rn − ∇h(xg,n)‖²] ≤ c for all n ∈ N and for some constant c ∈ [0,+∞[. See [15, 22, 26] for details. Remark 2 When rn = ∇h(xn), S3CM reduces to the deterministic three-operator splitting scheme [11, Algorithm 2], and we recover the convergence rate O(1/n²) as in [11]. When g is zero, S3CM reduces to the standard stochastic proximal point algorithm [2, 13, 26]. Remark 3 The learning rate sequence (γn)n∈N in Theorem 1 depends on the strong convexity parameter µh, which may not be available a priori. Our next result avoids explicit reliance on the strong convexity parameter, while providing essentially the same convergence rate. Theorem 2 Assume that h is µh-strongly convex and has an L-Lipschitz continuous gradient. Consider a positive decreasing learning rate sequence γn = Θ(1/n^α) for some α ∈ ]0, 1], and denote β = lim_{n→∞} 2µh n^α γn. Define Fn = σ(xf,k)0≤k≤n, and suppose that the following conditions hold for every n ∈ N: 1. E[rn+1|Fn] = ∇h(xg,n+1) almost surely; 2. E[‖rn − ∇h(xg,n)‖²] is uniformly bounded by some positive constant; 3. E[‖ug,n − x⋆‖²] is uniformly bounded by some positive constant. Then, the iterates of S3CM satisfy E[‖xg,n − x⋆‖²] = O(1/n^α) if 0 < α < 1; O(1/n^β) if α = 1 and β < 1; O((log n)/n) if α = 1 and β = 1; O(1/n) if α = 1 and β > 1. Proof outline. We consider the proof of the three-operator splitting method as a baseline, and we use stochastic fixed point theory to derive the convergence of the iterates via a stochastic Fejér monotone sequence. See the supplement for the complete proof. Remark 4 Note that ug,n ∈ ∂g(xg,n). Hence, we can replace condition 3 in Theorem 2 with the bounded subgradient assumption: ‖p‖ ≤ c, ∀p ∈ ∂g(xg,n), for some positive constant c. Remark 5 (Restricted strong convexity) Let M be a subset of Rd that contains (xg,n)n∈N and x⋆. Suppose that h has restricted strong convexity on M with parameter µh. Then, Theorems 1 and 2 still hold. An example of the role of the restricted strong convexity assumption in algorithmic convergence can be found in [1, 21]. Remark 6 (Extension to an arbitrary number of non-smooth terms) Using the product space technique [5, Section 6.1], S3CM can be applied to composite problems with an arbitrary number of non-smooth terms: minimize_{x∈Rd} Σ_{i=1}^{m} fi(x) + h(x), where fi : Rd → R are proper, lower semicontinuous convex functions, and h : Rd → R is a smooth function with restricted strong convexity. We present this variant in Algorithm 2. Theorems 1 and 2 hold for this variant, replacing xg,n by xn, and ug,n by ui,n for i = 1, 2, . . . , m. Algorithm 2 Stochastic m(ulti)-composite minimization algorithm (SmCM) Input: initial points {xf1,0, xf2,0, . . . , xfm,0}, a sequence of learning rates (γn)n∈N, and a sequence of squared-integrable Rd-valued stochastic gradient estimates (rn)n∈N. Initialization: x0 = m⁻¹ Σ_{i=1}^{m} xfi,0; for i = 1, 2, . . . , m do ui,0 = γ0⁻¹(xfi,0 − x0) end for. Main loop: for n = 0, 1, 2, . . . do xn+1 = m⁻¹ Σ_{i=1}^{m} (xfi,n + γn ui,n); for i = 1, 2, . . . , m do ui,n+1 = γn⁻¹(xfi,n − xn+1) + ui,n; xfi,n+1 = prox_{γn+1 m fi}(xn+1 − γn+1 ui,n+1 − γn+1 rn+1) end for; end for. Output: xn as an approximation of an optimal solution x⋆. Remark 7 With a proper learning rate, S3CM still converges even if h is not (restricted) strongly convex, under mild assumptions. Suppose that h has an L-Lipschitz continuous gradient. Set the learning rate such that ε ≤ γn ≡ γ ≤ α(2L⁻¹ − ε), for some α and ε in ]0, 1[.
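To make the learning-rate choices concrete, here is a small Python sketch of the Theorem 1 recursion and of the Theorem 2 schedule. The function names and the generator form are our own, and µh, µg, η are assumed to be known for the first rule.

import math

def gamma_next(gamma_n, mu_h, mu_g, eta):
    """Learning-rate recursion of Theorem 1:
    gamma_{n+1} = (-a + sqrt(a^2 + (1 + 2 gamma_n mu_g) gamma_n^2)) / (1 + 2 gamma_n mu_g),
    with a = gamma_n^2 mu_h eta."""
    a = gamma_n**2 * mu_h * eta
    denom = 1.0 + 2.0 * gamma_n * mu_g
    return (-a + math.sqrt(a**2 + denom * gamma_n**2)) / denom

def gamma_schedule(gamma_0, alpha):
    """Theorem 2 alternative: gamma_n = gamma_0 / n^alpha needs no
    strong-convexity constant (here as an infinite generator)."""
    n = 0
    while True:
        n += 1
        yield gamma_0 / n**alpha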
Define Fn = σ(xf,k)0≤k≤n, and suppose that the following conditions hold for every n ∈ N: 1. E[rn+1|Fn] = ∇h(xg,n+1) almost surely; 2. Σ_{n∈N} E[‖rn+1 − ∇h(xg,n+1)‖²|Fn] < +∞ almost surely. Then, (xg,n)n∈N converges to an X⋆-valued random vector almost surely. See [7] for details. Remark 8 All the results above hold in any separable Hilbert space, except that the strong convergence in Remark 7 is replaced by weak convergence. Note, however, that extending Remark 7 to the variable metric setting as in [10, 27] is an open problem. 4 Contributions in the light of prior work Recent operator splitting algorithms, such as generalized forward-backward splitting [24], forward-Douglas-Rachford splitting [5], and the three-operator splitting [11], apply to our problem template (1). These key results, however, are in the deterministic setting. Our basic framework can be viewed as a combination of the three-operator splitting method in [11] with a stochastic analysis. The idea of using unbiased estimates of the gradient dates back to [25]. Recent developments of this idea can be viewed as proximal-based methods for solving the generic composite convex minimization template with a single non-smooth term [2, 9, 12, 13, 15, 16, 19, 23, 26]. This generic form arises naturally in regularized or constrained composite problems [3, 13, 20], where the smooth term typically encodes the data fidelity. These methods require the evaluation of the joint prox of f and g when applied to the three-composite template (1). Unfortunately, evaluating the joint prox is arguably more expensive than evaluating the individual prox operators. To make the comparison stark, consider the simple example where f and g are indicator functions of two convex sets. Even if the projections onto the individual sets are easy to compute, the projection onto the intersection of these sets can be challenging. The related literature also contains algorithms that solve some specific instances of template (1). To point out a few, the random averaging projection method [28] handles multiple constraints simultaneously but cannot deal with regularizers. On the other hand, accelerated stochastic gradient descent with proximal average [29] can handle multiple regularizers simultaneously, but the algorithm imposes a Lipschitz condition on the regularizers and hence cannot deal with constraints. To our knowledge, our method is the first operator splitting framework that can tackle the optimization template (1) using the stochastic gradient estimate of h and the proximal operators of f and g separately, without any restriction on the non-smooth parts except that their subdifferentials are maximally monotone. When h is strongly convex, under mild assumptions and with a proper learning rate, our algorithm converges at an O(1/n) rate, which is optimal for stochastic methods under the strong convexity assumption for this problem class. 5 Numerical experiments We present numerical evidence to assess the theoretical convergence guarantees of the proposed algorithm. We provide two numerical examples from Markowitz portfolio optimization and support vector machines. As a baseline, we use the deterministic three-operator splitting method [11]. Even though the random averaging projection method proposed in [28] does not apply to our template (1) in its full generality, it does apply to the specific applications that we present below.
In our numerical tests, however, we observed that this method exhibits essentially the same convergence behavior as ours when used with the same learning rate sequence. For clarity of presentation, we omit this method in our results. 5.1 Portfolio optimization Traditional Markowitz portfolio optimization aims to reduce risk by minimizing the variance for a given expected return. Mathematically, we can formulate this as a convex optimization problem [6]: minimize_{x∈Rd} E[|aiᵀx − b|²] subject to x ∈ ∆, aavᵀx ≥ b, where ∆ is the standard simplex for portfolios with no-short positions (or a simple sum constraint), aav = E[ai] is the vector of average returns of the assets, which is assumed to be known (or estimated), and b encodes a minimum desired return. This problem has a streaming nature where new data points arrive in time. Hence, we typically do not have access to the whole dataset, and the stochastic setting is more favorable. For implementation, we replace the expectation with the empirical sample average: minimize_{x∈Rd} (1/p) Σ_{i=1}^{p} (aiᵀx − b)² subject to x ∈ ∆, aavᵀx ≥ b. (5) This problem fits into our optimization template (1) by setting h(x) = (1/p) Σ_{i=1}^{p} (aiᵀx − b)², g(x) = ι∆(x), and f(x) = ι{x | aavᵀx ≥ b}(x). We compute the unbiased estimates of the gradient by rn = 2(ainᵀx − b)ain, where the index in is chosen uniformly at random. We use 5 different real portfolio datasets: Dow Jones industrial average (DJIA, with 30 stocks for 507 days), New York stock exchange (NYSE, with 36 stocks for 5651 days), Standard & Poor's 500 (SP500, with 25 stocks for 1276 days), and Toronto stock exchange (TSE, with 88 stocks for 1258 days), which are also considered in [4]; and one dataset by Fama and French (FF100, 100 portfolios formed on size and book-to-market, 23,647 days) that is commonly used in the financial literature, e.g., [6, 14]. We impute the missing data in FF100 using the nearest-neighbor method with the Euclidean distance. For the deterministic algorithm, we set η = 0.1. We evaluate the Lipschitz constant L and the strong convexity parameter µh to determine the step-size. For the stochastic algorithm, we do not have access to the whole data, so we cannot compute these parameters. Hence, we adopt the learning rate sequence defined in Theorem 2. We simply use γn = γ0/(n + 1) with γ0 = 1 for FF100 and γ0 = 10³ for the others.¹ We start both algorithms from the zero vector. ¹Note that a fine-tuned learning rate with a more complex definition can improve the empirical performance, e.g., γn = γ0/(n + ζ) for some positive constants γ0 and ζ. We split all the datasets into test (10%) and train (90%) partitions at random. We set the desired return to the average return over all assets in the training set, b = mean(aav). Other values of b exhibit qualitatively similar behavior. The results of this experiment are compiled in Figure 1. We compute the objective function over the data points in the test partition, htest. We compare our algorithm against the deterministic three-operator splitting method [11, Algorithm 2]. Since we seek statistical solutions, we compare how quickly the algorithms reach low-to-medium accuracy. [11] provides other variants of the deterministic algorithm, including two ergodic averaging schemes that feature an improved theoretical rate of convergence. However, these variants performed worse in practice than the original method, and are therefore omitted. Solid lines in Figure 1 present the average results over 100 Monte-Carlo simulations, and the boundaries of the shaded area are the best and worst instances.
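For this experiment both non-smooth terms are indicators, so their proximal operators are projections that ignore the step size and plug directly into the S3CM sketch above, e.g., prox_g = lambda v, gam: proj_simplex(v). The sketch below is a minimal NumPy rendering of the three pieces; the sorting-based simplex projection is the standard one, and the helper names are our own.

import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the standard simplex (sorting method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def proj_halfspace(x, a, b):
    """Projection onto {x : a^T x >= b} (the prox of f)."""
    gap = b - a @ x
    return x if gap <= 0 else x + (gap / (a @ a)) * a

def stoch_grad(x, A, b):
    """Unbiased estimate r_n = 2 (a_i^T x - b) a_i with i uniform,
    where the rows of A are the return vectors a_i."""
    i = np.random.randint(A.shape[0])
    return 2.0 * (A[i] @ x - b) * A[i]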
We also assess empirical evidence for the O(1/n) convergence rate guaranteed in Theorem 2 by presenting the squared relative distance to the optimal solution for the FF100 dataset. Here, we approximate the ground truth by solving the problem to high accuracy with the deterministic algorithm run for 10⁵ iterations. 5.2 Nonlinear support vector machines classification This section demonstrates S3CM on a support vector machine (SVM) binary classification problem. We are given a training set A = {a1, a2, . . . , ad} and the corresponding class labels {b1, b2, . . . , bd}, where ai ∈ Rp and bi ∈ {−1, 1}. The goal is to build a model that correctly assigns new examples to one class or the other. As is common in practice, we solve the dual soft-margin SVM formulation: minimize_{x∈Rd} (1/2) Σ_{i=1}^{d} Σ_{j=1}^{d} K(ai, aj) bi bj xi xj − Σ_{i=1}^{d} xi subject to x ∈ [0, C]^d, bᵀx = 0, where C ∈ [0,+∞[ is the penalty parameter and K : Rp × Rp → R is a kernel function. In our example we use the Gaussian kernel given by Kσ(ai, aj) = exp(−σ‖ai − aj‖²) for some σ > 0. Define the symmetric positive semidefinite matrix M ∈ Rd×d with entries Mij = Kσ(ai, aj) bi bj. Then the problem takes the form minimize_{x∈Rd} (1/2) xᵀMx − Σ_{i=1}^{d} xi subject to x ∈ [0, C]^d, bᵀx = 0. (6) This problem fits into the three-composite optimization template (1) with h(x) = (1/2) xᵀMx − Σ_{i=1}^{d} xi, g(x) = ι[0,C]^d(x), and f(x) = ι{x | bᵀx = 0}(x). One can solve this problem using the three-operator splitting method [11, Algorithm 1]. Note that prox_f and prox_g, which are projections onto the corresponding constraint sets, incur O(d) computational cost, whereas the cost of computing the full gradient is O(d²). To compute an unbiased gradient estimate, we choose an index in uniformly at random, and we form rn = d M_{in} x_{in} − 1. Here M_{in} denotes the in-th column of the matrix M, and 1 represents the vector of ones. We can compute rn in O(d) operations; hence each iteration of S3CM is an order of magnitude cheaper than an iteration of the deterministic algorithm. We use the UCI machine learning dataset "a1a", with d = 1605 data points and p = 123 features [8, 18]. Note that our goal here is to demonstrate the optimization performance of our algorithm on a real-world problem, rather than to compete with the prediction quality of the best engineered solvers. Hence, to keep the experiments simple, we fix the problem parameters C = 1 and σ = 2⁻², and we focus on the effects of the algorithmic parameters on the convergence behavior. Since p < d, M is rank deficient and h is not strongly convex. Nevertheless, we use S3CM with the learning rate γn = γ0/(n + 1) for various values of γ0. We observe an O(1/n) empirical convergence rate of the squared relative error for large enough γ0, as guaranteed under the restricted strong convexity assumption. See Figure 2 for the results. Acknowledgments This work was supported in part by ERC Future Proof, SNF 200021-146750, SNF CRSII2-147633, and NCCR-Marvel.
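The O(d) estimator and the two projections for (6) are equally compact in code. The sketch below follows the formulas above, with helper names of our own choosing; unbiasedness can be checked directly, since E[rn] = Σ_i M_{:,i} xi − 1 = Mx − 1 = ∇h(x) for symmetric M.

import numpy as np

def svm_stoch_grad(x, M):
    """O(d) unbiased estimate of grad h(x) = Mx - 1 for problem (6):
    r_n = d * M[:, i_n] * x[i_n] - 1, with i_n uniform on {0, ..., d-1}."""
    d = x.size
    i = np.random.randint(d)
    return d * M[:, i] * x[i] - 1.0

def proj_box(x, C):
    """Projection onto [0, C]^d (the prox of g)."""
    return np.clip(x, 0.0, C)

def proj_hyperplane(x, b):
    """Projection onto {x : b^T x = 0} (the prox of f)."""
    return x - (b @ x) / (b @ b) * b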
1. How does the proposed method extend the three operator splitting algorithm in [11] to a stochastic setting? 2. What are the additional assumptions required to show convergence in expectation, and how do they relate to the work of Raguet et al. (2013) and Bricenos-Arias (2014)? 3. How does the proposed method compare to existing algorithms, such as GFB and FB-DRS, in terms of its ability to solve composite problems with multiple non-smooth terms? 4. Is there a limitation to the number of non-smooth terms that can be handled by the proposed method, and if so, how can it be extended to handle an arbitrary number of non-smooth terms?
Review
Review The paper proposes an extension of the three-operator splitting algorithm in [11] to a stochastic setting, to solve composite problems that are the sum of a convex function with a Lipschitz continuous gradient and two proper, closed, convex, simple functions. To show convergence in expectation, additional (restricted) strong convexity assumptions are also needed. 1) I think it would be wise and fair to discuss the work of Raguet et al. (2013 in SIIMS) on the generalized forward-backward (GFB), and that of Briceño-Arias (2014 in Optimization) on the forward-backward-Douglas-Rachford (FB-DRS). These algorithms are prior to [11] and solve an even more general version of (1) in the deterministic case. 2) Extending (1) to the case of an arbitrary number of non-smooth terms can be achieved easily through a product-space trick. It would be interesting to mention this in the paper.
NIPS
1. What is the focus of the paper regarding composite convex minimization problems? 2. What are the strengths and weaknesses of the proposed STCM algorithm compared to prior works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Do you have any questions or concerns about the paper's assumptions, proofs, or numerical experiments?
Review
Review In this paper the authors propose a stochastic optimization algorithm, S3CM, for a three-composite convex minimization problem. This problem can be written as the sum of two proper, lower semicontinuous convex functions and a smooth function with restricted strong convexity. This work is based on the deterministic three-operator splitting method proposed by Davis and Yin. Almost-sure convergence and a convergence rate are established. Major comments: (1) The main result is quite clear, but important aspects lack supporting detail. For example, it would be better to show some intuition behind the proposed algorithm. If it is a direct extension of Davis and Yin's work obtained by replacing the gradient with a stochastic gradient, the authors should point this out. I would suggest that the authors show more insight into and intuition behind the proposed algorithm, since they have enough space. The proof looks quite rushed. Many derivation details are omitted. It is hard to check the correctness of the proofs, although the results sound correct. (2) In the numerical experiments section, some basic algorithms should be included for comparison, e.g., Wang et al., Random Multi-Constraint Projection: Stochastic Gradient Methods for Convex Optimization with Many Constraints, 2015. (3) It is important to compare the convergence rates with those of other stochastic-gradient-based methods. Basically, we want to know how tight they are. For example, are they consistent with standard SGD, accelerated SGD, and the method above? Otherwise, it is difficult to evaluate the theoretical merit of this work. Minor comments: (1) There are lots of typos in this paper, e.g., equation (4), the citation in line 136, line 72, line 95, line 285. (2) In line 2 of equation (10), grad(x_g,n - x*) should be (grad(x_g,n) - grad(x*)) instead. (3) $\mu$ strong convex => $\mu$ strongly convex.
NIPS
Title Stochastic Three-Composite Convex Minimization Abstract We propose a stochastic optimization method for the minimization of the sum of three convex functions, one of which has Lipschitz continuous gradient as well as restricted strong convexity. Our approach is most suitable in the setting where it is computationally advantageous to process smooth term in the decomposition with its stochastic gradient estimate and the other two functions separately with their proximal operators, such as doubly regularized empirical risk minimization problems. We prove the convergence characterization of the proposed algorithm in expectation under the standard assumptions for the stochastic gradient estimate of the smooth term. Our method operates in the primal space and can be considered as a stochastic extension of the three-operator splitting method. Numerical evidence supports the effectiveness of our method in real-world problems. 1 Introduction We propose a stochastic optimization method for the three-composite minimization problem: minimize x∈Rd f(x) + g(x) + h(x), (1) where f : Rd → R and g : Rd → R are proper, lower semicontinuous convex functions that admit tractable proximal operators, and h : Rd → R is a smooth function with restricted strong convexity. We assume that we have access to unbiased, stochastic estimates of the gradient of h in the sequel, which is key to scale up optimization and to address streaming settings where data arrive in time. Template (1) covers a large number of applications in machine learning, statistics, and signal processing by appropriately choosing the individual terms. Operator splitting methods are powerful in this setting, since they reduce the complex problem (1) into smaller subproblems. These algorithms are easy to implement, and they typically exhibit state-of-the-art performance. To our knowledge, there is no operator splitting framework that can currently tackle template (1) using stochastic gradient of h and the proximal operators of f and g separately, which is critical to the scalability of the methods. This paper specifically bridges this gap. Our basic framework is closely related to the deterministic three operator splitting method proposed in [11], but we avoid the computation of the gradient∇h and instead work with its unbiased estimates. We provide rigorous convergence guarantees for our approach and provide guidance in selecting the learning rate under different scenarios. Road map. Section 2 introduces the basic optimization background. Section 3 then presents the main algorithm and provides its convergence characterization. Section 4 places our contributions in light of the existing work. Numerical evidence that illustrates our theory appears in Section 5. We relegate the technical proofs to the supplementary material. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 2 Notation and background This section recalls a few basic notions from the convex analysis and the probability theory, and presents the notation used in the rest of the paper. Throughout, Γ0(Rd) denotes the set of all proper, lower semicontinuous convex functions from Rd to [−∞,+∞], and 〈· | ·〉 is the standard scalar product on Rd with its associated norm ‖ · ‖. Subdifferential. The subdifferential of f ∈ Γ0(Rd) at a point x ∈ Rd is defined as ∂f(x) = {u ∈ Rd | f(y)− f(x) ≥ 〈y − x | u〉 ,∀y ∈ Rd}. We denote the domain of ∂f as dom(∂f) = {x ∈ Rd | ∂f(x) 6= ∅}. 
If ∂f(x) is a singleton, then f is a differentiable function, and ∂f(x) = {∇f(x)}. Indicator function. Given a nonempty subset C in Rd, the indicator function of C is given by ιC(x) = { 0 if x ∈ C, +∞ if x 6∈ C. (2) Proximal operator. The proximal operator of a function f ∈ Γ0(Rd) is defined as follows proxf (x) = arg min z∈Rd { f(z) + 1 2 ‖z − x‖2 } . (3) Roughly speaking, the proximal operator is tractable when the computation of (3) is cheap. If f is the indicator function of a nonempty, closed convex subset C, its proximity operator is the projection operator on C. Lipschitz continuos gradient. A function f ∈ Γ0(Rd) has Lipschitz continuous gradient with Lipschitz constant L > 0 (or simply L-Lipschitz), if ‖∇f(x)−∇f(y)‖ ≤ L‖x− y‖, ∀x,y ∈ Rd. Strong convexity. A function f ∈ Γ0(Rd) is called strongly convex with some parameter µ > 0 (or simply µ-strongly convex), if 〈p− q | x− y〉 ≥ µ‖x− y‖2, ∀x,y ∈ dom(∂f), ∀p ∈ ∂f(x), ∀q ∈ ∂f(y). Solution set. We denote optimum points of (1) by x?, and the solution set by X ?: x? ∈ X ? = {x ∈ Rd | 0 ∈ ∇h(x) + ∂g(x) + ∂f(x)}. Throughout this paper, we assume that X ? is not empty. Restricted strong convexity. A function f ∈ Γ0(Rd) has restricted strong convexity with respect to a point x? in a set M ⊂ dom(∂f), with parameter µ > 0, if 〈p− q | x− x?〉 ≥ µ‖x− x?‖2, ∀x ∈M, ∀p ∈ ∂f(x), ∀q ∈ ∂f(x?). Let (Ω,F ,P) be a probability space. An Rd-valued random variable is a measurable function x : Ω → Rd, where Rd is endowed with the Borel σ-algebra. We denote by σ(x) the σ-field generated by x. The expectation of a random variable x is denoted by E[x]. The conditional expectation of x given a σ-field A ⊂ F is denoted by E[x|A]. Given a random variable y : Ω→ Rd, the conditional expectation of x given y is denoted by E[x|y]. See [17] for more details on probability theory. An Rd-valued random process is a sequence (xn)n∈N of Rd-valued random variables. 3 Stochastic three-composite minimization algorithm and its analysis We present stochastic three-composite minimization method (S3CM) in Algorithm 1, for solving the three-composite template (1). Our approach combines the stochastic gradient of h, denoted as r, and the proximal operators of f and g in essentially the same structrure as the three-operator splitting method [11, Algorithm 2]. Our technique is a nontrivial combination of the algorithmic framework of [11] with stochastic analysis. Algorithm 1 Stochastic three-composite minimization algorithm (S3CM) Input: An initial point xf,0, a sequence of learning rates (γn)n∈N, and a sequence of squared integrable Rd-valued stochastic gradient estimates (rn)n∈N. Initialization: xg,0 = proxγ0g(xf,0) ug,0 = γ −1 0 (xf,0 − xg,0) Main loop: for n = 0, 1, 2, . . . do xg,n+1 = proxγng(xf,n + γnug,n) ug,n+1 = γ −1 n (xf,n − xg,n+1) + ug,n xf,n+1 = proxγn+1f (xg,n+1 − γn+1ug,n+1 − γn+1rn+1) end for Output: xg,n as an approximation of an optimal solution x?. Theorem 1 Assume that h is µh-strongly convex and has L-Lipschitz continuous gradient. Further assume that g is µg-strongly convex, where we allow µg = 0. Consider the following update rule for the learning rate: γn+1 = −γ2nµhη + √ (γ2nµhη) 2 + (1 + 2γnµg)γ2n 1 + 2γnµg , for some γ0 > 0 and η ∈]0, 1[. Define Fn = σ(xf,k)0≤k≤n, and suppose that the following conditions hold for every n ∈ N: 1. E[rn+1|Fn] = ∇h(xg,n+1) almost surely, 2. There exists c ∈ [0,+∞[ and t ∈ R, that satisfies ∑n k=0 E[‖rk −∇h(xg,k)‖2] ≤ cnt. Then, the iterates of S3CM satisfy E[‖xg,n − x?‖2] = O(1/n2) +O(1/n2−t). 
(4) Remark 1 The variance condition of the stochastic gradient estimates in the theorems above is satisfied when E[‖rn − ∇h(xg,n)‖2] ≤ c for all n ∈ N and for some constant c ∈ [0,+∞[. See [15, 22, 26] for details. Remark 2 When rn = ∇h(xn), S3CM reduces to the deterministic three-operator splitting scheme [11, Algorithm 2] and we recover the convergence rate O(1/n2) as in [11]. When g is zero, S3CM reduces to the standard stochastic proximal point algorithm [2, 13, 26]. Remark 3 Learning rate sequence (γn)n∈N in Theorem 1 depends on the strong convexity parameter µh, which may not be available a priori. Our next result avoids the explicit reliance on the strong convexity parameter, while providing essentially the same convergence rate. Theorem 2 Assume that h is µh-strongly convex and has L-Lipschitz continuous gradient. Consider a positive decreasing learning rate sequence γn = Θ(1/nα) for some α ∈]0, 1], and denote β = limn→∞ 2µhn αγn. Define Fn = σ(xf,k)0≤k≤n, and suppose that the following conditions hold for every n ∈ N: 1. E[rn+1|Fn] = ∇h(xg,n+1) almost surely, 2. E[‖rn −∇h(xg,n)‖2] is uniformly bounded by some positive constant. 3. E[‖ug,n − x?‖2] is uniformly bounded by some positive constant. Then, the iterates of S3CM satisfy E[‖xg,n − x?‖2] = O ( 1/nα ) if 0 < α < 1 O ( 1/nβ ) if α = 1, and β < 1 O ( (log n)/n ) if α = 1, and β = 1, O ( 1/n ) if α = 1, and β > 1. Proof outline. We consider the proof of three-operator splitting method as a baseline, and we use the stochastic fixed point theory to derive the convergence of the iterates via the stochastic Fejér monotone sequence. See the supplement for the complete proof. Remark 4 Note that ug,n ∈ ∂g(xg,n). Hence, we can replace condition 3 in Theorem 2 with the bounded subgradient assumption: ‖p‖ ≤ c,∀p ∈ ∂g(xg,n), for some positive constant c. Remark 5 (Restricted strong convexity) Let M be a subset of Rd that contains (xg,n)n∈N and x?. Suppose that h has restricted strong convexity on M with parameter µh. Then, Theorems 1 and 2 still hold. An example role of the restricted strong convexity assumption on algorithmic convergence can be found in [1, 21]. Remark 6 (Extension to arbitrary number of non-smooth terms.) Using the product space technique [5, Section 6.1], S3CM can be applied to composite problems with arbitrary number of non-smooth terms: minimize x∈Rd m∑ i=1 fi(x) + h(x), where fi : Rd → R are proper, lower semicontinuous convex functions, and h : Rd → R is a smooth function with restricted strong convexity. We present this variant in Algorithm 2. Theorems 1 and 2 hold for this variant, replacing xg,n by xn, and ug,n by ui,n for i = 1, 2, . . . ,m. Algorithm 2 Stochastic m(ulti)-composite minimization algorithm (SmCM) Input: Initial points {xf1,0,xf2,0, . . . ,xfm,0}, a sequence of learning rates (γn)n∈N, and a sequence of squared integrable Rd-valued stochastic gradient estimates (rn)n∈N Initialization: x0 = m −1∑m i=1 xfi,0 for i=1,2,. . . ,m do ui,0 = γ −1 0 (xfi,0 − x0) end for Main loop: for n = 0, 1, 2, . . . do xn+1 = m −1∑m i=1(xfi,n + γnui,n) for i=1,2,. . . ,m do ui,n+1 = γ −1 n (xfi,n − xn+1) + ui,n xfi,n+1 = proxγn+1mfi(xn+1 − γn+1ui,n+1 − γn+1rn+1) end for end for Output: xn as an approximation of an optimal solution x?. Remark 7 With a proper learning rate, S3CM still converges even if h is not (restricted) strongly convex under mild assumptions. Suppose that h has L-Lipschitz continuous gradient. Set the learning rate such that ε ≤ γn ≡ γ ≤ α(2L−1 − ε), for some α and ε in ]0, 1[. 
Remark 7 With a proper learning rate, S3CM still converges under mild assumptions even if h is not (restricted) strongly convex. Suppose that h has L-Lipschitz continuous gradient. Set the learning rate such that ε ≤ γ_n ≡ γ ≤ α(2L^{−1} − ε) for some α and ε in ]0, 1[. Define F_n = σ(x_{f,k})_{0≤k≤n}, and suppose that the following conditions hold for every n ∈ N:
1. E[r_{n+1} | F_n] = ∇h(x_{g,n+1}) almost surely,
2. Σ_{n∈N} E[‖r_{n+1} − ∇h(x_{g,n+1})‖² | F_n] < +∞ almost surely.
Then, (x_{g,n})_{n∈N} converges to an X⋆-valued random vector almost surely. See [7] for details.

Remark 8 All the results above hold in any separable Hilbert space, except that the strong convergence in Remark 7 is replaced by weak convergence. Note, however, that extending Remark 7 to the variable metric setting as in [10, 27] is an open problem.

4 Contributions in the light of prior work

Recent algorithms in operator splitting, such as generalized forward-backward splitting [24], forward-Douglas-Rachford splitting [5], and the three-operator splitting [11], apply to our problem template (1). These key results, however, are in the deterministic setting. Our basic framework can be viewed as a combination of the three-operator splitting method in [11] with stochastic analysis.

The idea of using unbiased estimates of the gradient dates back to [25]. Recent developments of this idea can be viewed as proximal-based methods for solving the generic composite convex minimization template with a single non-smooth term [2, 9, 12, 13, 15, 16, 19, 23, 26]. This generic form arises naturally in regularized or constrained composite problems [3, 13, 20], where the smooth term typically encodes the data fidelity. These methods require the evaluation of the joint prox of f and g when applied to the three-composite template (1). Unfortunately, evaluating the joint prox is arguably more expensive than evaluating the individual prox operators. To make the comparison stark, consider the simple example where f and g are indicator functions of two convex sets. Even if the projections onto the individual sets are easy to compute, the projection onto the intersection of these sets can be challenging.

Related literature also contains algorithms that solve specific instances of template (1). To point out a few, the random averaging projection method [28] handles multiple constraints simultaneously but cannot deal with regularizers. On the other hand, accelerated stochastic gradient descent with proximal average [29] can handle multiple regularizers simultaneously, but the algorithm imposes a Lipschitz condition on the regularizers and hence cannot deal with constraints. To our knowledge, our method is the first operator splitting framework that can tackle the optimization template (1) using a stochastic gradient estimate of h and the proximal operators of f and g separately, without any restriction on the non-smooth parts except that their subdifferentials are maximally monotone. When h is strongly convex, under mild assumptions and with a proper learning rate, our algorithm converges with rate O(1/n), which is optimal for stochastic methods under the strong convexity assumption for this problem class.

5 Numerical experiments

We present numerical evidence to assess the theoretical convergence guarantees of the proposed algorithm. We provide two numerical examples, from Markowitz portfolio optimization and support vector machines. As a baseline, we use the deterministic three-operator splitting method [11].
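Both experiments below have exactly the structure highlighted in Section 4: f and g are indicators of sets whose individual projections are cheap, while projecting onto their intersection has no simple closed form. As an illustration (not taken from the paper), here are the two projections used in the SVM experiment of Section 5.2, a box and a hyperplane:

```python
import numpy as np

def proj_box(x, lo, hi):
    """prox of the indicator of the box [lo, hi]^d: componentwise clipping."""
    return np.clip(x, lo, hi)

def proj_hyperplane(x, b):
    """prox of the indicator of {x : b^T x = 0}: remove the component along b."""
    return x - (b @ x) / (b @ b) * b
```

Projecting onto the intersection {x ∈ [lo, hi]^d : b^T x = 0}, by contrast, generally requires an inner iterative routine; this is precisely the cost that S3CM avoids by keeping the two proxes separate.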
Even though the random averaging projection method proposed in [28] does not apply to our template (1) in its full generality, it does apply to the specific applications that we present below. In our numerical tests, however, we observed that this method exhibits essentially the same convergence behavior as ours when used with the same learning rate sequence. For clarity of presentation, we omit this method from our results.

5.1 Portfolio optimization

Traditional Markowitz portfolio optimization aims to reduce risk by minimizing the variance for a given expected return. Mathematically, we can formulate this as a convex optimization problem [6]:

    minimize_{x ∈ R^d}  E[ |a_i^T x − b|² ]    subject to  x ∈ Δ,  a_av^T x ≥ b,

where Δ is the standard simplex for portfolios with no-short positions (or a simple sum constraint), a_av = E[a_i] is the vector of average returns for each asset, which is assumed to be known (or estimated), and b encodes a minimum desired return. This problem has a streaming nature, where new data points arrive in time. Hence, we typically do not have access to the whole dataset, and the stochastic setting is more favorable. For implementation, we replace the expectation with the empirical sample average:

    minimize_{x ∈ R^d}  (1/p) Σ_{i=1}^{p} (a_i^T x − b)²    subject to  x ∈ Δ,  a_av^T x ≥ b.    (5)

This problem fits into our optimization template (1) by setting

    h(x) = (1/p) Σ_{i=1}^{p} (a_i^T x − b)²,    g(x) = ι_Δ(x),    and    f(x) = ι_{x | a_av^T x ≥ b}(x).

We compute unbiased estimates of the gradient by r_n = 2(a_{i_n}^T x − b) a_{i_n}, where the index i_n is chosen uniformly at random (a sketch of the resulting iteration appears below).

We use 5 different real portfolio datasets: Dow Jones industrial average (DJIA, with 30 stocks for 507 days), New York stock exchange (NYSE, with 36 stocks for 5651 days), Standard & Poor's 500 (SP500, with 25 stocks for 1276 days), and Toronto stock exchange (TSE, with 88 stocks for 1258 days), which are also considered in [4]; and one dataset by Fama and French (FF100, 100 portfolios formed on size and book-to-market, 23,647 days) that is commonly used in the financial literature, e.g., [6, 14]. We impute the missing data in FF100 using a nearest-neighbor method with the Euclidean distance.

For the deterministic algorithm, we set η = 0.1. We evaluate the Lipschitz constant L and the strong convexity parameter µ_h to determine the step size. For the stochastic algorithm, we do not have access to the whole data, so we cannot compute these parameters. Hence, we adopt the learning rate sequence defined in Theorem 2. We simply use γ_n = γ_0/(n + 1) with γ_0 = 1 for FF100 and γ_0 = 10³ for the others.¹ We start both algorithms from the zero vector.

¹Note that a fine-tuned learning rate with a more complex definition can improve the empirical performance, e.g., γ_n = γ_0/(n + ζ) for some positive constants γ_0 and ζ.

We randomly split all the datasets into test (10%) and train (90%) partitions. We set the desired return as the average return over all assets in the training set, b = mean(a_av). Other values of b exhibit qualitatively similar behavior. The results of this experiment are compiled in Figure 1. We compute the objective function over the data points in the test partition, h_test. We compare our algorithm against the deterministic three-operator splitting method [11, Algorithm 2]. Since we seek statistical solutions, we compare the algorithms at low to medium accuracy. [11] provides other variants of the deterministic algorithm, including two ergodic averaging schemes that feature improved theoretical rates of convergence. However, these variants performed worse in practice than the original method and are omitted. Solid lines in Figure 1 present the average results over 100 Monte-Carlo simulations, and the boundaries of the shaded areas are the best and worst instances.
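The following sketch instantiates S3CM for problem (5), assuming the daily returns are given row-wise in a matrix A. The simplex projection is the standard sorting-based routine; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the standard simplex (sorting-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def proj_halfspace(x, a, b):
    """Projection onto the half-space {x : a^T x >= b}."""
    gap = a @ x - b
    return x if gap >= 0 else x - (gap / (a @ a)) * a

def portfolio_s3cm(A, b, gamma0, n_iters, seed=0):
    """S3CM on problem (5): h = average squared loss, g = simplex, f = half-space."""
    rng = np.random.default_rng(seed)
    p, d = A.shape
    a_av = A.mean(axis=0)
    gam = lambda n: gamma0 / (n + 1.0)     # gamma_n = gamma_0 / (n + 1), as in the text
    x_f = np.zeros(d)
    x_g = proj_simplex(x_f)
    u_g = (x_f - x_g) / gam(0)
    for n in range(n_iters):
        x_g = proj_simplex(x_f + gam(n) * u_g)
        u_g = (x_f - x_g) / gam(n) + u_g
        i = rng.integers(p)
        r = 2.0 * (A[i] @ x_g - b) * A[i]  # r_n = 2 (a_i^T x - b) a_i
        x_f = proj_halfspace(x_g - gam(n + 1) * (u_g + r), a_av, b)
    return x_g
```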
We also assess empirical evidence for the O(1/n) convergence rate guaranteed in Theorem 2 by presenting the squared relative distance to the optimal solution for the FF100 dataset. Here, we approximate the ground truth by solving the problem to high accuracy with the deterministic algorithm, run for 10⁵ iterations.

5.2 Nonlinear support vector machines classification

This section demonstrates S3CM on the support vector machine (SVM) binary classification problem. We are given a training set A = {a_1, a_2, . . . , a_d} and the corresponding class labels {b_1, b_2, . . . , b_d}, where a_i ∈ R^p and b_i ∈ {−1, 1}. The goal is to build a model that correctly assigns new examples to one class or the other. As is common in practice, we solve the dual soft-margin SVM formulation:

    minimize_{x ∈ R^d}  (1/2) Σ_{i=1}^{d} Σ_{j=1}^{d} K(a_i, a_j) b_i b_j x_i x_j − Σ_{i=1}^{d} x_i    subject to  x ∈ [0, C]^d,  b^T x = 0,

where C ∈ [0, +∞[ is the penalty parameter and K : R^p × R^p → R is a kernel function. In our example we use the Gaussian kernel, given by K_σ(a_i, a_j) = exp(−σ‖a_i − a_j‖²) for some σ > 0.

Define the symmetric positive semidefinite matrix M ∈ R^{d×d} with entries M_{ij} = K_σ(a_i, a_j) b_i b_j. Then the problem takes the form

    minimize_{x ∈ R^d}  (1/2) x^T M x − Σ_{i=1}^{d} x_i    subject to  x ∈ [0, C]^d,  b^T x = 0.    (6)

This problem fits into the three-composite optimization template (1) with

    h(x) = (1/2) x^T M x − Σ_{i=1}^{d} x_i,    g(x) = ι_{[0,C]^d}(x),    and    f(x) = ι_{x | b^T x = 0}(x).

One can solve this problem using the three-operator splitting method [11, Algorithm 1]. Note that prox_f and prox_g, which are projections onto the corresponding constraint sets, incur O(d) computational cost, whereas the cost of computing the full gradient is O(d²). To compute an unbiased gradient estimate, we choose an index i_n uniformly at random and form r_n = d M_{i_n} x_{i_n} − 1. Here M_{i_n} denotes the i_n-th column of the matrix M, and 1 represents the vector of ones. We can compute r_n in O(d) operations, hence each iteration of S3CM is a factor of d cheaper than an iteration of the deterministic algorithm (a sketch of this estimator within the S3CM loop appears below).

We use the UCI machine learning dataset "a1a", with d = 1605 data points and p = 123 features [8, 18]. Note that our goal here is to demonstrate the optimization performance of our algorithm on a real-world problem, rather than to compete with the prediction quality of the best engineered solvers. Hence, to keep the experiments simple, we fix the problem parameters C = 1 and σ = 2^{−2}, and we focus on the effects of the algorithmic parameters on the convergence behavior. Since p < d, M is rank deficient and h is not strongly convex. Nevertheless, we use S3CM with the learning rate γ_n = γ_0/(n + 1) for various values of γ_0. We observe an O(1/n) empirical convergence rate of the squared relative error for large enough γ_0, which is guaranteed under the restricted strong convexity assumption. See Figure 2 for the results.

Acknowledgments This work was supported in part by ERC Future Proof, SNF 200021-146750, SNF CRSII2-147633, and NCCR-Marvel.
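As referenced in Section 5.2, here is a minimal sketch of S3CM on the dual SVM (6). Only the O(d) column access M[:, i] is needed per iteration; the names and structure are illustrative, not the authors' implementation.

```python
import numpy as np

def svm_s3cm(M, b, C, gamma0, n_iters, seed=0):
    """S3CM on (6): h = quadratic, g = box [0, C]^d, f = hyperplane {b^T x = 0}."""
    rng = np.random.default_rng(seed)
    d = M.shape[0]
    gam = lambda n: gamma0 / (n + 1.0)
    x_f = np.zeros(d)
    x_g = np.clip(x_f, 0.0, C)                # prox of g: projection onto the box
    u_g = (x_f - x_g) / gam(0)
    for n in range(n_iters):
        x_g = np.clip(x_f + gam(n) * u_g, 0.0, C)
        u_g = (x_f - x_g) / gam(n) + u_g
        i = rng.integers(d)
        r = d * M[:, i] * x_g[i] - 1.0        # r_n = d * M_i * x_i - 1, O(d) cost
        v = x_g - gam(n + 1) * (u_g + r)
        x_f = v - (b @ v) / (b @ b) * b       # prox of f: projection onto b^T x = 0
    return x_g
```

One epoch of d such iterations costs about as much as a single deterministic gradient evaluation, which is the trade-off exploited in Figure 2.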
1. What is the focus of the paper in terms of the proposed algorithm and its application? 2. What are the strengths of the paper regarding its theoretical foundations and clarity of presentation? 3. What are the weaknesses of the paper regarding its experimental results and comparisons with other works?
Review
Review The authors propose a new algorithm for minimizing a stochastic composite objective with three convex components. The paper includes a proof of convergence and numerical experiments in the context of portfolio optimization and classification via nonlinear support vector machines. The topic is interesting and certainly has potential for impact. The paper is generally well written and clearly presented. However, the numerical experiments and the comparison to existing methods fall short of expectations.
1. What is the focus of the paper in terms of optimization methods? 2. What are the strengths and weaknesses of the proposed algorithm? 3. How does the reviewer assess the novelty and significance of the paper's contributions? 4. Are there any concerns regarding the numerical experiments presented in the paper? 5. How does the reviewer suggest improving the paper's content or research?
Review
Review In this paper, the authors propose a stochastic optimization method for the convex minimization of the sum of three convex functions. They prove a convergence characterization of the proposed algorithm. Finally, some numerical experiments are conducted to show the effectiveness of the proposed method. (1) This paper combines the "three-operator splitting method" with "stochastic analysis". In fact, the main proof techniques are standard. Hence, I did not find the results very exciting. (2) The authors claim that their method is the first purely primal stochastic splitting method which uses the proximal operators of f and g separately, which is the main contribution. In fact, "Accelerated stochastic gradient method for composite regularization" (AISTATS), by L.W. Zhong and J.T. Kwok, has proposed a primal method for solving this class of minimization problems. (3) The numerical experiments are conducted unfairly. The authors compared their algorithm against the standard deterministic three-operator splitting method in [11] at a low accuracy. The authors should conduct more experiments to show the efficiency of their algorithm by comparing with state-of-the-art methods such as those in "Accelerated stochastic gradient method for composite regularization" (AISTATS).
1. What is the focus of the paper regarding convex minimization? 2. What are the strengths of the proposed algorithm, particularly its convergence properties? 3. What are the weaknesses of the paper regarding its claims and comparisons with other works? 4. How does the reviewer assess the clarity and reproducibility of the paper's content?
Review
Review This paper proposes an algorithm for the convex minimization of the sum of three convex functions, one of which has a Lipschitz continuous gradient as well as restricted strong convexity. The convergence rate in expectation and the almost sure convergence of the algorithm are derived. Stochastic gradient methods for solving composite convex minimization with one regularizer were studied in the following papers: Lan, "An optimal method for stochastic composite optimization", 2012; Ghadimi and Lan, "Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization", 2012. In the current paper, the authors did not highlight the main difficulty in solving composite convex minimization with two regularizers. Moreover, the authors did not explain how to calculate the variance of the stochastic gradient estimate in the example of Section 5.2.
NIPS
Title Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences

Abstract We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with M ≥ 3 components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro [2007]. Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least 1 − e^{−Ω(M)}. We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings.

1 Introduction

Finite mixture models are widely used in a variety of statistical settings: as models for heterogeneous populations, as flexible models for multivariate density estimation, and as models for clustering. Their ability to model data as arising from underlying subpopulations provides essential flexibility in a wide range of applications [Titterington, 1985]. This combinatorial structure also creates challenges for statistical and computational theory, and many problems associated with the estimation of finite mixtures are still open. These problems are often studied in the setting of Gaussian mixture models (GMMs), reflecting the wide use of GMMs in applications, particularly in the multivariate setting, and this setting will also be our focus in the current paper.

Early work [Teicher, 1963] studied the identifiability of finite mixture models, and this problem has continued to attract significant interest (see Allman et al. [2009] for a recent overview). More recent theoretical work has focused on issues related to the use of GMMs for the density estimation problem [Genovese and Wasserman, 2000, Ghosal and Van Der Vaart, 2001]. Focusing on rates of convergence for parameter estimation in GMMs, Chen [1995] established the surprising result that when the number of mixture components is unknown, the standard √n-rate for regular parametric models is not achievable. Recent investigations [Ho and Nguyen, 2015] into exact-fitted, under-fitted, and over-fitted GMMs have characterized the achievable rates of convergence in these settings.

From an algorithmic perspective, the dominant practical method for estimating GMMs is the Expectation-Maximization (EM) algorithm [Dempster et al., 1977]. The EM algorithm is an ascent method for maximizing the likelihood, but it is only guaranteed to converge to a stationary point of the likelihood function. As such, there are no general guarantees for the quality of the estimate produced via the EM algorithm for Gaussian mixture models.¹ This has led researchers to explore various alternative algorithms which are computationally efficient, and for which rigorous statistical guarantees can be given.
Broadly, these algorithms are based either on clustering [Arora et al., 2005, Dasgupta and Schulman, 2007, Vempala and Wang, 2002, Chaudhuri and Rao, 2008] or on the method of moments [Belkin and Sinha, 2010, Moitra and Valiant, 2010, Hsu and Kakade, 2013]. Although general guarantees have not yet emerged, there has nonetheless been substantial progress on the theoretical analysis of EM and its variations. Dasgupta and Schulman [2007] analyzed a two-round variant of EM, which involved over-fitting the mixture and then pruning extra centers. They showed that this algorithm can be used to estimate Gaussian mixture components whose means are separated by at least Ω(d^{1/4}). Balakrishnan et al. [2015] studied the local convergence of the EM algorithm for a mixture of two Gaussians with Ω(1)-separation. Their results show that global optima have relatively large regions of attraction, but still require that the EM algorithm be provided with a reasonable initialization in order to ensure convergence to a near-globally-optimal solution.

To date, computationally efficient algorithms for estimating a GMM provide guarantees under the strong assumption that the samples come from a mixture of Gaussians—i.e., that the model is well-specified. In practice, however, we never expect the data to exactly follow the generative model, and it is important to understand the robustness of our algorithms to this assumption. In fact, maximum likelihood has favorable properties in this regard—maximum-likelihood estimates are well known to be robust to perturbations of the generative model in the Kullback-Leibler metric [Donoho and Liu, 1988]. This mathematical result motivates further study of EM and other likelihood-based methods from the computational point of view. It would be useful to characterize when efficient algorithms can be used to compute a maximum likelihood estimate, or a solution that is nearly as accurate and retains the robustness properties of the maximum likelihood estimate.

In this paper, we focus our attention on uniformly weighted mixtures of M isotropic Gaussians. For this favorable setting, Srebro [2007] conjectured that any local maximum of the likelihood function is a global maximum in the limit of infinite samples—in other words, that there are no bad local maxima for the population GMM likelihood function. This conjecture, if true, would provide strong theoretical justification for EM, at least for large sample sizes. For suitably small sample sizes, it is known [Améndola et al., 2015] that configurations of the samples can be constructed which lead to the likelihood function having an unbounded number of local maxima. The conjecture of Srebro [2007] avoids this by requiring that the samples come from the specified GMM, and by considering the (infinite-sample-size) population setting. In the context of high-dimensional regression, it has been observed that in some cases, despite a non-convex objective function, every local optimum of the objective is within a small, vanishing distance of a global optimum [see, e.g., Loh and Wainwright, 2013, Wang et al., 2014]. In these settings, it is indeed the case that for sufficiently large sample sizes there are no bad local optima.

A mixture of two spherical Gaussians: A Gaussian mixture model with a single component is simply a Gaussian, so the conjecture of Srebro [2007] holds trivially in this case.
The first interesting case is a Gaussian mixture with two components, for which empirical evidence supports the conjecture that there are no bad local optima. It is possible to visualize the setting when there are only two components and to develop a more detailed understanding of the population likelihood surface. Consider for instance a one-dimensional, equally weighted, unit-variance GMM with true centers µ_1^* = −4 and µ_2^* = 4, and consider the log-likelihood as a function of the vector µ := (µ_1, µ_2). Figure 1 shows both the population log-likelihood, µ ↦ L(µ), and the negative 2-norm of its gradient, µ ↦ −‖∇L(µ)‖_2. Observe that the only local maxima are the vectors (−4, 4) and (4, −4), which are both also global maxima.¹ The only remaining critical point is (0, 0), which is a saddle point. Although points of the form (0, R) and (R, 0) have small gradient when |R| is large, the gradient is not exactly zero for any finite R. Rigorously resolving the question of existence or non-existence of local maxima for the setting M = 2 remains an open problem.

In the remainder of our paper, we focus our attention on the setting where there are more than two mixture components and attempt to develop a broader understanding of likelihood surfaces for these models, as well as the consequences for algorithms. Our first contribution is a negative answer to the open question of Srebro [2007]. We construct a GMM which is a uniform mixture of three spherical, unit-variance, well-separated Gaussians whose population log-likelihood function contains local maxima. We further show that the log-likelihood of these local maxima can be arbitrarily worse than that of the global maxima. This result immediately implies that no local search algorithm can exhibit global convergence (meaning convergence to a global optimum from all possible starting points), even on well-separated mixtures of Gaussians.

The mere existence of bad local maxima is not a practical concern unless it turns out that natural algorithms are frequently trapped in them. Our second main result shows that the EM algorithm, as well as a variant thereof known as the first-order EM algorithm, converges to a bad critical point with exponentially high probability under random initialization. In more detail, we consider the following practical scheme for parameter estimation in an M-component Gaussian mixture:

(a) Draw M i.i.d. points µ_1, . . . , µ_M uniformly at random from the sample set.
(b) Run the EM or first-order EM algorithm to estimate the model parameters, using µ_1, . . . , µ_M as the initial centers.

We note that in the limit of infinite samples, this initialization scheme is equivalent to selecting M initial centers i.i.d. from the underlying mixture distribution. We show that for a universal constant c > 0, with probability at least 1 − e^{−cM}, the EM and first-order EM algorithms converge to a suboptimal critical point whose log-likelihood can be arbitrarily worse than that of the global maximum.

¹In addition to issues of convergence to non-maximal stationary points, solutions of infinite likelihood exist for GMMs where both the location and scale parameters are estimated. In practice, several methods exist to avoid such solutions. In this paper, we avoid this issue by focusing on GMMs in which the scale parameters are fixed.
Consequently, in order to find a solution with satisfactory log-likelihood via this initialization scheme, one needs to repeat the above scheme exponentially many (in M) times and then select the solution with the highest log-likelihood. This result strongly indicates that repeated random initialization followed by local search (via either EM or its first-order variant) can fail to produce useful estimates under reasonable constraints on computational complexity. We further prove that under the same random initialization scheme, the first-order EM algorithm with a suitable stepsize almost surely does not converge to a strict saddle point. This fact strongly suggests that the failure of local search methods for the GMM model is due mainly to the existence of bad local optima, and not to the presence of (strict) saddle points.

Our proofs introduce new techniques to reason about the structure of the population log-likelihood, and in particular to show the existence of bad local optima. We expect that these general ideas will aid in developing a better understanding of the behavior of algorithms for non-convex optimization. From a practical standpoint, our results strongly suggest that careful initialization is required for local search methods, even in large-sample settings, and even for extremely well-behaved mixture models.

The remainder of this paper is organized as follows. In Section 2, we introduce GMMs, the EM algorithm, and its first-order variant, and we formally set up the problem we consider. In Section 3, we state our main theoretical results and develop some of their implications. Section A is devoted to the proofs of our results, with some of the more technical aspects deferred to the appendices.

2 Background and Preliminaries

In this section, we formally define the Gaussian mixture model that we study in the paper. We then describe the EM algorithm, the first-order EM algorithm, and the form of random initialization that we analyze. Throughout the paper, we use [M] to denote the set {1, 2, . . . , M}, and N(µ, Σ) to denote the d-dimensional Gaussian distribution with mean vector µ and covariance matrix Σ. We use φ(· | µ, Σ) to denote the probability density function of the Gaussian distribution with mean vector µ and covariance matrix Σ:

φ(x | µ, Σ) := (2π)^{−d/2} det(Σ)^{−1/2} exp(−(1/2)(x − µ)^T Σ^{−1}(x − µ)).   (1)

2.1 Gaussian Mixture Models

A d-dimensional Gaussian mixture model (GMM) with M components can be specified by a collection µ* = {µ_1^*, . . . , µ_M^*} of d-dimensional mean vectors, a vector λ* = (λ_1^*, . . . , λ_M^*) of nonnegative mixture weights that sum to one, and a collection Σ* = {Σ_1^*, . . . , Σ_M^*} of covariance matrices. Given these parameters, the density function of the Gaussian mixture model takes the form

p(x | λ*, µ*, Σ*) = Σ_{i=1}^{M} λ_i^* φ(x | µ_i^*, Σ_i^*),

where the Gaussian density function φ was previously defined in equation (1). In this paper, we focus on the idealized situation in which every mixture component is equally weighted and the covariance of each mixture component is the identity. This leads to a mixture model of the form

p(x | µ*) := (1/M) Σ_{i=1}^{M} φ(x | µ_i^*, I),   (2)

which we denote by GMM(µ*). In this case, the only parameters to be estimated are the mean vectors µ* = {µ_i^*}_{i=1}^{M} of the M components. The difficulty of estimating a Gaussian mixture distribution depends on the amount of separation between the mean vectors. More precisely, for a given parameter ξ > 0, we say that the GMM(µ*) model is ξ-separated if

‖µ_i^* − µ_j^*‖_2 ≥ ξ, for all distinct pairs i, j ∈ [M].   (3)

We say that the mixture is well-separated if condition (3) holds for some ξ = Ω(√d).
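As a concrete illustration of the model (2), the following minimal Python sketch samples from GMM(µ*) and evaluates the mixture density. The function names, the dimension, and the specific centers are illustrative choices of ours, not part of the paper.

import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(mu_star, n):
    # Draw component indices uniformly (equal weights), then add N(0, I) noise.
    M, d = mu_star.shape
    idx = rng.integers(M, size=n)
    return mu_star[idx] + rng.standard_normal((n, d))

def gmm_density(x, mu_star):
    # p(x | mu*) from equation (2), with identity covariance for every component.
    M, d = mu_star.shape
    sq = ((x[None, :] - mu_star) ** 2).sum(axis=1)
    return np.mean(np.exp(-0.5 * sq) / (2 * np.pi) ** (d / 2))

mu_star = np.array([[0.0], [6.0], [60.0]])   # a 1-d, 3-component example
X = sample_gmm(mu_star, 1000)
print(gmm_density(X[0], mu_star))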
Suppose that we observe an i.i.d. sequence {x_ℓ}_{ℓ=1}^{n} drawn according to the distribution GMM(µ*), and our goal is to estimate the unknown collection of mean vectors µ*. The sample-based log-likelihood function L_n is given by

L_n(µ) := (1/n) Σ_{ℓ=1}^{n} log( (1/M) Σ_{i=1}^{M} φ(x_ℓ | µ_i, I) ).   (4a)

As the sample size n tends to infinity, this sample likelihood converges to the population log-likelihood function L given by

L(µ) = E_{µ*} log( (1/M) Σ_{i=1}^{M} φ(X | µ_i, I) ).   (4b)

Here E_{µ*} denotes expectation taken over the random vector X drawn according to the model GMM(µ*). A straightforward implication of the positivity of the KL divergence is that the population likelihood function is in fact maximized at µ* (along with permutations thereof, depending on how we index the mixture components). On the basis of empirical evidence, Srebro [2007] conjectured that this population log-likelihood is in fact well-behaved, in the sense of having no spurious local optima. In Theorem 1, we show that this intuition is false, and provide a simple example of a mixture of M = 3 well-separated Gaussians in dimension d = 1 whose population log-likelihood function has arbitrarily bad local optima.

2.2 Expectation-Maximization Algorithm

A natural way to estimate the mean vectors µ* is by attempting to maximize the sample log-likelihood defined by the samples {x_ℓ}_{ℓ=1}^{n}. For a non-degenerate Gaussian mixture model, the log-likelihood is non-concave. Rather than attempting to maximize the log-likelihood directly, the EM algorithm proceeds by iteratively maximizing a lower bound on the log-likelihood. It does so by alternating between two steps:

1. E-step: For each i ∈ [M] and ℓ ∈ [n], compute the membership weight

   w_i(x_ℓ) = φ(x_ℓ | µ_i, I) / Σ_{j=1}^{M} φ(x_ℓ | µ_j, I).

2. M-step: For each i ∈ [M], update the mean vector µ_i via

   µ_i^{new} = ( Σ_{ℓ=1}^{n} w_i(x_ℓ) x_ℓ ) / ( Σ_{ℓ=1}^{n} w_i(x_ℓ) ).

In the population setting, the M-step becomes

µ_i^{new} = E_{µ*}[w_i(X) X] / E_{µ*}[w_i(X)].   (5)

Intuitively, the M-step updates the mean vector of each Gaussian component to be a weighted centroid of the samples, for appropriately chosen weights.

First-order EM updates: For a general latent variable model with observed variables X = x, latent variables Z, and model parameters θ, Jensen's inequality yields the lower bound

log P(x | θ′) ≥ E_{Z∼P(·|x;θ)} log P(x, Z | θ′) − E_{Z∼P(·|x;θ)} log P(Z | x; θ),

where the first term on the right-hand side is denoted Q(θ′ | θ). Each step of the EM algorithm can be viewed as optimizing this lower bound, which gives

θ^{new} := arg max_{θ′} Q(θ′ | θ).

There are many variants of the EM algorithm which rely on partial updates at each iteration instead of finding the exact optimum of Q(θ′ | θ). One important example, analyzed in the work of Balakrishnan et al. [2015], is the first-order EM algorithm, which takes a step along the gradient of the function Q(θ′ | θ) (with respect to its first argument) in each iteration. Concretely, given a step size s > 0, the first-order EM update is

θ^{new} = θ + s ∇_{θ′} Q(θ′ | θ) |_{θ′=θ}.

In the case of the model GMM(µ*), the gradient EM updates on the population objective take the form

µ_i^{new} = µ_i + s E_{µ*}[w_i(X)(X − µ_i)].   (6)

This update turns out to be equivalent to gradient ascent on the population likelihood L with step size s > 0 (see the paper by Balakrishnan et al. [2015] for details).
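To make the updates above concrete, here is a minimal NumPy sketch of the sample-based E-step, M-step, and first-order update for the unit-variance spherical model (2). The function names and the default stepsize are illustrative assumptions; this is a sketch of the standard updates rather than code from the paper.

import numpy as np

def e_step(X, mu):
    # Membership weights w[l, i] for samples X of shape (n, d) and centers mu of shape (M, d).
    logits = -0.5 * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
    logits -= logits.max(axis=1, keepdims=True)   # subtract the row max for numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

def em_step(X, mu):
    # One full EM update: each center moves to the weighted centroid of the samples.
    w = e_step(X, mu)                             # shape (n, M)
    return (w.T @ X) / w.sum(axis=0)[:, None]

def first_order_em_step(X, mu, s=0.5):
    # One first-order EM (gradient) update with stepsize s, cf. equation (6).
    w = e_step(X, mu)
    grad = (w[:, :, None] * (X[:, None, :] - mu[None, :, :])).mean(axis=0)
    return mu + s * grad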
2.3 Random Initialization

Since the log-likelihood function is non-concave, the point to which the EM algorithm converges depends on the initial value of µ. In practice, it is standard to choose these values by some form of random initialization. For instance, one method is to initialize the mean vectors by sampling uniformly at random from the data set {x_ℓ}_{ℓ=1}^{n}. This scheme is intuitively reasonable because it automatically adapts to the locations of the true centers. If the true centers have large mutual distances, then the initialized centers will also be scattered. Conversely, if the true centers concentrate in a small region of the space, then the initialized centers will also be close to each other. In practice, initializing µ by drawing uniformly from the data is often more reasonable than drawing µ from a fixed distribution. In this paper, we analyze the EM algorithm and its variants at the population level, and we focus on this practical initialization scheme of selecting µ uniformly at random from the sample set. In the idealized population setting, this is equivalent to sampling the initial values of µ i.i.d. from the distribution GMM(µ*). Throughout this paper, we refer to this particular initialization strategy as random initialization.

3 Main results

We now turn to the statements of our main results, along with a discussion of some of their consequences.

3.1 Structural properties

In our first main result (Theorem 1), for any M ≥ 3, we exhibit an M-component mixture of Gaussians in dimension d = 1 for which the population log-likelihood has a bad local maximum whose log-likelihood is arbitrarily worse than that attained by the true parameters µ*. This result provides a negative answer to the conjecture of Srebro [2007].

Theorem 1. For any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians GMM(µ*) and a local maximum µ′ such that L(µ′) ≤ L(µ*) − C_gap.

In order to illustrate the intuition underlying Theorem 1, we give a geometric description of our construction for M = 3. Suppose that the true centers µ_1^*, µ_2^*, and µ_3^* are such that the distance between µ_1^* and µ_2^* is much smaller than the respective distances from µ_1^* to µ_3^* and from µ_2^* to µ_3^*. Now consider the point µ := (µ_1, µ_2, µ_3), where µ_1 = (µ_1^* + µ_2^*)/2 and the points µ_2 and µ_3 are both placed at the true center µ_3^*. This assignment does not maximize the population log-likelihood, because only one center is assigned to the two Gaussian components centered at µ_1^* and µ_2^*, while two centers are assigned to the Gaussian component centered at µ_3^*. However, when the components are well-separated we are able to show that there is a local maximum in the neighborhood of this configuration. In order to establish the existence of a local maximum, we first define a neighborhood of this configuration that does not contain any global maximum, and then prove that the log-likelihood on the boundary of this neighborhood is strictly smaller than that of the sub-optimal configuration µ. Since the log-likelihood is bounded from above, this neighborhood must contain at least one maximum of the log-likelihood. Since the global maxima are not in this neighborhood by construction, any maximum in this neighborhood must be a local maximum. See Section A for a detailed proof.
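This construction is straightforward to probe numerically. The sketch below, a finite-sample proxy for the population setting with illustrative constants of our own choosing, initializes EM at the configuration just described and observes that the iterates stay near it, with a log-likelihood strictly below that of µ*.

import numpy as np

rng = np.random.default_rng(0)
mu_star = np.array([[0.0], [6.0], [60.0]])      # two close true centers, one far away
n = 200_000                                      # large n approximates the population setting
X = mu_star[rng.integers(3, size=n)] + rng.standard_normal((n, 1))

def log_lik(X, mu):
    # Sample log-likelihood (4a), which approximates the population version (4b).
    logs = -0.5 * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
    logs -= 0.5 * np.log(2 * np.pi)
    return np.mean(np.logaddexp.reduce(logs, axis=1) - np.log(len(mu)))

def em_step(X, mu):
    # One EM update (E-step weights followed by the weighted-centroid M-step).
    logits = -0.5 * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return (w.T @ X) / w.sum(axis=0)[:, None]

mu = np.array([[3.0], [60.0], [60.0]])           # midpoint of the close pair; two copies of mu_3*
for _ in range(200):
    mu = em_step(X, mu)
print(mu.ravel())                                # remains near (3, 60, 60)
print(log_lik(X, mu), log_lik(X, mu_star))       # the first value is strictly smaller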
3.2 Algorithmic consequences

An important implication of Theorem 1 is that any iterative algorithm, such as EM or gradient ascent, that attempts to maximize the likelihood based on local updates cannot be globally convergent—that is, it cannot converge to (near-)globally optimal solutions from an arbitrary initialization. Indeed, if such an algorithm is initialized at the local maximum, it will remain trapped there. However, one might argue that this conclusion is overly pessimistic, in that we have only shown that these algorithms fail when initialized at a certain (adversarially chosen) point. Indeed, the mere existence of bad local maxima need not be a practical concern unless it can be shown that a typical optimization algorithm will frequently converge to one of them. The following result shows that the EM algorithm, when applied to the population likelihood and initialized according to the random scheme described in Section 2.3, converges to a bad critical point with high probability.

Theorem 2. Let µ^t be the t-th iterate of the EM algorithm initialized by the random initialization scheme described previously. There exists a universal constant c > 0 such that, for any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians GMM(µ*) with

P[∀ t ≥ 0, L(µ^t) ≤ L(µ*) − C_gap] ≥ 1 − e^{−cM}.

Theorem 2 shows that, for the specified configuration µ*, the probability of success for the EM algorithm is exponentially small as a function of M. As a consequence, in order to guarantee recovering a global maximum with at least constant probability, the EM algorithm with random initialization must be executed at least e^{Ω(M)} times. This result strongly suggests that effective initialization schemes, such as those based on pilot estimators utilizing the method of moments [Moitra and Valiant, 2010, Hsu and Kakade, 2013], are critical to finding good maxima in general GMMs.

The key idea in the proof of Theorem 2 is the following: suppose that all the true centers are grouped into two clusters that are extremely far apart, and suppose further that we initialize all the centers in the neighborhood of these two clusters, while ensuring that at least one center lies within each cluster. In this situation, all centers will remain trapped within the cluster in which they were first initialized, irrespective of how many steps of the EM algorithm we take. Intuitively, this suggests that the only favorable initialization schemes (from which convergence to a global maximum is possible) are those in which (1) all initialized centers fall in the neighborhood of exactly one cluster of true centers, and (2) the number of centers initialized within each cluster of true centers exactly matches the number of true centers in that cluster. However, this observation alone only suffices to guarantee that the success probability is polynomially small in M. In order to demonstrate that the success probability is exponentially small in M, we refine this construction further. In more detail, we construct a Gaussian mixture distribution with a recursive structure: at the top level, its true centers can be grouped into two clusters far apart; inside each cluster, the true centers can be further grouped into two well-separated mini-clusters, and so on. We can repeat this structure for Ω(log M) levels. For this GMM instance, even when the number of true centers exactly matches the number of initialized centers in each cluster at the top level, we still need to consider the configuration of the initial centers within the mini-clusters, which further reduces the probability of a successful random initialization. A straightforward calculation then shows that the probability of a favorable random initialization is of the order e^{−Ω(M)}. The full proof is given in Section A.2.
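The counting part of this argument can be checked with a quick simulation. The sketch below, with illustrative constants of our own choosing, estimates the probability that random initialization satisfies even the necessary top-level matching condition for a mixture whose M = 8 true centers form two distant groups of four; it is the recursive construction in the proof that drives this probability down to e^{−Ω(M)}.

import numpy as np

rng = np.random.default_rng(1)
M = 8
# True centers: two groups of M/2 centers each, separated by a huge gap.
mu_star = np.concatenate([np.arange(4) * 10.0, 1e4 + np.arange(4) * 10.0])

def trial():
    # Random initialization: M i.i.d. draws from GMM(mu_star) (population setting).
    init = rng.choice(mu_star, size=M) + rng.standard_normal(M)
    # Necessary condition for success: each far-apart group receives exactly
    # as many initial centers as it contains true centers.
    return (init < 5e3).sum() == M // 2

print(np.mean([trial() for _ in range(100_000)]))
# Roughly C(8,4) / 2^8 ~ 0.27 at a single level; nesting mini-clusters
# multiplies such factors across Omega(log M) levels.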
We devote the remainder of this section to a treatment of the first-order EM algorithm. Our first result in this direction shows that convergence to sub-optimal fixed points remains a problem for the first-order EM algorithm, provided the step size is not chosen too aggressively.

Theorem 3. Let µ^t be the t-th iterate of the first-order EM algorithm with stepsize s ∈ (0, 1), initialized by the random initialization scheme described previously. There exists a universal constant c > 0 such that, for any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians GMM(µ*) with

P(∀ t ≥ 0, L(µ^t) ≤ L(µ*) − C_gap) ≥ 1 − e^{−cM}.   (7)

We note that the restriction on the step size is weak, and is satisfied by the theoretically optimal choice for a mixture of two Gaussians in the setting studied by Balakrishnan et al. [2015]. Recall that the first-order EM updates are identical to gradient ascent updates on the log-likelihood function. As a consequence, we can conclude that the most natural local search heuristics for maximizing the log-likelihood (EM and gradient ascent) fail to provide statistically meaningful estimates when initialized randomly, unless the procedure is repeated exponentially many (in M) times.

Our final result concerns the type of fixed points reached by the first-order EM algorithm in our setting. Pascanu et al. [2014] argue that for high-dimensional optimization problems, the principal difficulty is the proliferation of saddle points, not the existence of poor local maxima. In our setting, however, we can leverage recent results on gradient methods [Lee et al., 2016, Panageas and Piliouras, 2016] to show that the first-order EM algorithm cannot converge to strict saddle points. More precisely:

Definition 1 (Strict saddle point, Ge et al. [2015]). For a maximization problem, we say that a critical point x^{ss} of a function f is a strict saddle point if the Hessian ∇²f(x^{ss}) has at least one strictly positive eigenvalue.

With this definition, we have the following:

Theorem 4. Let µ^t be the t-th iterate of the first-order EM algorithm with constant stepsize s ∈ (0, 1), initialized by the random initialization scheme described previously. Then for any M-component mixture of spherical Gaussians: (a) the iterates µ^t converge to a critical point of the log-likelihood; (b) for any strict saddle point µ^{ss}, we have P(lim_{t→∞} µ^t = µ^{ss}) = 0.

Theorems 3 and 4 provide strong support for the claim that the sub-optimal points to which the first-order EM algorithm frequently converges are bad local maxima: the algorithmic failure of the first-order EM algorithm is most likely due to the presence of bad local maxima, as opposed to (strict) saddle points. The proof of Theorem 4 is based on recent work [Lee et al., 2016, Panageas and Piliouras, 2016] on the asymptotic performance of gradient methods. That work relies on the stable manifold theorem from dynamical systems theory and, applied directly to our setting, would require establishing that the population likelihood L is smooth. Our proof technique avoids such a smoothness argument; see Section A.4 for the details. The technique makes use of specific properties of the first-order EM algorithm that do not hold for the EM algorithm itself. We conjecture that a similar result is true for the EM algorithm; however, we suspect that a generalized version of the stable manifold theorem will be needed to establish it.
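To illustrate the saddle-avoidance behavior of Theorem 4 on the two-component example from Section 1 (true centers at ±4, where (0, 0) is a saddle point), the sketch below runs the first-order update (6) on a large sample starting from a small random perturbation of the saddle; the sample size, stepsize, and perturbation scale are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
X = rng.choice([-4.0, 4.0], size=100_000) + rng.standard_normal(100_000)

def first_order_em_step(mu, s=0.5):
    # First-order EM update (6), with the population expectation replaced
    # by an average over the large sample X.
    logits = -0.5 * (X[:, None] - mu[None, :]) ** 2
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return mu + s * (w * (X[:, None] - mu[None, :])).mean(axis=0)

mu = 1e-3 * rng.standard_normal(2)    # a tiny random perturbation of the saddle (0, 0)
for _ in range(1000):
    mu = first_order_em_step(mu)
print(mu)  # drifts away from (0, 0) toward a global maximum, (-4, 4) or (4, -4)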
4 Conclusion and open problems

In this paper, we resolved an open problem of Srebro [2007] by demonstrating the existence of arbitrarily bad local maxima for the population log-likelihood of Gaussian mixture models, even in the idealized situation where each component is uniformly weighted, spherical with unit variance, and well-separated. We further provided evidence that, even in this favorable setting, random initialization schemes for the population EM algorithm fail with high probability. Our results carry over in a straightforward way, via standard empirical process arguments, to settings where a large finite sample is provided.

An interesting open question is to resolve the necessity of at least three mixture components in our constructions. In particular, we believe that at least three mixture components are necessary for the log-likelihood to be poorly behaved, and that for a well-separated mixture of two Gaussians the EM algorithm with a random initialization is in fact successful with high probability. In a related vein, understanding the empirical success of EM-style algorithms using random initialization schemes, despite their failure on seemingly benign problem instances, remains an open problem which we hope to address in future work.

Acknowledgements This work was partially supported by Office of Naval Research MURI grant DOD-002888, Air Force Office of Scientific Research Grant AFOSR-FA9550-14-1-001, the Mathematical Data Science program of the Office of Naval Research under grant number N00014-15-1-2670, and National Science Foundation Grant CIF-31712-23800.
1. What is the focus of the paper regarding the convergence of the EM algorithm?
2. What are the key findings of the paper regarding the non-convergence to global maxima?
3. Do you have any concerns or opinions regarding the surprisingness or expected nature of the results?
4. Can you explain the main bad example presented in the paper and its significance?
5. How does the paper extend the example for k components, and what does it show?
6. What is your assessment of the conclusion and contribution of the paper?
7. Are there any related works or comparisons that could enhance our understanding of the paper's content?
Review
The paper studies the convergence of the EM algorithm (and iterative heuristics for maximum likelihood estimation) for learning mixtures of spherical Gaussians, and presents the following results that show non-convergence to global maxima:

1. Even when the mixture has 3 clusters that are very far apart, and with infinite samples, algorithms like EM can get stuck in local optima.
2. For mixtures of k Gaussians, random initialization for EM succeeds with probability at most exp(−Ω(k)).
3. Gradient EM will not converge to strict saddle points generically; hence, bad local maxima are typically the issue.

The paper presents a coherent set of results that show that EM does not converge to global maxima (even with infinite samples). However, I feel the results are not very surprising, and the techniques also follow expected lines. The main bad example is a mixture with 3 components: two of them are closer to each other, and the third is very far from both. The authors show that, for the log-likelihood objective, there is a local maximum with one center near the first two components and two centers near the far-off cluster. This is fairly expected behavior, particularly when the centers are separated by much more than √d. Under this amount of separation, the clusters do not overlap, and learning mixtures of spherical Gaussians becomes very similar in flavor to k-means. Similar examples provide bad instances for Lloyd's algorithm for k-means clustering. This example is extended to k components with a nice recursive construction; this requires some effort, but is again somewhat expected. Similarly, random initialization fails for the same reasons as in k-means clustering, with the same failure probability (for k-means, this motivates picking k log k centers, or distance-squared sampling). To conclude, while the results tell a coherent story about the non-convergence to global optima, I feel this is along somewhat expected lines.

Comments:
1. Kumar and Kannan provide convergence guarantees for Lloyd's heuristic for k-means clustering (and hence for mixtures of Gaussians under sufficient separation conditions). While EM differs from Lloyd's heuristic when the Gaussians are not sufficiently separated (as in Balakrishnan-Wainwright-Yu), the two are essentially the same when there is large separation (which is the case for the bad examples here). So, I think a comparison with k-means (and Lloyd's heuristic) that also contrasts this result would be very useful.
1. What is the focus of the paper regarding the expectation-maximization algorithm? 2. What are the strengths and weaknesses of the paper's theoretical analysis? 3. Do you have any questions or concerns about the paper's proofs and technical aspects? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review This paper addresses the expectation-maximization (EM) algorithm for the estimation of the set of mean parameters for a model of a mixture of Gaussian variables. The number of Gaussian variables, the weights of the mixture, and the covariance matrices of the Gaussian variables are known and fixed. The paper is theoretical and proves three negative results for the EM algorithm. 1) There can be, even for large sample size, local maxima for the likelihood function, with likelihood values arbitrarily smaller than the maximal likelihood value. 2) The EM algorithm, randomly initialized, converges with high probability to a critical point with likelihood value arbitrarily smaller than the maximal likelihood value. 3) The same negative result holds for the gradient EM algorithm. I think that the question addressed by the paper is interesting and worthy of attention. In order to obtain rigorous proofs, the paper is restricted to a simplified Gaussian mixture model: the number of components is known, the weights are known and uniform, and the covariance matrices are equal to the identity. I do not think that these simplifications are a problem, given that the proofs are already technical and that the obtained results are already informative. These results are very interesting in my opinion, and can be useful for better understanding the EM algorithm in practice. It should be noted that the paper is written as addressing a case of general dimension d, while the proofs address the case d=1. Perhaps the authors could state whether similar proofs would be possible for arbitrary dimension d, and if not, briefly mention the additional difficulties of the case d larger than 1. My only concern about the paper is the proof of Theorem 1 (unfortunately, I did not read the proofs in the supplementary material). I think that this proof is too short, and I would have preferred it to be longer, at the expense of shortening some of the discussions in the paper. The part of the proof where I think more details should be given is from lines 288 to 306. The authors provide limits (as gamma or R goes to infinity) of supremums of likelihood functions over some domains. I do not have a problem with the computation of limits of likelihood functions evaluated at fixed parameters, so that, for instance, I conceive that the three equations of line 288 should be correct. Nevertheless, obtaining the limit of a maximum is not entirely obvious to me, when the domain is unbounded or depends on the parameters that go to infinity. In the same way, the authors mention in some places that the likelihood function is continuous, but this does not directly imply the types of convergences stated in the paper, in my opinion. I think that more details, and so a longer proof, are needed so that the reader can be completely convinced of the validity of the proof of Theorem 1. Finally, I have another minor concern with the part of the paper addressing the gradient EM, from line 162 to line 172. I found this part more difficult to follow than the rest of the paper. In particular, perhaps the authors could give more explanation of how Equation (5) is obtained.
NIPS
Title Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences Abstract We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with $M \ge 3$ components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro [2007]. Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least $1 - e^{-\Omega(M)}$. We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings. 1 Introduction Finite mixture models are widely used in a variety of statistical settings, as models for heterogeneous populations, as flexible models for multivariate density estimation, and as models for clustering. Their ability to model data as arising from underlying subpopulations provides essential flexibility in a wide range of applications [Titterington, 1985]. This combinatorial structure also creates challenges for statistical and computational theory, and there are many problems associated with estimation of finite mixtures that are still open. These problems are often studied in the setting of Gaussian mixture models (GMMs), reflecting the wide use of GMMs in applications, particularly in the multivariate setting, and this setting will also be our focus in the current paper. Early work [Teicher, 1963] studied the identifiability of finite mixture models, and this problem has continued to attract significant interest (see Allman et al. [2009] for a recent overview). More recent theoretical work has focused on issues related to the use of GMMs for the density estimation problem [Genovese and Wasserman, 2000, Ghosal and Van Der Vaart, 2001]. Focusing on rates of convergence for parameter estimation in GMMs, Chen [1995] established the surprising result that when the number of mixture components is unknown, then the standard $\sqrt{n}$-rate for regular parametric models is not achievable. Recent investigations [Ho and Nguyen, 2015] into exact-fitted, under-fitted, and over-fitted GMMs have characterized the achievable rates of convergence in these settings. From an algorithmic perspective, the dominant practical method for estimating GMMs is the Expectation-Maximization (EM) algorithm [Dempster et al., 1977]. The EM algorithm is an ascent method for maximizing the likelihood, but is only guaranteed to converge to a stationary point of the likelihood function. As such, there are no general guarantees for the quality of the estimate produced via the EM algorithm for Gaussian mixture models.¹ This has led researchers to explore various alternative algorithms which are computationally efficient, and for which rigorous statistical guarantees can be given.
Broadly, these algorithms are based either on clustering [Arora et al., 2005, Dasgupta and Schulman, 2007, Vempala and Wang, 2002, Chaudhuri and Rao, 2008] or on the method of moments [Belkin and Sinha, 2010, Moitra and Valiant, 2010, Hsu and Kakade, 2013]. Although general guarantees have not yet emerged, there has nonetheless been substantial progress on the theoretical analysis of EM and its variations. Dasgupta and Schulman [2007] analyzed a two-round variant of EM, which involved over-fitting the mixture and then pruning extra centers. They showed that this algorithm can be used to estimate Gaussian mixture components whose means are separated by at least $\Omega(d^{1/4})$. Balakrishnan et al. [2015] studied the local convergence of the EM algorithm for a mixture of two Gaussians with $\Omega(1)$-separation. Their results show that global optima have relatively large regions of attraction, but still require that the EM algorithm be provided with a reasonable initialization in order to ensure convergence to a near globally optimal solution. To date, computationally efficient algorithms for estimating a GMM provide guarantees under the strong assumption that the samples come from a mixture of Gaussians—i.e., that the model is well-specified. In practice, however, we never expect the data to exactly follow the generative model, and it is important to understand the robustness of our algorithms to this assumption. In fact, maximum likelihood has favorable properties in this regard—maximum-likelihood estimates are well known to be robust to perturbations in the Kullback-Leibler metric of the generative model [Donoho and Liu, 1988]. This mathematical result motivates further study of EM and other likelihood-based methods from the computational point of view. It would be useful to characterize when efficient algorithms can be used to compute a maximum likelihood estimate, or a solution that is nearly as accurate, and which retains the robustness properties of the maximum likelihood estimate. In this paper, we focus our attention on uniformly weighted mixtures of $M$ isotropic Gaussians. For this favorable setting, Srebro [2007] conjectured that any local maximum of the likelihood function is a global maximum in the limit of infinite samples—in other words, that there are no bad local maxima for the population GMM likelihood function. This conjecture, if true, would provide strong theoretical justification for EM, at least for large sample sizes. For suitably small sample sizes, it is known [Améndola et al., 2015] that configurations of the samples can be constructed which lead to the likelihood function having an unbounded number of local maxima. The conjecture of Srebro [2007] avoids this by requiring that the samples come from the specified GMM, as well as by considering the (infinite-sample-size) population setting. In the context of high-dimensional regression, it has been observed that in some cases, despite having a non-convex objective function, every local optimum of the objective is within a small, vanishing distance of a global optimum [see, e.g., Loh and Wainwright, 2013, Wang et al., 2014]. In these settings, it is indeed the case that for sufficiently large sample sizes there are no bad local optima. A mixture of two spherical Gaussians: A Gaussian mixture model with a single component is simply a Gaussian, so the conjecture of Srebro [2007] holds trivially in this case.
The first interesting case is a Gaussian mixture with two components, for which empirical evidence supports the conjecture that there are no bad local optima. It is possible to visualize the setting when there are only two components and to develop a more detailed understanding of the population likelihood surface. Consider for instance a one-dimensional, equally weighted, unit-variance GMM with true centers $\mu^*_1 = -4$ and $\mu^*_2 = 4$, and consider the log-likelihood as a function of the vector $\mu := (\mu_1, \mu_2)$. Figure 1 shows both the population log-likelihood, $\mu \mapsto \mathcal{L}(\mu)$, and the negative 2-norm of its gradient, $\mu \mapsto -\|\nabla \mathcal{L}(\mu)\|_2$. Observe that the only local maxima are the vectors $(-4, 4)$ and $(4, -4)$, which are both also global maxima. The only remaining critical point is $(0, 0)$, which is a saddle point. Although points of the form $(0, R)$ and $(R, 0)$ have small gradient when $|R|$ is large, the gradient is not exactly zero for any finite $R$. (A small numerical sketch of this surface is given at the end of this passage.) Rigorously resolving the question of existence or non-existence of local maxima for the setting when $M = 2$ remains an open problem. In the remainder of our paper, we focus our attention on the setting where there are more than two mixture components and attempt to develop a broader understanding of likelihood surfaces for these models, as well as the consequences for algorithms. Our first contribution is a negative answer to the open question of Srebro [2007]. We construct a GMM which is a uniform mixture of three spherical, unit-variance, well-separated Gaussians whose population log-likelihood function contains local maxima. We further show that the log-likelihood of these local maxima can be arbitrarily worse than that of the global maxima. This result immediately implies that any local search algorithm cannot exhibit global convergence (meaning convergence to a global optimum from all possible starting points), even on well-separated mixtures of Gaussians. The mere existence of bad local maxima is not a practical concern unless it turns out that natural algorithms are frequently trapped in these bad local maxima. Our second main result shows that the EM algorithm, as well as a variant thereof known as the first-order EM algorithm, with random initialization, converges to a bad critical point with exponentially high probability. In more detail, we consider the following practical scheme for parameter estimation in an $M$-component Gaussian mixture: (a) Draw $M$ i.i.d. points $\mu_1, \ldots, \mu_M$ uniformly at random from the sample set. (b) Run the EM or first-order EM algorithm to estimate the model parameters, using $\mu_1, \ldots, \mu_M$ as the initial centers. We note that in the limit of infinite samples, the initialization scheme we consider is equivalent to selecting $M$ initial centers i.i.d. from the underlying mixture distribution. We show that for a universal constant $c > 0$, with probability at least $1 - e^{-cM}$, the EM and first-order EM algorithms converge to a suboptimal critical point, whose log-likelihood could be arbitrarily worse than that of the global maximum. ¹ In addition to issues of convergence to non-maximal stationary points, solutions of infinite likelihood exist for GMMs where both the location and scale parameters are estimated. In practice, several methods exist to avoid such solutions. In this paper, we avoid this issue by focusing on GMMs in which the scale parameters are fixed.
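As a complement to the Figure 1 discussion above, the following minimal sketch (our own construction, not code from the paper) approximates the population log-likelihood of the two-component example on a grid via a large-sample Monte Carlo average; the grid resolution and sample size are arbitrary illustrative choices.

```python
# Approximate the population log-likelihood surface of the 1-D mixture with
# true centers (-4, 4) on a grid, as in the Figure 1 discussion. Sketch only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=rng.choice([-4.0, 4.0], size=50_000), scale=1.0)

def L_hat(mu1, mu2):
    # Monte Carlo estimate of L((mu1, mu2)) for the uniform two-component GMM
    d1 = np.exp(-0.5 * (X - mu1) ** 2)
    d2 = np.exp(-0.5 * (X - mu2) ** 2)
    return np.mean(np.log(0.5 * (d1 + d2) / np.sqrt(2.0 * np.pi)))

grid = np.linspace(-6.0, 6.0, 61)
surface = np.array([[L_hat(a, b) for b in grid] for a in grid])
i, j = np.unravel_index(surface.argmax(), surface.shape)
print("grid maximizer:", (grid[i], grid[j]))  # expect (-4, 4) or (4, -4)
```

Plotting `surface` (for instance with matplotlib's `contourf`) reproduces the qualitative picture described above: two symmetric global maxima and a saddle at the origin.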
Conversely, in order to find a solution with satisfactory log-likelihood via this initialization scheme, one needs to repeat the above scheme exponentially many (in $M$) times, and then select the solution with the highest log-likelihood. This result strongly indicates that repeated random initialization followed by local search (via either EM or its first-order variant) can fail to produce useful estimates under reasonable constraints on computational complexity. We further prove that under the same random initialization scheme, the first-order EM algorithm with a suitable stepsize almost surely does not converge to a strict saddle point. This fact strongly suggests that the failure of local search methods for the GMM model is due mainly to the existence of bad local optima, and not due to the presence of (strict) saddle points. Our proofs introduce new techniques to reason about the structure of the population log-likelihood, and in particular to show the existence of bad local optima. We expect that these general ideas will aid in developing a better understanding of the behavior of algorithms for non-convex optimization. From a practical standpoint, our results strongly suggest that careful initialization is required for local search methods, even in large-sample settings, and even for extremely well-behaved mixture models. The remainder of this paper is organized as follows. In Section 2, we introduce GMMs, the EM algorithm, its first-order variant, and we formally set up the problem we consider. In Section 3, we state our main theoretical results and develop some of their implications. Section A is devoted to the proofs of our results, with some of the more technical aspects deferred to the appendices. 2 Background and Preliminaries In this section, we formally define the Gaussian mixture model that we study in the paper. We then describe the EM algorithm, the first-order EM algorithm, as well as the form of random initialization that we analyze. Throughout the paper, we use $[M]$ to denote the set $\{1, 2, \ldots, M\}$, and $\mathcal{N}(\mu, \Sigma)$ to denote the $d$-dimensional Gaussian distribution with mean vector $\mu$ and covariance matrix $\Sigma$. We use $\phi(\cdot \mid \mu, \Sigma)$ to denote the probability density function of the Gaussian distribution with mean vector $\mu$ and covariance matrix $\Sigma$: $\phi(x \mid \mu, \Sigma) := \frac{1}{\sqrt{(2\pi)^d \det(\Sigma)}} e^{-\frac{1}{2}(x-\mu)^\top \Sigma^{-1}(x-\mu)}$. (1) 2.1 Gaussian Mixture Models A $d$-dimensional Gaussian mixture model (GMM) with $M$ components can be specified by a collection $\boldsymbol{\mu}^* = \{\mu^*_1, \ldots, \mu^*_M\}$ of $d$-dimensional mean vectors, a vector $\lambda^* = (\lambda^*_1, \ldots, \lambda^*_M)$ of non-negative mixture weights that sum to one, and a collection $\boldsymbol{\Sigma}^* = \{\Sigma^*_1, \ldots, \Sigma^*_M\}$ of covariance matrices. Given these parameters, the density function of a Gaussian mixture model takes the form $p(x \mid \lambda^*, \boldsymbol{\mu}^*, \boldsymbol{\Sigma}^*) = \sum_{i=1}^{M} \lambda^*_i \, \phi(x \mid \mu^*_i, \Sigma^*_i)$, where the Gaussian density function $\phi$ was previously defined in equation (1). In this paper, we focus on the idealized situation in which every mixture component is equally weighted, and the covariance of each mixture component is the identity. This leads to a mixture model of the form $p(x \mid \boldsymbol{\mu}^*) := \frac{1}{M} \sum_{i=1}^{M} \phi(x \mid \mu^*_i, I)$, (2) which we denote by GMM($\boldsymbol{\mu}^*$). In this case, the only parameters to be estimated are the mean vectors $\boldsymbol{\mu}^* = \{\mu^*_i\}_{i=1}^{M}$ of the $M$ components. The difficulty of estimating a Gaussian mixture distribution depends on the amount of separation between the mean vectors. More precisely, for a given parameter $\xi > 0$, we say that the GMM($\boldsymbol{\mu}^*$) model is $\xi$-separated if $\|\mu^*_i - \mu^*_j\|_2 \ge \xi$ for all distinct pairs $i, j \in [M]$. (3)
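Before turning to the definition of well-separation in the next paragraph, here is a small concreteness check (our own sketch, not from the paper): it evaluates the uniform mixture density of equation (2) and tests the $\xi$-separation condition (3) for an illustrative configuration of centers.

```python
# Uniform, identity-covariance mixture density (2) and the xi-separation
# test (3). All constants below are illustrative choices.
import numpy as np
from itertools import combinations

def gmm_density(x, mus):
    # p(x | mu*) = (1/M) * sum_i phi(x | mu_i, I), with x and mu_i in R^d
    M, d = mus.shape
    sq = ((mus - x[None, :]) ** 2).sum(axis=1)   # ||x - mu_i||^2 for each i
    return np.exp(-0.5 * sq).sum() / (M * (2.0 * np.pi) ** (d / 2.0))

def is_xi_separated(mus, xi):
    # condition (3): every pairwise distance is at least xi
    return all(np.linalg.norm(a - b) >= xi for a, b in combinations(mus, 2))

mus = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # M = 3, d = 2
print(gmm_density(np.zeros(2), mus))
print(is_xi_separated(mus, xi=2.0 * np.sqrt(2.0)))       # xi on the sqrt(d) scale
```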
We say that the mixture is well-separated if condition (3) holds for some $\xi = \Omega(\sqrt{d})$. Suppose that we observe an i.i.d. sequence $\{x_\ell\}_{\ell=1}^{n}$ drawn according to the distribution GMM($\boldsymbol{\mu}^*$), and our goal is to estimate the unknown collection of mean vectors $\boldsymbol{\mu}^*$. The sample-based log-likelihood function $\mathcal{L}_n$ is given by $\mathcal{L}_n(\mu) := \frac{1}{n} \sum_{\ell=1}^{n} \log \Big( \frac{1}{M} \sum_{i=1}^{M} \phi(x_\ell \mid \mu_i, I) \Big)$. (4a) As the sample size $n$ tends to infinity, this sample likelihood converges to the population log-likelihood function $\mathcal{L}$ given by $\mathcal{L}(\mu) = \mathbb{E}_{\mu^*} \log \Big( \frac{1}{M} \sum_{i=1}^{M} \phi(X \mid \mu_i, I) \Big)$. (4b) Here $\mathbb{E}_{\mu^*}$ denotes expectation taken over the random vector $X$ drawn according to the model GMM($\boldsymbol{\mu}^*$). A straightforward implication of the positivity of the KL divergence is that the population likelihood function is in fact maximized at $\boldsymbol{\mu}^*$ (along with permutations thereof, depending on how we index the mixture components). On the basis of empirical evidence, Srebro [2007] conjectured that this population log-likelihood is in fact well-behaved, in the sense of having no spurious local optima. In Theorem 1, we show that this intuition is false, and provide a simple example of a mixture of $M = 3$ well-separated Gaussians in dimension $d = 1$ whose population log-likelihood function has arbitrarily bad local optima. 2.2 Expectation-Maximization Algorithm A natural way to estimate the mean vectors $\boldsymbol{\mu}^*$ is by attempting to maximize the sample log-likelihood defined by the samples $\{x_\ell\}_{\ell=1}^{n}$. For a non-degenerate Gaussian mixture model, the log-likelihood is non-concave. Rather than attempting to maximize the log-likelihood directly, the EM algorithm proceeds by iteratively maximizing a lower bound on the log-likelihood. It does so by alternating between two steps: 1. E-step: For each $i \in [M]$ and $\ell \in [n]$, compute the membership weight $w_i(x_\ell) = \frac{\phi(x_\ell \mid \mu_i, I)}{\sum_{j=1}^{M} \phi(x_\ell \mid \mu_j, I)}$. 2. M-step: For each $i \in [M]$, update the mean vector $\mu_i$ via $\mu_i^{\mathrm{new}} = \frac{\sum_{\ell=1}^{n} w_i(x_\ell) \, x_\ell}{\sum_{\ell=1}^{n} w_i(x_\ell)}$. In the population setting, the M-step becomes: $\mu_i^{\mathrm{new}} = \frac{\mathbb{E}_{\mu^*}[w_i(X) X]}{\mathbb{E}_{\mu^*}[w_i(X)]}$. (5) Intuitively, the M-step updates the mean vector of each Gaussian component to be a weighted centroid of the samples, for appropriately chosen weights. First-order EM updates: For a general latent variable model with observed variables $X = x$, latent variables $Z$, and model parameters $\theta$, by Jensen's inequality, the log-likelihood function can be lower bounded as $\log \mathbb{P}(x \mid \theta') \ge \underbrace{\mathbb{E}_{Z \sim \mathbb{P}(\cdot \mid x; \theta)} \log \mathbb{P}(x, Z \mid \theta')}_{:= Q(\theta' \mid \theta)} - \mathbb{E}_{Z \sim \mathbb{P}(\cdot \mid x; \theta)} \log \mathbb{P}(Z \mid x; \theta')$. Each step of the EM algorithm can also be viewed as optimizing over this lower bound, which gives: $\theta^{\mathrm{new}} := \arg\max_{\theta'} Q(\theta' \mid \theta)$. There are many variants of the EM algorithm which rely on partial updates at each iteration instead of finding the exact optimum of $Q(\theta' \mid \theta)$. One important example, analyzed in the work of Balakrishnan et al. [2015], is the first-order EM algorithm. The first-order EM algorithm takes a step along the gradient of the function $Q(\theta' \mid \theta)$ (with respect to its first argument) in each iteration. Concretely, given a step size $s > 0$, the first-order EM updates can be written as: $\theta^{\mathrm{new}} = \theta + s \, \nabla_{\theta'} Q(\theta' \mid \theta) \big|_{\theta' = \theta}$. In the case of the model GMM($\boldsymbol{\mu}^*$), the gradient EM updates on the population objective take the form $\mu_i^{\mathrm{new}} = \mu_i + s \, \mathbb{E}_{\mu^*}\big[ w_i(X) (X - \mu_i) \big]$. (6) This update turns out to be equivalent to gradient ascent on the population likelihood $\mathcal{L}$ with step size $s > 0$ (see the paper by Balakrishnan et al. [2015] for details).
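To make the two update rules concrete, the following short sketch (our own, not the authors' code) implements the sample-based E-step and M-step, together with the first-order (gradient) EM update of equation (6), for a one-dimensional uniform, unit-variance mixture; the data-generating constants are illustrative.

```python
# Sample-based EM and first-order EM for a 1-D uniform, unit-variance GMM.
import numpy as np

def e_step(x, mu):
    # membership weights w_i(x_l); returns an (n, M) array
    logw = -0.5 * (x[:, None] - mu[None, :]) ** 2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))  # numerically stable
    return w / w.sum(axis=1, keepdims=True)

def em_step(x, mu):
    # exact M-step: each new mean is a weighted centroid of the samples
    w = e_step(x, mu)
    return (w * x[:, None]).sum(axis=0) / w.sum(axis=0)

def first_order_em_step(x, mu, s=0.5):
    # equation (6): mu_i <- mu_i + s * E[w_i(X)(X - mu_i)], with the
    # expectation replaced by a sample average
    w = e_step(x, mu)
    return mu + s * np.mean(w * (x[:, None] - mu[None, :]), axis=0)

rng = np.random.default_rng(2)
mu_star = np.array([-8.0, 0.0, 8.0])
x = rng.normal(loc=rng.choice(mu_star, size=50_000), scale=1.0)

mu = rng.choice(x, size=3, replace=False)   # random initialization from data
for _ in range(100):
    mu = em_step(x, mu)
print("EM limit:", np.sort(mu))             # may or may not be near mu_star
```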
2.3 Random Initialization Since the log-likelihood function is non-concave, the point to which the EM algorithm converges depends on the initial value of $\mu$. In practice, it is standard to choose these values by some form of random initialization. For instance, one method is to initialize the mean vectors by sampling uniformly at random from the data set $\{x_\ell\}_{\ell=1}^{n}$. This scheme is intuitively reasonable, because it automatically adapts to the locations of the true centers. If the true centers have large mutual distances, then the initialized centers will also be scattered. Conversely, if the true centers concentrate in a small region of the space, then the initialized centers will also be close to each other. In practice, initializing $\mu$ by uniformly drawing from the data is often more reasonable than drawing $\mu$ from a fixed distribution. In this paper, we analyze the EM algorithm and its variants at the population level. We focus on the above practical initialization scheme of selecting $\mu$ uniformly at random from the sample set. In the idealized population setting, this is equivalent to sampling the initial values of $\mu$ i.i.d. from the distribution GMM($\boldsymbol{\mu}^*$). Throughout this paper, we refer to this particular initialization strategy as random initialization. 3 Main results We now turn to the statements of our main results, along with a discussion of some of their consequences. 3.1 Structural properties In our first main result (Theorem 1), for any $M \ge 3$, we exhibit an $M$-component mixture of Gaussians in dimension $d = 1$ for which the population log-likelihood has a bad local maximum whose log-likelihood is arbitrarily worse than that attained by the true parameters $\boldsymbol{\mu}^*$. This result provides a negative answer to the conjecture of Srebro [2007]. Theorem 1. For any $M \ge 3$ and any constant $C_{\mathrm{gap}} > 0$, there is a well-separated uniform mixture of $M$ unit-variance spherical Gaussians GMM($\boldsymbol{\mu}^*$) and a local maximum $\mu'$ such that $\mathcal{L}(\mu') \le \mathcal{L}(\mu^*) - C_{\mathrm{gap}}$. In order to illustrate the intuition underlying Theorem 1, we give a geometrical description of our construction for $M = 3$. Suppose that the true centers $\mu^*_1$, $\mu^*_2$, and $\mu^*_3$ are such that the distance between $\mu^*_1$ and $\mu^*_2$ is much smaller than the respective distances from $\mu^*_1$ to $\mu^*_3$, and from $\mu^*_2$ to $\mu^*_3$. Now, consider the point $\mu := (\mu_1, \mu_2, \mu_3)$, where $\mu_1 = (\mu^*_1 + \mu^*_2)/2$, and the points $\mu_2$ and $\mu_3$ are both placed at the true center $\mu^*_3$. This assignment does not maximize the population log-likelihood, because only one center is assigned to the two Gaussian components centered at $\mu^*_1$ and $\mu^*_2$, while two centers are assigned to the Gaussian component centered at $\mu^*_3$. However, when the components are well-separated, we are able to show that there is a local maximum in the neighborhood of this configuration. In order to establish the existence of a local maximum, we first define a neighborhood of this configuration ensuring that it does not contain any global maximum, and then prove that the log-likelihood on the boundary of this neighborhood is strictly smaller than that of the sub-optimal configuration $\mu$. Since the log-likelihood is bounded from above, this neighborhood must contain at least one maximum of the log-likelihood. Since the global maxima are not in this neighborhood by construction, any maximum in this neighborhood must be a local maximum. See Section A for a detailed proof.
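The trapping behavior behind this construction is easy to observe numerically. The sketch below (our own; the specific centers and sample size are arbitrary illustrative choices, not the constants from the proof) starts sample-based EM near the sub-optimal configuration just described and shows that the log-likelihood stays bounded away from that of the true parameters.

```python
# EM started near the Theorem 1 configuration: one center between the two
# close true centers, two centers near the far one. Illustrative sketch.
import numpy as np

def loglik(mu, x):
    dens = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2) / np.sqrt(2 * np.pi)
    return np.mean(np.log(dens.mean(axis=1)))

def em_step(x, mu):
    logw = -0.5 * (x[:, None] - mu[None, :]) ** 2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return (w * x[:, None]).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(3)
mu_star = np.array([-1.0, 1.0, 60.0])       # mu*_1, mu*_2 close; mu*_3 far
x = rng.normal(loc=rng.choice(mu_star, size=100_000), scale=1.0)

mu = np.array([0.0, 59.5, 60.5])            # near the sub-optimal configuration
for _ in range(100):
    mu = em_step(x, mu)

print("EM limit:", mu)                      # one center stuck near 0
print("log-likelihood gap:", loglik(mu_star, x) - loglik(mu, x))  # > 0
```

The two right-hand centers share the component at $\mu^*_3$ while the single left-hand center straddles the pair at $\pm 1$, so the gap remains strictly positive no matter how many EM steps are taken.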
3.2 Algorithmic consequences An important implication of Theorem 1 is that any iterative algorithm, such as EM or gradient ascent, that attempts to maximize the likelihood based on local updates cannot be globally convergent—that is, it cannot converge to (near) globally optimal solutions from an arbitrary initialization. Indeed, if any such algorithm is initialized at the local maximum, then it will remain trapped. However, one might argue that this conclusion is overly pessimistic, in that we have only shown that these algorithms fail when initialized at a certain (adversarially chosen) point. Indeed, the mere existence of bad local maxima need not be a practical concern unless it can be shown that a typical optimization algorithm will frequently converge to one of them. The following result shows that the EM algorithm, when applied to the population likelihood and initialized according to the random scheme described in Section 2.3, converges to a bad critical point with high probability. Theorem 2. Let $\mu^t$ be the $t$-th iterate of the EM algorithm initialized by the random initialization scheme described previously. There exists a universal constant $c$ such that, for any $M \ge 3$ and any constant $C_{\mathrm{gap}} > 0$, there is a well-separated uniform mixture of $M$ unit-variance spherical Gaussians GMM($\boldsymbol{\mu}^*$) with $\mathbb{P}\big[\forall t \ge 0,\ \mathcal{L}(\mu^t) \le \mathcal{L}(\mu^*) - C_{\mathrm{gap}}\big] \ge 1 - e^{-cM}$. Theorem 2 shows that, for the specified configuration $\boldsymbol{\mu}^*$, the probability of success for the EM algorithm is exponentially small as a function of $M$. As a consequence, in order to guarantee recovering a global maximum with at least constant probability, the EM algorithm with random initialization must be executed at least $e^{\Omega(M)}$ times. This result strongly suggests that effective initialization schemes, such as those based on pilot estimators utilizing the method of moments [Moitra and Valiant, 2010, Hsu and Kakade, 2013], are critical to finding good maxima in general GMMs. The key idea in the proof of Theorem 2 is the following: suppose that all the true centers are grouped into two clusters that are extremely far apart, and suppose further that we initialize all the centers in the neighborhood of these two clusters, while ensuring that at least one center lies within each cluster. In this situation, all centers will remain trapped within the cluster in which they were first initialized, irrespective of how many steps we take in the EM algorithm. Intuitively, this suggests that the only favorable initialization schemes (from which convergence to a global maximum is possible) are those in which (1) all initialized centers fall in the neighborhood of exactly one cluster of true centers, and (2) the number of centers initialized within each cluster of true centers exactly matches the number of true centers in that cluster. However, this observation alone only suffices to guarantee that the success probability is polynomially small in $M$ (the sketch following this passage illustrates the counting step). In order to demonstrate that the success probability is exponentially small in $M$, we need to further refine this construction. In more detail, we construct a Gaussian mixture distribution with a recursive structure: at the top level, its true centers can be grouped into two clusters far apart; then, inside each cluster, the true centers can be further grouped into two mini-clusters which are well-separated, and so on. We can repeat this structure for $\Omega(\log M)$ levels.
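Continuing the argument (the next paragraph picks up the recursive construction), here is a tiny Monte Carlo sketch (ours) of the single-level counting step just mentioned: with the true centers split evenly between two far-apart clusters and initial centers drawn i.i.d. from the mixture itself, a favorable initialization must put exactly $M/2$ initial centers in each cluster, an event of probability on the order of $1/\sqrt{M}$, i.e., only polynomially small; the recursion compounds such factors across $\Omega(\log M)$ levels to reach $e^{-\Omega(M)}$.

```python
# Probability that i.i.d. initialization puts exactly M/2 of M centers in
# each of two equally weighted, far-apart clusters. Illustrative sketch.
import numpy as np

rng = np.random.default_rng(4)
for M in (4, 8, 16, 32, 64):
    # each initial center independently lands in cluster 0 or 1 w.p. 1/2
    counts = rng.binomial(M, 0.5, size=200_000)
    p = np.mean(counts == M // 2)
    print(f"M = {M:2d}: P(exactly M/2 per cluster) ~ {p:.3f}")  # ~ c / sqrt(M)
```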
For this GMM instance, even in the case where the number of true centers exactly matches the number of initialized centers in each cluster at the top level, we still need to consider the configuration of the initial centers within the mini-clusters, which further reduces the probability of success for a random initialization. A straightforward calculation then shows that the probability of a favorable random initialization is on the order of $e^{-\Omega(M)}$. The full proof is given in Section A.2. We devote the remainder of this section to a treatment of the first-order EM algorithm. Our first result in this direction shows that convergence to sub-optimal fixed points remains a problem for the first-order EM algorithm, provided the step-size is not chosen too aggressively. Theorem 3. Let $\mu^t$ be the $t$-th iterate of the first-order EM algorithm with stepsize $s \in (0, 1)$, initialized by the random initialization scheme described previously. There exists a universal constant $c$ such that, for any $M \ge 3$ and any constant $C_{\mathrm{gap}} > 0$, there is a well-separated uniform mixture of $M$ unit-variance spherical Gaussians GMM($\mu^*$) with $\mathbb{P}\big(\forall t \ge 0,\ \mathcal{L}(\mu^t) \le \mathcal{L}(\mu^*) - C_{\mathrm{gap}}\big) \ge 1 - e^{-cM}$. (7) We note that the restriction on the step-size is weak, and is satisfied by the theoretically optimal choice for a mixture of two Gaussians in the setting studied by Balakrishnan et al. [2015]. Recall that the first-order EM updates are identical to gradient ascent updates on the log-likelihood function. As a consequence, we can conclude that the most natural local search heuristics for maximizing the log-likelihood (EM and gradient ascent) fail to provide statistically meaningful estimates when initialized randomly, unless we repeat this procedure exponentially many (in $M$) times. Our final result concerns the type of fixed points reached by the first-order EM algorithm in our setting. Pascanu et al. [2014] argue that for high-dimensional optimization problems, the principal difficulty is the proliferation of saddle points, not the existence of poor local maxima. In our setting, however, we can leverage recent results on gradient methods [Lee et al., 2016, Panageas and Piliouras, 2016] to show that the first-order EM algorithm cannot converge to strict saddle points. More precisely: Definition 1 (Strict saddle point, Ge et al. [2015]). For a maximization problem, we say that a critical point $x_{\mathrm{ss}}$ of a function $f$ is a strict saddle point if the Hessian $\nabla^2 f(x_{\mathrm{ss}})$ has at least one strictly positive eigenvalue. With this definition, we have the following: Theorem 4. Let $\mu^t$ be the $t$-th iterate of the first-order EM algorithm with constant stepsize $s \in (0, 1)$, initialized by the random initialization scheme described previously. Then for any $M$-component mixture of spherical Gaussians: (a) The iterates $\mu^t$ converge to a critical point of the log-likelihood. (b) For any strict saddle point $\mu_{\mathrm{ss}}$, we have $\mathbb{P}(\lim_{t\to\infty} \mu^t = \mu_{\mathrm{ss}}) = 0$. Theorems 3 and 4 provide strong support for the claim that the sub-optimal points to which the first-order EM algorithm frequently converges are bad local maxima: the algorithmic failure of the first-order EM algorithm is most likely due to the presence of bad local maxima, as opposed to (strict) saddle points. The proof of Theorem 4 is based on recent work [Lee et al., 2016, Panageas and Piliouras, 2016] on the asymptotic performance of gradient methods.
That work relies on the stable manifold theorem from dynamical systems theory and, applied directly to our setting, would require establishing that the population likelihood $\mathcal{L}$ is smooth. Our proof technique avoids such a smoothness argument; see Section A.4 for the details. The proof technique makes use of specific properties of the first-order EM algorithm that do not hold for the EM algorithm. We conjecture that a similar result is true for the EM algorithm; however, we suspect that a generalized version of the stable manifold theorem will be needed to establish such a result. 4 Conclusion and open problems In this paper, we resolved an open problem of Srebro [2007] by demonstrating the existence of arbitrarily bad local maxima for the population log-likelihood of Gaussian mixture models, even in the idealized situation where each component is uniformly weighted, spherical with unit variance, and well-separated. We further provided some evidence that even in this favorable setting, random initialization schemes for the population EM algorithm fail with high probability. Our results carry over in a straightforward way, via standard empirical process arguments, to settings where a large finite sample is provided. An interesting open question is to resolve the necessity of at least three mixture components in our constructions. In particular, we believe that at least three mixture components are necessary for the log-likelihood to be poorly behaved, and that for a well-separated mixture of two Gaussians the EM algorithm with a random initialization is in fact successful with high probability. In a related vein, understanding the empirical success of EM-style algorithms using random initialization schemes despite their failure on seemingly benign problem instances remains an open problem which we hope to address in future work. Acknowledgements This work was partially supported by Office of Naval Research MURI grant DOD-002888, Air Force Office of Scientific Research Grant AFOSR-FA9550-14-1-001, the Mathematical Data Science program of the Office of Naval Research under grant number N00014-15-1-2670, and National Science Foundation Grant CIF-31712-23800.
1. What is the main contribution and significance of the paper regarding local modes in Gaussian mixture models? 2. How likely are the configurations leading to bad local modes in practice, especially as the number of components or dimensionality increases? 3. Is random initialization practical when data is clearly clustered, particularly when there are well-spaced clusters? 4. Why is converging to a saddle point less desirable than converging to a local optimum? 5. Can the authors provide more clarity on requirement (1) in L233-235, and why it is needed? 6. Can the authors explain the purpose of the example on lines 74-81 and whether it convinces the reader that Sresbro conjecture holds when k = 2? 7. Are there any minor points or typos in the review that the authors would like to address?
Review
Review The paper is concerned with the existence of local modes in the log-likelihood of Gaussian mixture models. The simplest possible mixture model is considered: isotropic Gaussian components with known scale. In addition, the model is well-specified. The authors show that local modes exist, even when infinite data is available, and that these are arbitrarily bad, in terms of the difference in log-likelihood between the local and global optima. The paper focuses on particular configurations, in particular cases where the mixture centres are divided between well-spaced clusters. Under such configurations, the authors show that random initialization will cause EM to miss the global optima almost certainly, as the number of components increases. The paper is well written, with occasional typos. I have two main doubts about the practical relevance of the paper. 1. Theorem 1 shows that the log-likelihood has local modes, even when infinite data is available. The authors show that such a mode exists when a particular configuration of the centres is chosen. My question is: how likely is this configuration to occur in practice? I would feel more confident about the practical relevance of the paper if, for instance, the authors made some assumptions about the distribution of the centres, and then showed that "bad" configurations become increasingly likely as the number of components (k) or the dimensionality of the problem increases. L210 to L218 of the paper say that Theorem 2 addresses these concerns, but I am not sure it does. It would be good to have a clear statement saying how likely "bad" configurations are, under some assumptions. Notice that here I am talking about "bad" configurations (positions of the true centres), not "bad" initializations. 2. In Theorem 2 the initialization of the EM algorithm comes into play. The authors consider initializing the centres of the components by randomly sampling the mixture. With this initialization, and under specific configurations of the centres, the EM very often fails to converge to the global optimum. Now, let us consider the case where the data is generated using 3 Gaussians, 2 of which are close and one far apart. If we have the same number N of data-points from each Gaussian, then 2N points will fall close to the first cluster and N will fall close to the 3rd density. It seems very odd to me that somebody would initialize EM by putting 2 centers where there are only N points and 1 center where there are 2N observations, especially given that the two clusters are very well separated from each other. Is random initialization practically relevant when the data is clearly clustered? L74-81 I am not sure what the example on lines 74-81 is meant to convey. It does not convince me that Srebro's conjecture holds when k = 2, so what is its purpose? L233-235 I understand requirement (2) here, but I don't see why requirement (1) is needed. You mean that, if you initialize a center between 2 clusters, you cannot guarantee that it will move toward the correct cluster? L267-273 The authors prove that EM is unlikely to converge to a saddle point. From a practical point of view, why is this result important? Is converging to a saddle point more or less problematic than converging to a local optimum? MINOR POINTS: L18 "We further show gradient EM algorithm" add "that the" before "gradient". L38 "quality of the estimate" maybe "estimates" is better. L60 "to characterize when it is that efficient algorithms..."
I guess here you really mean "to characterize under which conditions efficient algorithms..." L138 Just to be clear: O(d^1/2)-separated means that all densities are c-separated, with c being O(d^1/2). L172 Isn't this sentence redundant? Please reformulate it or remove it. L177 "this is equivalent of sampling " maybe "equivalent to" is better. L218 I don't understand what a "well-separated" constant is. L220 " initialized by the the random initialization" L225 I don't know what \Omega (k) means. L245 "unfortunate coincident for one single algorithm" use "coincidence" instead. L249 "Recall for uniform weighted" add "that" after "recall" L308 "And claim when" add "that" after "claim"
NIPS
Title Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences Abstract We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with $M \ge 3$ components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro [2007]. Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least $1 - e^{-\Omega(M)}$. We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings. 1 Introduction Finite mixture models are widely used in a variety of statistical settings, as models for heterogeneous populations, as flexible models for multivariate density estimation, and as models for clustering. Their ability to model data as arising from underlying subpopulations provides essential flexibility in a wide range of applications [Titterington, 1985]. This combinatorial structure also creates challenges for statistical and computational theory, and there are many problems associated with estimation of finite mixtures that are still open. These problems are often studied in the setting of Gaussian mixture models (GMMs), reflecting the wide use of GMMs in applications, particularly in the multivariate setting, and this setting will also be our focus in the current paper. Early work [Teicher, 1963] studied the identifiability of finite mixture models, and this problem has continued to attract significant interest (see Allman et al. [2009] for a recent overview). More recent theoretical work has focused on issues related to the use of GMMs for the density estimation problem [Genovese and Wasserman, 2000, Ghosal and Van Der Vaart, 2001]. Focusing on rates of convergence for parameter estimation in GMMs, Chen [1995] established the surprising result that when the number of mixture components is unknown, then the standard $\sqrt{n}$-rate for regular parametric models is not achievable. Recent investigations [Ho and Nguyen, 2015] into exact-fitted, under-fitted, and over-fitted GMMs have characterized the achievable rates of convergence in these settings. From an algorithmic perspective, the dominant practical method for estimating GMMs is the Expectation-Maximization (EM) algorithm [Dempster et al., 1977]. The EM algorithm is an ascent method for maximizing the likelihood, but is only guaranteed to converge to a stationary point of the likelihood function. As such, there are no general guarantees for the quality of the estimate produced via the EM algorithm for Gaussian mixture models.¹ This has led researchers to explore various alternative algorithms which are computationally efficient, and for which rigorous statistical guarantees can be given.
Broadly, these algorithms are based either on clustering [Arora et al., 2005, Dasgupta and Schulman, 2007, Vempala and Wang, 2002, Chaudhuri and Rao, 2008] or on the method of moments [Belkin and Sinha, 2010, Moitra and Valiant, 2010, Hsu and Kakade, 2013]. Although general guarantees have not yet emerged, there has nonetheless been substantial progress on the theoretical analysis of EM and its variations. Dasgupta and Schulman [2007] analyzed a two-round variant of EM, which involved over-fitting the mixture and then pruning extra centers. They showed that this algorithm can be used to estimate Gaussian mixture components whose means are separated by at least $\Omega(d^{1/4})$. Balakrishnan et al. [2015] studied the local convergence of the EM algorithm for a mixture of two Gaussians with $\Omega(1)$-separation. Their results show that global optima have relatively large regions of attraction, but still require that the EM algorithm be provided with a reasonable initialization in order to ensure convergence to a near globally optimal solution. To date, computationally efficient algorithms for estimating a GMM provide guarantees under the strong assumption that the samples come from a mixture of Gaussians—i.e., that the model is well-specified. In practice, however, we never expect the data to exactly follow the generative model, and it is important to understand the robustness of our algorithms to this assumption. In fact, maximum likelihood has favorable properties in this regard—maximum-likelihood estimates are well known to be robust to perturbations in the Kullback-Leibler metric of the generative model [Donoho and Liu, 1988]. This mathematical result motivates further study of EM and other likelihood-based methods from the computational point of view. It would be useful to characterize when efficient algorithms can be used to compute a maximum likelihood estimate, or a solution that is nearly as accurate, and which retains the robustness properties of the maximum likelihood estimate. In this paper, we focus our attention on uniformly weighted mixtures of $M$ isotropic Gaussians. For this favorable setting, Srebro [2007] conjectured that any local maximum of the likelihood function is a global maximum in the limit of infinite samples—in other words, that there are no bad local maxima for the population GMM likelihood function. This conjecture, if true, would provide strong theoretical justification for EM, at least for large sample sizes. For suitably small sample sizes, it is known [Améndola et al., 2015] that configurations of the samples can be constructed which lead to the likelihood function having an unbounded number of local maxima. The conjecture of Srebro [2007] avoids this by requiring that the samples come from the specified GMM, as well as by considering the (infinite-sample-size) population setting. In the context of high-dimensional regression, it has been observed that in some cases, despite having a non-convex objective function, every local optimum of the objective is within a small, vanishing distance of a global optimum [see, e.g., Loh and Wainwright, 2013, Wang et al., 2014]. In these settings, it is indeed the case that for sufficiently large sample sizes there are no bad local optima. A mixture of two spherical Gaussians: A Gaussian mixture model with a single component is simply a Gaussian, so the conjecture of Srebro [2007] holds trivially in this case.
The first interesting case is a Gaussian mixture with two components, for which empirical evidence supports the conjecture that there are no bad local optima. It is possible to visualize the setting when there are only two components and to develop a more detailed understanding of the population likelihood surface. Consider for instance a one-dimensional, equally weighted, unit-variance GMM with true centers $\mu^*_1 = -4$ and $\mu^*_2 = 4$, and consider the log-likelihood as a function of the vector $\mu := (\mu_1, \mu_2)$. Figure 1 shows both the population log-likelihood, $\mu \mapsto \mathcal{L}(\mu)$, and the negative 2-norm of its gradient, $\mu \mapsto -\|\nabla \mathcal{L}(\mu)\|_2$. Observe that the only local maxima are the vectors $(-4, 4)$ and $(4, -4)$, which are both also global maxima. The only remaining critical point is $(0, 0)$, which is a saddle point. Although points of the form $(0, R)$ and $(R, 0)$ have small gradient when $|R|$ is large, the gradient is not exactly zero for any finite $R$. Rigorously resolving the question of existence or non-existence of local maxima for the setting when $M = 2$ remains an open problem. In the remainder of our paper, we focus our attention on the setting where there are more than two mixture components and attempt to develop a broader understanding of likelihood surfaces for these models, as well as the consequences for algorithms. Our first contribution is a negative answer to the open question of Srebro [2007]. We construct a GMM which is a uniform mixture of three spherical, unit-variance, well-separated Gaussians whose population log-likelihood function contains local maxima. We further show that the log-likelihood of these local maxima can be arbitrarily worse than that of the global maxima. This result immediately implies that any local search algorithm cannot exhibit global convergence (meaning convergence to a global optimum from all possible starting points), even on well-separated mixtures of Gaussians. The mere existence of bad local maxima is not a practical concern unless it turns out that natural algorithms are frequently trapped in these bad local maxima. Our second main result shows that the EM algorithm, as well as a variant thereof known as the first-order EM algorithm, with random initialization, converges to a bad critical point with exponentially high probability. In more detail, we consider the following practical scheme for parameter estimation in an $M$-component Gaussian mixture: (a) Draw $M$ i.i.d. points $\mu_1, \ldots, \mu_M$ uniformly at random from the sample set. (b) Run the EM or first-order EM algorithm to estimate the model parameters, using $\mu_1, \ldots, \mu_M$ as the initial centers. We note that in the limit of infinite samples, the initialization scheme we consider is equivalent to selecting $M$ initial centers i.i.d. from the underlying mixture distribution. We show that for a universal constant $c > 0$, with probability at least $1 - e^{-cM}$, the EM and first-order EM algorithms converge to a suboptimal critical point, whose log-likelihood could be arbitrarily worse than that of the global maximum. ¹ In addition to issues of convergence to non-maximal stationary points, solutions of infinite likelihood exist for GMMs where both the location and scale parameters are estimated. In practice, several methods exist to avoid such solutions. In this paper, we avoid this issue by focusing on GMMs in which the scale parameters are fixed.
Conversely, in order to find a solution with satisfactory log-likelihood via this initialization scheme, one needs to repeat the above scheme exponentially many (in $M$) times, and then select the solution with the highest log-likelihood. This result strongly indicates that repeated random initialization followed by local search (via either EM or its first-order variant) can fail to produce useful estimates under reasonable constraints on computational complexity. We further prove that under the same random initialization scheme, the first-order EM algorithm with a suitable stepsize almost surely does not converge to a strict saddle point. This fact strongly suggests that the failure of local search methods for the GMM model is due mainly to the existence of bad local optima, and not due to the presence of (strict) saddle points. Our proofs introduce new techniques to reason about the structure of the population log-likelihood, and in particular to show the existence of bad local optima. We expect that these general ideas will aid in developing a better understanding of the behavior of algorithms for non-convex optimization. From a practical standpoint, our results strongly suggest that careful initialization is required for local search methods, even in large-sample settings, and even for extremely well-behaved mixture models. The remainder of this paper is organized as follows. In Section 2, we introduce GMMs, the EM algorithm, its first-order variant, and we formally set up the problem we consider. In Section 3, we state our main theoretical results and develop some of their implications. Section A is devoted to the proofs of our results, with some of the more technical aspects deferred to the appendices. 2 Background and Preliminaries In this section, we formally define the Gaussian mixture model that we study in the paper. We then describe the EM algorithm, the first-order EM algorithm, as well as the form of random initialization that we analyze. Throughout the paper, we use $[M]$ to denote the set $\{1, 2, \ldots, M\}$, and $\mathcal{N}(\mu, \Sigma)$ to denote the $d$-dimensional Gaussian distribution with mean vector $\mu$ and covariance matrix $\Sigma$. We use $\phi(\cdot \mid \mu, \Sigma)$ to denote the probability density function of the Gaussian distribution with mean vector $\mu$ and covariance matrix $\Sigma$: $\phi(x \mid \mu, \Sigma) := \frac{1}{\sqrt{(2\pi)^d \det(\Sigma)}} e^{-\frac{1}{2}(x-\mu)^\top \Sigma^{-1}(x-\mu)}$. (1) 2.1 Gaussian Mixture Models A $d$-dimensional Gaussian mixture model (GMM) with $M$ components can be specified by a collection $\boldsymbol{\mu}^* = \{\mu^*_1, \ldots, \mu^*_M\}$ of $d$-dimensional mean vectors, a vector $\lambda^* = (\lambda^*_1, \ldots, \lambda^*_M)$ of non-negative mixture weights that sum to one, and a collection $\boldsymbol{\Sigma}^* = \{\Sigma^*_1, \ldots, \Sigma^*_M\}$ of covariance matrices. Given these parameters, the density function of a Gaussian mixture model takes the form $p(x \mid \lambda^*, \boldsymbol{\mu}^*, \boldsymbol{\Sigma}^*) = \sum_{i=1}^{M} \lambda^*_i \, \phi(x \mid \mu^*_i, \Sigma^*_i)$, where the Gaussian density function $\phi$ was previously defined in equation (1). In this paper, we focus on the idealized situation in which every mixture component is equally weighted, and the covariance of each mixture component is the identity. This leads to a mixture model of the form $p(x \mid \boldsymbol{\mu}^*) := \frac{1}{M} \sum_{i=1}^{M} \phi(x \mid \mu^*_i, I)$, (2) which we denote by GMM($\boldsymbol{\mu}^*$). In this case, the only parameters to be estimated are the mean vectors $\boldsymbol{\mu}^* = \{\mu^*_i\}_{i=1}^{M}$ of the $M$ components. The difficulty of estimating a Gaussian mixture distribution depends on the amount of separation between the mean vectors. More precisely, for a given parameter $\xi > 0$, we say that the GMM($\boldsymbol{\mu}^*$) model is $\xi$-separated if $\|\mu^*_i - \mu^*_j\|_2 \ge \xi$ for all distinct pairs $i, j \in [M]$. (3)
We say that the mixture is well-separated if condition (3) holds for some $\xi = \Omega(\sqrt{d})$. Suppose that we observe an i.i.d. sequence $\{x_\ell\}_{\ell=1}^{n}$ drawn according to the distribution GMM($\boldsymbol{\mu}^*$), and our goal is to estimate the unknown collection of mean vectors $\boldsymbol{\mu}^*$. The sample-based log-likelihood function $\mathcal{L}_n$ is given by $\mathcal{L}_n(\mu) := \frac{1}{n} \sum_{\ell=1}^{n} \log \Big( \frac{1}{M} \sum_{i=1}^{M} \phi(x_\ell \mid \mu_i, I) \Big)$. (4a) As the sample size $n$ tends to infinity, this sample likelihood converges to the population log-likelihood function $\mathcal{L}$ given by $\mathcal{L}(\mu) = \mathbb{E}_{\mu^*} \log \Big( \frac{1}{M} \sum_{i=1}^{M} \phi(X \mid \mu_i, I) \Big)$. (4b) Here $\mathbb{E}_{\mu^*}$ denotes expectation taken over the random vector $X$ drawn according to the model GMM($\boldsymbol{\mu}^*$). A straightforward implication of the positivity of the KL divergence is that the population likelihood function is in fact maximized at $\boldsymbol{\mu}^*$ (along with permutations thereof, depending on how we index the mixture components). On the basis of empirical evidence, Srebro [2007] conjectured that this population log-likelihood is in fact well-behaved, in the sense of having no spurious local optima. In Theorem 1, we show that this intuition is false, and provide a simple example of a mixture of $M = 3$ well-separated Gaussians in dimension $d = 1$ whose population log-likelihood function has arbitrarily bad local optima. 2.2 Expectation-Maximization Algorithm A natural way to estimate the mean vectors $\boldsymbol{\mu}^*$ is by attempting to maximize the sample log-likelihood defined by the samples $\{x_\ell\}_{\ell=1}^{n}$. For a non-degenerate Gaussian mixture model, the log-likelihood is non-concave. Rather than attempting to maximize the log-likelihood directly, the EM algorithm proceeds by iteratively maximizing a lower bound on the log-likelihood. It does so by alternating between two steps: 1. E-step: For each $i \in [M]$ and $\ell \in [n]$, compute the membership weight $w_i(x_\ell) = \frac{\phi(x_\ell \mid \mu_i, I)}{\sum_{j=1}^{M} \phi(x_\ell \mid \mu_j, I)}$. 2. M-step: For each $i \in [M]$, update the mean vector $\mu_i$ via $\mu_i^{\mathrm{new}} = \frac{\sum_{\ell=1}^{n} w_i(x_\ell) \, x_\ell}{\sum_{\ell=1}^{n} w_i(x_\ell)}$. In the population setting, the M-step becomes: $\mu_i^{\mathrm{new}} = \frac{\mathbb{E}_{\mu^*}[w_i(X) X]}{\mathbb{E}_{\mu^*}[w_i(X)]}$. (5) Intuitively, the M-step updates the mean vector of each Gaussian component to be a weighted centroid of the samples, for appropriately chosen weights. First-order EM updates: For a general latent variable model with observed variables $X = x$, latent variables $Z$, and model parameters $\theta$, by Jensen's inequality, the log-likelihood function can be lower bounded as $\log \mathbb{P}(x \mid \theta') \ge \underbrace{\mathbb{E}_{Z \sim \mathbb{P}(\cdot \mid x; \theta)} \log \mathbb{P}(x, Z \mid \theta')}_{:= Q(\theta' \mid \theta)} - \mathbb{E}_{Z \sim \mathbb{P}(\cdot \mid x; \theta)} \log \mathbb{P}(Z \mid x; \theta')$. Each step of the EM algorithm can also be viewed as optimizing over this lower bound, which gives: $\theta^{\mathrm{new}} := \arg\max_{\theta'} Q(\theta' \mid \theta)$. There are many variants of the EM algorithm which rely on partial updates at each iteration instead of finding the exact optimum of $Q(\theta' \mid \theta)$. One important example, analyzed in the work of Balakrishnan et al. [2015], is the first-order EM algorithm. The first-order EM algorithm takes a step along the gradient of the function $Q(\theta' \mid \theta)$ (with respect to its first argument) in each iteration. Concretely, given a step size $s > 0$, the first-order EM updates can be written as: $\theta^{\mathrm{new}} = \theta + s \, \nabla_{\theta'} Q(\theta' \mid \theta) \big|_{\theta' = \theta}$. In the case of the model GMM($\boldsymbol{\mu}^*$), the gradient EM updates on the population objective take the form $\mu_i^{\mathrm{new}} = \mu_i + s \, \mathbb{E}_{\mu^*}\big[ w_i(X) (X - \mu_i) \big]$. (6) This update turns out to be equivalent to gradient ascent on the population likelihood $\mathcal{L}$ with step size $s > 0$ (see the paper by Balakrishnan et al. [2015] for details).
2.3 Random Initialization Since the log-likelihood function is non-concave, the point to which the EM algorithm converges depends on the initial value of $\mu$. In practice, it is standard to choose these values by some form of random initialization. For instance, one method is to initialize the mean vectors by sampling uniformly at random from the data set $\{x_\ell\}_{\ell=1}^{n}$. This scheme is intuitively reasonable, because it automatically adapts to the locations of the true centers. If the true centers have large mutual distances, then the initialized centers will also be scattered. Conversely, if the true centers concentrate in a small region of the space, then the initialized centers will also be close to each other. In practice, initializing $\mu$ by uniformly drawing from the data is often more reasonable than drawing $\mu$ from a fixed distribution. In this paper, we analyze the EM algorithm and its variants at the population level. We focus on the above practical initialization scheme of selecting $\mu$ uniformly at random from the sample set. In the idealized population setting, this is equivalent to sampling the initial values of $\mu$ i.i.d. from the distribution GMM($\boldsymbol{\mu}^*$). Throughout this paper, we refer to this particular initialization strategy as random initialization. 3 Main results We now turn to the statements of our main results, along with a discussion of some of their consequences. 3.1 Structural properties In our first main result (Theorem 1), for any $M \ge 3$, we exhibit an $M$-component mixture of Gaussians in dimension $d = 1$ for which the population log-likelihood has a bad local maximum whose log-likelihood is arbitrarily worse than that attained by the true parameters $\boldsymbol{\mu}^*$. This result provides a negative answer to the conjecture of Srebro [2007]. Theorem 1. For any $M \ge 3$ and any constant $C_{\mathrm{gap}} > 0$, there is a well-separated uniform mixture of $M$ unit-variance spherical Gaussians GMM($\boldsymbol{\mu}^*$) and a local maximum $\mu'$ such that $\mathcal{L}(\mu') \le \mathcal{L}(\mu^*) - C_{\mathrm{gap}}$. In order to illustrate the intuition underlying Theorem 1, we give a geometrical description of our construction for $M = 3$. Suppose that the true centers $\mu^*_1$, $\mu^*_2$, and $\mu^*_3$ are such that the distance between $\mu^*_1$ and $\mu^*_2$ is much smaller than the respective distances from $\mu^*_1$ to $\mu^*_3$, and from $\mu^*_2$ to $\mu^*_3$. Now, consider the point $\mu := (\mu_1, \mu_2, \mu_3)$, where $\mu_1 = (\mu^*_1 + \mu^*_2)/2$, and the points $\mu_2$ and $\mu_3$ are both placed at the true center $\mu^*_3$. This assignment does not maximize the population log-likelihood, because only one center is assigned to the two Gaussian components centered at $\mu^*_1$ and $\mu^*_2$, while two centers are assigned to the Gaussian component centered at $\mu^*_3$. However, when the components are well-separated, we are able to show that there is a local maximum in the neighborhood of this configuration. In order to establish the existence of a local maximum, we first define a neighborhood of this configuration ensuring that it does not contain any global maximum, and then prove that the log-likelihood on the boundary of this neighborhood is strictly smaller than that of the sub-optimal configuration $\mu$. Since the log-likelihood is bounded from above, this neighborhood must contain at least one maximum of the log-likelihood. Since the global maxima are not in this neighborhood by construction, any maximum in this neighborhood must be a local maximum. See Section A for a detailed proof.
3.2 Algorithmic consequences

An important implication of Theorem 1 is that any iterative algorithm, such as EM or gradient ascent, that attempts to maximize the likelihood based on local updates cannot be globally convergent; that is, it cannot converge to (near) globally optimal solutions from an arbitrary initialization. Indeed, if any such algorithm is initialized at the local maximum, it will remain trapped. However, one might argue that this conclusion is overly pessimistic, in that we have only shown that these algorithms fail when initialized at a certain (adversarially chosen) point. Indeed, the mere existence of bad local maxima need not be a practical concern unless it can be shown that a typical optimization algorithm will frequently converge to one of them. The following result shows that the EM algorithm, when applied to the population likelihood and initialized according to the random scheme described in Section 2.3, converges to a bad critical point with high probability.

Theorem 2. Let µ^t be the t-th iterate of the EM algorithm initialized by the random initialization scheme described previously. There exists a universal constant c such that, for any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians GMM(µ∗) with

$$\mathbb{P}\big[\, \forall t \geq 0,\;\; \mathcal{L}(\mu^t) \leq \mathcal{L}(\mu^*) - C_{\mathrm{gap}} \,\big] \;\geq\; 1 - e^{-cM}.$$

Theorem 2 shows that, for the specified configuration µ∗, the probability of success for the EM algorithm is exponentially small as a function of M. As a consequence, in order to guarantee recovering a global maximum with at least constant probability, the EM algorithm with random initialization must be executed at least e^{Ω(M)} times. This result strongly suggests that effective initialization schemes, such as those based on pilot estimators utilizing the method of moments [Moitra and Valiant, 2010, Hsu and Kakade, 2013], are critical to finding good maxima in general GMMs.

The key idea in the proof of Theorem 2 is the following: suppose that all the true centers are grouped into two clusters that are extremely far apart, and suppose further that we initialize all the centers in the neighborhood of these two clusters, while ensuring that at least one center lies within each cluster. In this situation, all centers will remain trapped within the cluster in which they were first initialized, irrespective of how many steps we take in the EM algorithm. Intuitively, this suggests that the only favorable initialization schemes (from which convergence to a global maximum is possible) are those in which (1) every initialized center falls in the neighborhood of exactly one cluster of true centers, and (2) the number of centers initialized within each cluster of true centers exactly matches the number of true centers in that cluster. However, this observation alone only suffices to guarantee that the success probability is polynomially small in M. In order to demonstrate that the success probability is exponentially small in M, we need to further refine this construction. In more detail, we construct a Gaussian mixture distribution with a recursive structure: at the top level, its true centers can be grouped into two clusters far apart; inside each cluster, the true centers can be further grouped into two well-separated mini-clusters; and so on. We can repeat this structure for Ω(log M) levels.
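The counting bottleneck behind this argument is easy to quantify in its crudest form. If the true centers are so well separated that each initial draw effectively lands in the cluster of one true center chosen uniformly at random, then in the extreme case where the recursion is carried all the way down to individual centers, success requires every one of the M leaf clusters to receive exactly one of the M initial centers, an event of probability M!/M^M ≈ √(2πM)·e^{−M}. The snippet below simply evaluates this quantity; it illustrates the flavor of the e^{−Ω(M)} bound, not the actual proof, which controls a more delicate event.

```python
from math import exp, factorial, pi, sqrt

# P(each of M equally likely leaf clusters receives exactly one of M i.i.d.
# initial centers) = M!/M^M.  Stirling's formula puts this at roughly
# sqrt(2*pi*M) * e^{-M}: exponentially small in M.
for M in [3, 5, 10, 20, 40]:
    exact = factorial(M) / M**M
    stirling = sqrt(2 * pi * M) * exp(-M)
    print(f"M={M:3d}   exact={exact:.3e}   Stirling approx={stirling:.3e}")
```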
For this GMM instance, even in the case where the number of true centers exactly matches the number of initialized centers in each cluster at the top level, we still need to consider the configuration of the initial centers within the mini-clusters, which further reduces the probability of success for a random initialization. A straightforward calculation then shows that the probability of a favorable random initialization is of order e^{−Ω(M)}. The full proof is given in Section A.2.

We devote the remainder of this section to a treatment of the first-order EM algorithm. Our first result in this direction shows that convergence to sub-optimal fixed points remains a problem for the first-order EM algorithm, provided the step size is not chosen too aggressively.

Theorem 3. Let µ^t be the t-th iterate of the first-order EM algorithm with step size s ∈ (0, 1), initialized by the random initialization scheme described previously. There exists a universal constant c such that, for any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians GMM(µ∗) with

$$\mathbb{P}\big(\, \forall t \geq 0,\;\; \mathcal{L}(\mu^t) \leq \mathcal{L}(\mu^*) - C_{\mathrm{gap}} \,\big) \;\geq\; 1 - e^{-cM}. \tag{7}$$

We note that the restriction on the step size is weak, and is satisfied by the theoretically optimal choice for a mixture of two Gaussians in the setting studied by Balakrishnan et al. [2015]. Recall that the first-order EM updates are identical to gradient ascent updates on the log-likelihood function. As a consequence, we can conclude that the most natural local search heuristics for maximizing the log-likelihood (EM and gradient ascent) fail to provide statistically meaningful estimates when initialized randomly, unless we repeat the procedure exponentially many (in M) times.

Our final result concerns the type of fixed points reached by the first-order EM algorithm in our setting. Pascanu et al. [2014] argue that for high-dimensional optimization problems, the principal difficulty is the proliferation of saddle points, not the existence of poor local maxima. In our setting, however, we can leverage recent results on gradient methods [Lee et al., 2016, Panageas and Piliouras, 2016] to show that the first-order EM algorithm cannot converge to strict saddle points. More precisely:

Definition 1 (Strict saddle point, Ge et al. [2015]). For a maximization problem, we say that a critical point x_ss of a function f is a strict saddle point if the Hessian ∇²f(x_ss) has at least one strictly positive eigenvalue.

With this definition, we have the following:

Theorem 4. Let µ^t be the t-th iterate of the first-order EM algorithm with constant step size s ∈ (0, 1), initialized by the random initialization scheme described previously. Then for any M-component mixture of spherical Gaussians:
(a) The iterates µ^t converge to a critical point of the log-likelihood.
(b) For any strict saddle point µ_ss, we have $\mathbb{P}(\lim_{t\to\infty} \mu^t = \mu_{ss}) = 0$.

Theorems 3 and 4 provide strong support for the claim that the sub-optimal points to which the first-order EM algorithm frequently converges are bad local maxima: the algorithmic failure of the first-order EM algorithm is most likely due to the presence of bad local maxima, as opposed to (strict) saddle points. The proof of Theorem 4 is based on recent work [Lee et al., 2016, Panageas and Piliouras, 2016] on the asymptotic performance of gradient methods.
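Definition 1 also suggests a simple numerical test for classifying a candidate fixed point: estimate the Hessian of the log-likelihood there and check its largest eigenvalue. The finite-difference helper below is our own generic sketch, not from the paper; here f would be the sample log-likelihood viewed as a function of the stacked mean vector.

```python
import numpy as np

def max_hessian_eigenvalue(f, x, eps=1e-4):
    """Largest eigenvalue of the numerical Hessian of f at x.  At a critical
    point, a strictly positive value certifies a strict saddle in the sense
    of Definition 1 (for a maximization problem)."""
    d = len(x)
    H = np.zeros((d, d))
    I = np.eye(d)
    for a in range(d):
        for b in range(d):
            ea, eb = eps * I[a], eps * I[b]
            # central-difference estimate of the mixed partial d^2 f / dx_a dx_b
            H[a, b] = (f(x + ea + eb) - f(x + ea - eb)
                       - f(x - ea + eb) + f(x - ea - eb)) / (4 * eps**2)
    H = 0.5 * (H + H.T)              # symmetrize away discretization noise
    return np.linalg.eigvalsh(H).max()
```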
The work of Lee et al. [2016] and Panageas and Piliouras [2016] reposes on the stable manifold theorem from dynamical systems theory and, applied directly to our setting, would require establishing that the population likelihood L is smooth. Our proof technique avoids such a smoothness argument; see Section A.4 for the details. The proof technique makes use of specific properties of the first-order EM algorithm that do not hold for the EM algorithm. We conjecture that a similar result is true for the EM algorithm; however, we suspect that a generalized version of the stable manifold theorem will be needed to establish such a result.

4 Conclusion and open problems

In this paper, we resolved an open problem of Srebro [2007] by demonstrating the existence of arbitrarily bad local maxima for the population log-likelihood of a Gaussian mixture model, even in the idealized situation where each component is uniformly weighted, spherical with unit variance, and well-separated. We further provided some evidence that, even in this favorable setting, random initialization schemes for the population EM algorithm are likely to fail with high probability. Our results carry over in a straightforward way, via standard empirical process arguments, to settings where a large finite sample is provided.

An interesting open question is to resolve the necessity of at least three mixture components in our constructions. In particular, we believe that at least three mixture components are necessary for the log-likelihood to be poorly behaved, and that for a well-separated mixture of two Gaussians the EM algorithm with a random initialization is in fact successful with high probability. In a related vein, understanding the empirical success of EM-style algorithms using random initialization schemes, despite their failure on seemingly benign problem instances, remains an open problem which we hope to address in future work.

Acknowledgements

This work was partially supported by Office of Naval Research MURI grant DOD-002888, Air Force Office of Scientific Research Grant AFOSR-FA9550-14-1-001, the Mathematical Data Science program of the Office of Naval Research under grant number N00014-15-1-2670, and National Science Foundation Grant CIF-31712-23800.
1. What is the focus of the paper regarding Gaussian Mixture Models?
2. What are the three main results presented in the paper, and how do they relate to the local maxima of the log-likelihood function?
3. How does the paper tie the mean configuration of GMM to the local maxima of its log-likelihood?
4. What is the significance of the key idea behind Theorem 2, and how does it build upon Theorem 1?
5. What is the potential impact of the paper's results on both theoretical and practical aspects of clustering tasks?
6. How does the reviewer assess the technical quality, novelty, originality, potential usefulness, and clarity of the paper's content?
Review
Summary: This paper studies the structure of local maxima of the log-likelihood function of Gaussian Mixture Models (with equal mixing weights and identity covariance). It presents three related results in this vein:
1. It provides a counter-example showing that there exist local maxima (when k = 3 and d = 1) whose likelihood is arbitrarily worse than the global maximum.
2. It shows that, with random initialization, the probability of not converging to a bad local maximum can be exponentially small in k when the means of the Gaussians are configured in a special way.
3. It also shows that gradient EM does not converge to a saddle point of the log-likelihood almost surely.

Technical quality: I only checked the proofs in the main paper (Theorem 1), and I think the proofs are sound. Having read the analysis of Theorem 1, I think the proof idea for Theorem 2 also seems reasonable, although I didn't check the details in the Appendix. All three theorems are well interpreted after their statements.

Novelty & Originality: The paper ties the mean configuration of a GMM to the local maxima of its log-likelihood. I think this observation, although intuitive, is very original. And I think the key idea of Theorem 2, built on Theorem 1 (that the hierarchical grouping structure of the true means determines the local maxima for k > 3), is also very clever.

Potential impact & Usefulness: I think the results could have a good impact on both theory and practice: in theory, it could inspire the study of the local optima structure of related clustering tasks, such as k-means problems, and it would also be interesting to examine whether the result in Lemma 7 holds for d > 1; in practice, it shows the importance of seeding enough points in each true cluster, and may provide some insights for designing new algorithms.

Clarity and presentation: The paper does a very good job of presenting the results and explaining the intuition behind the analysis. It clearly introduces the problem setup, and provides intuition and interpretation for all of the analysis.
NIPS
1. What is the main contribution of the paper regarding Gaussian mixture models?
2. What are the strengths of the paper, particularly in its clarity and balance between theory and intuition?
3. Do you have any concerns or suggestions regarding the supplementary materials or the presentation of the results?
4. How does the reviewer assess the significance and impact of the paper's findings on non-convex optimization and its applications?
Review
Summary: The paper presents an answer to the open question asked by Srebro in his 2007 paper "Are there local maxima in the infinite-sample likelihood of Gaussian mixture estimation?". This paper answers the question in the negative through the construction of a general class of counterexamples. The authors begin by outlining the schemes by which the likelihood is maximized in the setting of Gaussian Mixture Models. These include the EM and gradient EM algorithms. The authors also address the issue of initialization and give a common choice. The authors then state their primary theorems, which give the counterexample to the question by Srebro [2007]. The authors further show that even with the common choice of random initialization, the EM and gradient EM algorithms converge to the non-optimal maxima with exponentially high probability. The authors then give some intuition for their proofs by proving a simple case (k = 3, d = 1).

This paper is very clear and well written, and balances well the space requirements of the paper with the need to give intuition for the theoretical results. The question answered by the authors is of great importance for further understanding the issues that arise with non-convex optimization, even in the asymptotic limit for simple distributions. I didn't carefully check the supplementary material, but some estimates of the constants in the probability term that they give would be nice. This is important because it seems to me that one would never consider k to be too large. Estimates of the constants would be nice, to see when we could start to observe convergence to bad solutions in the asymptotic limit (i.e., where is the phase transition?). If the constants are too complicated or do not make sense, supporting simulations would be nice, to at least show that this is a phenomenon we can observe. However, this issue does not detract from the novelty or impact of this paper.
NIPS
Title Distributionally Robust Optimization with Data Geometry

Abstract
Distributionally Robust Optimization (DRO) serves as a robust alternative to empirical risk minimization (ERM); it optimizes the worst-case distribution in an uncertainty set typically specified by distance metrics including f-divergence and the Wasserstein distance. The metrics defined in the ostensible high-dimensional space lead to exceedingly large uncertainty sets, resulting in the underperformance of most existing DRO methods. It has been well documented that high-dimensional data approximately resides on low-dimensional manifolds. In this work, to further constrain the uncertainty set, we incorporate data geometric properties into the design of distance metrics, obtaining our novel Geometric Wasserstein DRO (GDRO). Empowered by Gradient Flow, we derive a generically applicable approximate algorithm for the optimization of GDRO, and provide the bounded error rate of the approximation as well as the convergence rate of our algorithm. We also theoretically characterize the edge cases where certain existing DRO methods are degenerate cases of GDRO. Extensive experiments justify the superiority of our GDRO over existing DRO methods in multiple settings with strong distributional shifts, and confirm that the uncertainty set of GDRO adapts to data geometry.

1 Introduction

Machine learning algorithms with empirical risk minimization often suffer from poor generalization performance under distributional shifts in real applications, owing to widespread latent heterogeneity, domain shifts, data selection bias, etc. Machine learning algorithms are therefore expected to achieve uniformly good performance against potential distributional shifts, especially in high-stakes applications. Towards this goal, distributionally robust optimization (DRO) [27, 23, 29, 4, 13, 11], stemming from the literature on robust learning, has been proposed and developed in recent years. It optimizes the worst-case distribution within an uncertainty set P(P_tr) lying around the training distribution P_tr. When the testing distribution P_te is contained in P(P_tr), DRO can guarantee the generalization performance on P_te.

In principle, the effectiveness of DRO heavily depends on the rationality of its uncertainty set P(P_tr), which is commonly formulated as a ball surrounding the training distribution endowed with a certain distance metric. An ideal uncertainty set should be constituted by all realistic distributions that may be encountered in test environments. However, existing DRO methods adopting the Wasserstein distance (i.e., WDRO methods [27, 29, 4, 13]) or f-divergence distance (i.e., f-DRO methods [23, 11]) tend to generate over-flexible uncertainty sets that incorporate unrealistic distributions far beyond the ideal uncertainty set [15, 13]. As such unrealistic distributions must violate the underlying predicting mechanism, they are prone to be the worst case and attract much optimization energy in the DRO framework, making the learned model deviate from the true predicting mechanism. Here we argue that the unrealistic distributions mentioned above originate from the distance metrics' inherent ignorance of data geometry, as illustrated in Figure 1.

∗Equal Contributions  †Corresponding Author
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
The Euclidean-norm transportation cost measured by the L2-Wasserstein metric leads to a straight-line transportation path, as shown in Figure 1(a) (red dotted line), which deviates from the data manifold (blue region). Therefore, WDRO methods tend to create unrealistic samples beyond the underlying data manifold, resulting in unrealistic distributions. f-divergence can also be interpreted as a data-geometry-independent measure of the transportation cost confined to the support of P_tr. Taking χ²-divergence as an example, the cost of transferring per unit of probability weight between samples is a constant, like a virtual tunnel (yellow dotted line in Figure 1(a)). In such a case, noisy samples (e.g., outliers or samples with label noise) are more prone to be the worst case and thus gather much larger weights than normal samples. The resulting distribution is obviously unrealistic.

To mitigate the problem, it is imperative to introduce a new distance metric incorporating data geometry to further constrain the uncertainty set and avoid the undesired cases. As illustrated in Figure 1(a), considering the common assumption that data lie on a low-dimensional manifold [24, 30, 2], we expect the probability density transportation path (the blue dotted line) to be restricted within the data manifold (the blue region). In this way, the uncertainty set (i.e., the Geometric Wasserstein set as shown in Figure 1(b)) inherently excludes the distributions beyond the data manifold. Furthermore, it becomes harder to gather probability weight on isolated noisy samples, which also mitigates the undesired cases in f-DRO.

In this work, we propose a novel Geometric Wasserstein DRO (GDRO) method by exploiting the discrete Geometric Wasserstein distance [6], which measures the transportation cost of probability density along the geodesic in a metric space. As the Geometric Wasserstein distance does not enjoy an analytical expression, we derive an approximate algorithm from the Gradient Flow in the Finsler manifold endowed with the Geometric Wasserstein distance (in Section 3.2). We further theoretically establish an exponentially vanishing error rate for our approximation as well as a O(1/√T) convergence rate for our algorithm, and characterize the edge cases where GDRO degenerates to f-DRO or Wasserstein DRO (in Sections 3.3 and 3.4). Comprehensive experiments encompassing various distributional shifts, including sub-population shifts and class difficulty shifts, validate the effectiveness of our proposed GDRO (in Section 4). We also observe a lower Dirichlet energy (i.e., higher smoothness) of GDRO's estimated worst-case distribution w.r.t. the data manifold compared with existing DRO methods, justifying its adaptability to data geometry.

2 Preliminaries on Distributionally Robust Optimization

Notations. X ∈ 𝒳 denotes the covariates, Y ∈ 𝒴 denotes the target, and f_θ(·) : 𝒳 → 𝒴 is the predictor parameterized by θ ∈ Θ. P_tr(X, Y) and P_te(X, Y), abbreviated as P_tr and P_te respectively, represent the joint training distribution and test distribution. The random variable of data points is denoted by Z = (X, Y) ∈ 𝒵.

Distributionally Robust Optimization (DRO) is formulated as:

$$\theta^* = \arg\min_{\theta \in \Theta} \; \sup_{P \in \mathcal{P}(P_{tr})} \; \mathbb{E}_P[\ell(f_\theta(X), Y)], \tag{1}$$

where ℓ is a loss function, $\mathcal{P}(P_{tr}) = \{P : \mathrm{Dist}(P, P_{tr}) \leq \epsilon\}$ characterizes the uncertainty set surrounding the training distribution, restricted by a radius ϵ, and Dist is a distance metric between probability distributions.
Most works specify the Dist metric as the f-divergence [23, 11] or the Wasserstein distance [27, 29, 4, 26, 12].

f-divergence DRO (abbr. f-DRO). The f-divergence is defined as $D_f(P\|Q) = \int f(dP/dQ)\, dQ$, where f(·) is a convex function with f(1) = 0. Two typical instances of f-divergences are the KL-divergence (f(t) = t log t) and the χ²-divergence (f(t) = (t − 1)²). [23] theoretically demonstrates the equivalence between χ²-DRO and the variance-regularized empirical risk minimization (ERM) problem, and [11] derives the optimization algorithm for a family of f-DRO. However, as proven in [15], f-DRO faces the over-pessimism problem and ends up giving a classifier that only fits the given training distribution, which we attribute to the ignorance of data geometry. As shown in Figure 1(a), f-divergence only cares about the probability of each sample (only dP and dQ occur). However, data geometry information is crucial for a reasonable uncertainty set, since it is well accepted that data lie on a low-dimensional manifold and adjacent data points have similar degrees of importance. For example, for heterogeneous data, while one hopes to focus on some sub-population (e.g., put more weight on a group of data), without data geometric information the distribution in the f-divergence ball is prone to focus only on some isolated samples with higher noise (as shown in Figure 3(a)). And in Figure 3(b), we find the worst-case distribution of f-DRO (with KL-divergence) is not smooth (it has larger Dirichlet energy) w.r.t. the data manifold.

Wasserstein DRO (abbr. WDRO). Compared with the f-divergence ball, which does not extend the support of the training distribution, the uncertainty set built with the Wasserstein distance allows for the extension of the support [27, 29, 4]. [26, 12, 4] convert the original problem into a regularized ERM problem, but this is suitable only for a limited class of loss functions and transportation cost functions. [29] proposes an approximate optimization method for Wasserstein DRO that can be applied to deep neural networks, which protects models from adversarial attacks. However, the flexibility of the Wasserstein ball also causes an over-pessimistic estimation under strong distributional shifts [13], where the created samples are too noisy to obtain a confident model. As demonstrated in Figure 3(a), WDRO adds much more noise to the data and thus hurts generalization performance in practice.

Therefore, to mitigate the over-pessimism problem of DRO, we propose to incorporate geometric properties into the uncertainty set. Compared with traditional shape-constrained methods [17, 16] for multivariate extreme event analysis that use unimodality to constitute the uncertainty set, our proposed method characterizes the data manifold in a data-driven way and incorporates it into the DRO framework intrinsically via the Geometric Wasserstein distance metric, which is also compatible with manifold learning and graph learning methods.

3 Proposed Method

In this work, we propose Distributionally Robust Optimization with the Geometric Wasserstein distance (GDRO). In the remainder of this section, we first introduce the Geometric Wasserstein distance and propose the overall objective of GDRO; then we derive an approximate algorithm for optimization utilizing Gradient Flow; finally, some theoretical properties are proved and connections with existing DRO methods are demonstrated.
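Before developing GDRO, the f-DRO failure mode described above is easy to see concretely. For the KL-penalized inner maximization, the worst case has the standard exponential-tilting closed form p_i ∝ exp(ℓ_i/λ), so a couple of high-loss outliers can absorb almost all of the probability mass. The sketch below is our own (the temperature λ and the toy losses are illustrative); it is the textbook solution of the penalized problem, not the algorithm of [11].

```python
import numpy as np

def kl_dro_weights(losses, lam=0.5):
    """Maximizer of  <p, losses> - lam * KL(p || uniform)  over the simplex:
    the exponential tilt p_i proportional to exp(losses_i / lam)."""
    z = (losses - losses.max()) / lam     # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

rng = np.random.default_rng(0)
losses = np.concatenate([rng.uniform(0.0, 1.0, size=98),   # ordinary samples
                         np.array([5.0, 6.0])])            # two noisy outliers
w = kl_dro_weights(losses)
print(w[-2:].sum())   # the two outliers carry nearly all of the mass
```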
3.1 Discrete Geometric Wasserstein Distance GW_{G0} and GDRO

We first introduce the discrete Geometric Wasserstein distance, which extends the Benamou–Brenier formulation of the optimal transport problem to a metric space. The first step is to define a discrete velocity field and its discrete divergence, for which we mainly follow the construction of Chow et al. [6]. Consider a given weighted finite graph G_0 = (V, E, w) with n nodes, where V = {1, 2, . . . , n} is the vertex set, E is the edge set, and w = (w_ij)_{i,j∈V} collects the weights of the edges. A velocity field v = (v_ij)_{i,j∈V} ∈ ℝ^{n×n} on G_0 is defined to be a skew-symmetric matrix on the edge set E, such that v_ij = −v_ji if (i, j) ∈ E. The probability set (simplex) P(G_0) supported on V is defined as

$$\mathcal{P}(G_0) = \Big\{ (p_i)_{i=1}^n \in \mathbb{R}^n \;\Big|\; \sum_{i=1}^n p_i = 1,\; p_i \geq 0 \text{ for any } i \in V \Big\},$$

and its interior is denoted by P^o(G_0). Here κ_ij is a predefined "cross-sectional area", typically interpolated from the densities p_i, p_j of the associated nodes. The direct approach is to take the arithmetic average, κ_ij(p) = (p_i + p_j)/2. However, to ensure the positivity of p during optimization, we adopt the upwind interpolation

$$\kappa_{ij}(p) = \mathbb{I}(v_{ij} > 0)\, p_j + \mathbb{I}(v_{ij} \leq 0)\, p_i.$$

One can thereafter define the product pv ∈ ℝ^{n×n}, called the flux function on G_0, by pv := (v_ij κ_ij(p))_{(i,j)∈E}. The divergence of pv is

$$\mathrm{div}_{G_0}(pv) := -\Big( \sum_{j \in V :\, (i,j) \in E} \sqrt{w_{ij}}\, v_{ij}\, \kappa_{ij}(p) \Big)_{i=1}^{n},$$

which is a vector in ℝ^n. The divergence vector is supposed to lie in the tangent space of P^o(G_0), summing over all the in-fluxes and out-fluxes along edges of a given node, with each edge transporting a probability density √(w_ij)·v_ij·κ_ij(p). Now we are ready to define the Geometric Wasserstein distance in Equation 2.

Definition 3.1 (Discrete Geometric Wasserstein Distance GW_{G0}(·, ·) [6]). Given a finite graph G_0, for any pair of distributions p⁰, p¹ ∈ P^o(G_0), define the Geometric Wasserstein distance

$$\mathcal{GW}^2_{G_0}(p^0, p^1) := \inf_{v} \Big\{ \int_0^1 \frac{1}{2} \sum_{(i,j)\in E} \kappa_{ij}(p)\, v_{ij}^2 \; dt \;:\; \frac{dp}{dt} + \mathrm{div}_{G_0}(pv) = 0,\;\; p(0) = p^0,\; p(1) = p^1 \Big\}, \tag{2}$$

where v ∈ ℝ^{n×n} denotes the velocity field on G_0, p is a continuously differentiable curve p(t) : [0, 1] → P^o(G_0), and κ_ij(p) is a pre-defined interpolation function between p_i and p_j.

Intuitively, v is a velocity field continuously transporting mass so as to convert the density distribution from p⁰ to p¹ along a curve in the Wasserstein space [31]. Equation 2 measures the shortest (geodesic) length among all possible plans, calculated by integrating the total "kinetic energy" of the velocity field over the transportation process. Compared with the Benamou–Brenier formulation of the continuous L2-Wasserstein distance, it ensures that the transportation path stays within the manifold (the blue dotted line in Figure 1(a)), and it induces a smoother estimate of the worst-case probability distribution w.r.t. the data structure, since weights are exchanged only between neighbors.

Then we present the overall objective function of Distributionally Robust Optimization with the Geometric Wasserstein distance (GDRO). Given the training dataset D_tr = {(x_i, y_i)}_{i=1}^n and its empirical marginal distribution P̂_tr = (1/n)Σ_i δ(x_i), along with a manifold structure represented by the graph G_0, we intend to obtain a distributionally robust predictor parameterized by θ∗ such that, for a certain ϵ > 0,

$$\theta^* = \arg\min_{\theta\in\Theta} \; \sup_{P :\, \mathcal{GW}^2_{G_0}(\hat P_{tr},\, P) \leq \epsilon} \Big\{ R_n(\theta, p) = \sum_{i=1}^n p_i\, \ell(f_\theta(x_i), y_i) \;-\; \beta \sum_{i=1}^n p_i \log p_i \Big\}. \tag{3}$$
Then we present the overall objective function of Distributionally Robust Optimization with the Geometric Wasserstein distance (GDRO). Given the training dataset D_tr = {(x_i, y_i)}_{i=1}^n and its empirical marginal distribution P̂_tr = (1/n) Σ_i δ(x_i), along with a manifold structure represented by the graph G0, we intend to obtain a distributionally robust predictor parameterized by θ∗ such that, for a certain ϵ > 0,

θ∗ = argmin_{θ∈Θ} sup_{P : GW²_G0(P̂_tr, P) ≤ ϵ} { R_n(θ, p) = Σ_{i=1}^n p_i ℓ(f_θ(x_i), y_i) − β Σ_{i=1}^n p_i log p_i }. (3)

We add a minor entropy regularization with a small β, as proposed in the entropy-balancing literature [14], to avoid singular cases and to ensure the convergence of our optimization in Section 3.2. Owing to the Geometric Wasserstein distance, the uncertainty set of GDRO excludes distributions supported on points beyond the data manifold, and the Geometric Wasserstein ball is directional in Wasserstein space, as it stretches along the data structure, as depicted in Figure 1(b).

How is G0 estimated? To characterize the data manifold, the G0 used in GDRO is constructed as a k-nearest-neighbor (kNN) graph from the training data only, since the kNN graph is known to approximate geodesic distances well within local structures on the manifold [21, 7]. Note that our GDRO is compatible with any manifold learning or graph learning method.

3.2 Optimization

In this subsection, we derive the optimization algorithm for GDRO. Due to the lack of an analytical form of the Geometric Wasserstein distance, we give up prescribing the amount ϵ of robustness in Equation 3 and propose an alternating optimization algorithm as an approximation. For fixed probability weights p, the parameter θ can be optimized via gradient descent on R_n(θ, p) w.r.t. θ in the parameter space Θ. The inner supremum problem can be approximately solved via gradient ascent on R_n(θ, p) w.r.t. p in the Geometric Wasserstein space (P°(G0), GW_G0), and the cost measured by GW²_G0(P̂_tr, ·) can be approximated by the length of the gradient flow, which is a curve in (P°(G0), GW_G0).

Here we clarify some notation. p : [0, T] → P°(G0) denotes the continuous gradient flow, and the probability weight of the i-th sample at time t is abbreviated as p_i(t). The time-discretized gradient flow with time step τ is denoted p̂_τ : [0, T] → P°(G0), and p̂_τ(t) is abbreviated as p̂ᵗ_τ. For the optimization, we adopt the time-discretized definition of the gradient flow [31] for −R_n(θ, p) in the Geometric Wasserstein space (P°(G0), GW_G0), with time step τ:

p̂_τ(t + τ) = argmax_{p∈P°(G0)} { R_n(θ, p) − (1/(2τ)) GW²_G0(p̂_τ(t), p) }. (4)

As τ → 0, the time-discretized gradient flow p̂_τ converges to the continuous one p. Note that Equation 4 describes the gradient flow as a steepest-ascent curve that locally maximizes the objective within an infinitesimal Geometric Wasserstein ball, and it coincides with the Lagrangian penalty problem of Equation 3. In Theorem 3.1 we prove that Equation 4 finds the exact solution to a local GDRO at each time step. Following Chow et al. [6], the analytical solution to Equation 4 as τ → 0 can be derived as

dp_i/dt = Σ_{j:(i,j)∈E} w_ij κ_ij (ℓ_i − ℓ_j) + β Σ_{j:(i,j)∈E} w_ij κ_ij (log p_j − log p_i), (5)

where p_i denotes the time-dependent probability of the i-th sample, ℓ_i denotes the loss of the i-th sample, and we take the upwind interpolation κ_ij(p) = I(v_ij > 0) p_j + I(v_ij ≤ 0) p_i, so that the probability density transferred on an edge equals the density of the origin node associated with the velocity field. The upwind interpolation guarantees that the probability weights p stay positive along the gradient flow in Equation 5. We then discretize Equation 5 with the forward Euler method:

p_i(t + α) = p_i(t) + α dp_i(t)/dt, (6)

where α is a learning rate. In our algorithm, we cap the time step at t ≤ T in Equation 6 to approximately restrict the radius of the Geometric Wasserstein ball. We prove in Theorem 3.2 that, at the final time step t = T, the probability weights p(T) learned by Equation 6 achieve a global error rate of e^(−CT) relative to the worst-case risk R_n(θ, p∗) constrained in an ϵ(θ)-radius ball, where ϵ(θ) = GW²_G0(P̂_tr, p(T)) and p∗ = argsup_p {R_n(θ, p) : GW²_G0(P̂_tr, p) ≤ ϵ(θ)}. This mirrors the convention in WDRO [29], which also gives up prescribing the radius of its uncertainty set and turns to an approximation governed by an intermediate hyperparameter. Pseudo-code of the whole procedure is shown in Algorithm 1; the full derivations are in the Appendix.

Algorithm 1 Geometric Wasserstein Distributionally Robust Optimization (GDRO)
Input: training dataset D_tr = {(x_i, y_i)}_{i=1}^n, learning rate α_θ, gradient flow iterations T, entropy term β, manifold representation G0 (learned by the kNN algorithm from D_tr).
Initialization: sample weights initialized as (1/n, . . . , 1/n)^T; predictor parameters initialized as θ^(0).
for i = 0 to Epochs do
  1. Simulate the gradient flow for T time steps according to Equations 5–6 to learn an approximate worst-case probability weight p^T.
  2. θ^(i+1) ← θ^(i) − α_θ ∇_θ ( Σ_i p_i^T ℓ_i(θ) )
end for
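For intuition, here is a minimal NumPy sketch of one forward-Euler step of Equations 5–6 on a dense adjacency matrix. We apply the upwind rule to the combined drift of the loss and entropy terms, and the clipping and renormalization at the end are numerical safeguards of ours for finite step sizes; the names are ours, and the paper's actual implementation uses DGL message passing instead of dense matrices.

```python
import numpy as np

def gdro_flow_step(p, losses, W, beta, alpha):
    """One forward-Euler step of Eqs. (5)-(6).

    p: (n,) current probability weights; losses: (n,) per-sample losses;
    W: (n, n) symmetric kNN edge weights; beta: entropy coefficient;
    alpha: step size.
    """
    logp = np.log(p)
    # combined per-edge drive: (l_i - l_j) + beta * (log p_j - log p_i)
    drive = (losses[:, None] - losses[None, :]) \
            + beta * (logp[None, :] - logp[:, None])
    # upwind interpolation: take the density of the node mass flows from,
    # which keeps the weights positive along the flow
    kappa = np.where(drive > 0, p[None, :], p[:, None])
    dp = (W * kappa * drive).sum(axis=1)        # Eq. (5): net in/out-flux
    p_new = np.maximum(p + alpha * dp, 1e-12)   # Eq. (6) + positivity guard
    return p_new / p_new.sum()                  # guard against float drift
```

Iterating this step for T rounds yields the approximate worst-case weights p^T, after which θ is updated on the reweighted loss, reproducing the two stages of Algorithm 1.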
3.3 Theoretical Properties

In this section, we prove the equivalence between our gradient-flow-based algorithm and a local GDRO problem, and we derive a bound on its global error rate as well as its convergence rate. We first provide the robustness guarantee for the Lagrangian penalty problem in Equation 4.

Theorem 3.1 (Local Robustness Guarantee of the Lagrangian Penalty Problem). For any τ > 0, t > 0 and given θ, denote the solution of Equation 4 as p∗(θ) = argsup_{p∈P(G0)} { R_n(θ, p) − (1/(2τ)) GW²_G0(p̂ᵗ_τ(θ), p) }. Let ϵ_τ(θ) = GW²_G0(p̂ᵗ_τ(θ), p∗(θ)). Then

sup_{p∈P°(G0)} { R_n(θ, p) − (1/(2τ)) GW²_G0(p̂ᵗ_τ(θ), p) } = sup_{p : GW²_G0(p̂ᵗ_τ(θ), p) ≤ ϵ_τ(θ)} R_n(θ, p). (7)

Theorem 3.1 proves that at each time step our Lagrangian penalty problem is equivalent to a local GDRO within the ϵ_τ(θ)-radius Geometric Wasserstein ball. It further shows that, as τ → 0 in Equation 4, the gradient flow constantly follows the steepest-ascent direction. We next analyze the global error rate incurred by our approximate algorithm.

Theorem 3.2 (Global Error Rate Bound). Given the model parameter θ, denote the approximate worst case obtained by simulating Equation 6 up to time t as pᵗ(θ), and let ϵ(θ) = GW²_G0(P̂_tr, pᵗ(θ)) denote the distance between our approximation pᵗ and the training distribution P̂_tr. Denote the true worst-case distribution within the ϵ(θ)-radius discrete Geometric Wasserstein ball as p∗(θ), that is,

p∗(θ) = argsup_{p : GW²_G0(P̂_tr, p) ≤ ϵ(θ)} { Σ_{i=1}^n p_i ℓ_i − β Σ_{i=1}^n p_i log p_i }. (8)

We bound the error ratio of the objective function R_n(θ, p) (abbr. R(p)): for θ ∈ Θ, there exists C > 0 such that

Error Rate = ( R(p∗) − R(pᵗ) ) / ( R(p∗) − R(P̂_tr) ) < e^(−Ct), (9)

and as t → ∞, the error rate tends to 0. The value of C depends on ℓ, β, and n.

Theorem 3.2 theoretically characterizes how far our approximation pᵗ is from the true worst case p∗ in terms of the drop ratio of the objective function R(p). Finally, we derive the convergence rate of our Algorithm 1.
Theorem 3.3 (Convergence of Algorithm 1). Denote the objective function for the predictor as

F(θ) = sup_{p : GW²_G0(P̂_tr, p) ≤ ϵ(θ)} R_n(θ, p), (10)

which is assumed to be L-smooth, and let R_n(θ, p) be L_p-smooth in p, i.e., ∥∇_p R_n(θ, p) − ∇_p R_n(θ, p′)∥₂ ≤ L_p ∥p − p′∥₂. Here ϵ(θ) follows the definition in Theorem 3.2. Take a constant ∆_F ≥ F(θ^(0)) − inf_θ F(θ) and set the step size α = √(∆_F/(LK)). For t ≥ T₀, where T₀ is a constant, denote the upper bound of ∥pᵗ − p∗∥₂² by γ, and train the model for K steps. Then

(1/K) E[ Σ_{k=1}^K ∥∇_θ F(θ^(k))∥₂² ] − ( (1 + 2√(L∆_F/K)) / (1 − 2√(L∆_F/K)) ) L_p² γ ≤ 2∆_F / ( √(∆_F K) − 2L∆_F ). (11)

Here we make a standard smoothness assumption on the objective function, as in [29]. As K → ∞, ∇_θ F(θ^(k)) achieves square-root convergence, provided γ is controlled by the exponentially vanishing error rate in Theorem 3.2; otherwise, the accuracy parameter γ imposes a fixed effect on the optimization accuracy.

3.4 Connections with Conventional DRO Methods

In Theorem 3.4, we illustrate the connection between our GDRO and f-DRO.

Theorem 3.4 (Connection with f-DRO with KL-divergence (KL-DRO)). Relax the discrete Geometric Wasserstein-ball regularization (set ϵ → ∞) and take the graph G0 to be fully connected; then the solution of GDRO is equivalent to the following form of KL-DRO:

min_{θ∈Θ} sup_{p : D_KL(p∥P̂_tr) ≤ ϵ̂(θ)} Σ_{i=1}^n p_i ℓ(f_θ(x_i), y_i), with ϵ̂(θ) = D_KL(p∗(θ)∥P̂_tr), (12)

where p∗(θ) = argmax_p { Σ_{i=1}^n p_i ℓ(f_θ(x_i), y_i) − β Σ_{i=1}^n p_i log p_i }.

Remark (Connections with WDRO). Since conventional WDRO allows distributions to extend beyond the training support, our proposed GDRO is intrinsically different from WDRO. Intuitively, in the infinite-sample limit, if the graph G0 is set to a fully connected graph with edge weights w_ij = ∥z_i − z_j∥₂ and β is set to 0, our GDRO resembles a support-restricted version of WDRO.

4 Experiments

In this section, we investigate the empirical performance of our proposed GDRO on different simulation and real-world datasets under various kinds of distributional shifts, including sub-population shifts and class difficulty shifts. As baselines, we compare with empirical risk minimization (ERM), WDRO [4, 29], and two typical f-DRO methods [11]: KL-DRO (f(t) = t ln t) and χ²-DRO (f(t) = (t − 1)²).

Implementation Details For all experiments, G0 is constructed as a k-nearest-neighbor graph from the training data only, at the initialization step. Specifically, we adopt NN-Descent [10] to efficiently estimate the k-nearest-neighbor graph for the large-scale Colored MNIST dataset, while performing an exact search for k-nearest neighbors in the other experiments. We adopt MSE as the empirical loss function for regression tasks and cross-entropy for classification tasks. We use MLPs for the Colored MNIST and Ionosphere datasets, and linear models in the other experiments. Besides, we find that two-stage optimization suffices for good performance, as mentioned in [19], and we use it in our experiments. Note that GDRO is compatible with any parameterized model, including deep models. The simulation of the gradient flow in Equation 6 is implemented by message propagation with the DGL package [32], which scales linearly with sample size and enjoys parallelization on GPU.

Figure 2: Visualization of the learned kNN graph of the regression data for different k, projected onto the plane spanned by the unit vector of the V axis and θ_S, with projection matrix [0_{1,5} 0_{1,4} 1; θ_S^T 0_{1,4} 0].
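Complementing the implementation details above, the following is a minimal sketch of constructing the manifold proxy G0 as a symmetric kNN graph. The use of scikit-learn and the function name are our choices for illustration; the paper itself uses NN-Descent for Colored MNIST and exact nearest-neighbor search elsewhere.

```python
from sklearn.neighbors import kneighbors_graph

def build_g0(X, k=20):
    """Symmetric kNN adjacency used as the manifold proxy G0 (sparse CSR)."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    A = A.maximum(A.T)  # symmetrize so that w_ij = w_ji
    return A
```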
4.1 Simulation Data

In this subsection, we use simulations to verify that our GDRO can deal with sub-population shifts and, to some extent, resist label noise. We also visualize the effect of the kNN algorithm and study the sensitivity of GDRO to the parameter k.

1. Regression: Sub-population Shifts via a Selection Bias Mechanism

Data Generation The input features X = [S, U, V]^T ∈ R^10 comprise stable features S ∈ R^5, noisy features U ∈ R^4, and a spurious feature V ∈ R:

S ∼ N(0, 2I₅) ∈ R⁵, U ∼ N(0, 2I₄) ∈ R⁴, Y = θ_S^T S + 0.1 · S₁S₂S₃ + N(0, 0.5), (13)
V ∼ Laplace( sign(r) · Y, 1/(5 ln |r|) ) ∈ R, (14)

where θ_S ∈ R⁵ is the coefficient of the true model and |r| > 1 is a factor for each sub-population. S are stable features with an invariant relationship to Y; U are noisy features such that U ⊥ Y; and V is the spurious feature whose relationship with Y is unstable and controlled by the factor r. Intuitively, sign(r) determines whether the spurious correlation between V and Y is positive or negative, and |r| controls its strength: the larger |r| is, the stronger the spurious correlation.

Simulation Setting 1 In training, we generate 10000 points, where the major group contains 95% of the data with r = 1.9 (i.e., strong positive spurious correlation) and the minor group contains 5% of the data with r = −1.3 (i.e., weak negative spurious correlation). As shown in Figure 2, the training data is the union of two sub-spaces. In testing, we vary r ∈ {−1.5, −1.7, −1.9, −2.3, −2.7, −3.0} to simulate stronger negative spurious correlations between V and Y. Notably, the testing data also lie on the same manifold as the training data. We use the linear model and report the root-mean-square error (RMSE) and the parameter estimation error Est Error = ∥θ̂ − θ∗∥₂ of each method, with θ∗ = [θ_S, 0, . . . , 0]^T. The results are shown under Simulation 1 in Table 1.

Simulation Setting 2 To test whether GDRO can resist label noise, we randomly sample 20 points and corrupt their labels via Ỹ = Y + Std(Y), where Std(Y) denotes the standard deviation of the marginal distribution of Y. The results are shown under Simulation 2 in Table 1, and we visualize the learned worst-case distributions of three methods in Figures 3(a) and 3(b).

Analysis (1) From the results of Simulation 1 and Simulation 2 in Table 1, GDRO outperforms all the baselines in terms of low prediction error on the minor group under different strengths of spurious correlation. (2) From Simulation 2 in Table 1, compared with KL-DRO and χ²-DRO, GDRO is only slightly affected by the label noise. Also, from Figure 3(a), KL-DRO puts much heavier weights on the noisy points than GDRO does (the red points of f-DRO are much larger), whereas GDRO focuses more on the minor group (blue points), which explains their different performances in Simulation 2. To further investigate this phenomenon, we quantify smoothness via the Dirichlet energy: in Figure 3(b), we plot the Dirichlet energy against the relative entropy KL(P̂∥P̂_tr) between the learned distribution P̂ and the training distribution P̂_tr, which shows that the learned weights of GDRO are much smoother w.r.t. the data manifold. This property helps GDRO resist label noise, since GDRO does not allow extremely high weights on isolated points. (3) The third sub-figure in Figure 3(a) confirms our analysis of WDRO: it introduces much more label noise (red points).
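As a concrete reference, here is a compact sketch of the data-generating process in Equations 13–14. The function name is ours, and we read N(0, 2I) and N(0, 0.5) as specifying variances (so the standard deviations are √2 and √0.5); this reading is our assumption, as the paper does not spell it out.

```python
import numpy as np

def generate_selection_bias_data(n, r, theta_S, rng):
    """Sketch of the generator in Eqs. (13)-(14); |r| > 1 required."""
    S = rng.normal(0.0, np.sqrt(2.0), size=(n, 5))   # stable features
    U = rng.normal(0.0, np.sqrt(2.0), size=(n, 4))   # noisy features
    Y = S @ theta_S + 0.1 * S[:, 0] * S[:, 1] * S[:, 2] \
        + rng.normal(0.0, np.sqrt(0.5), size=n)
    V = rng.laplace(np.sign(r) * Y, 1.0 / (5.0 * np.log(abs(r))))
    X = np.hstack([S, U, V[:, None]])                # X = [S, U, V]
    return X, Y
```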
Discussion on kNN To test whether GDRO is sensitive to the parameter k of the kNN graph G0, we vary k ∈ {5, 20, 100} and evaluate GDRO under Simulation Setting 1. We also visualize the kNN graphs in Figure 2, which shows that kNN consistently fits the data manifold well until k = 100. The empirical results under Simulation 3 in Table 1 show that for k < 100, GDRO performs stably better than the baselines with small and moderate k, except that smaller k leads to slower convergence, since sparse graphs restrain the flow of probability weights. We also present an extreme failure case where kNN achieves a poor approximation of the data manifold: when k increases to an extremely large value such as k = 100, the kNN neighborhoods diffuse and the two manifolds begin to merge on the graph, in which case GDRO cannot distinguish the two sub-populations and its performance degrades, as shown in Table 1. Indeed, in Theorem 3.4 we proved that with an infinitely large k, GDRO reduces to KL-DRO, which completely ignores data geometry. Still, we emphasize that kNN and GDRO perform stably well over a large range of k.

2. Classification: Sub-population Shifts with High-Dimensional Manifold Data

Data Generation In this setting, the data are high-dimensional but have a low-dimensional structure. The data generation is similar to [25] and is a typical classification setting in OOD generalization. We introduce a spurious correlation between the label Y ∈ {+1, −1} and the spurious attribute A ∈ {+1, −1}. We first generate low-dimensional data X_low = [S, V]^T ∈ R^10 as

S ∼ N(Y·1, σ_s² I₅), V ∼ N(A·1, σ_v² I₅), where A = Y with probability r, and A = −Y with probability 1 − r. (15)

Intuitively, r ∈ [0, 1] tunes the proportions of the sub-populations and controls the spurious correlation between A and Y: when r > 0.5, the spurious attribute A is positively correlated with Y; when r < 0.5, the correlation becomes negative; and a larger |r − 0.5| yields a stronger spurious correlation between A and Y. To convert the low-dimensional data to a high-dimensional space, X_low is multiplied by a matrix H of full column rank:

X_high = H X_low ∈ R^300, (16)

where H ∈ R^{300×10} has full column rank, and we randomly choose H in each run.

Simulation Setting For both the training and testing data, we set σ_s² = 1.0 and σ_v² = 0.3. We use linear models with cross-entropy loss for all methods. In training, we set r = 0.85 (A is positively correlated with Y). In testing, we design two environments with r₁ = 0.5 (A ⊥ Y) and r₂ = 0.0 (A is negatively correlated with Y) to introduce distributional shifts. In addition to the natural setting without label noise, we also test performance under label noise: specifically, we add 4% label noise to the training data by flipping the label Y. We run each experiment 10 times, each time with one random matrix H, and report the mean accuracy in Table 2.

Analysis From the results in Table 2, our GDRO outperforms all baselines under the sub-population shifts, and it is not much affected by the label noise, which validates its effectiveness.
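For completeness, here is a compact sketch of the generator in Equations 15–16. Drawing H from a standard Gaussian (full column rank almost surely) and mapping the ±1 labels to {0, 1} at the end are our conventions, not prescriptions from the paper.

```python
import numpy as np

def generate_highdim_data(n, r, rng, d_high=300):
    """Sketch of the generator in Eqs. (15)-(16) with sigma_s^2 = 1.0,
    sigma_v^2 = 0.3."""
    Y = rng.choice([1.0, -1.0], size=n)
    A = np.where(rng.random(n) < r, Y, -Y)          # spurious attribute
    S = rng.normal(Y[:, None] * np.ones(5), 1.0)    # stable block
    V = rng.normal(A[:, None] * np.ones(5), np.sqrt(0.3))
    X_low = np.hstack([S, V])                       # (n, 10)
    H = rng.normal(size=(d_high, 10))               # full column rank a.s.
    X_high = X_low @ H.T                            # Eq. (16)
    return X_high, (Y > 0).astype(int)
```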
4.2 Real-World Data

We evaluate our method on four real-world datasets covering various kinds of distributional shifts, including sub-population shifts and class difficulty shifts. Due to space limits, we present two of them here; the others can be found in the Appendix. We use MLPs with cross-entropy loss in these experiments.

Colored MNIST: Sub-population Shifts & Label Noise Following Arjovsky et al. [1], Colored MNIST is a binary classification task constructed on the MNIST dataset. First, a binary label Y is assigned to each image according to its digit: Y = 0 for digits 0–4 and Y = 1 for digits 5–9. Second, we induce noisy labels Ỹ by randomly flipping the label Y with probability 0.2. Then we sample the color id C, spuriously correlated with Ỹ, as C = +Ỹ with probability 1 − r and C = −Ỹ with probability r. Intuitively, r controls the spurious correlation between Y and C: when r < 0.5, C is positively correlated with Y; when r > 0.5, the spurious correlation becomes negative; and |r − 0.5| controls its strength. In training, we randomly sample 5000 data points and set r = 0.85 (strong negative spurious correlation between C and Y); in testing, we set r = 0 (strong positive spurious correlation), inducing strong shifts between training and testing. Results are shown in Table 3.

Ionosphere Radar Classification: Class Difficulty Shifts The Ionosphere Radar Dataset [8] consists of return signals from the ionosphere collected by a phased-array radar system in Goose Bay, Labrador. The electromagnetic signals were processed by an autocorrelation function to produce 34 continuous attributes. The task is to predict whether a return signal indicates specific physical structures in the ionosphere (good return) or not (bad return). However, the prediction difficulty of the two classes differs substantially, and ERM was found to achieve much lower accuracy on bad returns than on good ones [28]. In this experiment, both the training and testing sets consist of samples with balanced label distributions. But due to the disparity in class difficulty, the prediction accuracies of the two classes differ considerably, whereas DRO methods are expected to achieve similar prediction accuracy for both classes. Therefore, in testing, we report the accuracy for the easy class and the hard class separately, as well as the AUC score on the testing set. Results are shown in Table 3.

Analysis From the results on real-world data, we find that all DRO methods (WDRO and the f-DROs) improve significantly over ERM, reflecting the soundness of our experimental settings. And our proposed GDRO outperforms all baselines significantly when dealing with sub-population shifts and class difficulty shifts, which validates its effectiveness.

5 Related Work

Distributionally robust optimization (DRO) directly targets the OOD generalization problem by optimizing the worst-case error over a pre-defined uncertainty set, which is often constrained by moment or support conditions [9, 3], shape constraints [22, 17, 16, 5], f-divergences [23, 11], or the Wasserstein distance [26, 29, 4, 12]. [9, 3] impose moment or support conditions on the distributions in the uncertainty set. Among shape constraints, a commonly used one is unimodality; [16] uses ortho-unimodality to constitute the uncertainty set of DRO for multivariate extreme event analysis. As for f-divergences, [23] theoretically demonstrates the equivalence with a variance penalty, and [11] derives the optimization algorithm from the dual reformulation. Compared with f-divergences, which require the support of the distributions in the uncertainty set to be fixed, the uncertainty set built with the Wasserstein distance contains distributions with different supports and can provide robustness to unseen data. Despite the capacity of a Wasserstein uncertainty set, the optimization of Wasserstein DRO is quite hard.
[26, 12, 4] convert the original DRO problem into a regularized ERM problem, but this reformulation is suitable only for a limited class of loss functions and transportation cost functions. [29] proposes an approximate optimization method for Wasserstein DRO that can be applied to deep neural networks and protects models from adversarial attacks. Besides, DRO methods have also been applied to structured data: [18] studies the DRO problem for data generated by a time-homogeneous, ergodic finite-state Markov chain. Although DRO methods can guarantee OOD generalization performance when the testing distribution is included in the uncertainty set, some works [15, 13] doubt their real effects in practice. To guarantee OOD generalization in real scenarios, the uncertainty set has to be overwhelmingly large to contain all potential testing distributions. Such an overwhelmingly large set forces the learned model to make decisions with fairly low confidence, which is also referred to as the over-pessimism problem. To mitigate this problem, [13] proposes to incorporate additional unlabeled data to further constrain the uncertainty set, and [20] learns the transportation cost function for WDRO with the help of multiple-environment data.

6 Conclusion

Through this work, we take the first step toward incorporating data geometry information to mitigate the over-flexibility problem in DRO. In this work, we use the k-nearest-neighbor graph to characterize the data manifold, while our proposed method is compatible with any manifold learning or graph learning method. We believe that a more accurate estimate of the data structure from advanced manifold learning and graph learning algorithms will further boost the performance of GDRO, which we leave for future work.

Acknowledgements

This work was supported in part by National Key R&D Program of China (No. 2018AAA0102004, No. 2020AAA0106300), National Natural Science Foundation of China (No. U1936219, 62141607), and Beijing Academy of Artificial Intelligence (BAAI). Bo Li's research was supported by the National Natural Science Foundation of China (No. 72171131), the Tsinghua University Initiative Scientific Research Grant (No. 2019THZWJC11), and the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grants 2020AAA0108400 and 2020AAA0108403. We would like to thank Yuting Pan, Renzhe Xu, and Hao Zou for helpful comments.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]

2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes] All proofs are placed in the Appendix.

3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] All details are placed in the Appendix.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
[N/A] The error bars are quite small in our experiments, and therefore we omit them.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [Yes]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]

5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the main contribution of the paper regarding DRO with data geometry considered in the uncertainty set?
2. What are the strengths and weaknesses of the proposed GDRO framework compared to other DRO frameworks?
3. Do you have any concerns about the mathematical exposition, notation, and language used in the paper?
4. How would you rate the clarity and quality of the experimental descriptions and results presented in the paper?
5. Are there any formatting, typesetting, or writing issues that need to be addressed in the paper?
6. What are the limitations of the proposed approach, and how do they compare to other methods in the field?
Summary Of The Paper
In this paper, the authors study DRO with data geometry taken into account in the uncertainty set, making use of the so-called Geometric Wasserstein distance. The authors derive an approximate algorithm for the proposed GDRO and prove its convergence. Numerical experiments are performed to demonstrate the proposed GDRO framework over ERM and other DRO frameworks.

Strengths And Weaknesses
Strengths:
- An interesting framework proposed using the Geometric Wasserstein distance
- Extensive experiments

Weaknesses:
- The whole paper is quite hard to follow due to:
  - the lack of self-containedness — some rather non-standard notions are not well-defined or well-explained, or are very hard to understand
  - unsatisfactory English writing (wrong grammar, choice of words, etc.)
  - inaccuracy in the mathematical exposition, a lack of mathematical rigor, and much abuse of notation
- Related works are not discussed in detail, e.g., DRO
- A lot of typos, typesetting, and formatting issues
- The settings and descriptions of the experiments are unclear

=============================
Post-rebuttal: Thanks to the authors for their effort. The revised version has addressed my concerns and I have raised my score. Although it is allowed in this NeurIPS submission cycle, I think the authors should have provided sufficient experimental details in their initial submission. A lot of new content has been added in the revised version of the paper, including supplementary material, e.g., for the experimental details. I feel it is quite unfair to assess the merits of the paper according to this updated version, compared to other paper submissions. It appears to me that the authors were submitting unfinished work at the submission deadline and took advantage of the rebuttal phase.

Questions
The mathematics of this article is very hard to follow due to abuse of notation. For example, p has been used to represent many different notions. Various claims are also stated without proof or sufficient explanation (e.g., line 151). The use of the English language should also be largely improved. The theoretical results also appear to be very trivial. The description of the experiments must be improved — we don't even know what loss functions are used in the experiments. The exact DRO problems solved in the experiments, especially for the real-world data ones, are completely unstated. The authors also have to rectify the following issues.

Typesetting or formatting issues (non-exhaustive):
- You have to add spaces before all parentheses and citations
- Add punctuation at the end of display-style equations
- Format tables according to the instructions in the NeurIPS paper template — no vertical lines!
- Use italics instead of bold to emphasize words

Typos or writing issues (non-exhaustive):
- Line 42: X² — do you mean χ²?
- Line 60: O(1/√T)
- Line 109: large-scaled
- Line 128: cross section → cross-sectional
- Line 129: algorithmic → arithmetic
- Line 137: Brenier
- Line 146: δ(x_i) vs. δ(x_i, y_i)
- Line 159: alternate → alternating
- Line 161: ascents
- Line 193: Rn vs. R_n
- Line 200: GW vs. GW (inconsistent typesetting)
- Line 204: what is R(p)? Not previously defined.
- Table 1: why add underscores between "mean" and "error"? Use a dot for abbreviations instead of an underscore.

Limitations
Relevant discussion of limitations might appear in the paper but is hard to find.
NIPS
Title
Distributionally Robust Optimization with Data Geometry

Abstract
Distributionally Robust Optimization (DRO) serves as a robust alternative to empirical risk minimization (ERM), optimizing the worst-case distribution in an uncertainty set typically specified by distance metrics including f-divergences and the Wasserstein distance. Metrics defined in the ostensible high-dimensional space lead to exceedingly large uncertainty sets, resulting in the underperformance of most existing DRO methods. It has been well documented that high-dimensional data approximately reside on low-dimensional manifolds. In this work, to further constrain the uncertainty set, we incorporate data geometric properties into the design of distance metrics, obtaining our novel Geometric Wasserstein DRO (GDRO). Empowered by gradient flow, we derive a generically applicable approximate algorithm for the optimization of GDRO, and provide a bound on the error rate of the approximation as well as the convergence rate of our algorithm. We also theoretically characterize the edge cases where certain existing DRO methods are degenerate cases of GDRO. Extensive experiments justify the superiority of our GDRO over existing DRO methods in multiple settings with strong distributional shifts, and confirm that the uncertainty set of GDRO adapts to data geometry.

36th Conference on Neural Information Processing Systems (NeurIPS 2022).

1 Introduction
Machine learning algorithms with empirical risk minimization often suffer from poor generalization performance under distributional shifts in real applications, due to widespread latent heterogeneity, domain shifts, data selection bias, etc. Machine learning algorithms are therefore required to achieve uniformly good performance against potential distributional shifts, especially in high-stakes applications. Toward this goal, distributionally robust optimization (DRO) [27, 23, 29, 4, 13, 11], stemming from the robust learning literature, has been proposed and developed in recent years. It optimizes the worst-case distribution within an uncertainty set P(Ptr) lying around the training distribution Ptr. When the testing distribution Pte is contained in P(Ptr), DRO can guarantee the generalization performance on Pte.

In principle, the effectiveness of DRO heavily depends on the rationality of its uncertainty set P(Ptr), which is commonly formulated as a ball surrounding the training distribution endowed with a certain distance metric. An ideal uncertainty set should be constituted by all realistic distributions that may be encountered in test environments. However, existing DRO methods adopting the Wasserstein distance (i.e., WDRO methods [27, 29, 4, 13]) or f-divergences (i.e., f-DRO methods [23, 11]) tend to generate over-flexible uncertainty sets that incorporate unrealistic distributions far beyond the ideal uncertainty set [15, 13]. As such unrealistic distributions must violate the underlying predicting mechanism, they are prone to be the worst case and attract much optimization energy in the DRO framework, making the learned model deviate from the true predicting mechanism. Here we argue that the unrealistic distributions mentioned above originate from the distance metrics' inherent ignorance of data geometry, as illustrated in Figure 1.
The Euclidean-norm transportation cost measured by the L2-Wasserstein metric leads to a straight-line transportation path, as shown in Figure 1(a) (red dotted line), which deviates from the data manifold (blue region). Therefore, WDRO methods tend to create unrealistic samples beyond the underlying data manifold, resulting in unrealistic distributions. An f-divergence can also be interpreted as a data-geometry-independent measure of the transportation cost confined to the support of Ptr. Taking the χ²-divergence as an example, the cost of transferring a unit of probability weight between any two samples is constant, like a virtual tunnel (yellow dotted line in Figure 1(a)). In such a case, noisy samples (e.g., outliers or samples with label noise) are more prone to be the worst case and thus gather much larger weights than normal samples. The resultant distribution is obviously unrealistic.

To mitigate the problem, it is imperative to introduce a new distance metric that incorporates data geometry to further constrain the uncertainty set and avoid the undesired cases. As illustrated in Figure 1(a), under the common assumption that data lie on a low-dimensional manifold [24, 30, 2], we expect the probability density transportation path (the blue dotted line) to be restricted within the data manifold (the blue region). In this way, the uncertainty set (i.e., the Geometric Wasserstein set shown in Figure 1(b)) inherently excludes distributions beyond the data manifold. Furthermore, it becomes harder to gather probability weight on isolated noisy samples, which also mitigates the undesired cases in f-DRO.

In this work, we propose a novel Geometric Wasserstein DRO (GDRO) method by exploiting the discrete Geometric Wasserstein distance [6], which measures the transportation cost of probability density along geodesics in a metric space. As the Geometric Wasserstein distance does not enjoy an analytical expression, we derive an approximate algorithm from the gradient flow in the Finsler manifold endowed with the Geometric Wasserstein distance (Section 3.2). We further theoretically establish an exponentially vanishing error rate for our approximation as well as an O(1/√T) convergence rate for our algorithm, and characterize the edge cases where GDRO degenerates to f-DRO or Wasserstein DRO (Sections 3.3 and 3.4). Comprehensive experiments encompassing various distributional shifts, including sub-population shifts and class difficulty shifts, validate the effectiveness of our proposed GDRO (Section 4). We also observe a lower Dirichlet energy (i.e., higher smoothness) of GDRO's estimated worst-case distribution w.r.t. the data manifold compared with existing DRO methods, justifying its adaptability to data geometry.

2 Preliminaries on Distributionally Robust Optimization

Notations. X ∈ 𝒳 denotes the covariates, Y ∈ 𝒴 denotes the target, and f_θ(·) : 𝒳 → 𝒴 is the predictor parameterized by θ ∈ Θ. Ptr(X, Y) and Pte(X, Y), abbreviated as Ptr and Pte respectively, represent the joint training distribution and test distribution. The random variable of data points is denoted by Z = (X, Y) ∈ 𝒵. Distributionally Robust Optimization (DRO) is formulated as

θ∗ = argmin_{θ∈Θ} sup_{P∈P(Ptr)} E_P[ℓ(f_θ(X), Y)], (1)

where ℓ is a loss function, P(Ptr) = {P : Dist(P, Ptr) ≤ ϵ} characterizes the uncertainty set surrounding the training distribution restricted by a radius ϵ, and Dist is a distance metric between probability distributions.
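To ground Equation 1, here is a minimal sketch of the inner supremum for the special case of a KL-divergence ball around the uniform empirical distribution, whose maximizer has the classical exponential-tilting form p_i ∝ exp(ℓ_i/η), with η the dual variable of the radius constraint. This is a standard textbook fact offered for intuition, not this paper's method, and the function name is ours.

```python
import numpy as np

def kl_worst_case_weights(losses, eta):
    """Worst-case sample weights of the inner sup in Eq. (1) for a KL ball
    around the uniform empirical distribution: p_i ∝ exp(l_i / eta).
    Smaller eta corresponds to a larger effective radius epsilon."""
    w = np.exp((losses - losses.max()) / eta)  # max-shift for stability
    return w / w.sum()
```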
Most works specify the Dist metric as the f -divergence [23, 11] or the Wasserstein distance [27, 29, 4, 26, 12]. f -divergence DRO (abbr. f -DRO) f -divergence is defined as Df (P∥Q) = ∫ f(dP/dQ)dQ, where f(·) is a convex function and f(1) = 0. Two typical instances of f -divergences are KLdivergence (f(t) = t log t) and χ2-divergence (f(t) = (t − 1)2). [23] theoretically demonstrates the equivalence between χ2-DRO and the variance-regularized empirical risk minimization (ERM) problem, and [11] derives the optimization algorithm for a family of f -DRO. However, as proven in [15], f -DRO faces the over-pessimism problem and ends up giving a classifier only fitting the given training distribution, which we attribute to the ignorance of data geometry. As shown in Figure 1(a), f -divergence only cares about the probability of each sample (only dP, dQ occur). However, data geometry information is crucial for a reasonable uncertainty set, since it is well-accepted that data lie on a low-dimensional manifold and adjacent data points have similar degrees of importance. For example, for heterogeneous data, while one hopes to focus on some sub-population (e.g., put more weights on a group of data), without data geometric information, the distribution in the f -divergence ball is prone to only focus on some isolated samples with higher noises (as shown in Figure 3(a)). And in Figure 3(b), we find the worst-case distribution of f -DRO (with KL-divergence) is not smooth (with larger Dirichlet Energy) w.r.t. the data manifold. Wasserstein DRO (abbr. WDRO) Compared with the f -divergence ball that does not extend the support of the training distribution, the uncertainty set built with Wasserstein distance allows for the extension of the support [27, 29, 4]. [26, 12, 4] convert the original problem into a regularized ERM problem, but it is suitable only for a limited class of loss functions and transportation cost functions. [29] proposes an approximate optimization method for Wasserstein DRO that could be applied to deep neural networks, which protects models from adversarial attacks. However, the flexibility of the Wasserstein ball also causes an over-pessimistic estimation under strong distributional shifts [13], where the created samples are too noisy to obtain a confident model. As demonstrated in Figure 3(a), WDRO adds much more noises to the data and thus hurts the generalization performances in practice. Therefore, to mitigate the over-pessimism problem of DRO, we propose to incorporate the geometric properties into the uncertainty set. Compared with traditional shape-constrained methods [17, 16] for multivariate extreme event analysis that use the unimodality to constitute the uncertainty set, our proposed method characterizes the data manifold in a data-driven way and incorporates it into the DRO framework intrinsically via the Geometric Wasserstein distance metric, which is also compatible with manifold learning and graph learning methods. 3 Proposed Method In this work, we propose Distributionally Robust Optimization with Geometric Wasserstein distance (GDRO). In the following of this section, we first introduce the Geometric Wasserstein distance and propose the overall objective of GDRO; then we derive an approximate algorithm for optimization utilizing Gradient Flow; finally, some theoretical properties are proved and connections with existing DRO methods are demonstrated. 
3.1 Discrete Geometric Wasserstein Distance GWG0 and GDRO We firstly introduce the Discrete Geometric Wasserstein distance, which extends the Benamou-Brenier formulation of the optimal transport problem to a metric space. The first step is to define a discrete velocity field and its discrete divergence, which we mainly follow the construction by Chow et al. [6]. Consider a given weighted finite graph G0 = (V,E,w) with n nodes, where V = {1, 2, . . . , n} is the vertex set, E is the edge set and w = (wij)i,j∈V is the weight of each edge. A velocity field v = (vij)i,j∈V ∈ Rn×n on G0 is defined to be a skew-symmetric matrix on the edge set E such that vij = −vji if (i, j) ∈ E. The probability set (simplex) P(G0) supported on V is defined as P(G0) = {(pi)ni=1 ∈ Rn| ∑n i=1 pi = 1, pi ≥ 0, for any i ∈ V } and its interior is denoted by Po(G0). κij is a predefined "cross-sectional area" typically interpolated with the associated nodes’ densities pi, pj . The direct approach is to take the arithmetic average such that κij(p) = (pi + pj)/2. However, to ensure the positiveness of p during optimization, we adopt the upwind interpolation:κij(p) = I(vij > 0)pj + I(vij ≤ 0)pi. One could thereafter define the product pv ∈ Rn×n, called flux function on G0, by pv := (vijκij(p))(i,j)∈E . The divergence of pv is divG0(pv) := −( ∑ j∈V :(i,j)∈E √ wijvijκij(p)) n i=1 which is a vector in Rn. The divergence vector is supposed to lie in the tangent space of Po(G0), summing over all the in-fluxes and out-fluxes along edges of a certain node, with each edge transporting a probability density √wijvijκij(p). Now we are ready to define Geometric Wasserstein distance in Equation 2. Definition 3.1 (Discrete Geometric Wasserstein Distance GWG0(·, ·) [6]). Given a finite graph G0, for any pair of distributions p0, p1 ∈ Po(G0), define the Geometric Wasserstein Distance: GW2G0(p 0, p1) := inf v ∫ 1 0 1 2 ∑ (i,j)∈E κij(p)v 2 ijdt : dp dt + divG0(pv) = 0, p(0) = p 0, p(1) = p1 , (2) where v ∈ Rn×n denotes the velocity field on G0, p is a continuously differentiable curve p(t) : [0, 1] → Po(G0), and κij(p) is a pre-defined interpolation function between pi and pj . Intuitively v is a velocity field continuously transporting masses to convert the density distribution from p0 to p1 along a curve in the Wasserstein space [31]. Equation 2 measures the shortest (geodesic) length among all possible plans, which is calculated by integrating a total "kinetic energy" of the velocity field over the transportation process. Compared with the Benamou-Brenier formulation of continuous L2-Wasserstein distance, it ensures that the transportation path stays within the manifold (as the blue dotted line shown in Figure 1(a)), and it induces a smoother estimate of the worst-case probability distribution w.r.t the data structure since weights are exchanged just between neighbors. Then we present the overall objective function of Distributionally Robust Optimization with Geometric Wasserstein distance (GDRO). Given the training dataset Dtr = {(xi, yi)}ni=1 and its empirical marginal distribution P̂tr = 1n ∑ i δ(xi), along with a manifold structure represented by graph G0, we intend to obtain a distributionally robust predictor parameterized by θ∗ such that for certain ϵ > 0: θ∗ = argmin θ∈Θ sup P :GW2 G0 (P̂tr,P )≤ϵ { Rn(θ, p) = n∑ i=1 piℓ(fθ(xi), yi)− β n∑ i=1 pi log pi } . 
(3) We add a minor entropy-regularization with a small β as proposed in the entropy-balancing literature [14] to avoid singular cases and ensure the convergence of our optimization in section 3.2. Owing to the Geometric Wasserstein distance, the uncertainty set of GDRO excludes those distributions supported on points beyond the data manifold and the Geometric Wassserstein Ball is directional in Wasserstein space as it stretches along the data structure, as depicted in Figure 1(b). How is G0 estimated? To characterize the data manifold, the G0 used in GDRO is constructed as a k-nearest neighbor (kNN) graph from the training data only, as the kNN graph is shown to have a good approximation of the geodesic distance within local structures on the manifold [21, 7]. Note that our GDRO is compatible with any manifold learning and graph learning methods. 3.2 Optimization In this subsection, we derive the optimization algorithm for GDRO. Due to the lack of an analytical form of the Geometric Wasserstein distance, we give up providing a prescribed amount ϵ of robustness Algorithm 1 Geometric Wasserstein Distributionally Robust Optimization (GDRO) Input: Training Dataset Dtr = {(xi, yi)}ni=1, learning rate αθ , gradient flow iterations T , entropy term β, manifold representation G0 (learned by kNN algorithm from Dtr). Initialization: Sample weights initialized as (1/n, . . . , 1/n)T . Predictor’s parameters initialized as θ(0). for i = 0 to Epochs do 1. Simulate gradient flow for T time steps according to Equation 5∼6 to learn an approximate worst-case probability weight pT . 2. θ(i+1) ← θ(i) − αθ∇θ( ∑ i p T i ℓi(θ)) end for in Equation 3 and propose an alternate optimization algorithm as an approximation. For fixed probability weights p, the parameter θ could be optimized via gradient descents for Rn(θ, p) w.r.t. θ in parameter space Θ. The inner supremum problem can be approximately solved via gradient ascents for Rn(θ, p) w.r.t. p in the Geometric Wasserstein space (Po(G0),GWG0). And the cost measured by GW2G0(P̂tr, ·) could be approximated with the length of the gradient flow, which is a curve in (Po(G0),GWG0). Here we clarify some notations. p : [0, T ] 7→ Po(G0) denotes the continuous gradient flow, and the probability weight of the i-th sample at time t is abbreviated as pi(t). The time-discretized gradient flow corresponding with the time step τ is denoted as p̂τ : [0, T ] 7→ Po(G0), and p̂τ (t) is abbreviated as p̂tτ . For the optimization, we adopt the time-discretized definition of Gradient Flow [31] for −Rn(θ, p) in the Geometric Wasserstein space (Po(G0),GWG0) as: (with the time step τ ) p̂τ (t+ τ) = arg max p∈Po(G0) Rn(θ, p)− 1 2τ GW2G0(p̂τ (t), p). (4) When τ → 0, the time-discretized gradient flow p̂τ becomes the continuous one p. Note that Equation 4 describes the Gradient Flow as a steepest ascent curve locally optimizing for a maximal objective within an infinitesimal Geometric Wasserstein ball, and it coincides with the Lagrangian penalty problem of Equation 3. In theorem 3.1 we would prove that Equation 4 finds the exact solution to a local GDRO at each time step. Following Chow et al. 
[6], the analytical solution to Equation 4 as τ → 0 could be derived as: dpi dt = ∑ j:(i,j)∈E wijκij(ℓi − ℓj) + β ∑ j:(i,j)∈E wijκij(log pj − log pi), (5) where pi denotes the time-dependent probability function of the i-th sample, li denotes the loss of the i-th sample and we take an upwind interpolation of κ: κij(p) = I(vij > 0)pj + I(vij ≤ 0)pi, so that the probability density transferred on an edge equals the density from the origin node associated with the velocity field. The upwind interpolation guarantees that the probability weight p stays positive along the Gradient Flow in Equation 5. Then we discretize equation 5 with Forward Euler Method: pi(t+ α) = pi(t) + αdpi(t)/dt, (6) where α is a learning rate. For our algorithm, we control the maximum time step as t ≤ T in Equation 6 to approximately restrict the radius of the Geometric Wasserstein ball. We prove in theorem 3.2 that for the final time step t = T , the probability weights p(T ) learned by Equation 6 guarantees a global error rate e−CT from the worst-case risk Rn(θ, p∗) constrained in an ϵ(θ)-radius ball where ϵ(θ) = GW2G0(P̂tr, p(T )) and p ∗ = arg supp{Rn(θ, p) : GW2G0(P̂tr, p) ≤ ϵ(θ)}. The result is similar to conventions in WDRO [29], which gives up providing a prescribed radius of its uncertainty set but turns to an approximation with a intermediate hyperparameter. Pseudo-code of the whole algorithm is shown in Algorithm 1. The whole derivations are in Appendix. 3.3 Theoretical Properties In this section, we prove the equivalence between our Gradient-Flow-based algorithm and a local GDRO problem, and the bound of its global error rate as well as the convergence rate is derived. We first provide the robustness guarantee for the Lagrangian penalty problem in Equation 4. Theorem 3.1 (Local Robustness Guarantees of Lagrangian Penalty Problem). For any τ > 0, t > 0 and given θ, denote the solution of Equation 4 as p∗(θ) = arg supp∈P(G0) Rn(θ, p) − 1 2τ GW 2 G0(p̂ t τ (θ), p). Let ϵτ (θ) = GW 2 G0(p̂ t τ (θ), p ∗(θ)), we have sup p∈Po(G0) Rn(θ, p)− 1 2τ GW2G0(p̂ t τ (θ), p) = sup p:GW2 G0 (p̂tτ (θ),p)≤ϵτ (θ) Rn(θ, p). (7) Theorem 3.1 proves that at each time step our Lagrangian penalty problem is equivalent to a local GDRO within the ϵτ (θ)-radius Geometric Wasserstein ball. It further shows that with τ → 0 in Equation 4, our gradient flow constantly finds the steepest descent direction. Then we theoretically analyze the global error rate brought by our approximate algorithm. Theorem 3.2 (Global Error Rate Bound). Given the model parameter θ, denote the approximate worst-case by gradient descent in Equation 6 after time t as pt(θ), and ϵ(θ) = GW2G0(P̂tr, p t(θ)) denotes the distance between our approximation pt and the training distribution P̂tr. Then denote the real worst-case distribution within the ϵ(θ)-radius discrete Geometric Wasserstein-ball as p∗(θ), that is, p∗(θ) = arg sup p:GW2 G0 (P̂tr,p)≤ϵ(θ) n∑ i=1 piℓi − β n∑ i=1 pi log pi. (8) Here we derive the bound w.r.t. the error ratio of objective function Rn(θ, p) (abbr. R(p)). For θ ∈ Θ, there exists C > 0 such that Error Rate = ( R(p∗)−R(pt) ) / ( R(p∗)−R(P̂tr) ) < e−Ct, (9) and when t → ∞, Error Rate → 0. The value of C depends on ℓ, β, n. Theorem 3.2 theoretically characterizes ’how far’ our approximation pt is from the real worst-case p∗ in terms of the drop ratio of the objective function R(p). At last we derive the convergence rate of our Algorithm 1. Theorem 3.3 (Convergence of Algorithm 1). 
Denote the objective function for the predictor as: F (θ) = sup GW2 G0 (P̂tr,p)≤ϵ(θ) Rn(θ, p), (10) which is assumed as L-smooth and Rn(θ, p) satisfies Lp-smoothness such that ∥∇pRn(θ, p) − ∇pRn(θ, p′)∥2 ≤ Lp∥p − p′∥2. ϵ(θ) follows the definition in Theorem 3.2. Take a constant ∆F ≥ F (θ(0)) − infθ F (θ) and set step size as α = √ ∆F /(LK). For t ≥ T0 where T0 is a constant, denote the upper bound of ∥pt − p∗∥22 as γ and train the model for K steps, we have: 1 K E [ K∑ k=1 ∥∇θF (θ(k))∥22 ] − (1 + 2 √ L∆F /K) 1− 2 √ L∆F /K L2pγ ≤ 2∆F√ ∆FK − 2L∆F . (11) Here we make a common assumption on the smoothness of the objective function as in [29]. As K → ∞, ∇θF (θ(k)) will achieve a square-root convergence only if γ is controlled by the exponentially vanishing error rate in Theorem 3.2. And the accuracy parameter γ remains a fixed effect on optimization accuracy. 3.4 Connections with Conventional DRO Methods In Theorem 3.4, we illustrate the connections of our GDRO with f -DRO. Theorem 3.4 (Connection with f -DRO with KL-diveregence (KL-DRO).). Relax the discrete Geometric Wasserstein-ball regularization (set ϵ → ∞) and set the graph G0 to a fully-connected graph, and then the solution of GDRO is equivalent to the following form of KL-DRO: min θ∈Θ sup p:DKL(p∥P̂tr)≤ϵ̂(θ) n∑ i=1 piℓ(fθ(xi), yi), with ϵ̂(θ) = DKL(p ∗(θ)∥P̂tr), (12) where p∗(θ) = argmaxp ∑n i=1 piℓ(fθ(xi), yi)− β ∑n i=1 pi log pi. Remark (Connections with WDRO). Since conventional WDRO allows distributions to extend training support, our proposed GDRO is intrinsically different from WDRO. Intuitively, for infinite samples, if the graph G0 is set to a fully-connected graph with edge weights wij = ∥zi − zj∥2 and β is set to 0, our GDRO resembles support-restricted version of WDRO. 4 Experiments In this section, we investigate the empirical performance of our proposed GDRO on different simulation and real-world datasets under various kinds of distributional shifts, including sub-population shifts and class difficulty shifts. As for baselines, we compare with empirical risk minimization (ERM), WDRO [4, 29] and two typical f -DRO methods [11], including KL-DRO (f(t) = t ln t) and χ2-DRO (f(t) = (t− 1)2). Implementation Details For all experiments, G0 is constructed as a k-nearest neighbor graph from the training data only at the initialization step. Specifically, we adopt NN-Descent [10] to efficiently estimate the k-nearest neighbor graph for the large-scale dataset Colored MNIST while performing Figure 2: Visualization of learned kNN graph with different k of the regression data, which is projected on the plane spanned by the unit vector of V axis and θS with a projection matrix [ 01,5 01,4 1 θTS 01,4 0 ] . an exact search for k-nearest neighbors in the other experiments. We adopt MSE as the empirical loss function for regression tasks and cross-entropy for classification tasks. We use MLPs for the Colored MNIST and Ionosphere datasets, and linear models in the other experiments. Besides, we find that the two-stage optimization is enough for good performances, as mentioned in [19], and we use it in our experiments. Note that GDRO is compatible with any parameterized models including deep models. The simulation of gradient flow in Equation 6 is implemented by message propagation with DGL package [32], which scales linearly with sample size and enjoys parallelization by GPU. 4.1 Simulation Data In this subsection, we use simulations to verify that our GDRO could deal with sub-population shifts and to some extent resist the label noises. 
And we also visualize the effects of the kNN algorithm as well as the sensitivity of GDRO to the parameter k. 1. Regression: Sub-population Shifts via Selection Bias Mechanism Data Generation The input features X = [S,U, V ]T ∈ R10 are comprised of stable features S ∈ R5, noisy features U ∈ R4 and the spurious feature V ∈ R: S ∼ N (0, 2I5) ∈ R5, U ∼ N (0, 2I4) ∈ R4, Y = θTSS + 0.1 · S1S2S3 +N (0, 0.5), (13) V ∼ Laplace(sign(r) · Y, 1 5 ln |r| ) ∈ R, (14) where θS ∈ R5 is the coefficient of the true model. |r| > 1 is a factor for each sub-population. S are stable features with the invariant relationship with Y . U are noisy features such that U ⊥ Y . And V is the spurious feature whose relationship with Y is unstable and is controlled by the factor r. Intuitively, sign(r) controls whether the spurious correlation between V and Y is positive or negative. And |r| controls the strength of the spurious correlation: the larger |r| is, the stronger the spurious correlation is. Simulation Setting 1 In training, we generate 10000 points, where the major group contains 95% data with r = 1.9 (i.e. strong positive spurious correlation) and the minor group contains 5% data with r = −1.3 (i.e. weak negative spurious correlation). As shown in Figure 2, the training data is the union of two sub-spaces. In testing, we vary r ∈ {−1.5,−1.7,−1.9,−2.3,−2.7,−3.0} to simulate stronger negative spurious correlations between V and Y . Notably, the testing data also lie on the same manifold as the training. We use the linear model and calculate the root-mean-square errors (RMSE) and the parameter estimation errors Est Error = ∥θ̂ − θ∗∥2 of different methods (θ∗ = [θS , 0, . . . , 0]T ). The results are shown in the Simulation 1 in Table 1. Simulation Setting 2 Then to test whether GDRO could resist label noises, we randomly sample 20 points and add label noises to them via Ỹ = Y + Std(Y ) where std(Y ) denotes the standard derivation of the marginal distribution of Y . The results are shown in the Simulation 2 in Table 1. And we visualize the learned worst-case distribution of three methods in Figure 3(a) and 3(b). Analysis (1) From the results of Simulation 1 and Simulation 2 in Table 1, GDRO outperforms all the baselines in terms of low prediction error on the minor group under different strengths of spurious correlations. (2) From Simulation 2 in Table 1, compared with KL-DRO and χ2-DRO, GDRO is only slightly affected by the label noises. Also, from Figure 3(a), compared with GDRO, KL-DRO puts much heavier weights on the noisy points (red points of f -DRO are much larger). And GDRO focuses more on the minor group (blue points), which results in their different performances under Simulation 2. Further, to investigate this phenomenon, we quantify the smoothness via Dirichlet Energy. In Figure 3(b), we plot the Dirichlet Energy w.r.t the relative entropy KL(P̂∥P̂tr) between the learned distribution P̂ and training distribution P̂tr, which proves that the learned weights of GDRO are much smoother w.r.t. the data manifold. And this property helps GDRO to resist the label noises, since GDRO does not allow extremely high weights on the isolated points. (3) The third sub-figure in Figure 3(a) verifies our analysis on WDRO that it introduces much more label noises (red points). Discussion on kNN To test whether GDRO is sensitive to the parameter k of the kNN graph G0, we vary k ∈ {5, 20, 100} and test the performances of our GDRO under simulation setting 1. 
We also visualize the kNN graphs in Figure 2, which show that kNN consistently manages to fit the data manifold well until k = 100. And empirical results of Simulation 3 in Table 1 prove that with k < 100, GDRO performs stably better than the baselines with small and moderate k, except that smaller k leads to slower convergence since sparse graphs restrain the flow of probability weights. Still, we present an extreme failure case where KNN achieves a poor approximation of the data manifold. When k increases to an extremely large number as k = 100, the neighborhood of kNN diffuses and two manifolds start to merge on the graph, in which case GDRO could not distinguish between two sub-populations and its performance degrades as shown in the Table 1. Actually, in Theorem 3.4 of this paper, we have proved that with an infinitely large k, GDRO could be reduced to KL-DRO, which completely ignores data geometry. Still, we have to clarify that kNN and GDRO perform stably well for a large range of k. 2. Classification: Sub-population Shifts with High-dimensional Manifold Data Data Generation In this setting, data are high-dimensional but with a low-dimensional structure. The data generation is similar to [25] and is a typical classification setting in OOD generalization. We introduce the spurious correlation between the label Y = {+1,−1} and the spurious attribute A = {+1,−1}. We firstly generate low-dimensional data Xlow = [S, V ]T ∈ R10 as: S ∼ N (Y 1, σ2sI5), V ∼ N (A1, σ2vI5), where A = { Y ,with probability r, −Y ,with probability 1− r. (15) Intuitively, r ∈ [0, 1] tunes the proportions of sub-populations and controls the spurious correlation between A and Y . When r > 0.5, the spurious attribute A is positively correlated with Y ; and when r < 0.5, the spurious correlation becomes negative. And larger |r − 0.5| results in stronger spurious correlation between A and Y . Then to convert the low-dimensional data to high-dimensional space, Xlow is multiplied by a column full rank matrix H as: Xhigh = (HXlow) ∈ R300, (16) where H ∈ R300×10 is full column rank, and we randomly choose H in each run. Simulation Setting For both the training and testing data, we set σ2s = 1.0 and σ2v = 0.3. We use linear models with cross-entropy loss for all methods. In training, we set r = 0.85 (A is positively correlated with Y ). In testing, we design two environments with r1 = 0.5 (A ⊥ Y ) and r2 = 0.0 (A is negatively correlated with Y ) to introduce distributional shifts. Apart from the natural setting without label noises, we also test the performances under label noises. Specifically, we add 4% label noises in the training data by flipping the label Y . We run the experiments 10 times, each time with one random matrix H . We report the mean accuracy in Table 2. Analysis From the results in Table 2, our GDRO outperforms all baselines under the sub-population shifts, and it is not affected much by the label noises, which validates the effectiveness of our GDRO. 4.2 Real-World Data We evaluate our method on four real-world datasets. Due to space limits, we place two of them here, with various kinds of distributions, including sub-populations shifts and class difficulty shifts, and the others can be found in Appendix. We use MLPs with cross-entropy loss in these experiments. Colored MNIST: Sub-population Shifts & Label Noises Following Arjovsky et al. [1], Colored MNIST is a binary classification task constructed on the MNIST dataset. 
Analysis From the results in Table 2, our GDRO outperforms all baselines under the sub-population shifts and is not much affected by the label noise, which validates its effectiveness.

4.2 Real-World Data

We evaluate our method on four real-world datasets covering various kinds of distributional shifts, including sub-population shifts and class difficulty shifts. Due to space limits, we present two of them here; the others can be found in the Appendix. We use MLPs with cross-entropy loss in these experiments.

Colored MNIST: Sub-population Shifts & Label Noises Following Arjovsky et al. [1], Colored MNIST is a binary classification task constructed on the MNIST dataset. First, a binary label Y is assigned to each image according to its digit: Y = 0 for digits 0∼4 and Y = 1 for digits 5∼9. Second, we induce noisy labels Ỹ by randomly flipping the label Y with probability 0.2. Then we sample the color id C, spuriously correlated with Ỹ, as

$$C = \begin{cases} +\tilde{Y} & \text{with probability } 1-r, \\ -\tilde{Y} & \text{with probability } r. \end{cases}$$

Intuitively, r controls the spurious correlation between Y and C: when r < 0.5, C is positively correlated with Y; when r > 0.5, the correlation becomes negative; and |r − 0.5| controls its strength. In training, we randomly sample 5000 data points and set r = 0.85 (strong negative spurious correlation between C and Y); in testing, we set r = 0 (strong positive spurious correlation), inducing strong shifts between training and testing. Results are shown in Table 3, and a minimal sketch of the coloring procedure is given below.
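A sketch of this construction, assuming torchvision for the raw MNIST images and interpreting −Ỹ as the flipped binary label; painting the digit into one of two color channels follows the recipe of Arjovsky et al. [1], and the exact preprocessing is an assumption.

```python
import numpy as np
from torchvision import datasets

def colorize(split, r, n, seed=0):
    """Binary label from the digit, 20% label flips, then a color id tied to the noisy label."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(split.data), size=n, replace=False)
    x = split.data.numpy()[idx]
    y = (split.targets.numpy()[idx] >= 5).astype(np.int64)   # Y = 0 for digits 0-4, else 1
    y = np.where(rng.random(n) < 0.2, 1 - y, y)              # noisy label: flip w.p. 0.2
    c = np.where(rng.random(n) < r, 1 - y, y)                # C agrees w.p. 1-r, flips w.p. r
    colored = np.zeros((n, 2, 28, 28), dtype=x.dtype)        # two channels: red / green
    colored[np.arange(n), c] = x                             # paint the digit into channel c
    return colored, y

train = datasets.MNIST("./data", train=True, download=True)
test = datasets.MNIST("./data", train=False, download=True)
x_tr, y_tr = colorize(train, r=0.85, n=5000)   # training: color mostly disagrees with the label
x_te, y_te = colorize(test, r=0.0, n=5000)     # testing: color always agrees, shift is reversed
```

With r = 0.85, a classifier that latches onto color fits the training data well but fails at test time, where r = 0 reverses the correlation.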
Ionosphere Radar Classification: Class Difficulty Shifts The Ionosphere Radar Dataset [8] consists of return signals from the ionosphere collected by a phased-array radar system in Goose Bay, Labrador. The electromagnetic signals were processed by an auto-correlation function to produce 34 continuous attributes. The task is to predict whether a return signal indicates specific physical structures in the ionosphere (good return) or not (bad return). However, the prediction difficulty of the two classes differs considerably, and ERM was found to achieve a much lower accuracy on bad returns than on good ones [28]. In this experiment, both the training and testing sets have balanced label distributions, but due to the disparity in class difficulty, the prediction accuracy of the two classes differs, whereas DRO methods are expected to achieve similar prediction accuracy for both classes. Therefore, in testing, we report the accuracy for the easy class and the hard class respectively, as well as the AUC score on the testing set. Results are shown in Table 3.

Analysis From the results on real-world data, we find that all DRO methods (WDRO and the f-DROs) show significant improvements over ERM, confirming the soundness of our experimental settings. Our proposed GDRO outperforms all baselines significantly when dealing with sub-population shifts and class difficulty shifts, which validates its effectiveness.

5 Related Work

Distributionally robust optimization (DRO) directly addresses the OOD generalization problem by optimizing the worst-case error over a pre-defined uncertainty set, which is often constrained by moment or support conditions [9, 3], shape constraints [22, 17, 16, 5], f-divergence [23, 11], or the Wasserstein distance [26, 29, 4, 12]. [9, 3] impose moment or support conditions on the distributions in the uncertainty set. As for shape constraints, a commonly used one is unimodality; [16] uses orthounimodality to construct the uncertainty set for DRO in multivariate extreme event analysis. As for f-divergence, [23] theoretically demonstrates its equivalence to a variance penalty, and [11] derives an optimization algorithm from its dual reformulation. Compared with f-divergences, which require the support of the distributions in the uncertainty set to be fixed, the uncertainty set built with the Wasserstein distance contains distributions with different supports and can provide robustness to unseen data. Despite the capacity of a Wasserstein uncertainty set, the optimization of Wasserstein DRO is quite hard: [26, 12, 4] convert the original DRO problem into a regularized ERM problem, but this reformulation is suitable only for a limited class of loss functions and transportation cost functions, while [29] proposes an approximate optimization method for Wasserstein DRO that can be applied to deep neural networks and protects models from adversarial attacks. Besides, DRO methods have also been applied to structured data: [18] studies the DRO problem for data generated by a time-homogeneous, ergodic finite-state Markov chain.

Although DRO methods can guarantee OOD generalization performance when the testing distribution is included in the uncertainty set, several works [15, 13] question their practical effectiveness. To guarantee OOD generalization in real scenarios, the uncertainty set has to be overwhelmingly large so as to contain all potential testing distributions; such an overwhelmingly large set forces the learned model to make decisions with fairly low confidence, a phenomenon also referred to as the over-pessimism problem. To mitigate this problem, [13] proposes to incorporate additional unlabeled data to further constrain the uncertainty set, and [20] learns the transportation cost function for WDRO with the help of multi-environment data.

6 Conclusion

Through this work, we take a first step toward incorporating data geometry to mitigate the over-flexibility problem in DRO. We use the k-nearest-neighbor graph to characterize the data manifold, while the proposed method is compatible with any manifold learning or graph learning method. We believe that a more accurately estimated data structure, obtained with advanced manifold learning and graph learning algorithms, will further boost the performance of GDRO, which we leave for future work.

Acknowledgements

This work was supported in part by National Key R&D Program of China (No. 2018AAA0102004, No. 2020AAA0106300), National Natural Science Foundation of China (No. U1936219, 62141607), and Beijing Academy of Artificial Intelligence (BAAI). Bo Li's research was supported by the National Natural Science Foundation of China (No. 72171131), the Tsinghua University Initiative Scientific Research Grant (No. 2019THZWJC11), and the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grants 2020AAA0108400 and 2020AAA0108403. We would like to thank Yuting Pan, Renzhe Xu, and Hao Zou for helpful comments.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes] All proofs are placed in the Appendix.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] All details are placed in the Appendix.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
[N/A] The error bar is quite small in our experiments and therefore we omit it.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [Yes]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus and contribution of the paper on DRO?
2. What are the strengths of the proposed approach, particularly in terms of its motivation, novelty, and empirical demonstration?
3. What are the weaknesses of the paper regarding its limitations and comparisons with other works?
4. Do you have any concerns about the method's applicability to unseen distributions or its training efficiency?
Summary Of The Paper
This paper proposed a novel Geometric Wasserstein DRO (GDRO) method by exploiting the discrete Geometric Wasserstein distance. A generically applicable approximate algorithm is derived for model optimization. Extensive experiments on both simulation and real-world datasets demonstrate its effectiveness.

Strengths And Weaknesses
Pros:
- The proposed method is well motivated and reasonable. This paper studies an important problem in DRO: the uncertainty set is so over-flexible that it may include implausible worst-case distributions. To address this issue, the authors propose to use the discrete Geometric Wasserstein distance to construct the uncertainty set, in order to constrain it within the data manifold.
- The method is somewhat novel and interesting. Both the convergence rate and a bounded error rate are provided, and the superiority of the proposed method is also demonstrated empirically through experiments on both simulation and real-world datasets.

Cons:
- Data from unseen distributions may fall outside the manifold constructed from the training data. In this case, simply constraining the uncertainty set may not help OOD generalization.
- Training efficiency. The authors use a graph to represent the manifold structure, which may be problematic for large-scale datasets since the graph needs to be estimated at every iteration.
- In the experiments, the authors only compare with ERM and DRO-based methods. It would be a bonus if some general methods for OOD generalization were included.

Questions
- Since the manifold is constructed from the training set, is it still applicable to unseen distributions? Data from unseen distributions may fall outside the data manifold.
- Does the graph need to be updated at every iteration? If so, it would be time-consuming to estimate the manifold for large-scale datasets.

Limitations
Yes.
1. What is the focus of the paper regarding distributionally robust optimization?
2. What are the strengths of the proposed approach, particularly in tackling an overlooked issue in the community?
3. What are the weaknesses of the paper, especially regarding its limitations in applying to deep neural networks?
4. Do you have any questions or concerns regarding the theoretical guarantees provided by the authors?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper considers data geometry in distributionally robust optimization (DRO) problems and proposes a novel framework called Geometric Wasserstein DRO (GDRO) to achieve this goal. The authors also provide theoretical analyses, such as the approximate optimization error and the convergence rate, to show the strengths of GDRO theoretically. Finally, the experimental results show the effectiveness of the proposed approach.

Strengths And Weaknesses
Strengths:
- The motivation and contributions are good. This work attacks an overlooked issue in the DRO community and proposes a reasonable method to alleviate it, which should draw more research attention to this direction.
- This paper presents reasonably solid theoretical guarantees, which support its effectiveness theoretically.

Weaknesses:
- This paper lacks a comprehensive discussion of related work. DRO has attracted tremendous research interest in the machine learning community, and there are many studies on DRO.
- The proposed GDRO may have some limitations, since it is not easy to apply to deep neural networks (DNNs) (perhaps I am wrong, but at least the authors do not mention it).

As a whole, despite some limitations, I believe the proposed method is a qualified piece of work and could bring some new insights to the DRO community. Therefore, I tend to accept this paper.

Questions
My major concerns are listed below:
1. The authors should provide more discussion related to their work, including but not limited to the following: a) Distributionally Robust Optimization with Markovian Data; b) Orthounimodal Distributionally Robust Optimization; c) Distributionally robust shape and topology optimization.
2. Can GDRO be applied to large DNNs?
3. In Thm. 3.1, ℓ(θ) is assumed to be convex. What about the non-convex case?

==================================
My concerns about this work have been well addressed. I am happy to vote for its acceptance.

Limitations
None
Title Distributionally Robust Optimization with Data Geometry Abstract Distributionally Robust Optimization (DRO) serves as a robust alternative to empirical risk minimization (ERM), which optimizes the worst-case distribution in an uncertainty set typically specified by distance metrics including f -divergence and the Wasserstein distance. The metrics defined in the ostensible high dimensional space lead to exceedingly large uncertainty sets, resulting in the underperformance of most existing DRO methods. It has been well documented that high dimensional data approximately resides on low dimensional manifolds. In this work, to further constrain the uncertainty set, we incorporate data geometric properties into the design of distance metrics, obtaining our novel Geometric Wasserstein DRO (GDRO). Empowered by Gradient Flow, we derive a generically applicable approximate algorithm for the optimization of GDRO, and provide the bounded error rate of the approximation as well as the convergence rate of our algorithm. We also theoretically characterize the edge cases where certain existing DRO methods are the degeneracy of GDRO. Extensive experiments justify the superiority of our GDRO to existing DRO methods in multiple settings with strong distributional shifts, and confirm that the uncertainty set of GDRO adapts to data geometry. 1 Introduction Machine learning algorithms with empirical risk minimization often suffer from poor generalization performance under distributional shifts in real applications due to the widespread latent heterogeneity, domain shifts, and data selection bias, etc. It is demanded for machine learning algorithms to achieve uniformly good performances against potential distributional shifts, especially in high-stake applications. Towards this goal, distributionally robust optimization (DRO) [27, 23, 29, 4, 13, 11], stemming from the literature of robust learning, has been proposed and developed in recent years. It optimizes the worst-case distribution within an uncertainty set P(Ptr) lying around the training distribution Ptr. When the testing distribution Pte is contained in P(Ptr), DRO could guarantee the generalization performance on Pte. In principle, the effectiveness of DRO heavily depends on the rationality of its uncertainty set P(Ptr) which is commonly formulated as a ball surrounding the training distribution endowed with a certain distance metric. An ideal uncertainty set should be constituted by all realistic distributions that may be encountered in test environments. However, existing DRO methods adopting the Wasserstein distance (i.e. WDRO methods [27, 29, 4, 13]) or f -divergence distance (i.e. f -DRO methods [23, 11]) tend to generate over-flexible uncertainty sets that incorporate unrealistic distributions far beyond the ideal ∗Equal Contributions †Corresponding Author 36th Conference on Neural Information Processing Systems (NeurIPS 2022). uncertainty set [15, 13]. As such unrealistic distributions must violate the underlying predicting mechanism, they are prone to be the worst-case and attract much optimization energy in the DRO framework, making the learned model deviate from the true predicting mechanism. Here we argue that the unrealistic distributions mentioned above originate from the distance metrics’ inherent ignorance of data geometry, as illustrated in Figure 1. 
The Euclidean-norm transportation cost measured by the L2-Wasserstein metric leads to a straight-line transportation path as shown in Figure 1(a) (red dotted line) which deviates from the data manifold (blue region). Therefore, WDRO methods tend to create unrealistic samples beyond the underlying data manifold, resulting in unrealistic distributions. f -divergence can also be interpreted as a data geometry-independent measure of the transportation cost confined in the support of Ptr. Taking χ2-divergence for example, the cost is a constant to transfer per unit of probability weights between samples, like a virtual tunnel (yellow dotted line in Figure 1(a)). In such a case, the noisy samples (e.g. outliers or samples with label noises) are more prone to be the worst-case and thus gather much larger weights than normal samples. The resultant distribution is obviously unrealistic. To mitigate the problem, it is imperative to introduce a new distance metric incorporating data geometry to further constrain the uncertainty set and avoid the undesired cases. As illustrated in Figure 1(a), considering the common assumption that data lie on a low-dimensional manifold [24, 30, 2], we expect the probability density transportation path (the blue dotted line) is restricted within the data manifold (the blue region). In this way, the uncertainty set (i.e. the Geometric Wasserstein Set as shown in Figure 1(b)) could inherently exclude the distributions beyond the data manifold. Furthermore, it is harder to gather probability weights on isolated noisy samples, which also mitigate the undesired cases in f -DRO. In this work, we propose a novel Geometric Wasserstein DRO (GDRO) method by exploiting the discrete Geometric Wasserstein distance [6] which measures the transportation cost of probability density along the geodesic in a metric space. As the Geometric Wasserstein distance does not enjoy an analytical expression, we derive an approximate algorithm from the Gradient Flow in the Finsler manifold endowed with Geometric Wasserstein Distance (in section 3.2). We further theoretically specify an exponentially vanishing error rate of our approximation as well as a O(1/ √ T ) convergence rate of our algorithm, and characterize the edge cases where GDRO will degenerate to f -DRO or Wasserstein DRO (in section 3.3 and 3.4). Comprehensive experiments encompassing various distributional shifts, including sub-population shifts and class difficulty shifts, validate the effectiveness of our proposed GDRO (in section 4). We also observe a lower Dirichlet Energy (i.e. higher smoothness) of GDRO’s estimated worst-case distribution w.r.t the data manifold compared with existing DRO methods, justifying its adaptability to data geometry. 2 Preliminaries on Distributionally Robust Optimization Notations. X ∈ X denotes the covariates, Y ∈ Y denotes the target, fθ(·) : X → Y is the predictor parameterized by θ ∈ Θ. Ptr(X,Y ) and Pte(X,Y ) abbreviated with Ptr and Pte respectively represent the joint training distribution and test distribution. The random variable of data points is denoted by Z = (X,Y ) ∈ Z . Distributionally Robust Optimization (DRO) is formulated as: θ∗ = argmin θ∈Θ sup P∈P(Ptr) EP [ℓ(fθ(X), Y )], (1) where ℓ is a loss function, P(Ptr) = {P : Dist(P, Ptr) ≤ ϵ} characterizes the uncertainty set surrounding the training distribution restricted by a radius ϵ, and Dist is a distance metric between probability distributions. 
Most works specify the Dist metric as an f-divergence [23, 11] or the Wasserstein distance [27, 29, 4, 26, 12]. f-divergence DRO (abbr. f-DRO): the f-divergence is defined as D_f(P‖Q) = ∫ f(dP/dQ) dQ, where f(·) is a convex function with f(1) = 0. Two typical instances are the KL-divergence (f(t) = t log t) and the χ²-divergence (f(t) = (t − 1)²). [23] theoretically demonstrates the equivalence between χ²-DRO and variance-regularized empirical risk minimization (ERM), and [11] derives an optimization algorithm for a family of f-DRO methods. However, as proven in [15], f-DRO faces the over-pessimism problem and ends up with a classifier that only fits the given training distribution, which we attribute to its ignorance of data geometry. As shown in Figure 1(a), an f-divergence only accounts for the probability mass at each sample (only dP and dQ occur). However, data geometry is crucial for a reasonable uncertainty set, since it is well accepted that data lie on a low-dimensional manifold and adjacent data points have similar degrees of importance. For example, with heterogeneous data one hopes to focus on certain sub-populations (e.g., put more weight on a group of data points); without geometric information, the distribution in the f-divergence ball is prone to focus only on isolated, noisier samples (as shown in Figure 3(a)). Indeed, in Figure 3(b) we find that the worst-case distribution of f-DRO (with the KL-divergence) is not smooth (it has a larger Dirichlet energy) w.r.t. the data manifold. Wasserstein DRO (abbr. WDRO): compared with the f-divergence ball, which does not extend the support of the training distribution, the uncertainty set built with the Wasserstein distance allows the support to be extended [27, 29, 4]. [26, 12, 4] convert the original problem into a regularized ERM problem, but the reformulation is suitable only for a limited class of loss functions and transportation cost functions. [29] proposes an approximate optimization method for Wasserstein DRO that can be applied to deep neural networks and protects models from adversarial attacks. However, the flexibility of the Wasserstein ball also causes over-pessimistic estimation under strong distributional shifts [13], where the created samples are too noisy to obtain a confident model. As demonstrated in Figure 3(a), WDRO adds considerable noise to the data and thus hurts generalization performance in practice. Therefore, to mitigate the over-pessimism problem of DRO, we propose to incorporate geometric properties into the uncertainty set. Compared with traditional shape-constrained methods [17, 16] for multivariate extreme event analysis, which use unimodality to constitute the uncertainty set, our method characterizes the data manifold in a data-driven way and incorporates it into the DRO framework intrinsically via the Geometric Wasserstein distance metric, and it is also compatible with manifold learning and graph learning methods. 3 Proposed Method In this work, we propose Distributionally Robust Optimization with the Geometric Wasserstein distance (GDRO). In the remainder of this section, we first introduce the Geometric Wasserstein distance and the overall objective of GDRO; we then derive an approximate optimization algorithm based on gradient flow; finally, we prove several theoretical properties and demonstrate connections with existing DRO methods.
3.1 Discrete Geometric Wasserstein Distance GW_G0 and GDRO We first introduce the discrete Geometric Wasserstein distance, which extends the Benamou-Brenier formulation of the optimal transport problem to a metric space. The first step is to define a discrete velocity field and its discrete divergence, for which we mainly follow the construction of Chow et al. [6]. Consider a given weighted finite graph G0 = (V, E, w) with n nodes, where V = {1, 2, ..., n} is the vertex set, E is the edge set, and w = (w_ij)_{i,j∈V} collects the edge weights. A velocity field v = (v_ij)_{i,j∈V} ∈ R^{n×n} on G0 is defined to be a skew-symmetric matrix on the edge set E, i.e. v_ij = −v_ji for (i, j) ∈ E. The probability set (simplex) P(G0) supported on V is defined as P(G0) = {(p_i)_{i=1}^n ∈ R^n | Σ_{i=1}^n p_i = 1, p_i ≥ 0 for any i ∈ V}, and its interior is denoted by P^o(G0). κ_ij is a predefined "cross-sectional area", typically interpolated from the densities p_i, p_j of the associated nodes. The direct approach is the arithmetic average κ_ij(p) = (p_i + p_j)/2; however, to ensure the positivity of p during optimization, we adopt the upwind interpolation κ_ij(p) = I(v_ij > 0) p_j + I(v_ij ≤ 0) p_i. One can thereafter define the product pv ∈ R^{n×n}, called the flux function on G0, by pv := (v_ij κ_ij(p))_{(i,j)∈E}. The divergence of pv is
div_G0(pv) := −( Σ_{j∈V:(i,j)∈E} √(w_ij) v_ij κ_ij(p) )_{i=1}^n,
which is a vector in R^n. The divergence vector lies in the tangent space of P^o(G0): it sums all in-fluxes and out-fluxes along the edges of a given node, with each edge transporting a probability density √(w_ij) v_ij κ_ij(p). Now we are ready to define the Geometric Wasserstein distance in Equation (2). Definition 3.1 (Discrete Geometric Wasserstein Distance GW_G0(·, ·) [6]). Given a finite graph G0, for any pair of distributions p⁰, p¹ ∈ P^o(G0), define the Geometric Wasserstein distance:
GW²_G0(p⁰, p¹) := inf_v { ∫₀¹ ½ Σ_{(i,j)∈E} κ_ij(p) v_ij² dt : dp/dt + div_G0(pv) = 0, p(0) = p⁰, p(1) = p¹ },  (2)
where v ∈ R^{n×n} denotes the velocity field on G0, p is a continuously differentiable curve p(t) : [0, 1] → P^o(G0), and κ_ij(p) is a predefined interpolation function between p_i and p_j. Intuitively, v is a velocity field that continuously transports mass to convert the density distribution from p⁰ to p¹ along a curve in the Wasserstein space [31]. Equation (2) measures the shortest (geodesic) length among all feasible plans, computed by integrating the total "kinetic energy" of the velocity field over the transportation process. Compared with the Benamou-Brenier formulation of the continuous L2-Wasserstein distance, it ensures that the transportation path stays within the manifold (the blue dotted line in Figure 1(a)), and it induces a smoother estimate of the worst-case probability distribution w.r.t. the data structure, since weights are exchanged only between neighbors. We now present the overall objective function of Distributionally Robust Optimization with the Geometric Wasserstein distance (GDRO). Given the training dataset D_tr = {(x_i, y_i)}_{i=1}^n and its empirical marginal distribution P̂_tr = (1/n) Σ_i δ(x_i), along with a manifold structure represented by a graph G0, we intend to obtain a distributionally robust predictor parameterized by θ* such that, for a certain ϵ > 0:
θ* = argmin_{θ∈Θ} sup_{P: GW²_G0(P̂_tr, P) ≤ ϵ} { R_n(θ, p) = Σ_{i=1}^n p_i ℓ(f_θ(x_i), y_i) − β Σ_{i=1}^n p_i log p_i }.  (3)
We add a minor entropy regularization with a small β, as proposed in the entropy-balancing literature [14], to avoid singular cases and to ensure the convergence of our optimization in Section 3.2. Owing to the Geometric Wasserstein distance, the uncertainty set of GDRO excludes distributions supported on points beyond the data manifold, and the Geometric Wasserstein ball is directional in Wasserstein space, since it stretches along the data structure, as depicted in Figure 1(b). How is G0 estimated? To characterize the data manifold, the G0 used in GDRO is constructed as a k-nearest-neighbor (kNN) graph from the training data only, as the kNN graph is known to approximate geodesic distances well within local structures on the manifold [21, 7]. Note that GDRO is compatible with any manifold learning or graph learning method.
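As a concrete illustration (not the authors' code), such a G0 could be assembled with scikit-learn as follows; the Gaussian edge weighting and the bandwidth heuristic are assumptions of this sketch, since the construction of the edge weights w is not prescribed at this point in the paper.

import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_g0(X, k=10):
    # Symmetric kNN graph over the training points, returned as a sparse weight matrix.
    D = kneighbors_graph(X, n_neighbors=k, mode="distance", include_self=False)
    D = D.maximum(D.T)                         # keep an edge if either direction has it
    W = D.copy()
    sigma = D.data.mean()                      # bandwidth heuristic (illustrative)
    W.data = np.exp(-((D.data / sigma) ** 2))  # similarity weights w_ij on the kNN edges
    return W

W = build_g0(np.random.randn(200, 10), k=10)   # W.toarray() gives a dense matrix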
3.2 Optimization In this subsection, we derive the optimization algorithm for GDRO. Due to the lack of an analytical form of the Geometric Wasserstein distance, we give up providing a prescribed amount ϵ of robustness in Equation (3) and propose an alternating optimization algorithm as an approximation. For fixed probability weights p, the parameter θ can be optimized via gradient descent on R_n(θ, p) w.r.t. θ in the parameter space Θ. The inner supremum problem can be approximately solved via gradient ascent on R_n(θ, p) w.r.t. p in the Geometric Wasserstein space (P^o(G0), GW_G0), and the cost measured by GW²_G0(P̂_tr, ·) can be approximated by the length of the gradient flow, which is a curve in (P^o(G0), GW_G0). Here we clarify some notation: p : [0, T] → P^o(G0) denotes the continuous gradient flow, and the probability weight of the i-th sample at time t is abbreviated as p_i(t). The time-discretized gradient flow with time step τ is denoted p̂_τ : [0, T] → P^o(G0), and p̂_τ(t) is abbreviated as p̂^t_τ. For the optimization, we adopt the time-discretized definition of the gradient flow [31] for −R_n(θ, p) in the Geometric Wasserstein space (P^o(G0), GW_G0), with time step τ:
p̂_τ(t + τ) = argmax_{p∈P^o(G0)} R_n(θ, p) − (1/2τ) GW²_G0(p̂_τ(t), p).  (4)
When τ → 0, the time-discretized gradient flow p̂_τ becomes the continuous one p. Note that Equation (4) describes the gradient flow as a steepest-ascent curve that locally maximizes the objective within an infinitesimal Geometric Wasserstein ball, and it coincides with the Lagrangian penalty problem of Equation (3). In Theorem 3.1 we prove that Equation (4) finds the exact solution of a local GDRO at each time step. Following Chow et al. [6], the analytical solution to Equation (4) as τ → 0 can be derived as:
dp_i/dt = Σ_{j:(i,j)∈E} w_ij κ_ij(p) (ℓ_i − ℓ_j) + β Σ_{j:(i,j)∈E} w_ij κ_ij(p) (log p_j − log p_i),  (5)
where p_i denotes the time-dependent probability weight of the i-th sample, ℓ_i denotes the loss of the i-th sample, and we take the upwind interpolation κ_ij(p) = I(v_ij > 0) p_j + I(v_ij ≤ 0) p_i, so that the probability density transferred along an edge equals the density at the origin node of the flow. The upwind interpolation guarantees that the probability weights p stay positive along the gradient flow in Equation (5). We then discretize Equation (5) with the forward Euler method:
p_i(t + α) = p_i(t) + α · dp_i(t)/dt,  (6)
where α is a learning rate. For our algorithm, we cap the time at t ≤ T in Equation (6) to approximately restrict the radius of the Geometric Wasserstein ball. We prove in Theorem 3.2 that at the final time step t = T, the probability weights p(T) learned via Equation (6) achieve a global error rate of e^{−CT} relative to the worst-case risk R_n(θ, p*) constrained in an ϵ(θ)-radius ball, where ϵ(θ) = GW²_G0(P̂_tr, p(T)) and p* = arg sup_p {R_n(θ, p) : GW²_G0(P̂_tr, p) ≤ ϵ(θ)}. This is similar to conventions in WDRO [29], which also gives up a prescribed radius for its uncertainty set and turns to an approximation with an intermediate hyperparameter. Pseudo-code of the whole algorithm is shown in Algorithm 1; the full derivations are in the Appendix.
Algorithm 1: Geometric Wasserstein Distributionally Robust Optimization (GDRO)
Input: training dataset D_tr = {(x_i, y_i)}_{i=1}^n, learning rate α_θ, gradient flow iterations T, entropy term β, manifold representation G0 (learned by the kNN algorithm from D_tr).
Initialization: sample weights initialized as (1/n, ..., 1/n)^T; predictor parameters initialized as θ^(0).
for i = 0 to Epochs do
1. Simulate the gradient flow for T time steps according to Equations (5)-(6) to learn approximate worst-case probability weights p^T.
2. θ^(i+1) ← θ^(i) − α_θ ∇_θ ( Σ_i p^T_i ℓ_i(θ) )
end for
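The following NumPy sketch simulates Equations (5)-(6) on a dense weight matrix, mirroring step 1 of Algorithm 1. The function name, the dense representation of W, the hyperparameter defaults, and the numerical guards are illustrative assumptions; the authors' implementation instead uses message propagation with DGL, as noted in Section 4.

import numpy as np

def gradient_flow_weights(W, losses, T=100, alpha=0.01, beta=0.1):
    # W:      (n, n) dense symmetric nonnegative weight matrix; w_ij = 0 off the kNN graph.
    # losses: (n,) per-sample losses for the current predictor theta.
    # Returns the approximate worst-case weights p(T) of step 1 in Algorithm 1.
    n = len(losses)
    p = np.full(n, 1.0 / n)
    for _ in range(T):
        log_p = np.log(p)
        # Edge "velocity": loss-difference term plus the entropy term of Equation (5).
        v = (losses[:, None] - losses[None, :]) + beta * (log_p[None, :] - log_p[:, None])
        # Upwind interpolation: kappa_ij = p_j if v_ij > 0 else p_i.
        kappa = np.where(v > 0, p[None, :], p[:, None])
        dp = (W * kappa * v).sum(axis=1)   # dp_i/dt of Equation (5)
        p = p + alpha * dp                 # forward Euler step, Equation (6)
        p = np.maximum(p, 1e-12)           # upwind keeps p > 0 for small alpha; this is a float guard
        p = p / p.sum()                    # guard against numerical drift (mass is conserved exactly)
    return p

Because v is skew-symmetric while W and kappa are symmetric, the total mass Σ_i dp_i/dt is zero by construction, so the renormalization only compensates floating-point drift.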
3.3 Theoretical Properties In this section, we prove the equivalence between our gradient-flow-based algorithm and a local GDRO problem, and derive a bound on its global error rate as well as its convergence rate. We first provide the robustness guarantee for the Lagrangian penalty problem in Equation (4).
Theorem 3.1 (Local Robustness Guarantee of the Lagrangian Penalty Problem). For any τ > 0, t > 0 and given θ, denote the solution of Equation (4) as p*(θ) = arg sup_{p∈P(G0)} R_n(θ, p) − (1/2τ) GW²_G0(p̂^t_τ(θ), p). Let ϵ_τ(θ) = GW²_G0(p̂^t_τ(θ), p*(θ)); then
sup_{p∈P^o(G0)} R_n(θ, p) − (1/2τ) GW²_G0(p̂^t_τ(θ), p) = sup_{p: GW²_G0(p̂^t_τ(θ), p) ≤ ϵ_τ(θ)} R_n(θ, p).  (7)
Theorem 3.1 proves that at each time step our Lagrangian penalty problem is equivalent to a local GDRO within the ϵ_τ(θ)-radius Geometric Wasserstein ball. It further shows that, with τ → 0 in Equation (4), our gradient flow constantly follows the steepest descent direction for −R_n. We then analyze the global error rate introduced by our approximate algorithm.
Theorem 3.2 (Global Error Rate Bound). Given the model parameter θ, denote the approximate worst-case distribution obtained by the updates in Equation (6) after time t as p_t(θ), and let ϵ(θ) = GW²_G0(P̂_tr, p_t(θ)) denote the distance between our approximation p_t and the training distribution P̂_tr. Denote the real worst-case distribution within the ϵ(θ)-radius discrete Geometric Wasserstein ball as p*(θ), that is,
p*(θ) = arg sup_{p: GW²_G0(P̂_tr, p) ≤ ϵ(θ)} Σ_{i=1}^n p_i ℓ_i − β Σ_{i=1}^n p_i log p_i.  (8)
We bound the error ratio of the objective function R_n(θ, p) (abbr. R(p)): for θ ∈ Θ, there exists C > 0 such that
Error Rate = ( R(p*) − R(p_t) ) / ( R(p*) − R(P̂_tr) ) < e^{−Ct},  (9)
and as t → ∞, the error rate tends to 0. The value of C depends on ℓ, β, and n. Theorem 3.2 characterizes "how far" our approximation p_t is from the real worst case p* in terms of the drop ratio of the objective function R(p). Finally, we derive the convergence rate of Algorithm 1.
Theorem 3.3 (Convergence of Algorithm 1). Denote the objective function for the predictor as
F(θ) = sup_{GW²_G0(P̂_tr, p) ≤ ϵ(θ)} R_n(θ, p),  (10)
which is assumed to be L-smooth, while R_n(θ, p) satisfies L_p-smoothness in p, i.e. ‖∇_p R_n(θ, p) − ∇_p R_n(θ, p′)‖₂ ≤ L_p ‖p − p′‖₂; ϵ(θ) follows the definition in Theorem 3.2. Take a constant Δ_F ≥ F(θ^(0)) − inf_θ F(θ) and set the step size to α = √(Δ_F/(LK)). For t ≥ T₀, where T₀ is a constant, denote the upper bound of ‖p_t − p*‖₂² by γ; training the model for K steps, we have
(1/K) E[ Σ_{k=1}^K ‖∇_θ F(θ^(k))‖₂² ] − ( (1 + 2√(LΔ_F/K)) / (1 − 2√(LΔ_F/K)) ) L_p² γ ≤ 2Δ_F / ( √(Δ_F K) − 2LΔ_F ).  (11)
Here we make a common smoothness assumption on the objective function, as in [29]. As K → ∞, ∇_θ F(θ^(k)) achieves a square-root convergence rate, provided γ is controlled by the exponentially vanishing error rate in Theorem 3.2; otherwise the accuracy parameter γ remains as a fixed effect on the optimization accuracy.
3.4 Connections with Conventional DRO Methods In Theorem 3.4, we illustrate the connection of our GDRO with f-DRO.
Theorem 3.4 (Connection with f-DRO with the KL-divergence (KL-DRO)). Relax the discrete Geometric Wasserstein-ball regularization (set ϵ → ∞) and set the graph G0 to a fully-connected graph; then the solution of GDRO is equivalent to the following form of KL-DRO:
min_{θ∈Θ} sup_{p: D_KL(p ‖ P̂_tr) ≤ ϵ̂(θ)} Σ_{i=1}^n p_i ℓ(f_θ(x_i), y_i), with ϵ̂(θ) = D_KL(p*(θ) ‖ P̂_tr),  (12)
where p*(θ) = argmax_p Σ_{i=1}^n p_i ℓ(f_θ(x_i), y_i) − β Σ_{i=1}^n p_i log p_i.
Remark (Connection with WDRO). Since conventional WDRO allows distributions to extend beyond the training support, our proposed GDRO is intrinsically different from WDRO. Intuitively, in the infinite-sample limit, if the graph G0 is set to a fully-connected graph with edge weights w_ij = ‖z_i − z_j‖₂ and β is set to 0, our GDRO resembles a support-restricted version of WDRO.
4 Experiments In this section, we investigate the empirical performance of our proposed GDRO on simulated and real-world datasets under various kinds of distributional shifts, including sub-population shifts and class difficulty shifts. As baselines, we compare with empirical risk minimization (ERM), WDRO [4, 29], and two typical f-DRO methods [11], namely KL-DRO (f(t) = t ln t) and χ²-DRO (f(t) = (t − 1)²). Implementation Details: For all experiments, G0 is constructed as a k-nearest-neighbor graph from the training data only, at the initialization step. Specifically, we adopt NN-Descent [10] to efficiently estimate the k-nearest-neighbor graph for the large-scale Colored MNIST dataset, while performing an exact search for k-nearest neighbors in the other experiments. (Figure 2: visualization of the learned kNN graphs for different k on the regression data, projected onto the plane spanned by the unit vector of the V axis and θ_S via the 2×10 projection matrix with rows [0_{1,5} 0_{1,4} 1] and [θ_S^T 0_{1,4} 0].) We adopt the MSE as the empirical loss function for regression tasks and the cross-entropy for classification tasks. We use MLPs for the Colored MNIST and Ionosphere datasets, and linear models in the other experiments. Besides, we find that two-stage optimization suffices for good performance, as mentioned in [19], and we use it in our experiments. Note that GDRO is compatible with any parameterized model, including deep models. The simulation of the gradient flow in Equation (6) is implemented by message propagation with the DGL package [32], which scales linearly with the sample size and enjoys GPU parallelization. 4.1 Simulation Data In this subsection, we use simulations to verify that GDRO can deal with sub-population shifts and, to some extent, resist label noise.
We also visualize the effect of the kNN algorithm and the sensitivity of GDRO to the parameter k. 1. Regression: Sub-population Shifts via a Selection Bias Mechanism. Data Generation: The input features X = [S, U, V]^T ∈ R^10 comprise stable features S ∈ R^5, noisy features U ∈ R^4, and a spurious feature V ∈ R:
S ∼ N(0, 2I₅) ∈ R⁵, U ∼ N(0, 2I₄) ∈ R⁴, Y = θ_S^T S + 0.1 · S₁S₂S₃ + N(0, 0.5),  (13)
V ∼ Laplace( sign(r) · Y, 1/(5 ln|r|) ) ∈ R,  (14)
where θ_S ∈ R⁵ is the coefficient of the true model and |r| > 1 is a factor for each sub-population. S are stable features with an invariant relationship to Y; U are noisy features with U ⊥ Y; and V is a spurious feature whose relationship with Y is unstable and controlled by the factor r. Intuitively, sign(r) determines whether the spurious correlation between V and Y is positive or negative, and |r| controls its strength: the larger |r| is, the stronger the spurious correlation.
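For concreteness, a NumPy sketch of this generating process follows; the function name and the reading of N(0, 2I) as covariance 2I (standard deviation √2) are assumptions of the sketch rather than details stated by the paper.

import numpy as np

def generate_selection_bias_data(n, r, theta_s, rng):
    # Samples (X, Y) following Equations (13)-(14); r tunes the spurious V-Y link.
    S = rng.normal(0.0, np.sqrt(2.0), size=(n, 5))    # stable features
    U = rng.normal(0.0, np.sqrt(2.0), size=(n, 4))    # noisy features, independent of Y
    Y = S @ theta_s + 0.1 * S[:, 0] * S[:, 1] * S[:, 2] + rng.normal(0.0, np.sqrt(0.5), size=n)
    V = rng.laplace(np.sign(r) * Y, 1.0 / (5.0 * np.log(abs(r))))   # spurious feature
    return np.concatenate([S, U, V[:, None]], axis=1), Y

rng = np.random.default_rng(0)
theta_s = rng.normal(size=5)
X_major, y_major = generate_selection_bias_data(9500, r=1.9, theta_s=theta_s, rng=rng)
X_minor, y_minor = generate_selection_bias_data(500, r=-1.3, theta_s=theta_s, rng=rng)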
Simulation Setting 1: In training, we generate 10000 points, where the major group contains 95% of the data with r = 1.9 (a strong positive spurious correlation) and the minor group contains 5% with r = −1.3 (a weak negative spurious correlation). As shown in Figure 2, the training data is the union of two sub-spaces. In testing, we vary r ∈ {−1.5, −1.7, −1.9, −2.3, −2.7, −3.0} to simulate stronger negative spurious correlations between V and Y; notably, the testing data lie on the same manifold as the training data. We use a linear model and report the root-mean-square error (RMSE) and the parameter estimation error Est_Error = ‖θ̂ − θ*‖₂ of each method (θ* = [θ_S, 0, ..., 0]^T). The results are shown under Simulation 1 in Table 1. Simulation Setting 2: To test whether GDRO can resist label noise, we randomly sample 20 points and corrupt their labels via Ỹ = Y + Std(Y), where Std(Y) denotes the standard deviation of the marginal distribution of Y. The results are shown under Simulation 2 in Table 1, and we visualize the learned worst-case distributions of the three methods in Figures 3(a) and 3(b). Analysis: (1) From the results of Simulations 1 and 2 in Table 1, GDRO outperforms all baselines, achieving low prediction error on the minor group under different strengths of spurious correlation. (2) From Simulation 2 in Table 1, compared with KL-DRO and χ²-DRO, GDRO is only slightly affected by the label noise. Also, from Figure 3(a), KL-DRO puts much heavier weights on the noisy points (the red points of f-DRO are much larger), whereas GDRO focuses more on the minor group (blue points), which explains their different performances in Simulation 2. To investigate this phenomenon further, we quantify smoothness via the Dirichlet energy: in Figure 3(b) we plot the Dirichlet energy against the relative entropy KL(P̂ ‖ P̂_tr) between the learned distribution P̂ and the training distribution P̂_tr, which shows that the weights learned by GDRO are much smoother w.r.t. the data manifold. This property helps GDRO resist label noise, since GDRO does not allow extremely high weights on isolated points. (3) The third sub-figure in Figure 3(a) confirms our analysis of WDRO: it introduces much more label noise (red points). Discussion on kNN: To test whether GDRO is sensitive to the parameter k of the kNN graph G0, we vary k ∈ {5, 20, 100} and test GDRO under Simulation Setting 1. We also visualize the kNN graphs in Figure 2, which shows that kNN consistently manages to fit the data manifold well until k = 100. The empirical results under Simulation 3 in Table 1 show that for k < 100, GDRO performs stably better than the baselines for small and moderate k, except that smaller k leads to slower convergence, since sparse graphs restrain the flow of probability weights. We also present an extreme failure case where kNN poorly approximates the data manifold: when k increases to a number as large as 100, the kNN neighborhoods diffuse and the two manifolds begin to merge on the graph, in which case GDRO cannot distinguish the two sub-populations and its performance degrades, as shown in Table 1. In fact, Theorem 3.4 proves that with an infinitely large k, GDRO reduces to KL-DRO, which ignores data geometry entirely. Still, we emphasize that kNN and GDRO perform stably well over a large range of k. 2. Classification: Sub-population Shifts with High-dimensional Manifold Data. Data Generation: In this setting, the data are high-dimensional but have a low-dimensional structure. The data generation is similar to [25] and is a typical classification setting in OOD generalization. We introduce a spurious correlation between the label Y ∈ {+1, −1} and a spurious attribute A ∈ {+1, −1}. We first generate low-dimensional data X_low = [S, V]^T ∈ R^10 as:
S ∼ N(Y·1, σ_s² I₅), V ∼ N(A·1, σ_v² I₅), where A = Y with probability r and A = −Y with probability 1 − r.  (15)
Intuitively, r ∈ [0, 1] tunes the proportions of the sub-populations and controls the spurious correlation between A and Y: when r > 0.5, the spurious attribute A is positively correlated with Y; when r < 0.5, the correlation becomes negative; and a larger |r − 0.5| results in a stronger spurious correlation between A and Y. To embed the low-dimensional data in a high-dimensional space, X_low is multiplied by a column-full-rank matrix H:
X_high = H X_low ∈ R³⁰⁰,  (16)
where H ∈ R^{300×10} has full column rank and is chosen randomly in each run. Simulation Setting: For both the training and testing data, we set σ_s² = 1.0 and σ_v² = 0.3. We use linear models with the cross-entropy loss for all methods. In training, we set r = 0.85 (A is positively correlated with Y). In testing, we design two environments with r₁ = 0.5 (A ⊥ Y) and r₂ = 0.0 (A is negatively correlated with Y) to introduce distributional shifts. Apart from the natural setting without label noise, we also test performance under label noise: specifically, we add 4% label noise to the training data by flipping the label Y. We run each experiment 10 times, each time with a random matrix H, and report the mean accuracy in Table 2. Analysis: From the results in Table 2, GDRO outperforms all baselines under the sub-population shifts and is not much affected by the label noise, which validates its effectiveness. 4.2 Real-World Data We evaluate our method on four real-world datasets covering various kinds of distributional shifts, including sub-population shifts and class difficulty shifts; due to space limits, we present two of them here and the others in the Appendix. We use MLPs with the cross-entropy loss in these experiments. Colored MNIST: Sub-population Shifts & Label Noises. Following Arjovsky et al. [1], Colored MNIST is a binary classification task constructed on the MNIST dataset.
First, a binary label Y is assigned to each image according to its digit: Y = 0 for digits 0-4 and Y = 1 for digits 5-9. Second, we induce noisy labels Ỹ by randomly flipping the label Y with probability 0.2. Then we sample a color id C spuriously correlated with Ỹ: C = +Ỹ with probability 1 − r, and C = −Ỹ with probability r. Intuitively, r controls the spurious correlation between Y and C: when r < 0.5, C is positively correlated with Y; when r > 0.5, the correlation becomes negative; and |r − 0.5| controls the strength of the spurious correlation. In training, we randomly sample 5000 data points and set r = 0.85 (a strong negative spurious correlation between C and Y); in testing, we set r = 0 (a strong positive spurious correlation), inducing strong shifts between training and testing. Results are shown in Table 3. Ionosphere Radar Classification: Class Difficulty Shifts. The Ionosphere Radar dataset [8] consists of return signals from the ionosphere collected by a phased-array radar system in Goose Bay, Labrador. The electromagnetic signals were processed by an autocorrelation function to produce 34 continuous attributes. The task is to predict whether the return signal indicates specific physical structures in the ionosphere (a good return) or not (a bad return). However, the prediction difficulty of the two classes differs considerably, and ERM was found to achieve much lower accuracy on bad returns than on good ones [28]. In this experiment, both the training and testing sets consist of samples with balanced label distributions, but due to the disparity in class difficulty, the prediction accuracies of the two classes differ substantially, whereas DRO methods are expected to achieve similar prediction accuracy on both classes. Therefore, in testing, we report the accuracy on the easy class and the hard class separately, as well as the AUC score on the test set. Results are shown in Table 3. Analysis: On the real-world data, all DRO methods (WDRO and the f-DROs) improve significantly over ERM, reflecting the reasonableness of our experimental settings. Our proposed GDRO outperforms all baselines significantly when dealing with sub-population shifts and class difficulty shifts, which validates its effectiveness. 5 Related Work Distributionally robust optimization (DRO) directly addresses the OOD generalization problem by optimizing the worst-case error over a pre-defined uncertainty set, which is often constrained by moment or support conditions [9, 3], shape constraints [22, 17, 16, 5], f-divergences [23, 11], or the Wasserstein distance [26, 29, 4, 12]. [9, 3] impose moment or support conditions on the distributions in the uncertainty set. Among shape constraints, a commonly used one is unimodality; [16] uses ortho-unimodality to construct the uncertainty set of a DRO for multivariate extreme event analysis. As for f-divergences, [23] theoretically demonstrates the equivalence to a variance penalty, and [11] derives an optimization algorithm from the dual reformulation. Compared with f-divergences, which fix the support of the distributions in the uncertainty set, the uncertainty set built with the Wasserstein distance contains distributions with different supports and can provide robustness to unseen data. Despite the capacity of a Wasserstein uncertainty set, the optimization of Wasserstein DRO is quite hard.
[26, 12, 4] convert the original DRO problem into a regularized ERM problem, but the reformulation is suitable only for a limited class of loss functions and transportation cost functions. [29] proposes an approximate optimization method for Wasserstein DRO that can be applied to deep neural networks and protects models from adversarial attacks. DRO methods have also been used for structured data: [18] studies the DRO problem for data generated by a time-homogeneous, ergodic finite-state Markov chain. Although DRO methods can guarantee OOD generalization performance when the testing distribution is included in the uncertainty set, several works [15, 13] question their real effect in practice. To guarantee OOD generalization in real scenarios, the uncertainty set has to be overwhelmingly large to contain the potential testing distributions; such an overwhelmingly large set forces the learned model to make decisions with fairly low confidence, which is also referred to as the over-pessimism problem. To mitigate this problem, [13] proposes to incorporate additional unlabeled data to further constrain the uncertainty set, and [20] learns the transportation cost function for WDRO with the help of data from multiple environments. 6 Conclusion Through this work, we take a first step toward incorporating data geometry to mitigate the over-flexibility problem in DRO. We use the k-nearest-neighbor graph to characterize the data manifold, while our proposed method is compatible with any manifold learning or graph learning method. We believe that a more accurate estimate of the data structure, obtained with advanced manifold learning and graph learning algorithms, will further boost the performance of GDRO, which we leave for future work. Acknowledgements This work was supported in part by the National Key R&D Program of China (No. 2018AAA0102004, No. 2020AAA0106300), the National Natural Science Foundation of China (No. U1936219, 62141607), and the Beijing Academy of Artificial Intelligence (BAAI). Bo Li's research was supported by the National Natural Science Foundation of China (No. 72171131), the Tsinghua University Initiative Scientific Research Grant (No. 2019THZWJC11), and the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grants 2020AAA0108400 and 2020AAA0108403. We would like to thank Yuting Pan, Renzhe Xu, and Hao Zou for helpful comments.
Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes] All proofs are placed in the Appendix.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] All details are placed in the Appendix.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
[N/A] The error bar is quite small in our experiments and therefore we omit it.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [Yes]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. How does NeMF use a neural network to represent a semantic correspondence or a matching cost?
2. What are the strengths and weaknesses of the proposed approach regarding its contribution, novelty, and reproducibility?
3. What are the strengths and weaknesses of the paper compared to prior works on pre-training with optical flow datasets?
4. What questions are raised regarding the paper's experiment section, motivation, and conclusions?
5. How can numerical results demonstrate how well NN-Descent estimates low-dimensional manifolds?
6. How can the effect of k be studied when using kNN?
7. Which experiments use a directly provided G_0, and which use a G_0 estimated by NN-Descent?
8. What does the performance of GDRO indicate regarding the minor ratio in Figure 3, and what does it suggest about G_0 and the code?
9. How can the limitations of the paper be better addressed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This work aims to solve the problem that DRO is too “pessimistic” (the uncertainty set is too large) and often leads to poor results in practice. The motivation is that “high dimensional data approximately reside on low dimensional manifolds” (lines 6-7), so this work tries to constrain the uncertainty set on this low dimensional manifold. To do this, this work (i) uses the NN-Descent method to estimate the low dimensional manifold; (ii) formulates the GDRO objective and optimizes it with alternating optimization and proves that it converges. The authors then conduct a series of experiments on synthetic and real datasets and claim that the proposed method is better than existing DRO methods.
Strengths And Weaknesses
I really like the high-level idea of this paper. “DRO is too pessimistic” is a well-known open problem in this field, and this work tries to solve this problem by constraining the uncertainty set on a low dimensional manifold, which makes lots of sense. I also think that a lot of credit should be given to the authors for the theory part: The GDRO objective is nicely formulated, and optimizing this objective with alternating optimization makes sense. The convergence result is also very nice.
The major weaknesses of this work, however, come from the experiment section. Recall that the core motivation of this work is that “high dimensional data approximately reside on low dimensional manifolds” (lines 6-7), whereas there is a huge gap between the motivation and the experiments, which makes the experiments very confusing and unconvincing. Take the first experiment in Section 4.1 as an example. First of all, the experimental setting is very confusing. I suppose that the task is to infer Y from (S, V). I also couldn’t find the definition of alpha_V, so I suppose that it is used to define V. Thus, in this task the input data is 2-dimensional, and (S, V) seems to also reside on a 2-dimensional manifold, which is the union of a number of 1-dimensional curves (if alpha_V is in [a, b] for some a and b). I don’t think this could be called “high dimensional data approximately residing on low dimensional manifolds”.
Then the authors claim that GDRO is much better than other DRO on this task. I don’t know how the graph G_0 is estimated. I suppose that the authors just simply provide G_0 to the algorithm because NN-Descent is only used “for large-scaled datasets” (lines 109-110). So it seems to me that the reason why GDRO is so good is that G_0 leaks some additional information about the target distribution to it but not to other methods, not because it leverages the fact that the data resides on a low dimensional manifold. Of course, it is nice that GDRO could utilize this additional leaked information from G_0. The question is: How to get this G_0 in practice? The authors propose to estimate G_0 with NN-Descent, but they don’t demonstrate how well NN-Descent can estimate G_0 on realistic tasks. If G_0 is not well estimated, and the target distribution is outside the estimated manifold, then I imagine that GDRO could completely fail.
Moreover, most of the tasks in the experiments are not really high-dimensional (<50 dimensions), and all tasks seem to follow some simple, unrealistic structures, which make it easier for GDRO to achieve high performances. It is questionable whether these good performances are transferable to real-world applications with realistic distribution shifts.
A valid experimental setting I would suggest the authors try is the following: The input data comes from a low dimensional manifold in a high dimensional space (at least 200 dimensions), but the structure of this manifold is unknown (for instance, introduce randomness into the manifold structure), so GDRO must first estimate G_0 by itself. This setting is closer to the authors’ motivation that “high dimensional data approximately reside on low dimensional manifolds”. Otherwise, it is always questionable whether the performance gain of GDRO comes from the information leakage from the provided G_0, rather than its ability to estimate and utilize the low-dimensional manifold.
In summary, I really like the high-level idea and the theory part of this paper, but the experiment section does require a lot of improvement. Currently, there is a huge gap between the authors’ motivation and the experiments, making the main conclusion of this paper highly debatable. For this reason, I recommend rejecting this paper for this time, and hope that the authors could resubmit after rewriting the experiment section.
****** Post Rebuttal ******
The authors have revised the paper as suggested, so I would like to raise my rating to accept.
Questions
I suggest the authors provide some numerical results to demonstrate how well NN-Descent can estimate the low-dimensional manifold. This is very important, because if the manifold is not well estimated and the target distribution is outside the estimated manifold, then GDRO could completely fail. Moreover, since the authors are using kNN, studying the effect of k is also important. If the method works for some k but not for others, then the authors need to elaborate on how to select a proper k with the training samples alone.
Could the authors clarify for which of the experiments is the graph G_0 directly provided to GDRO, and for which of them is the G_0 estimated by NN-Descent?
In Figure 3, on the Retiring Adults dataset, GDRO seems to maintain the same high performance regardless of the minor ratio. This is not a good signal. When does the performance of GDRO start to drop? For instance, does GDRO still have a very high performance when the minor ratio is 0.01? What about when the ratio is 0? If GDRO always has such a high performance, then I believe that either G_0 provides too much information to GDRO (for instance, the target distribution could be directly obtained from G_0), or there is a bug in the code.
Limitations
The limitations are not sufficiently addressed.
NIPS
Title Robust Imitation of Diverse Behaviors Abstract Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and that interpolating smoothly between embeddings yields correspondingly smooth interpolations of the reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment. 1 Introduction Building versatile embodied agents, both in the form of real robots and animated avatars, capable of a wide and diverse set of behaviors is one of the long-standing challenges of AI. State-of-the-art robots cannot compete with the effortless variety and adaptive flexibility of motor behaviors produced by toddlers. Towards addressing this challenge, in this work we combine several deep generative approaches to imitation learning in a way that accentuates their individual strengths and addresses their limitations. The end product of this is a robust neural network policy that can imitate a large and diverse set of behaviors using few training demonstrations. We first introduce a variational autoencoder (VAE) [15, 26] for supervised imitation, consisting of a bi-directional LSTM [13, 32, 9] encoder mapping demonstration sequences to embedding vectors, and two decoders. The first decoder is a multi-layer perceptron (MLP) policy mapping a trajectory embedding and the current state to a continuous action vector. The second is a dynamics model mapping the embedding and previous state to the present state, while modelling correlations among states with a WaveNet [39]. Experiments with a 9 DoF Jaco robot arm and a 9 DoF 2D biped walker, implemented in the MuJoCo physics engine [38], show that the VAE learns a structured semantic embedding space, which allows for smooth policy interpolation. While supervised policies that condition on demonstrations (such as our VAE or the recent approach of Duan et al. [6]) are powerful models for one-shot imitation, they require large training datasets in order to work for non-trivial tasks. They also tend to be brittle and fail when the agent diverges too much from the demonstration trajectories. These limitations of supervised learning for imitation, also known as behavioral cloning (BC) [24], are well known [28, 29].
Recently, Ho and Ermon [12] showed a way to overcome the brittleness of supervised imitation using another type of deep generative model called Generative Adversarial Networks (GANs) [8]. Their technique, called Generative Adversarial Imitation Learning (GAIL), uses reinforcement learning, allowing the agent to interact with the environment during training. GAIL allows one to learn more robust policies with fewer demonstrations, but adversarial training introduces another difficulty called mode collapse [7]. This refers to the tendency of adversarial generative models to cover only a subset of the modes of a probability distribution, resulting in a failure to produce adequately diverse samples. This will cause the learned policy to capture only a subset of control behaviors (which can be viewed as modes of a distribution), rather than allocating capacity to cover all modes. Roughly speaking, VAEs can model diverse behaviors without dropping modes, but do not learn robust policies, while GANs give us robust policies but insufficiently diverse behaviors. In section 3, we show how to engineer an objective function that takes advantage of both GANs and VAEs to obtain robust policies capturing diverse behaviors. In section 4, we show that our combined approach enables us to learn diverse behaviors for a 9 DoF 2D biped and a 62 DoF humanoid, where the VAE policy alone is brittle and GAIL alone does not capture all of the diverse behaviors. 2 Background and Related Work We begin our brief review with generative models. One canonical way of training generative models is to maximize the likelihood of the data: max_θ Σ_i log p_θ(x_i). This is equivalent to minimizing the Kullback-Leibler divergence between the distribution of the data and the model: D_KL(p_data(·) ‖ p_θ(·)). For highly-expressive generative models, however, optimizing the log-likelihood is often intractable. One class of highly-expressive yet tractable models are the auto-regressive models, which decompose the log-likelihood as log p(x) = Σ_i log p_θ(x_i | x_{<i}). Auto-regressive models have been highly effective in both image and audio generation [40, 39]. Instead of optimizing the log-likelihood directly, one can introduce a parametric inference model over the latent variables, q_φ(z|x), and optimize a lower bound of the log-likelihood:
E_{q_φ(z|x_i)}[log p_θ(x_i|z)] − D_KL(q_φ(z|x_i) ‖ p(z)) ≤ log p(x_i).  (1)
For continuous latent variables, this bound can be optimized efficiently via the re-parameterization trick [15, 26]. This class of models is often referred to as VAEs. GANs, introduced by Goodfellow et al. [8], have become very popular. GANs use two networks: a generator G and a discriminator D. The generator attempts to generate samples that are indistinguishable from real data. The job of the discriminator is then to tell apart the data and the samples, predicting 1 with high probability if the sample is real and 0 otherwise. More precisely, GANs optimize the following objective function:
min_G max_D E_{p_data(x)}[log D(x)] + E_{p(z)}[log(1 − D(G(z)))].  (2)
Auto-regressive models, VAEs and GANs are all highly effective generative models, but have different trade-offs. GANs were noted for their ability to produce sharp image samples, unlike the blurrier samples from contemporary VAE models [8]. However, unlike VAEs and autoregressive models trained via maximum likelihood, they suffer from the mode collapse problem [7].
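For readers less familiar with adversarial training, here is a minimal PyTorch sketch of the two losses behind Equation (2); the networks D and G are placeholders, and the non-saturating generator loss is a standard practical variant rather than the exact objective stated above.

import torch
import torch.nn.functional as F

def gan_losses(D, G, x_real, z):
    # D maps samples to unnormalized logits; G maps noise z to samples.
    x_fake = G(z)
    real_logits = D(x_real)
    fake_logits = D(x_fake.detach())          # block generator gradients here
    # Discriminator: push D(x_real) toward 1 and D(G(z)) toward 0, as in Equation (2).
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) \
           + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    # Generator: the non-saturating variant maximizes log D(G(z)) instead of
    # minimizing log(1 - D(G(z))); same fixed points, stronger early gradients.
    gen_logits = D(x_fake)
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss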
Recent work has focused on alleviating mode collapse in image modeling [2, 4, 19, 25, 42, 11, 27], but so far these have not been demonstrated in the control domain. Like GANs, autoregressive models produce sharp and at times realistic image samples [40], but they tend to be slow to sample from and, unlike VAEs, do not immediately provide a latent vector representation of the data. This is why we used VAEs to learn representations of demonstration trajectories. We turn our attention to imitation. Imitation is the problem of learning a control policy that mimics a behavior provided via a demonstration. It is natural to view imitation learning from the perspective of generative modeling. However, unlike in image and audio modeling, in imitation the generation process is constrained by the environment and the agent's actions, with observations becoming accessible through interaction. Imitation learning brings its own unique challenges. In this paper, we assume that we have been provided with demonstrations {τ_i}_i, where the i-th trajectory of state-action pairs is τ_i = {x^i_1, a^i_1, ..., x^i_{T_i}, a^i_{T_i}}. These trajectories may have been produced by either an artificial or natural agent. As in generative modeling, we can easily apply maximum likelihood to imitation learning. For instance, if the dynamics are tractable, we can maximize the likelihood of the states directly: max_θ Σ_i Σ_{t=1}^{T_i} log p(x^i_{t+1} | x^i_t, π_θ(x^i_t)). If a model of the dynamics is unavailable, we can instead maximize the likelihood of the actions: max_θ Σ_i Σ_{t=1}^{T_i} log π_θ(a^i_t | x^i_t). The latter approach is what we referred to as behavioral cloning (BC) in the introduction. When demonstrations are plentiful, BC is effective [24, 30, 6]. Without abundant data, BC is known to be inadequate [28, 29, 12]. The inefficiencies of BC stem from the sequential nature of the problem. When using BC, even the slightest errors in mimicking the demonstration behavior can quickly accumulate as the policy is unrolled. A good policy should correct for mistakes made previously, but for BC to achieve this, the corrective behaviors have to appear frequently in the training data. GAIL [12] avoids some of the pitfalls of BC by allowing the agent to interact with the environment and learn from these interactions. It constructs a reward function using GANs to measure the similarity between the policy-generated trajectories and the expert trajectories. As in GANs, GAIL adopts the following objective function:
min_θ max_ψ E_{π_E}[log D_ψ(x, a)] + E_{π_θ}[log(1 − D_ψ(x, a))],  (3)
where π_E denotes the expert policy that generated the demonstration trajectories. To avoid differentiating through the system dynamics, policy gradient algorithms are used to train the policy by maximizing the discounted sum of rewards r_ψ(x_t, a_t) = −log(1 − D_ψ(x_t, a_t)). Maximizing this reward, which may differ from the expert reward, drives π_θ to expert-like regions of the state-action space. In practice, trust region policy optimization (TRPO) is used to stabilize the learning process [31]. GAIL has become a popular choice for imitation learning [16] and there already exist model-based [3] and third-person [36] extensions. Two recent GAIL-based approaches [17, 10] introduce additional reward signals that encourage the policy to make use of latent variables which would correspond to different types of demonstrations after training. These approaches are complementary to ours. Neither paper, however, demonstrates the ability to do one-shot imitation.
The literature on imitation, including BC, apprenticeship learning and inverse reinforcement learning, is vast. We cannot cover this literature at the level of detail it deserves, and instead refer readers to recent authoritative surveys on the topic [5, 1, 14]. Inspired by recent works, including [12, 36, 6], we focus on taking advantage of the dramatic recent advances in deep generative modelling to learn high-dimensional policies capable of learning a diverse set of behaviors from few demonstrations. In graphics, a significant effort has been devoted to the design of physics controllers that take advantage of motion capture data, or key-frames and other inputs provided by animators [33, 35, 43, 22]. Yet, as pointed out in a recent hierarchical control paper [23], the design of such controllers often requires significant human insight. Our focus is on flexible, general imitation methods. 3 A Generative Modeling Approach to Imitating Diverse Behaviors 3.1 Behavioral cloning with variational autoencoders suited for control In this section, we follow a similar approach to Duan et al. [6], but opt for stochastic VAEs, whose inference distribution q_φ(z|x_{1:T}) better regularizes the latent space. In our VAE, an encoder maps a demonstration sequence to an embedding vector z. Given z, we decode both the state and action trajectories, as shown in Figure 1. To train the model, we minimize the following loss:
L(α, w, φ; τ_i) = −E_{q_φ(z|x^i_{1:T_i})} [ Σ_{t=1}^{T_i} log π_α(a^i_t | x^i_t, z) + log p_w(x^i_{t+1} | x^i_t, z) ] + D_KL( q_φ(z|x^i_{1:T_i}) ‖ p(z) ).
Our encoder q_φ uses a bi-directional LSTM. To produce the final embedding, it calculates the average of all the outputs of the second layer of this LSTM before applying a final linear transformation to generate the mean and standard deviation of a Gaussian. We take one sample from this Gaussian as our demonstration encoding. The action decoder is an MLP that maps the concatenation of the state and the embedding to the parameters of a Gaussian policy. The state decoder is similar to a conditional WaveNet model [39]. In particular, it conditions on the embedding z and the previous state x_{t−1} to generate the vector x_t autoregressively; that is, the autoregression is over the components of the vector x_t. The WaveNet lessens the load of the encoder, which no longer has to carry information that can be captured by modeling auto-correlations between components of the state vector. Finally, instead of a Softmax, we use a mixture of Gaussians as the output of the WaveNet.
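A simplified PyTorch sketch of this loss for a single trajectory follows; it substitutes a plain Gaussian head for the conditional WaveNet state decoder and a mixture-of-Gaussians output, so the module interfaces and the diagonal-Gaussian simplification are illustrative assumptions, not the paper's architecture.

import torch

def vae_bc_loss(encoder, action_dec, state_dec, states, actions):
    # states: (T+1, ds) trajectory; actions: (T, da). The modules stand in for the
    # paper's bi-directional LSTM encoder, MLP action decoder, and WaveNet state decoder.
    mu_z, logvar_z = encoder(states)
    z = mu_z + torch.randn_like(mu_z) * (0.5 * logvar_z).exp()   # reparameterization trick

    def gauss_nll(mu, logvar, target):
        # Negative diagonal-Gaussian log-likelihood, up to an additive constant.
        return 0.5 * (logvar + (target - mu) ** 2 / logvar.exp()).sum()

    mu_a, logvar_a = action_dec(states[:-1], z)       # Gaussian policy head
    mu_x, logvar_x = state_dec(states[:-1], z)        # stand-in for the conditional WaveNet
    nll = gauss_nll(mu_a, logvar_a, actions) + gauss_nll(mu_x, logvar_x, states[1:])
    kl = -0.5 * (1 + logvar_z - mu_z.pow(2) - logvar_z.exp()).sum()
    return nll + kl                                    # one-sample estimate of the loss above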
3.2 Diverse generative adversarial imitation learning As pointed out earlier, it is hard for BC policies to mimic experts under environmental perturbations. Our solution for obtaining more robust policies from few demonstrations, which are also capable of diverse behaviors, is to build on GAIL. Specifically, to enable GAIL to produce diverse solutions, we condition the discriminator on the embeddings generated by the VAE encoder and integrate out the GAIL objective with respect to the variational posterior q_φ(z|x_{1:T}). We train the discriminator by optimizing the following objective:
max_ψ E_{τ_i∼π_E} { E_{q(z|x^i_{1:T_i})} [ (1/T_i) Σ_{t=1}^{T_i} log D_ψ(x^i_t, a^i_t | z) + E_{π_θ}[log(1 − D_ψ(x, a | z))] ] }.  (4)
A related work [20] introduces a conditional GAIL objective to learn controllers for multiple behaviors from state trajectories, but its discriminator conditions on an annotated class label, as in conditional GANs [21]. We condition on unlabeled trajectories, which have been passed through a powerful encoder, and hence our approach is capable of one-shot imitation learning. Moreover, the VAE encoder enables us to obtain a continuous latent embedding space where interpolation is possible, as shown in Figure 3. Since our discriminator is conditional, the reward function is also conditional: r_t(x_t, a_t | z) = −log(1 − D_ψ(x_t, a_t | z)). We also clip the reward so that it is upper-bounded. Conditioning on z allows us to generate an infinite number of reward functions, each of them tailored to imitating a different trajectory. Policy gradients, though mode-seeking, will not cause collapse into one particular mode due to the diversity of reward functions. To better motivate our objective, let us temporarily leave the context of imitation learning and consider the following alternative value function for training GANs:
min_G max_D V(G, D) = ∫_y p(y) ∫_z q(z|y) [ log D(y|z) + ∫_ŷ G(ŷ|z) log(1 − D(ŷ|z)) dŷ ] dz dy.
This function is a simplification of our objective function. Furthermore, it satisfies the following property. Lemma 1. Assuming that q computes the true posterior distribution, that is, q(z|y) = p(y|z)p(z)/p(y), then
V(G, D) = ∫_z p(z) [ ∫_y p(y|z) log D(y|z) dy + ∫_ŷ G(ŷ|z) log(1 − D(ŷ|z)) dŷ ] dz.
If we further assume an optimal discriminator [8], the cost optimized by the generator becomes
C(G) = 2 ∫_z p(z) JSD[ p(·|z) ‖ G(·|z) ] dz − log 4,  (5)
where JSD stands for the Jensen-Shannon divergence. We know that GANs approximately optimize this divergence, and it is well documented that optimizing it leads to mode-seeking behavior [37]. The objective defined in (5) alleviates this problem. Consider an example where p(x) is a mixture of Gaussians and p(z) describes the distribution over the mixture components. In this case, the conditional distribution p(x|z) is not multi-modal, and therefore minimizing the Jensen-Shannon divergence is no longer problematic. In general, if the latent variable z removes most of the ambiguity, we can expect the conditional distributions to be close to uni-modal and therefore our generators to be non-degenerate. In light of this analysis, we would like q to be as close to the posterior as possible, and hence our choice of training q with VAEs. We now turn our attention to some algorithmic considerations. We can use the VAE policy π_α(a_t|x_t, z) to accelerate the training of π_θ(a_t|x_t, z). One possible route is to initialize the weights θ to α. However, before the policy behaves reasonably, the noise injected into the policy for exploration (when using stochastic policy gradients) can cause poor initial performance. Instead, we fix α and structure the conditional policy as follows:
π_θ(· | x, z) = N( · | μ_θ(x, z) + μ_α(x, z), σ_θ(x, z) ),
where μ_α is the mean of the VAE policy. Finally, the policy parameterized by θ is optimized with TRPO [31] while holding the parameters α fixed, as shown in Algorithm 1.
Algorithm 1: Diverse generative adversarial imitation learning.
INPUT: demonstration trajectories {τ_i}_i and VAE encoder q.
repeat
for j ∈ {1, ..., n} do
Sample trajectory τ_j from the demonstration set and sample z_j ∼ q(·|x^j_{1:T_j}).
Run policy π_θ(·|z_j) to obtain the trajectory τ̂_j.
end for
Update policy parameters via TRPO with rewards r^j_t(x^j_t, a^j_t | z_j) = −log(1 − D_ψ(x^j_t, a^j_t | z_j)).
Update discriminator parameters from ψ_i to ψ_{i+1} with the gradient
∇_ψ { (1/n) Σ_{j=1}^n [ (1/T_j) Σ_{t=1}^{T_j} log D_ψ(x^j_t, a^j_t | z_j) + (1/T̂_j) Σ_{t=1}^{T̂_j} log(1 − D_ψ(x̂^j_t, â^j_t | z_j)) ] }.
until max iterations or time reached.
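A small PyTorch sketch of the reward computation used inside Algorithm 1 follows; the discriminator interface and the clipping constant are illustrative assumptions (the text only states that rewards are clipped to be upper-bounded), and the TRPO update itself is omitted.

import torch

def conditional_gail_rewards(D, states, actions, z, clip_max=10.0):
    # D returns probabilities in (0, 1); z is the fixed VAE embedding of the
    # demonstration being imitated, broadcast across the T steps of the rollout.
    with torch.no_grad():
        d = D(states, actions, z.expand(states.shape[0], -1))
        rewards = -torch.log1p(-d).squeeze(-1)   # r_t = -log(1 - D(x_t, a_t | z))
    return rewards.clamp(max=clip_max)           # keep the reward upper-bounded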
4 Experiments The primary focus of our experimental evaluation is to demonstrate that the architecture allows learning of robust controllers capable of producing the full spectrum of demonstration behaviors for a diverse range of challenging control problems. We consider three bodies: a 9 DoF robotic arm, a 9 DoF planar walker, and a 62 DoF complex humanoid (56 actuated joint angles, and a freely translating and rotating 3d root joint). While for the reaching task BC is sufficient to obtain a working controller, for the other two problems our full learning procedure is critical. We analyze the resulting embedding spaces and demonstrate that they exhibit rich and sensible structure that can be exploited for control. Finally, we show that the encoder can be used to capture the gist of novel demonstration trajectories, which can then be reproduced by the controller. All experiments are conducted with the MuJoCo physics engine [38]. For details of the simulation and the experimental setup, please see the appendix. 4.1 Robotic arm reaching We first demonstrate the effectiveness of our VAE architecture and investigate the nature of the learned embedding space on a reaching task with a simulated Jaco arm. The physical Jaco is a robotic arm developed by Kinova Robotics. To obtain demonstrations, we trained 60 independent policies to reach random target locations in the workspace (see appendix for details), starting from the same initial configuration. We generated 30 trajectories from each of the first 50 policies. These serve as training data for the VAE model (1500 training trajectories in total). The remaining 10 policies were used to generate test data. The reaching task is relatively simple, so with this amount of data the VAE policy is fairly robust. After training, the VAE encodes and reproduces the demonstrations as shown in Figure 2. Representative examples can be found in the video in the supplemental material. To further investigate the nature of the embedding space, we encode two trajectories. Next, we construct the embeddings of interpolating policies by taking convex combinations of the embedding vectors of the two trajectories. We condition the VAE policy on these interpolating embeddings and execute it. The results of this experiment are illustrated with a representative pair in Figure 3. We observe that interpolating in the latent space indeed corresponds to interpolation in task (trajectory endpoint) space, highlighting the semantic meaningfulness of the discovered latent space. 4.2 2D Walker We found reaching behavior to be relatively easy to imitate, presumably because it does not involve much physical contact. As a more challenging test, we consider bipedal locomotion. We train 60 neural network policies for a 2d walker to serve as demonstrations (see section A.2 in the appendix for details). These policies are each trained to move at different speeds, both forward and backward, depending on a label provided as additional input to the policy. Target speeds for training were chosen from a set of four different speeds (m/s): −1, 0, 1, 3 (for the distribution of speeds that the trained policies actually achieve, see Figure 4, top right). Besides the target speed, the reward function imposes few constraints on the behavior. The resulting policies thus form a diverse set with several rather idiosyncratic movement styles. While for most purposes this diversity is undesirable, for the present experiment we consider it a feature.
2See appendix for details. 3See Section A.2 in the appendix for details.

We trained our model with 20 episodes per policy (1200 demonstration trajectories in total, each with a length of 400 steps or 10s of simulated time). In this experiment our full approach is required: training the VAE with BC alone can imitate some of the trajectories, but it performs poorly in general, presumably because our relatively small training set does not cover the space of trajectories sufficiently densely. On this generated dataset, we also train policies with GAIL using the same architecture and hyper-parameters. Due to the lack of conditioning, GAIL does not coherently reproduce individual trajectories. Instead, it simply meshes different behaviors together. In addition, the policies trained with GAIL also exhibit dramatically less diversity; see the video.

A general problem of adversarial training is that there is no easy way to quantitatively assess the quality of learned models. Here, since we aim to imitate particular demonstration trajectories that were trained to achieve particular target speed(s), we can use the difference between the speed of the demonstration trajectory and that of the trajectory produced by the decoder as a surrogate measure of the quality of the imitation (cf. also [12]). The general quality of the learned model and the improvement achieved by the adversarial stage of our training procedure are quantified in Fig. 4. We draw 660 trajectories (11 trajectories each for all 60 policies) from the training set, compute the corresponding embedding vectors using the encoder, and use both the VAE policy as well as the improved policy from the adversarial stage to imitate each of the trajectories. We determine the absolute value of the difference between the average speed of the demonstration and the imitation trajectories (measured in m/s). As shown in Fig. 4, the adversarial training greatly improves the reliability of the controller as well as the ability of the model to accurately match the speed of the demonstration. We also include additional quantitative analysis of our approach using this speed metric in Appendix B. A video of our agent imitating a diverse set of behaviors can be found in the supplemental material.

To assess generalization to novel trajectories, we encode and subsequently imitate trajectories not contained in the training set. The supplemental video contains several representative examples, demonstrating that the style of movement is successfully imitated for previously unseen trajectories.

Finally, we analyze the structure of the embedding space. We embed training trajectories and perform dimensionality reduction with t-SNE [41]. The result is shown in Fig. 4. It reveals a clear clustering according to movement speeds, thus recovering the nature of the task context for the demonstration trajectories. We further find that trajectories that are nearby in embedding space tend to correspond to similar movement styles even when differing in speed.

4.3 Complex humanoid

We consider a humanoid body of high dimensionality that poses a hard control problem. The construction of this body and associated control policies is described in [20], and is briefly summarized in the appendix (Section A.3) for completeness. We generate training trajectories with the existing controllers, which can produce instances of one of six different movement styles (see Section A.3). Examples of such trajectories are shown in Fig. 5 and in the supplemental video.
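The speed-difference metric above is straightforward to compute. Below is a minimal sketch, assuming trajectories are given as sequences of root x-positions sampled at a fixed control timestep (an assumed input format; the default dt follows from 400 steps covering 10s of simulated time in the walker experiment).

```python
def speed_error(demo_x, imit_x, dt=0.025):
    # Absolute difference (m/s) between the average speeds of a
    # demonstration and its imitation; average speed is taken as the net
    # displacement divided by the elapsed time.
    v_demo = (demo_x[-1] - demo_x[0]) / ((len(demo_x) - 1) * dt)
    v_imit = (imit_x[-1] - imit_x[0]) / ((len(imit_x) - 1) * dt)
    return abs(v_demo - v_imit)
```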
The training set consists of 250 random trajectories from 6 different neural network controllers that were trained to match 6 different movement styles from the CMU motion capture database4. Each trajectory is 334 steps or 10s long. We use a second set of 5 controllers from which we generate trajectories for evaluation (3 of these policies were trained on the same movement styles as the policies used for generating training data).

Surprisingly, despite the complexity of the body, supervised learning is quite effective at producing sensible controllers: the VAE policy is reasonably good at imitating the demonstration trajectories, although it lacks the robustness to be practically useful. Adversarial training dramatically improves the stability of the controller. We analyze the improvement quantitatively by computing the percentage of episodes in which the humanoid falls down before the end of the episode while imitating either training or test policies. The results are summarized in Figure 5 (right). The figure further shows sequences of frames of representative demonstration and associated imitation trajectories. Videos of demonstration and imitation behaviors can be found in the supplemental video.

For practical purposes it is desirable to allow the controller to transition from one behavior to another. We test this possibility in an experiment similar to the one for the Jaco arm: we determine the embedding vectors of pairs of demonstration trajectories, start the trajectory by conditioning on the first embedding vector, and then transition from one behavior to the other half-way through the episode by linearly interpolating the embeddings of the two demonstration trajectories over a window of 20 control steps. Although not always successful, the learned controller often transitions robustly, despite not having been trained to do so. Representative examples of these transitions can be found in the supplemental video.

5 Conclusions

We have proposed an approach for imitation learning that combines the favorable properties of techniques for density modeling with latent variables (VAEs) with those of GAIL. The result is a model that learns, from a moderate number of demonstration trajectories, (1) a semantically well-structured embedding of behaviors, (2) a corresponding multi-task controller that allows diverse behaviors from this embedding space to be executed robustly, as well as (3) an encoder that can map new trajectories into the embedding space and hence allows for one-shot imitation. Our experimental results demonstrate that our approach can work on a variety of control problems, and that it scales even to very challenging ones, such as the control of a simulated humanoid with a large number of degrees of freedom.

4See appendix for details.
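The behavior-transition mechanism described above amounts to a time-scheduled linear blend of two embedding vectors. Here is a minimal sketch; the window width of 20 steps comes from the text, while the function name, argument conventions, and list-based embedding representation are our assumptions.

```python
def blended_embedding(z_a, z_b, t, start, width=20):
    # Linearly interpolate from embedding z_a to z_b over `width` control
    # steps beginning at step `start` (e.g. half-way through the episode);
    # before the window the policy is conditioned purely on z_a, after it
    # purely on z_b.
    w = min(max((t - start) / width, 0.0), 1.0)
    return [(1.0 - w) * a + w * b for a, b in zip(z_a, z_b)]
```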
1. What is the main contribution of the paper in imitation learning? 2. What are the strengths of the proposed approach, particularly in its architecture and performance? 3. What are the weaknesses of the paper regarding comparisons with other baselines and sample efficiency? 4. How does the reviewer assess the clarity and presentation of the paper's content? 5. Are there any relevant works that the author should consider mentioning in their discussion?
Review
The paper proposes a deep-learning-based approach to imitation learning which is sample-efficient and is able to imitate many diverse behaviors. The architecture can be seen as conditional generative adversarial imitation learning (GAIL). The conditioning vector is an embedding of a demonstrated trajectory, provided by a variational autoencoder. This results in one-shot imitation learning: at test time, a new demonstration can be embedded and provided as a conditioning vector to the imitation policy. The authors evaluate the method on several simulated motor control tasks.

Detailed comments.

Pros:
1) The architecture seems clean and logical, and seems to do the job well.
2) In particular, the VAE for trajectory embedding is new compared to recent related approaches.
3) The proposed approach is able to learn complex and diverse behaviors and outperforms both the VAE alone (quantitatively) and GAIL alone (qualitatively).
4) Interpolation between different policies/styles is impressive.

Cons:
1) Comparisons to baselines could be more detailed. Currently the comparison to GAIL is purely qualitative, as far as I can see, and only performed on one task. It intuitively seems that GAIL would not perform well, but perhaps it is worth showing this clearly in the paper.
2) A discussion of sample efficiency compared to GAIL and the VAE would be interesting. What if one trains GAIL per style or target speed? Would that work as well as the proposed method? Multi-modality shouldn't be an issue then. Will the VAE work given much more data?
3) The presentation is not always clear; in particular, I had a hard time figuring out the notation in Section 3.
4) There has been some work on hybrids of VAEs and GANs, which seems worth mentioning when generative models are discussed, such as: Autoencoding beyond pixels using a learned similarity metric, Larsen et al., ICML 2016; Generating Images with Perceptual Similarity Metrics based on Deep Networks, Dosovitskiy & Brox, NIPS 2016. These works share the intuition that the good coverage of VAEs can be combined with the sharp results generated by GANs.
5) Some more extensive analysis of the approach would be interesting. How sensitive is it to hyperparameters? How important is it to use a VAE, rather than a standard AE or supervised learning? How difficult will it be for others to apply it to new tasks?
6) A related submission mentioned in lines 124-126 seems quite similar. I have no way to judge to what extent, but it seems worth double-checking. If the work is from the same authors, it could be problematic.

Overall, the work is interesting and proposes an elegant and well-performing approach. I think it should be accepted.
NIPS
Title Robust Imitation of Diverse Behaviors

Abstract
Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated, yielding a correspondingly smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.

1 Introduction
Building versatile embodied agents, both in the form of real robots and animated avatars, capable of a wide and diverse set of behaviors is one of the long-standing challenges of AI. State-of-the-art robots cannot compete with the effortless variety and adaptive flexibility of motor behaviors produced by toddlers. Towards addressing this challenge, in this work we combine several deep generative approaches to imitation learning in a way that accentuates their individual strengths and addresses their limitations. The end product of this is a robust neural network policy that can imitate a large and diverse set of behaviors using few training demonstrations.

We first introduce a variational autoencoder (VAE) [15, 26] for supervised imitation, consisting of a bi-directional LSTM [13, 32, 9] encoder mapping demonstration sequences to embedding vectors, and two decoders. The first decoder is a multi-layer perceptron (MLP) policy mapping a trajectory embedding and the current state to a continuous action vector. The second is a dynamics model mapping the embedding and previous state to the present state, while modelling correlations among states with a WaveNet [39]. Experiments with a 9 DoF Jaco robot arm and a 9 DoF 2D biped walker, implemented in the MuJoCo physics engine [38], show that the VAE learns a structured semantic embedding space, which allows for smooth policy interpolation.

While supervised policies that condition on demonstrations (such as our VAE or the recent approach of Duan et al. [6]) are powerful models for one-shot imitation, they require large training datasets in order to work for non-trivial tasks. They also tend to be brittle and fail when the agent diverges too much from the demonstration trajectories. These limitations of supervised learning for imitation, also known as behavioral cloning (BC) [24], are well known [28, 29].

*Joint first authors.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Recently, Ho and Ermon [12] showed a way to overcome the brittleness of supervised imitation using another type of deep generative model called Generative Adversarial Networks (GANs) [8]. Their technique, called Generative Adversarial Imitation Learning (GAIL), uses reinforcement learning, allowing the agent to interact with the environment during training. GAIL allows one to learn more robust policies with fewer demonstrations, but adversarial training introduces another difficulty called mode collapse [7]. This refers to the tendency of adversarial generative models to cover only a subset of modes of a probability distribution, resulting in a failure to produce adequately diverse samples. This will cause the learned policy to capture only a subset of control behaviors (which can be viewed as modes of a distribution), rather than allocating capacity to cover all modes.

Roughly speaking, VAEs can model diverse behaviors without dropping modes, but do not learn robust policies, while GANs give us robust policies but insufficiently diverse behaviors. In section 3, we show how to engineer an objective function that takes advantage of both GANs and VAEs to obtain robust policies capturing diverse behaviors. In section 4, we show that our combined approach enables us to learn diverse behaviors for a 9 DoF 2D biped and a 62 DoF humanoid, where the VAE policy alone is brittle and GAIL alone does not capture all of the diverse behaviors.

2 Background and Related Work

We begin our brief review with generative models. One canonical way of training generative models is to maximize the likelihood of the data: $\max_\theta \sum_i \log p_\theta(x_i)$. This is equivalent to minimizing the Kullback-Leibler divergence between the distribution of the data and the model: $D_{KL}(p_{data}(\cdot)\,\|\,p_\theta(\cdot))$. For highly-expressive generative models, however, optimizing the log-likelihood is often intractable. One class of highly-expressive yet tractable models are the auto-regressive models, which decompose the log-likelihood as $\log p(x) = \sum_i \log p_\theta(x_i|x_{<i})$. Auto-regressive models have been highly effective in both image and audio generation [40, 39]. Instead of optimizing the log-likelihood directly, one can introduce a parametric inference model over the latent variables, $q_\phi(z|x)$, and optimize a lower bound of the log-likelihood:
$$\mathbb{E}_{q_\phi(z|x_i)}\left[\log p_\theta(x_i|z)\right] - D_{KL}\left(q_\phi(z|x_i)\,\|\,p(z)\right) \le \log p(x_i). \qquad (1)$$
For continuous latent variables, this bound can be optimized efficiently via the re-parameterization trick [15, 26]. This class of models is often referred to as VAEs.

GANs, introduced by Goodfellow et al. [8], have become very popular. GANs use two networks: a generator G and a discriminator D. The generator attempts to generate samples that are indistinguishable from real data. The job of the discriminator is then to tell apart the data and the samples, predicting 1 with high probability if the sample is real and 0 otherwise. More precisely, GANs optimize the following objective function:
$$\min_G \max_D \; \mathbb{E}_{p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{p(z)}\left[\log\big(1 - D(G(z))\big)\right]. \qquad (2)$$
Auto-regressive models, VAEs and GANs are all highly effective generative models, but have different trade-offs. GANs were noted for their ability to produce sharp image samples, unlike the blurrier samples from contemporary VAE models [8]. However, unlike VAEs and autoregressive models trained via maximum likelihood, they suffer from the mode collapse problem [7].
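For readers who want the bound in (1) in executable form, here is a minimal numpy sketch of the two ingredients that make it tractable for continuous latents: the reparameterized sample z = mu + sigma * eps and the closed-form KL between a diagonal Gaussian posterior and a standard normal prior. Function names are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_std):
    # z = mu + sigma * eps with eps ~ N(0, I): a differentiable sample
    # from q(z|x), as used when optimizing the bound in (1).
    return mu + np.exp(log_std) * rng.standard_normal(mu.shape)

def kl_to_standard_normal(mu, log_std):
    # Closed-form D_KL( N(mu, diag(sigma^2)) || N(0, I) ) for a
    # diagonal Gaussian posterior.
    var = np.exp(2.0 * log_std)
    return 0.5 * np.sum(var + mu**2 - 1.0 - 2.0 * log_std)
```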
Recent work has focused on alleviating mode collapse in image modeling [2, 4, 19, 25, 42, 11, 27], but so far these techniques have not been demonstrated in the control domain. Like GANs, autoregressive models produce sharp and at times realistic image samples [40], but they tend to be slow to sample from and, unlike VAEs, do not immediately provide a latent vector representation of the data. This is why we used VAEs to learn representations of demonstration trajectories.

We turn our attention to imitation. Imitation is the problem of learning a control policy that mimics a behavior provided via a demonstration. It is natural to view imitation learning from the perspective of generative modeling. However, unlike in image and audio modeling, in imitation the generation process is constrained by the environment and the agent's actions, with observations becoming accessible through interaction. Imitation learning brings its own unique challenges. In this paper, we assume that we have been provided with demonstrations $\{\tau_i\}_i$, where the $i$-th trajectory of state-action pairs is $\tau_i = \{x^i_1, a^i_1, \dots, x^i_{T_i}, a^i_{T_i}\}$. These trajectories may have been produced by either an artificial or natural agent.

As in generative modeling, we can easily apply maximum likelihood to imitation learning. For instance, if the dynamics are tractable, we can maximize the likelihood of the states directly: $\max_\theta \sum_i \sum_{t=1}^{T_i} \log p(x^i_{t+1}|x^i_t, \pi_\theta(x^i_t))$. If a model of the dynamics is unavailable, we can instead maximize the likelihood of the actions: $\max_\theta \sum_i \sum_{t=1}^{T_i} \log \pi_\theta(a^i_t|x^i_t)$. The latter approach is what we referred to as behavioral cloning (BC) in the introduction. When demonstrations are plentiful, BC is effective [24, 30, 6]. Without abundant data, BC is known to be inadequate [28, 29, 12]. The inefficiencies of BC stem from the sequential nature of the problem. When using BC, even the slightest errors in mimicking the demonstration behavior can quickly accumulate as the policy is unrolled. A good policy should correct for mistakes made previously, but for BC to achieve this, the corrective behaviors have to appear frequently in the training data.

GAIL [12] avoids some of the pitfalls of BC by allowing the agent to interact with the environment and learn from these interactions. It constructs a reward function using GANs to measure the similarity between the policy-generated trajectories and the expert trajectories. As in GANs, GAIL adopts the following objective function:
$$\min_\theta \max_\psi \; \mathbb{E}_{\pi_E}\left[\log D_\psi(x, a)\right] + \mathbb{E}_{\pi_\theta}\left[\log\big(1 - D_\psi(x, a)\big)\right], \qquad (3)$$
where $\pi_E$ denotes the expert policy that generated the demonstration trajectories. To avoid differentiating through the system dynamics, policy gradient algorithms are used to train the policy by maximizing the discounted sum of rewards $r(x_t, a_t) = -\log(1 - D_\psi(x_t, a_t))$. Maximizing this reward, which may differ from the expert reward, drives $\pi_\theta$ to expert-like regions of the state-action space. In practice, trust region policy optimization (TRPO) is used to stabilize the learning process [31]. GAIL has become a popular choice for imitation learning [16] and there already exist model-based [3] and third-person [36] extensions. Two recent GAIL-based approaches [17, 10] introduce additional reward signals that encourage the policy to make use of latent variables which would correspond to different types of demonstrations after training. These approaches are complementary to ours. Neither paper, however, demonstrates the ability to do one-shot imitation.
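As a concrete instance of the action-likelihood objective above, here is a minimal numpy sketch of the per-trajectory BC loss for a Gaussian policy (the negative log-likelihood of demonstration actions). The parameterization via mu and log_std arrays is our assumption for illustration.

```python
import numpy as np

def bc_loss(actions, mu, log_std):
    # Negative log-likelihood of demonstration actions under a Gaussian
    # policy pi_theta(a|x) = N(mu(x), sigma(x)^2); minimizing this sum
    # implements max_theta sum_t log pi_theta(a_t | x_t).
    var = np.exp(2.0 * log_std)
    nll = 0.5 * (((actions - mu) ** 2) / var
                 + 2.0 * log_std + np.log(2.0 * np.pi))
    return nll.sum()
```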
The literature on imitation, including BC, apprenticeship learning and inverse reinforcement learning, is vast. We cannot cover this literature at the level of detail it deserves, and instead refer readers to recent authoritative surveys on the topic [5, 1, 14]. Inspired by recent works, including [12, 36, 6], we focus on taking advantage of the dramatic recent advances in deep generative modelling to learn high-dimensional policies capable of learning a diverse set of behaviors from few demonstrations. In graphics, a significant effort has been devoted to the design of physics controllers that take advantage of motion capture data, or key-frames and other inputs provided by animators [33, 35, 43, 22]. Yet, as pointed out in a recent hierarchical control paper [23], the design of such controllers often requires significant human insight. Our focus is on flexible, general imitation methods.

3 A Generative Modeling Approach to Imitating Diverse Behaviors

3.1 Behavioral cloning with variational autoencoders suited for control

In this section, we follow a similar approach to Duan et al. [6], but opt for stochastic VAEs, whose distribution $q_\phi(z|x_{1:T})$ better regularizes the latent space. In our VAE, an encoder maps a demonstration sequence to an embedding vector $z$. Given $z$, we decode both the state and action trajectories as shown in Figure 1. To train the model, we minimize the following loss:
$$L(\alpha, w, \phi; \tau_i) = -\mathbb{E}_{q_\phi(z|x^i_{1:T_i})}\left[\sum_{t=1}^{T_i} \log \pi_\alpha(a^i_t|x^i_t, z) + \log p_w(x^i_{t+1}|x^i_t, z)\right] + D_{KL}\left(q_\phi(z|x^i_{1:T_i})\,\|\,p(z)\right)$$
Our encoder $q_\phi$ uses a bi-directional LSTM. To produce the final embedding, it calculates the average of all the outputs of the second layer of this LSTM before applying a final linear transformation to generate the mean and standard deviation of a Gaussian. We take one sample from this Gaussian as our demonstration encoding. The action decoder is an MLP that maps the concatenation of the state and the embedding to the parameters of a Gaussian policy. The state decoder is similar to a conditional WaveNet model [39]. In particular, it conditions on the embedding $z$ and the previous state $x_{t-1}$ to generate the vector $x_t$ autoregressively. That is, the autoregression is over the components of the vector $x_t$. WaveNet lessens the load of the encoder, which no longer has to carry information that can be captured by modeling auto-correlations between components of the state vector. Finally, instead of a Softmax, we use a mixture of Gaussians as the output of the WaveNet.

3.2 Diverse generative adversarial imitation learning

As pointed out earlier, it is hard for BC policies to mimic experts under environmental perturbations. Our solution to obtain more robust policies from few demonstrations, which are also capable of diverse behaviors, is to build on GAIL. Specifically, to enable GAIL to produce diverse solutions, we condition the discriminator on the embeddings generated by the VAE encoder and integrate out the GAIL objective with respect to the variational posterior $q_\phi(z|x_{1:T})$. Specifically, we train the discriminator by optimizing the following objective:
$$\max_\psi \; \mathbb{E}_{\tau_i \sim \pi_E}\left\{ \mathbb{E}_{q_\phi(z|x^i_{1:T_i})}\left[ \frac{1}{T_i} \sum_{t=1}^{T_i} \log D_\psi(x^i_t, a^i_t|z) + \mathbb{E}_{\pi_\theta}\left[\log\big(1 - D_\psi(x, a|z)\big)\right] \right] \right\}. \qquad (4)$$
A related work [20] introduces a conditional GAIL objective to learn controllers for multiple behaviors from state trajectories, but the discriminator conditions on an annotated class label, as in conditional GANs [21].
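To make the encoder description above concrete, here is a minimal PyTorch sketch of a bi-directional LSTM encoder that averages its per-step outputs over time and maps them to the mean and log standard deviation of the Gaussian over z. All sizes, and the use of two separate linear heads, are our assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    # Sketch of q(z | x_{1:T}): a two-layer bi-directional LSTM whose
    # per-step outputs are averaged over time, followed by linear maps to
    # the mean and log-std of a Gaussian over the embedding z.
    def __init__(self, state_dim=30, hidden=128, z_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.to_mu = nn.Linear(2 * hidden, z_dim)
        self.to_log_std = nn.Linear(2 * hidden, z_dim)

    def forward(self, states):              # states: (batch, T, state_dim)
        out, _ = self.lstm(states)          # (batch, T, 2 * hidden)
        pooled = out.mean(dim=1)            # average over time steps
        mu = self.to_mu(pooled)
        log_std = self.to_log_std(pooled)
        return mu + log_std.exp() * torch.randn_like(mu)  # one sample of z
```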
We condition on unlabeled trajectories, which have been passed through a powerful encoder, and hence our approach is capable of one-shot imitation learning. Moreover, the VAE encoder enables us to obtain a continuous latent embedding space where interpolation is possible, as shown in Figure 3. Since our discriminator is conditional, the reward function is also conditional: $r_t(x_t, a_t|z) = -\log(1 - D_\psi(x_t, a_t|z))$. We also clip the reward so that it is upper-bounded. Conditioning on $z$ allows us to generate an infinite number of reward functions, each of them tailored to imitating a different trajectory. Policy gradients, though mode seeking, will not cause collapse into one particular mode due to the diversity of reward functions.

To better motivate our objective, let us temporarily leave the context of imitation learning and consider the following alternative value function for training GANs:
$$\min_G \max_D V(G,D) = \int_y p(y) \int_z q(z|y) \Big[ \log D(y|z) + \int_{\hat{y}} G(\hat{y}|z) \log\big(1 - D(\hat{y}|z)\big)\, d\hat{y} \Big]\, dz\, dy.$$
This function is a simplification of our objective function. Furthermore, it satisfies the following property.

Lemma 1. Assuming that $q$ computes the true posterior distribution, that is $q(z|y) = \frac{p(y|z)p(z)}{p(y)}$, then
$$V(G,D) = \int_z p(z) \Big[ \int_y p(y|z) \log D(y|z)\, dy + \int_{\hat{y}} G(\hat{y}|z) \log\big(1 - D(\hat{y}|z)\big)\, d\hat{y} \Big]\, dz.$$

Algorithm 1 Diverse generative adversarial imitation learning.
INPUT: Demonstration trajectories $\{\tau_i\}_i$ and VAE encoder $q$.
repeat
  for $j \in \{1, \dots, n\}$ do
    Sample trajectory $\tau_j$ from the demonstration set and sample $z_j \sim q(\cdot|x^j_{1:T_j})$.
    Run policy $\pi_\theta(\cdot|z_j)$ to obtain the trajectory $\hat{\tau}_j$.
  end for
  Update policy parameters via TRPO with rewards $r_t^j(x_t^j, a_t^j|z_j) = -\log(1 - D_\psi(x_t^j, a_t^j|z_j))$.
  Update discriminator parameters from $\psi_i$ to $\psi_{i+1}$ with gradient:
  $$\nabla_\psi \Bigg\{ \frac{1}{n} \sum_{j=1}^{n} \Bigg( \Big[ \frac{1}{T_j} \sum_{t=1}^{T_j} \log D_\psi(x_t^j, a_t^j|z_j) \Big] + \Big[ \frac{1}{\hat{T}_j} \sum_{t=1}^{\hat{T}_j} \log\big(1 - D_\psi(\hat{x}_t^j, \hat{a}_t^j|z_j)\big) \Big] \Bigg) \Bigg\}$$
until max iterations or time reached.

If we further assume an optimal discriminator [8], the cost optimized by the generator then becomes
$$C(G) = 2 \int_z p(z)\, \mathrm{JSD}\big[\, p(\cdot|z)\, \|\, G(\cdot|z)\, \big]\, dz - \log 4, \qquad (5)$$
where JSD stands for the Jensen-Shannon divergence. We know that GANs approximately optimize this divergence, and it is well documented that optimizing it leads to mode seeking behavior [37]. The objective defined in (5) alleviates this problem. Consider an example where $p(x)$ is a mixture of Gaussians and $p(z)$ describes the distribution over the mixture components. In this case, the conditional distribution $p(x|z)$ is not multi-modal, and therefore minimizing the Jensen-Shannon divergence is no longer problematic. In general, if the latent variable $z$ removes most of the ambiguity, we can expect the conditional distributions to be close to uni-modal and therefore our generators to be non-degenerate. In light of this analysis, we would like $q$ to be as close to the posterior as possible, and hence our choice of training $q$ with VAEs.

We now turn our attention to some algorithmic considerations. We can use the VAE policy $\pi_\alpha(a_t|x_t, z)$ to accelerate the training of $\pi_\theta(a_t|x_t, z)$. One possible route is to initialize the weights $\theta$ to $\alpha$. However, before the policy behaves reasonably, the noise injected into the policy for exploration (when using stochastic policy gradients) can cause poor initial performance. Instead, we fix $\alpha$ and structure the conditional policy as follows:
$$\pi_\theta(\cdot\,|x, z) = \mathcal{N}\big(\cdot\,|\, \mu_\theta(x, z) + \mu_\alpha(x, z),\; \sigma_\theta(x, z)\big),$$
where $\mu_\alpha$ is the mean of the VAE policy.
Finally, the policy parameterized by $\theta$ is optimized with TRPO [31] while holding the parameters $\alpha$ fixed, as shown in Algorithm 1.

4 Experiments

The primary focus of our experimental evaluation is to demonstrate that the architecture allows learning of robust controllers capable of producing the full spectrum of demonstration behaviors for a diverse range of challenging control problems. We consider three bodies: a 9 DoF robotic arm, a 9 DoF planar walker, and a 62 DoF complex humanoid (56 actuated joint angles and a freely translating and rotating 3D root joint). While for the reaching task BC is sufficient to obtain a working controller, for the other two problems our full learning procedure is critical. We analyze the resulting embedding spaces and demonstrate that they exhibit rich and sensible structure that can be exploited for control. Finally, we show that the encoder can be used to capture the gist of novel demonstration trajectories, which can then be reproduced by the controller. All experiments are conducted with the MuJoCo physics engine [38]. For details of the simulation and the experimental setup, please see the appendix.

4.1 Robotic arm reaching

We first demonstrate the effectiveness of our VAE architecture and investigate the nature of the learned embedding space on a reaching task with a simulated Jaco arm. The physical Jaco is a robotic arm developed by Kinova Robotics. To obtain demonstrations, we trained 60 independent policies to reach to random target locations2 in the workspace, starting from the same initial configuration. We generated 30 trajectories from each of the first 50 policies. These serve as training data for the VAE model (1500 training trajectories in total). The remaining 10 policies were used to generate test data. The reaching task is relatively simple, so with this amount of data the VAE policy is fairly robust. After training, the VAE encodes and reproduces the demonstrations as shown in Figure 2. Representative examples can be found in the video in the supplemental material. To further investigate the nature of the embedding space, we encode two trajectories. Next, we construct the embeddings of interpolating policies by taking convex combinations of the embedding vectors of the two trajectories. We condition the VAE policy on these interpolating embeddings and execute it. The results of this experiment are illustrated with a representative pair in Figure 3. We observe that interpolating in the latent space indeed corresponds to interpolation in task (trajectory endpoint) space, highlighting the semantic meaningfulness of the discovered latent space.

4.2 2D Walker

We found reaching behavior to be relatively easy to imitate, presumably because it does not involve much physical contact. As a more challenging test we consider bipedal locomotion. We train 60 neural network policies for a 2D walker to serve as demonstrations3. These policies are each trained to move at different speeds, both forward and backward, depending on a label provided as additional input to the policy. Target speeds for training were chosen from a set of four different speeds (m/s): -1, 0, 1, 3. For the distribution of speeds that the trained policies actually achieve, see Figure 4 (top right). Besides the target speed, the reward function imposes few constraints on the behavior. The resulting policies thus form a diverse set with several rather idiosyncratic movement styles. While for most purposes this diversity is undesirable, for the present experiment we consider it a feature.
2See appendix for details. 3See Section A.2 in the appendix for details.

We trained our model with 20 episodes per policy (1200 demonstration trajectories in total, each with a length of 400 steps or 10s of simulated time). In this experiment our full approach is required: training the VAE with BC alone can imitate some of the trajectories, but it performs poorly in general, presumably because our relatively small training set does not cover the space of trajectories sufficiently densely. On this generated dataset, we also train policies with GAIL using the same architecture and hyper-parameters. Due to the lack of conditioning, GAIL does not coherently reproduce individual trajectories. Instead, it simply meshes different behaviors together. In addition, the policies trained with GAIL also exhibit dramatically less diversity; see the video.

A general problem of adversarial training is that there is no easy way to quantitatively assess the quality of learned models. Here, since we aim to imitate particular demonstration trajectories that were trained to achieve particular target speed(s), we can use the difference between the speed of the demonstration trajectory and that of the trajectory produced by the decoder as a surrogate measure of the quality of the imitation (cf. also [12]). The general quality of the learned model and the improvement achieved by the adversarial stage of our training procedure are quantified in Fig. 4. We draw 660 trajectories (11 trajectories each for all 60 policies) from the training set, compute the corresponding embedding vectors using the encoder, and use both the VAE policy as well as the improved policy from the adversarial stage to imitate each of the trajectories. We determine the absolute value of the difference between the average speed of the demonstration and the imitation trajectories (measured in m/s). As shown in Fig. 4, the adversarial training greatly improves the reliability of the controller as well as the ability of the model to accurately match the speed of the demonstration. We also include additional quantitative analysis of our approach using this speed metric in Appendix B. A video of our agent imitating a diverse set of behaviors can be found in the supplemental material.

To assess generalization to novel trajectories, we encode and subsequently imitate trajectories not contained in the training set. The supplemental video contains several representative examples, demonstrating that the style of movement is successfully imitated for previously unseen trajectories.

Finally, we analyze the structure of the embedding space. We embed training trajectories and perform dimensionality reduction with t-SNE [41]. The result is shown in Fig. 4. It reveals a clear clustering according to movement speeds, thus recovering the nature of the task context for the demonstration trajectories. We further find that trajectories that are nearby in embedding space tend to correspond to similar movement styles even when differing in speed.

4.3 Complex humanoid

We consider a humanoid body of high dimensionality that poses a hard control problem. The construction of this body and associated control policies is described in [20], and is briefly summarized in the appendix (Section A.3) for completeness. We generate training trajectories with the existing controllers, which can produce instances of one of six different movement styles (see Section A.3). Examples of such trajectories are shown in Fig. 5 and in the supplemental video.
The training set consists of 250 random trajectories from 6 different neural network controllers that were trained to match 6 different movement styles from the CMU motion capture database4. Each trajectory is 334 steps or 10s long. We use a second set of 5 controllers from which we generate trajectories for evaluation (3 of these policies were trained on the same movement styles as the policies used for generating training data).

Surprisingly, despite the complexity of the body, supervised learning is quite effective at producing sensible controllers: the VAE policy is reasonably good at imitating the demonstration trajectories, although it lacks the robustness to be practically useful. Adversarial training dramatically improves the stability of the controller. We analyze the improvement quantitatively by computing the percentage of episodes in which the humanoid falls down before the end of the episode while imitating either training or test policies. The results are summarized in Figure 5 (right). The figure further shows sequences of frames of representative demonstration and associated imitation trajectories. Videos of demonstration and imitation behaviors can be found in the supplemental video.

For practical purposes it is desirable to allow the controller to transition from one behavior to another. We test this possibility in an experiment similar to the one for the Jaco arm: we determine the embedding vectors of pairs of demonstration trajectories, start the trajectory by conditioning on the first embedding vector, and then transition from one behavior to the other half-way through the episode by linearly interpolating the embeddings of the two demonstration trajectories over a window of 20 control steps. Although not always successful, the learned controller often transitions robustly, despite not having been trained to do so. Representative examples of these transitions can be found in the supplemental video.

5 Conclusions

We have proposed an approach for imitation learning that combines the favorable properties of techniques for density modeling with latent variables (VAEs) with those of GAIL. The result is a model that learns, from a moderate number of demonstration trajectories, (1) a semantically well-structured embedding of behaviors, (2) a corresponding multi-task controller that allows diverse behaviors from this embedding space to be executed robustly, as well as (3) an encoder that can map new trajectories into the embedding space and hence allows for one-shot imitation. Our experimental results demonstrate that our approach can work on a variety of control problems, and that it scales even to very challenging ones, such as the control of a simulated humanoid with a large number of degrees of freedom.

4See appendix for details.
1. What is the focus of the paper regarding combining VAE and GAIL? 2. What are the strengths of the proposed approach, particularly in resolving the mode-collapsing problem? 3. Do you have any concerns or questions about the method, such as the conditioning process or its relation to one-shot imitation learning? 4. Are there any suggestions for additional experiments to further support the effectiveness of the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
The paper proposes to combine a VAE with GAIL to make the GAIL method more robust by resolving the mode-collapsing problem. For this, the latent feature distribution of demonstration trajectories is learned using a VAE, and the GAIL objective is modified to be optimized in expectation over the latent feature distribution. Overall, it amounts to solving a conditional GAIL where the conditioning embedding is given by the VAE encoder, and this leads to robust policy learning. The authors claim that, because the learned trajectory feature captures the semantic properties of the demonstration, the generation distribution conditioned on it becomes close to uni-modal. The experiments show the effectiveness of the proposed method on three continuous control tasks.

The paper is well-written, and the proposed approach and the experimental results are interesting. I overall enjoyed reading the paper. The following are some questions and comments.

Could you elaborate more on why q(z|x) is conditioned only on the states x and not also on the actions a? And how is this related to one-shot imitation learning (described in lines 127-128)?

It could be interesting to see the t-SNE visualization of the trajectory embeddings generated by the vanilla GAIL policy. This would provide some additional evidence for the mode-collapsing claim.

The proposed method seems helpful in improving sample complexity. Experiments with a varying number of demonstrations or policies would have been interesting to see (e.g., on the speed-difference metric).
NIPS
Title Robust Imitation of Diverse Behaviors Abstract Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment. 1 Introduction Building versatile embodied agents, both in the form of real robots and animated avatars, capable of a wide and diverse set of behaviors is one of the long-standing challenges of AI. State-of-the-art robots cannot compete with the effortless variety and adaptive flexibility of motor behaviors produced by toddlers. Towards addressing this challenge, in this work we combine several deep generative approaches to imitation learning in a way that accentuates their individual strengths and addresses their limitations. The end product of this is a robust neural network policy that can imitate a large and diverse set of behaviors using few training demonstrations. We first introduce a variational autoencoder (VAE) [15, 26] for supervised imitation, consisting of a bi-directional LSTM [13, 32, 9] encoder mapping demonstration sequences to embedding vectors, and two decoders. The first decoder is a multi-layer perceptron (MLP) policy mapping a trajectory embedding and the current state to a continuous action vector. The second is a dynamics model mapping the embedding and previous state to the present state, while modelling correlations among states with a WaveNet [39]. Experiments with a 9 DoF Jaco robot arm and a 9 DoF 2D biped walker, implemented in the MuJoCo physics engine [38], show that the VAE learns a structured semantic embedding space, which allows for smooth policy interpolation. While supervised policies that condition on demonstrations (such as our VAE or the recent approach of Duan et al. [6]) are powerful models for one-shot imitation, they require large training datasets in order to work for non-trivial tasks. They also tend to be brittle and fail when the agent diverges too much from the demonstration trajectories. These limitations of supervised learning for imitation, also known as behavioral cloning (BC) [24], are well known [28, 29]. ⇤Joint First authors. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
Recently, Ho and Ermon [12] showed a way to overcome the brittleness of supervised imitation using another type of deep generative model called Generative Adversarial Networks (GANs) [8]. Their technique, called Generative Adversarial Imitation Learning (GAIL) uses reinforcement learning, allowing the agent to interact with the environment during training. GAIL allows one to learn more robust policies with fewer demonstrations, but adversarial training introduces another difficulty called mode collapse [7]. This refers to the tendency of adversarial generative models to cover only a subset of modes of a probability distribution, resulting in a failure to produce adequately diverse samples. This will cause the learned policy to capture only a subset of control behaviors (which can be viewed as modes of a distribution), rather than allocating capacity to cover all modes. Roughly speaking, VAEs can model diverse behaviors without dropping modes, but do not learn robust policies, while GANs give us robust policies but insufficiently diverse behaviors. In section 3, we show how to engineer an objective function that takes advantage of both GANs and VAEs to obtain robust policies capturing diverse behaviors. In section 4, we show that our combined approach enables us to learn diverse behaviors for a 9 DoF 2D biped and a 62 DoF humanoid, where the VAE policy alone is brittle and GAIL alone does not capture all of the diverse behaviors. 2 Background and Related Work We begin our brief review with generative models. One canonical way of training generative models is to maximize the likelihood of the data: max P i log p ✓ (x i ). This is equivalent to minimizing the Kullback-Leibler divergence between the distribution of the data and the model: D KL (p data (·)||p ✓ (·)). For highly-expressive generative models, however, optimizing the loglikelihood is often intractable. One class of highly-expressive yet tractable models are the auto-regressive models which decompose the log likelihood as log p(x) = P i log p ✓ (x i |x <i ). Auto-regressive models have been highly effective in both image and audio generation [40, 39]. Instead of optimizing the log-likelihood directly, one can introduce a parametric inference model over the latent variables, q (z|x), and optimize a lower bound of the log-likelihood: E q (z|xi) [log p✓(xi|z)] DKL (q (z|xi)||p(z)) log p(x). (1) For continuous latent variables, this bound can be optimized efficiently via the re-parameterization trick [15, 26]. This class of models are often referred to as VAEs. GANs, introduced by Goodfellow et al. [8], have become very popular. GANs use two networks: a generator G and a discriminator D. The generator attempts to generate samples that are indistinguishable from real data. The job of the discriminator is then to tell apart the data and the samples, predicting 1 with high probability if the sample is real and 0 otherwise. More precisely, GANs optimize the following objective function min G max D E pdata(x) [logD(x)] + Ep(z) [log(1 D(G(z))] . (2) Auto-regressive models, VAEs and GANs are all highly effective generative models, but have different trade-offs. GANs were noted for their ability to produce sharp image samples, unlike the blurrier samples from contemporary VAE models [8]. However, unlike VAEs and autoregressive models trained via maximum likelihood, they suffer from the mode collapse problem [7]. 
Recent work has focused on alleviating mode collapse in image modeling [2, 4, 19, 25, 42, 11, 27], but so far these have not been demonstrated in the control domain. Like GANs, autoregressive models produce sharp and at times realistic image samples [40], but they tend to be slow to sample from and unlike VAEs do not immediately provide a latent vector representation of the data. This is why we used VAEs to learn representations of demonstration trajectories. We turn our attention to imitation. Imitation is the problem of learning a control policy that mimics a behavior provided via a demonstration. It is natural to view imitation learning from the perspective of generative modeling. However, unlike in image and audio modeling, in imitation the generation process is constrained by the environment and the agent’s actions, with observations becoming accessible through interaction. Imitation learning brings its own unique challenges. In this paper, we assume that we have been provided with demonstrations {⌧ i } i where the i-th trajectory of state-action pairs is ⌧ i = {xi1, ai1, · · · , xi Ti , a i Ti }. These trajectories may have been produced by either an artificial or natural agent. As in generative modeling, we can easily apply maximum likelihood to imitation learning. For instance, if the dynamics are tractable, we can maximize the likelihood of the states directly: max ✓ P i P Ti t=1 log p(x i t+1|xit,⇡✓(xit)). If a model of the dynamics is unavailable, we can instead maximize the likelihood of the actions: max ✓ P i P Ti t=1 log ⇡✓(a i t |xi t ). The latter approach is what we referred to as behavioral cloning (BC) in the introduction. When demonstrations are plentiful, BC is effective [24, 30, 6]. Without abundant data, BC is known to be inadequate [28, 29, 12]. The inefficiencies of BC stem from the sequential nature of the problem. When using BC, even the slightest errors in mimicking the demonstration behavior can quickly accumulate as the policy is unrolled. A good policy should correct for mistakes made previously, but for BC to achieve this, the corrective behaviors have to appear frequently in the training data. GAIL [12] avoids some of the pitfalls of BC by allowing the agent to interact with the environment and learn from these interactions. It constructs a reward function using GANs to measure the similarity between the policy-generated trajectories and the expert trajectories. As in GANs, GAIL adopts the following objective function min ✓ max E ⇡E [logD (x, a)] + E⇡✓ [log(1 D (x, a))] , (3) where ⇡ E denotes the expert policy that generated the demonstration trajectories. To avoid differentiating through the system dynamics, policy gradient algorithms are used to train the policy by maximizing the discounted sum of rewards r (x t , a t ) = log(1 D (x t , a t )). Maximizing this reward, which may differ from the expert reward, drives ⇡ ✓ to expert-like regions of the state-action space. In practice, trust region policy optimization (TRPO) is used to stabilize the learning process [31]. GAIL has become a popular choice for imitation learning [16] and there already exist model-based [3] and third-person [36] extensions. Two recent GAIL-based approaches [17, 10] introduce additional reward signals that encourage the policy to make use of latent variables which would correspond to different types of demonstrations after training. These approaches are complementary to ours. Neither paper, however, demonstrates the ability to do one-shot imitation. 
The literature on imitation including BC, apprenticeship learning and inverse reinforcement learning is vast. We cannot cover this literature at the level of detail it deserves, and instead refer readers to recent authoritative surveys on the topic [5, 1, 14]. Inspired by recent works, including [12, 36, 6], we focus on taking advantage of the dramatic recent advances in deep generative modelling to learn high-dimensional policies capable of learning a diverse set of behaviors from few demonstrations. In graphics, a significant effort has been devoted to the design physics controllers that take advantage of motion capture data, or key-frames and other inputs provided by animators [33, 35, 43, 22]. Yet, as pointed out in a recent hierarchical control paper [23], the design of such controllers often requires significant human insight. Our focus is on flexible, general imitation methods. 3 A Generative Modeling Approach to Imitating Diverse Behaviors 3.1 Behavioral cloning with variational autoencoders suited for control In this section, we follow a similar approach to Duan et al. [6], but opt for stochastic VAEs as having a distribution q (z|x1:T ) to better regularize the latent space. In our VAE, an encoder maps a demonstration sequence to an embedding vector z. Given z, we decode both the state and action trajectories as shown in Figure 1. To train the model, we minimize the following loss: L(↵, w, ; ⌧ i )= E q (z|xi1:Ti ) " TiX t=1 log ⇡ ↵ (a i t |xi t , z)+log p w (x i t+1|xit, z) # +D KL q (z|xi1:Ti)||p(z) Our encoder q uses a bi-directional LSTM. To produce the final embedding, it calculates the average of all the outputs of the second layer of this LSTM before applying a final linear transformation to generate the mean and standard deviation of an Gaussian. We take one sample from this Gaussian as our demonstration encoding. The action decoder is an MLP that maps the concatenation of the state and the embedding to the parameters of a Gaussian policy. The state decoder is similar to a conditional WaveNet model [39]. In particular, it conditions on the embedding z and previous state x t 1 to generate the vector xt autoregressively. That is, the autoregression is over the components of the vector x t . Wavenet lessens the load of the encoder which no longer has to carry information that can be captured by modeling auto-correlations between components of the state vector . Finally, instead of a Softmax, we use a mixture of Gaussians as the output of the WaveNet. 3.2 Diverse generative adversarial imitation learning As pointed out earlier, it is hard for BC policies to mimic experts under environmental perturbations. Our solution to obtain more robust policies from few demonstrations, which are also capable of diverse behaviors, is to build on GAIL. Specifically, to enable GAIL to produce diverse solutions, we condition the discriminator on the embeddings generated by the VAE encoder and integrate out the GAIL objective with respect to the variational posterior q (z|x1:T ). Specifically, we train the discriminator by optimizing the following objective max E ⌧i⇠⇡E ( E q(z|xi1:Ti ) " 1 T i TiX t=1 logD (x i t , a i t |z) + E ⇡✓ [log(1 D (x, a|z))] #) . (4) A related work [20] introduces a conditional GAIL objective to learn controllers for multiple behaviors from state trajectories, but the discriminator conditions on an annotated class label, as in conditional GANs [21]. 
We condition on unlabeled trajectories, which have been passed through a powerful encoder, and hence our approach is capable of one-shot imitation learning. Moreover, the VAE encoder enables us to obtain a continuous latent embedding space where interpolation is possible, as shown in Figure 3. Since our discriminator is conditional, the reward function is also conditional: rt (x t , a t |z) = log(1 D (x t , a t |z)). We also clip the reward so that it is upper-bounded. Conditioning on z allows us to generate an infinite number of reward functions each of them tailored to imitating a different trajectory. Policy gradients, though mode seeking, will not cause collapse into one particular mode due to the diversity of reward functions. To better motivate our objective, let us temporarily leave the context of imitation learning and consider the following alternative value function for training GANs min G max D V (G,D) = Z y p(y) Z z q(z|y) logD(y|z) + Z ŷ G(ŷ|z) log(1 D(ŷ|z))dŷ dydz. This function is a simplification of our objective function. Furthermore, it satisfies the following property. Lemma 1. Assuming that q computes the true posterior distribution that is q(z|y) = p(y|z)p(z) p(y) , then V (G,D) = Z z p(z) Z y p(y|z) logD(y|z)dy + Z x̂ G(ŷ|z) log(1 D(ŷ|z))dŷ dz. Algorithm 1 Diverse generative adversarial imitation learning. INPUT: Demonstration trajectories {⌧i}i and VAE encoder q. repeat for j 2 {1, · · · , n} do Sample trajectory ⌧j from the demonstration set and sample zj ⇠ q(·|xj1:Tj ). Run policy ⇡✓(·|zj) to obtain the trajectory b⌧j . end for Update policy parameters via TRPO with rewards rjt (x j t , a j t |zj) = log(1 D (x j t , a j t |zj)). Update discriminator parameters from i to i+1 with gradient: r 8 < : 1 n nX j=1 2 4 1 Tj TjX t=1 logD (x j t , a j t |zj) 3 5 + 2 4 1 b Tj bTjX t=1 log(1 D (bxjt ,ba j t |zj)) 3 5 9 = ; until Max iteration or time reached. If we further assume an optimal discriminator [8], the cost optimized by the generator then becomes C(G) = 2 Z z p(z)JSD [p( · |z) ||G( · |z)] dz log 4, (5) where JSD stands for the Jensen-Shannon divergence. We know that GANs approximately optimize this divergence, and it is well documented that optimizing it leads to mode seeking behavior [37]. The objective defined in (5) alleviates this problem. Consider an example where p(x) is a mixture of Gaussians and p(z) describes the distribution over the mixture components. In this case, the conditional distribution p(x|z) is not multi-modal, and therefore minimizing the Jensen-Shannon divergence is no longer problematic. In general, if the latent variable z removes most of the ambiguity, we can expect the conditional distributions to be close to uni-modal and therefore our generators to be non-degenerate. In light of this analysis, we would like q to be as close to the posterior as possible and hence our choice of training q with VAEs. We now turn our attention to some algorithmic considerations. We can use the VAE policy ⇡ ↵ (a t |x t , z) to accelerate the training of ⇡ ✓ (a t |x t , z). One possible route is to initialize the weights ✓ to ↵. However, before the policy behaves reasonably, the noise injected into the policy for exploration (when using stochastic policy gradients) can cause poor initial performance. Instead, we fix ↵ and structure the conditional policy as follows ⇡ ✓ ( · |x, z) = N ( · |µ ✓ (x, z) + µ ↵ (x, z), ✓ (x, z)) , where µ ↵ is the mean of the VAE policy. 
Finally, the policy parameterized by ✓ is optimized with TRPO [31] while holding parameters ↵ fixed, as shown in Algorithm 1. 4 Experiments The primary focus of our experimental evaluation is to demonstrate that the architecture allows learning of robust controllers capable of producing the full spectrum of demonstration behaviors for a diverse range of challenging control problems. We consider three bodies: a 9 DoF robotic arm, a 9 DoF planar walker, and a 62 DoF complex humanoid (56-actuated joint angles, and a freely translating and rotating 3d root joint). While for the reaching task BC is sufficient to obtain a working controller, for the other two problems our full learning procedure is critical. We analyze the resulting embedding spaces and demonstrate that they exhibit rich and sensible structure that an be exploited for control. Finally, we show that the encoder can be used to capture the gist of novel demonstration trajectories which can then be reproduced by the controller. All experiments are conducted with the MuJoCo physics engine [38]. For details of the simulation and the experimental setup please see appendix. 4.1 Robotic arm reaching We first demonstrate the effectiveness of our VAE architecture and investigate the nature of the learned embedding space on a reaching task with a simulated Jaco arm. The physical Jaco is a robotics arm developed by Kinova Robotics. To obtain demonstrations, we trained 60 independent policies to reach to random target locations2 in the workspace starting from the same initial configuration. We generated 30 trajectories from each of the first 50 policies. These serve as training data for the VAE model (1500 training trajectories in total). The remaining 10 policies were used to generate test data. The reaching task is relatively simple, so with this amount of data the VAE policy is fairly robust. After training, the VAE encodes and reproduces the demonstrations as shown in Figure 2. Representative examples can be found in the video in the supplemental material. To further investigate the nature of the embedding space we encode two trajectories. Next, we construct the embeddings of interpolating policies by taking convex combinations of the embedding vectors of the two trajectories. We condition the VAE policy on these interpolating embeddings and execute it. The results of this experiment are illustrated with a representative pair in Figure 3. We observe that interpolating in the latent space indeed corresponds to interpolation in task (trajectory endpoint) space, highlighting the semantic meaningfulness of the discovered latent space. 4.2 2D Walker We found reaching behavior to be relatively easy to imitate, presumably because it does not involve much physical contact. As a more challenging test we consider bipedal locomotion. We train 60 neural network policies for a 2d walker to serve as demonstrations3. These policies are each trained to move at different speeds both forward and backward depending on a label provided as additional input to the policy. Target speeds for training were chosen from a set of four different speeds (m/s): -1, 0, 1, 3. For the distribution of speeds that the trained policies actually achieve see Figure 4, top right). Besides the target speed the reward function imposes few constraints on the behavior. The resulting policies thus form a diverse set with several rather idiosyncratic movement styles. While for most purposes this diversity is undesirable, for the present experiment we consider it a feature. 
4.2 2D Walker

We found reaching behavior to be relatively easy to imitate, presumably because it does not involve much physical contact. As a more challenging test we consider bipedal locomotion. We train 60 neural network policies for a 2D walker to serve as demonstrations³. These policies are each trained to move at different speeds, both forward and backward, depending on a label provided as additional input to the policy. Target speeds for training were chosen from a set of four different speeds (m/s): -1, 0, 1, 3. For the distribution of speeds that the trained policies actually achieve, see Figure 4 (top right). Besides the target speed, the reward function imposes few constraints on the behavior. The resulting policies thus form a diverse set with several rather idiosyncratic movement styles. While for most purposes this diversity is undesirable, for the present experiment we consider it a feature.

³ See Section A.2 in the appendix for details.

We trained our model with 20 episodes per policy (1200 demonstration trajectories in total, each with a length of 400 steps or 10 s of simulated time). In this experiment our full approach is required: training the VAE with BC alone can imitate some of the trajectories, but it performs poorly in general, presumably because our relatively small training set does not cover the space of trajectories sufficiently densely. On this generated dataset, we also train policies with GAIL using the same architecture and hyper-parameters. Due to the lack of conditioning, GAIL does not coherently reproduce individual trajectories. Instead, it simply meshes different behaviors together. In addition, the policies trained with GAIL also exhibit dramatically less diversity; see video.

A general problem of adversarial training is that there is no easy way to quantitatively assess the quality of learned models. Here, since we aim to imitate particular demonstration trajectories that were trained to achieve particular target speeds, we can use the difference between the speed of the demonstration trajectory and that of the trajectory produced by the decoder as a surrogate measure of the quality of the imitation (cf. also [12]); a minimal sketch of this metric follows below. The general quality of the learned model and the improvement achieved by the adversarial stage of our training procedure are quantified in Fig. 4. We draw 660 trajectories (11 trajectories each for all 60 policies) from the training set, compute the corresponding embedding vectors using the encoder, and use both the VAE policy as well as the improved policy from the adversarial stage to imitate each of the trajectories. We determine the absolute values of the difference between the average speed of the demonstration and the imitation trajectories (measured in m/s). As shown in Fig. 4, the adversarial training greatly improves the reliability of the controller as well as the ability of the model to accurately match the speed of the demonstration. We also include additional quantitative analysis of our approach using this speed metric in Appendix B. Video of our agent imitating a diverse set of behaviors can be found in the supplemental material.

To assess generalization to novel trajectories, we encode and subsequently imitate trajectories not contained in the training set. The supplemental video contains several representative examples, demonstrating that the style of movement is successfully imitated for previously unseen trajectories.

Finally, we analyze the structure of the embedding space. We embed training trajectories and perform dimensionality reduction with t-SNE [41]. The result is shown in Fig. 4. It reveals a clear clustering according to movement speed, thus recovering the nature of the task context for the demonstration trajectories. We further find that trajectories that are nearby in embedding space tend to correspond to similar movement styles even when differing in speed.
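The speed-based surrogate metric above amounts to a few lines of code; how the root x-position is read out of a state is an assumption of this sketch (400 steps over 10 s of simulated time gives dt = 0.025 s).

```python
import numpy as np

def average_speed(states, dt=0.025, x_index=0):
    # Mean forward speed (m/s) from root x-positions; x_index is assumed.
    xs = np.asarray([s[x_index] for s in states])
    return (xs[-1] - xs[0]) / (dt * (len(xs) - 1))

def speed_error(demo_states, imitation_states):
    # |speed(demo) - speed(imitation)|: the surrogate imitation-quality measure.
    return abs(average_speed(demo_states) - average_speed(imitation_states))
```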
4.3 Complex humanoid

We consider a humanoid body of high dimensionality that poses a hard control problem. The construction of this body and the associated control policies is described in [20], and is briefly summarized in the appendix (Section A.3) for completeness. We generate training trajectories with the existing controllers, which can produce instances of one of six different movement styles (see Section A.3). Examples of such trajectories are shown in Fig. 5 and in the supplemental video.

The training set consists of 250 random trajectories from 6 different neural network controllers that were trained to match 6 different movement styles from the CMU motion capture database⁴. Each trajectory is 334 steps or 10 s long. We use a second set of 5 controllers from which we generate trajectories for evaluation (3 of these policies were trained on the same movement styles as the policies used for generating training data).

⁴ See appendix for details.

Surprisingly, despite the complexity of the body, supervised learning is quite effective at producing sensible controllers: the VAE policy is reasonably good at imitating the demonstration trajectories, although it lacks the robustness to be practically useful. Adversarial training dramatically improves the stability of the controller. We analyze the improvement quantitatively by computing the percentage of episodes in which the humanoid falls down before the end of the episode while imitating either training or test policies. The results are summarized in Figure 5 (right). The figure further shows sequences of frames of representative demonstration and associated imitation trajectories. Videos of demonstration and imitation behaviors can be found in the supplemental video.

For practical purposes it is desirable to allow the controller to transition from one behavior to another. We test this possibility in an experiment similar to the one for the Jaco arm: we determine the embedding vectors of pairs of demonstration trajectories, start the trajectory by conditioning on the first embedding vector, and then transition from one behavior to the other half-way through the episode by linearly interpolating the embeddings of the two demonstration trajectories over a window of 20 control steps (see the sketch below). Although not always successful, the learned controller often transitions robustly, despite not having been trained to do so. Representative examples of these transitions can be found in the supplemental video.
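A minimal sketch of the transition experiment: the conditioning vector is blended from the first embedding to the second over a 20-step window starting half-way through the episode. The policy/environment interfaces are assumed, while the 334-step horizon matches the trajectory length reported above.

```python
import torch

def transition_rollout(policy, env, z_first, z_second, horizon=334, window=20):
    """Linearly interpolate the embedding over `window` control steps at mid-episode."""
    start = horizon // 2
    x, states = env.reset(), []
    for t in range(horizon):
        w = min(max((t - start) / window, 0.0), 1.0)  # 0 before, ramps up to 1
        z = (1.0 - w) * z_first + w * z_second
        a = policy(x, z).sample()
        x, _, done, _ = env.step(a)
        states.append(x)
        if done:
            break
    return states
```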
5 Conclusions

We have proposed an approach for imitation learning that combines the favorable properties of techniques for density modeling with latent variables (VAEs) with those of GAIL. The result is a model that learns, from a moderate number of demonstration trajectories, (1) a semantically well-structured embedding of behaviors, (2) a corresponding multi-task controller that allows robust execution of diverse behaviors from this embedding space, and (3) an encoder that can map new trajectories into the embedding space and hence allows for one-shot imitation. Our experimental results demonstrate that our approach works on a variety of control problems, and that it scales even to very challenging ones such as the control of a simulated humanoid with a large number of degrees of freedom.

1. What is the main contribution of the paper regarding modeling joint action/state trajectory spaces?
2. What are the strengths and weaknesses of the proposed approach compared to prior works like GAIL?
3. How does the reviewer assess the clarity and completeness of the paper's content, particularly in the modeling section?
4. What are the limitations of the experimental evaluation, and how could they be addressed?
5. Are there any concerns regarding the originality and novelty of the work, given its reliance on previous research?
Review
Review

This work deals with the problem of modeling joint action/state trajectory spaces of dynamical systems using a supervised imitation learning paradigm. It approaches the problem by defining a Variational Autoencoder (VAE) that maps the state sequence, via a latent representation, into a (state, action) pair. The approach is validated on three articulated-body controller modeling problems: a robotic arm, a 2D biped, and 3D human body motion modeling.

Prior work: the work closely follows (extends) the approach of Generative Adversarial Imitation Learning [11]. The difference here is that, through the use of the VAE, the authors claim to have avoided the pitfall of GANs known as mode collapse.

Summary
+ Addresses a challenging problem of learning complex dynamics controllers / control policies
+ Well-written introduction / motivation
+ Appealing qualitative results on the three evaluation problems. Interesting experiments with motion transitioning.
- Modeling formulation is somewhat different from GAIL (latent representation) but it rather closely follows GAIL
- Many key details are omitted (either on purpose, placed in the appendix, or simply absent, like the lack of definitions of terms in the modeling section, details of the planner model, the simulation process, or the details of experimental settings)
- Experimental evaluation is largely subjective (videos of robotic arm / biped / 3D human motion)
- Paper appears written in haste or rewritten from a SIGGRAPH-like submission
- Relies significantly on work presented in an anonymous NIPS submission

Detailed comments
In general, I like the focus of this paper; designing complex controllers for dynamic systems is an extremely challenging task and the approach proposed here looks reasonable. The use of the VAE is justified and the results, subjectively, are appealing. However, many details are missing and some parts of the discussion are unclear or not justified, in my view. For instance, the whole modeling section (3), while not so obscure, still ends up not defining many terms. E.g., what is \pi_E in the first equation of 3.2? (Not numbered, btw.) What is \theta in eq. 3? (Also used elsewhere.) What are all the weight terms in the policy model at the end of Section 3 (not numbered)? You say you initialize \theta to the \alpha weights, but what does that mean? What is the purpose of Lemma 1? The whole statement following it, l 135-145, sounds unclear and contradictory. E.g., you say that eq. 5 avoids the problem of collapse, yet you state in those lines that it also has the same problem? The decoder models for (x, a) are not explicitly defined. How is the trajectory simulation process actually accomplished? This is somewhat tersely described in l 168-172. How are transitions between categories of different motion types modeled, given that you do not explicitly encode the category class? It is obviously done in the z-space, but is it some linear interpolation? It appears to be (l 174-176), but is it timed or instantaneous? What are all the numbers in the Action Decoder Sizes column of Tab. 1 in the appendix?
NIPS
Title
Multi-marginal Wasserstein GAN

Abstract
The multiple marginal matching problem aims at learning mappings to match a source domain to multiple target domains, and it has attracted great attention in many applications, such as multi-domain image translation. However, addressing this problem poses two critical challenges: (i) measuring the multi-marginal distance among different domains is intractable; (ii) it is very difficult to exploit cross-domain correlations to match the target domain distributions. In this paper, we propose a novel Multi-marginal Wasserstein GAN (MWGAN) to minimize the Wasserstein distance among domains. Specifically, with the help of multi-marginal optimal transport theory, we develop a new adversarial objective function with inner- and inter-domain constraints to exploit cross-domain correlations. Moreover, we theoretically analyze the generalization performance of MWGAN, and empirically evaluate it on balanced and imbalanced translation tasks. Extensive experiments on toy and real-world datasets demonstrate the effectiveness of MWGAN.

1 Introduction
The multiple marginal matching (M3) problem aims to map an input image (source domain) to multiple target domains (see Figure 1(a)), and it has been applied in computer vision, e.g., multi-domain image translation [10, 23, 25]. In practice, unsupervised image translation [30] is of particular interest because of its label-free property. However, due to the lack of corresponding images, it is extremely hard to learn stable mappings that match a source distribution to multiple target distributions. Recently, some methods [10, 30] have addressed the M3 problem; however, they face two main challenges.

First, existing methods often neglect to jointly optimize the multi-marginal distance among domains, which cannot guarantee generalization performance and may lead to a distribution mismatching issue. For example, CycleGAN [51] and UNIT [32] repeatedly optimize every pair of different domains separately (see Figure 1(b)). In this sense, they are computationally expensive and may have poor generalization performance. Moreover, UFDN [30] and StarGAN [10] essentially measure the distance between an input distribution and a mixture of all target distributions (see Figure 1(b)). As a result, they may suffer from a distribution mismatching issue. Therefore, it is necessary to explore a new method to measure and optimize the multi-marginal distance.

Second, it is very challenging to exploit cross-domain correlations to match the target domains. Existing methods [51, 30] only focus on the correlations between the source and target domains, since they measure the distance between two distributions (see Figure 1(b)). However, these methods often ignore the correlations among target domains, and thus struggle to fully capture the information needed to improve performance. Moreover, when the source and target domains are significantly different, or the number of target domains is large, it becomes difficult for existing methods to exploit cross-domain correlations.

∗Authors contributed equally. †Corresponding author.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

In this paper, we seek to use the multi-marginal Wasserstein distance to solve the M3 problem, but directly optimizing it is intractable.
Therefore, we develop a new dual formulation to make it tractable and propose a novel multi-marginal Wasserstein GAN (MWGAN) that enforces inner- and inter-domain constraints to exploit the correlations among domains. The contributions of this paper are summarized as follows:
• We propose a novel GAN method (called MWGAN) to optimize a feasible multi-marginal distance among different domains. MWGAN overcomes the limitations of existing methods by alleviating the distribution mismatching issue and exploiting cross-domain correlations.
• We define and analyze the generalization of our proposed method for the multi-domain translation task, which goes beyond existing generalization analyses [13, 36] that study only two domains and is non-trivial for multiple domains.
• We empirically show that MWGAN solves the imbalanced image translation task well even when the source and target domains are significantly different. Extensive experiments on toy and real-world datasets demonstrate the effectiveness of our proposed method.

2 Related Work
Generative adversarial networks (GANs). Deep neural networks have been studied both theoretically and experimentally [7, 21, 48, 49, 53]. In particular, GANs [17] have been successfully applied to computer vision tasks, such as image generation [3, 6, 18, 20], image translation [2, 10, 19], and video prediction [35]. Specifically, a generator tries to produce realistic samples, while a discriminator tries to distinguish between generated data and real data. Recently, some studies try to improve the quality [5, 9, 26] and diversity [43] of generated images, and to improve the mechanism of GANs [1, 11, 38, 39] to deal with unstable training and mode collapse.

Multi-domain image translation. The M3 problem arises in domain adaptation [45] and image translation [27, 52]. CycleGAN [51], DiscoGAN [28], DualGAN [47], and UNIT [32] address the two-domain image translation task. However, as shown in Figure 1(b), these methods measure the distance between every pair of distributions multiple times, which is computationally expensive when applied to the multi-domain image translation task. Recently, StarGAN [10] and AttGAN [23] use a single model to perform multi-domain image translation. UFDN [30] translates images by learning a domain-invariant representation across domains. Essentially, these three methods are two-domain image translation methods because they measure the distance between an input distribution and a uniform mixture of the other target distributions (see Figure 1(b)). Therefore, they may suffer from a distribution mismatching issue and obtain misleading feedback for updating the models when the source and target domains are significantly different. In addition, we discuss the differences between several GAN methods in Section I of the supplementary materials.

3 Problem Definition
Notation. We use calligraphic letters (e.g., $\mathcal{X}$) to denote spaces, capital letters (e.g., $X$) to denote random variables, and bold lower-case letters (e.g., $\mathbf{x}$) to denote the corresponding values. Let $D = (\mathcal{X}, P)$ be a domain, $P$ or $\mu$ be the marginal distribution over $\mathcal{X}$, and $\mathcal{P}(\mathcal{X})$ be the set of all probability measures over $\mathcal{X}$. For convenience, let $\mathcal{X} = \mathbb{R}^d$, and let $I = \{0, \ldots, N\}$ and $[N] = \{1, \ldots, N\}$.

Multiple marginal matching (M3) problem. In this paper, the M3 problem aims to learn mappings to match a source domain to multiple target domains.
For simplicity, we consider one source domain $D_s = \{\mathcal{X}, P_s\}$ and $N$ target domains $D_i = \{\mathcal{X}, P_{t_i}\}$, $i \in [N]$, where $P_s$ is the source distribution and $P_{t_i}$ is the $i$-th real target distribution. Let $g_i$, $i \in [N]$, be the generative models parameterized by $\theta_i$, and let $P_{\theta_i}$ be the generated distribution in the $i$-th target domain. The goal is to learn multiple generative models such that each generated distribution $P_{\theta_i}$ in the $i$-th target domain is close to the corresponding real target distribution $P_{t_i}$ (see Figure 1(a)).

Optimal transport (OT) theory. Recently, OT theory [42] has attracted great attention in many applications [3, 46]. Directly solving the primal formulation of OT [40] might be intractable [16]. To address this, we consider the dual formulation of the multi-marginal OT problem as follows.

Problem I (Dual problem [40]) Given $N+1$ marginals $\mu_i \in \mathcal{P}(\mathcal{X})$, potential functions $f_i$, $i \in I$, and a cost function $c(X^{(0)}, \ldots, X^{(N)}): \mathbb{R}^{d(N+1)} \to \mathbb{R}$, the dual Kantorovich problem can be defined as:
$$W(\mu_0, \ldots, \mu_N) = \sup_{f_i} \sum_i \int f_i\big(X^{(i)}\big)\, d\mu_i\big(X^{(i)}\big), \quad \text{s.t.}\; \sum_i f_i\big(X^{(i)}\big) \le c\big(X^{(0)}, \ldots, X^{(N)}\big). \quad (1)$$

In practice, we optimize the discrete case of Problem I. Specifically, given samples $\{x_j^{(0)}\}_{j \in J_0}$ and $\{x_j^{(i)}\}_{j \in J_i}$ drawn from the source distribution $P_s$ and the generated target distributions $P_{\theta_i}$, $i \in [N]$, respectively, where $J_i$ is an index set and $n_i = |J_i|$ is the number of samples, we have:

Problem II (Discrete dual problem) Let $F = \{f_0, \ldots, f_N\}$ be the set of Kantorovich potentials; then the discrete dual problem $\hat{h}(F)$ can be defined as:
$$\max_F \hat{h}(F) = \sum_i \frac{1}{n_i} \sum_{j \in J_i} f_i\big(x_j^{(i)}\big), \quad \text{s.t.}\; \sum_i f_i\big(x_{k_i}^{(i)}\big) \le c\big(x_{k_0}^{(0)}, \ldots, x_{k_N}^{(N)}\big), \;\forall k_i \in [n_i]. \quad (2)$$

Unfortunately, it is challenging to optimize Problem II due to the intractable inequality constraints and the multiple potential functions. To address this, we propose a new optimization method.

4 Multi-marginal Wasserstein GAN
4.1 A New Dual Formulation
For two domains, WGAN [3] solves Problem II by setting $f_0 = f$ and $f_1 = -f$. However, it is hard to extend WGAN to multiple domains. To address this, we propose a new dual formulation in order to optimize Problem II. To this end, we use a shared potential in Problem II, which is supported by empirical and theoretical evidence. In the multi-domain image translation task, the domains are often correlated, and thus share similar properties and differ only in details (see Figure 1(a)). The cross-domain correlations can be exploited by the shared potential function (see Section J in supplementary materials). More importantly, the optimal objectives of Problem II and the following problem can be equal under some conditions (see Section B in supplementary materials).

Problem III Let $F_\lambda = \{\lambda_0 f, \ldots, \lambda_N f\}$ be Kantorovich potentials; then we define the dual problem as:
$$\max_{F_\lambda} \hat{h}(F_\lambda) = \sum_i \frac{\lambda_i}{n_i} \sum_{j \in J_i} f\big(x_j^{(i)}\big), \quad \text{s.t.}\; \sum_i \lambda_i f\big(x_{k_i}^{(i)}\big) \le c\big(x_{k_0}^{(0)}, \ldots, x_{k_N}^{(N)}\big), \;\forall k_i \in [n_i]. \quad (3)$$

To further build the relationship between Problem II and Problem III, we have the following theorem, so that Problem III can be optimized well by GAN-based methods (see Subsection 4.2).

Theorem 1 Suppose the domains are connected, the cost function $c$ is continuously differentiable, and each $\mu_i$ is absolutely continuous. If $(f_0, \ldots, f_N)$ and $(\lambda_0 f, \ldots, \lambda_N f)$ are solutions to Problem I, then there exist constants $\varepsilon_i$ for each $i \in I$ such that $\sum_i \varepsilon_i = 0$ and $f_i = \lambda_i f + \varepsilon_i$.

Remark 1 By Theorem 1, if we train a shared function $f$ to obtain a solution of Problem I, we have an equivalent Wasserstein distance, i.e., $\sum_i f_i = \sum_i \lambda_i f$, regardless of the values of $\varepsilon_i$. Therefore, we are able to optimize Problem III instead of the intractable Problem II in practice.
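To make the shared-potential objective concrete, the empirical estimate $\hat{h}(F_\lambda)$ of Problem III is a one-liner; the potential network $f$ and the sample tensors are assumptions of this sketch.

```python
def dual_objective(f, samples, lambdas):
    # hat-h(F_lambda) = sum_i (lambda_i / n_i) * sum_j f(x_j^(i)).
    # samples[i]: (n_i, d) tensor from the i-th marginal; lambdas[i]: its weight
    # (lambda_0 = 1 and lambda_i = -lambda_i^+ for target domains in MWGAN).
    return sum(lam * f(x).mean() for lam, x in zip(lambdas, samples))
```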
Algorithm 1 Multi-marginal WGAN.
Input: Training data $\{x_j\}_{j=1}^{n_0}$ in the initial domain and $\{\hat{x}_j^{(i)}\}_{j=1}^{n_i}$ in the $i$-th target domain; batch size $m_{bs}$; the number of discriminator iterations per generator iteration $n_{\text{critic}}$; uniform distribution $U[0, 1]$.
Output: The discriminator $f$, the generators $\{g_i\}_{i \in [N]}$, and the classifier $\phi$.
1: while not converged do
2:   for $t = 0, \ldots, n_{\text{critic}}$ do
3:     Sample $x \sim \hat{P}_s$ and $\hat{x} \sim \hat{P}_{\theta_i}$, $\forall i$, and set $\tilde{x} \leftarrow \rho x + (1 - \rho)\hat{x}$, where $\rho \sim U[0, 1]$
4:     Update $f$ by ascending the gradient: $\nabla_w \big[\mathbb{E}_{x \sim \hat{P}_s}[f(x)] - \sum_i \lambda_i^+ \mathbb{E}_{\hat{x} \sim \hat{P}_{\theta_i}}[f(\hat{x})] - R_\tau(f)\big]$
5:     Update the classifier $\phi$ by descending the gradient $\nabla_v [C_\alpha(\phi)]$
6:   end for
7:   Update each generator $g_i$ by descending the gradient: $\nabla_{\theta_i}\big[-\lambda_i^+ \mathbb{E}_{\hat{x} \sim \hat{P}_{\theta_i}}[f(\hat{x})] - M_\alpha(g_i)\big]$
8: end while

4.2 Proposed Objective Function
To minimize the Wasserstein distance among domains, we now present a novel multi-marginal Wasserstein GAN (MWGAN) based on the proposed dual formulation in (3). Specifically, let $\mathcal{F} = \{f: \mathbb{R}^d \to \mathbb{R}\}$ be the class of discriminators parameterized by $w$, and $\mathcal{G} = \{g: \mathbb{R}^d \to \mathbb{R}^d\}$ be the class of generators, with $g_i \in \mathcal{G}$ parameterized by $\theta_i$. Motivated by the adversarial mechanism of WGAN, let $\lambda_0 = 1$ and $\lambda_i := -\lambda_i^+$ with $\lambda_i^+ > 0$, $i \in [N]$; then Problem III can be rewritten as follows:

Problem IV (Multi-marginal Wasserstein GAN) Given a discriminator $f \in \mathcal{F}$ and generators $g_i \in \mathcal{G}$, $i \in [N]$, we define the following multi-marginal Wasserstein distance:
$$W\big(\hat{P}_s, \hat{P}_{\theta_1}, \ldots, \hat{P}_{\theta_N}\big) = \max_f \; \mathbb{E}_{x \sim \hat{P}_s}[f(x)] - \sum_i \lambda_i^+ \mathbb{E}_{\hat{x} \sim \hat{P}_{\theta_i}}[f(\hat{x})], \quad \text{s.t.}\; \hat{P}_{\theta_i} \in D_i, \; f \in \Omega, \quad (4)$$
where $\hat{P}_s$ is the real source distribution, $\hat{P}_{\theta_i}$ is the distribution generated by $g_i$ in the $i$-th domain, and $\Omega = \{f \mid f(x) - \sum_{i \in [N]} \lambda_i^+ f(\hat{x}^{(i)}) \le c(x, \hat{x}^{(1)}, \ldots, \hat{x}^{(N)}),\; f \in \mathcal{F}\}$ with $x \sim \hat{P}_s$ and $\hat{x}^{(i)} \sim \hat{P}_{\theta_i}$, $i \in [N]$.

In Problem IV, we refer to $\hat{P}_{\theta_i} \in D_i$, $i \in [N]$, as inner-domain constraints and $f \in \Omega$ as inter-domain constraints (see Subsections 4.3 and 4.4). The influence of these constraints is investigated in Section N of the supplementary materials. Note that $\lambda_i^+$ reflects the importance of the $i$-th target domain. In practice, we set $\lambda_i^+ = 1/N$, $i \in [N]$, when no prior knowledge about the target domains is available. To minimize Problem IV, we optimize the generators with the following update rule.

Theorem 2 If each generator $g_i \in \mathcal{G}$, $i \in [N]$, is locally Lipschitz (see Assumption 1 of [3]), then there exists a discriminator $f$ for Problem IV such that the gradient $\nabla_{\theta_i} W(\hat{P}_s, \hat{P}_{\theta_1}, \ldots, \hat{P}_{\theta_N}) = -\lambda_i^+ \mathbb{E}_{x \sim \hat{P}_s}[\nabla_{\theta_i} f(g_i(x))]$ holds for all $\theta_i$, $i \in [N]$, when all terms are well-defined.

Theorem 2 provides an update rule for optimizing MWGAN. Specifically, we first train an optimal discriminator $f$ and then update each generator along the direction of $\mathbb{E}_{x \sim \hat{P}_s}[\nabla_{\theta_i} f(g_i(x))]$. The detailed algorithm is shown in Algorithm 1: the generators cooperatively exploit multi-domain correlations (see Section J in supplementary materials) and generate samples in their specific target domains to fool the discriminator, while the discriminator enforces the generated data in the target domains to maintain features similar to those of the source domain.
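A compact sketch of the two updates in Algorithm 1; network definitions and data sampling are assumptions, the `gradient_penalty` helper is spelled out after Section 4.4 below, and the mutual-information bonus is abbreviated as a callable `mi_term`.

```python
def discriminator_step(f, opt_f, x_src, fakes, lambdas_plus, tau, L_f):
    # Ascend E[f(x)] - sum_i lambda_i^+ E[f(x_hat_i)] - R_tau(f)
    # (written here as a loss to be minimized).
    loss = -f(x_src).mean()
    loss = loss + sum(lam * f(x_hat.detach()).mean()
                      for lam, x_hat in zip(lambdas_plus, fakes))
    loss = loss + gradient_penalty(f, x_src, fakes, tau, L_f)  # sketched below
    opt_f.zero_grad(); loss.backward(); opt_f.step()
    return loss.item()

def generator_step(f, g_i, opt_g, x_src, lam_i, mi_term):
    # Descend -lambda_i^+ E[f(g_i(x))] - M_alpha(g_i).
    x_hat = g_i(x_src)
    loss = -lam_i * f(x_hat).mean() - mi_term(x_hat)
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()
```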
4.3 Inner-domain Constraints
In Problem IV, the distribution $P_{\theta_i}$ generated by the generator $g_i$ should belong to the $i$-th domain for every $i$. To this end, we introduce an auxiliary domain classification loss and a mutual information term.

Domain classification loss. Given an input $x := x^{(0)}$ and generator $g_i$, we aim to translate the input $x$ to an output $\hat{x}^{(i)}$ that can be correctly classified into the target domain $D_i$. To achieve this goal, we introduce an auxiliary classifier $\phi: \mathcal{X} \to \mathcal{Y}$ parameterized by $v$ to optimize the generators. Specifically, we label real data $x \sim \hat{P}_{t_i}$ as 1, where $\hat{P}_{t_i}$ is an empirical distribution in the $i$-th target domain, and we label generated data $\hat{x}^{(i)} \sim \hat{P}_{\theta_i}$ as 0. Then, the domain classification loss w.r.t. $\phi$ can be defined as:
$$C_\alpha(\phi) = \alpha \cdot \mathbb{E}_{x' \sim \hat{P}_{t_i} \cup \hat{P}_{\theta_i}}\big[\ell(\phi(x'), y)\big], \quad (5)$$
where $\alpha$ is a hyper-parameter, $y$ is the label corresponding to $x'$, and $\ell(\cdot, \cdot)$ is a binary classification loss, such as the hinge loss [50], mean square loss [34], cross-entropy loss [17], or Wasserstein loss [12].

Mutual information maximization. After learning the classifier $\phi$, we maximize a lower bound of the mutual information [8, 23] between the generated image and the corresponding domain, i.e.,
$$M_\alpha(g_i) = \alpha \cdot \mathbb{E}_{x \sim \hat{P}_s}\big[\log \phi\big(y^{(i)} = 1 \mid g_i(x)\big)\big]. \quad (6)$$
By maximizing the mutual information in (6), we correlate the generated image $g_i(x)$ with the $i$-th domain, and then we are able to translate the source image to the specified domain.

4.4 Inter-domain Constraints
Next, we enforce the inter-domain constraints in Problem IV, i.e., the discriminator $f \in \mathcal{F} \cap \Omega$. One could require the discriminator to be 1-Lipschitz continuous, but this may ignore the dependency among domains (see Section H in supplementary materials). Thus, we relax the constraints using the following lemma.

Lemma 1 (Constraints relaxation) If the cost function $c(\cdot)$ is measured by the $\ell_2$ norm, then there exists $L_f \ge 1$ such that the constraints in Problem IV satisfy $\sum_i |f(x) - f(\hat{x}^{(i)})| / \|x - \hat{x}^{(i)}\| \le L_f$.

Note that $L_f$ measures the dependency among domains (see Section G in supplementary materials). In practice, $L_f$ can be calculated from the cost function, or treated as a tuning parameter for simplicity.

Inter-domain gradient penalty. In practice, directly enforcing the inequality constraints in Lemma 1 would yield poor performance when generated samples are far from real data. We thus propose the following inter-domain gradient penalty. Specifically, given real data $x$ in the source domain and generated samples $\hat{x}^{(i)}$, if $\hat{x}^{(i)}$ is properly close to $x$, as suggested in [37], we can calculate its gradient and introduce the following regularization term into the objective of MWGAN:
$$R_\tau(f) = \tau \cdot \Big(\sum_i \mathbb{E}_{\tilde{x}^{(i)} \sim \hat{Q}_i}\big\|\nabla f(\tilde{x}^{(i)})\big\| - L_f\Big)_+^2, \quad (7)$$
where $(\cdot)_+ = \max\{0, \cdot\}$, $\tau$ is a hyper-parameter, $\tilde{x}^{(i)}$ is sampled between $x$ and $\hat{x}^{(i)}$, and $\hat{Q}_i$, $i \in [N]$, is a constructed distribution relying on some sampling strategy. In practice, one can construct a distribution whose samples $\tilde{x}^{(i)}$ are interpolated between real data $x$ and generated data $\hat{x}^{(i)}$ for every domain [18]. Note that the gradient penalty captures the dependency among domains, since the cost function in Problem IV measures the distance among all domains jointly.
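The inter-domain gradient penalty of Eq. (7) can be sketched as follows, using the $\tilde{x} = \rho x + (1-\rho)\hat{x}$ interpolation from Algorithm 1; batch shapes are assumptions of this sketch.

```python
import torch

def gradient_penalty(f, x_src, fakes, tau, L_f):
    # R_tau(f) = tau * ( sum_i E||grad f(x_tilde_i)|| - L_f )_+^2
    total_norm = 0.0
    for x_hat in fakes:
        rho = torch.rand(x_src.size(0), *([1] * (x_src.dim() - 1)),
                         device=x_src.device)
        x_tilde = (rho * x_src + (1.0 - rho) * x_hat).requires_grad_(True)
        grads = torch.autograd.grad(f(x_tilde).sum(), x_tilde,
                                    create_graph=True)[0]
        total_norm = total_norm + grads.flatten(1).norm(dim=1).mean()
    return tau * torch.clamp(total_norm - L_f, min=0.0) ** 2
```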
5 Theoretical Analysis
In this section, we provide a generalization analysis for the proposed method. Motivated by [4], we give a new definition of generalization for multiple distributions as follows.

Definition 1 (Generalization) Let $P_s$ and $P_{\theta_i}$ be the continuous real and generated distributions, and $\hat{P}_s$ and $\hat{P}_{\theta_i}$ be the empirical real and generated distributions. The distribution distance $W(\cdot, \ldots, \cdot)$ is said to generalize with $n$ training samples and error $\epsilon$ if, for every generated distribution $P_{\theta_i}$, the following inequality holds with high probability:
$$\big|W(\hat{P}_s, \hat{P}_{\theta_1}, \ldots, \hat{P}_{\theta_N}) - W(P_s, P_{\theta_1}, \ldots, P_{\theta_N})\big| \le \epsilon. \quad (8)$$

In Definition 1, the generalization bound measures the difference between the expected distance and the empirical distance. In practice, our goal is to train MWGAN to obtain a small empirical distance, so that the expected distance is also small. With the help of Definition 1, we are able to analyze the generalization ability of the proposed method. Let $\kappa$ be the capacity of the discriminator; if the discriminator is $L$-Lipschitz continuous and bounded in $[-\Delta, \Delta]$, then we have the following generalization bound.

Theorem 3 (Generalization bound) Given the continuous real and generated distributions $P_s$ and $P_{\theta_i}$, $i \in I$, and the empirical versions $\hat{P}_s$ and $\hat{P}_{\theta_i}$, $i \in I$, with at least $n$ samples in each domain, there is a universal constant $C$ such that when $n \ge C\kappa\Delta^2\log(L\kappa/\epsilon)/\epsilon^2$ with error $\epsilon$, the following generalization bound holds with probability at least $1 - e^{-\kappa}$:
$$\big|W(\hat{P}_s, \hat{P}_{\theta_1}, \ldots, \hat{P}_{\theta_N}) - W(P_s, P_{\theta_1}, \ldots, P_{\theta_N})\big| \le \epsilon. \quad (9)$$

Theorem 3 shows that MWGAN has good generalization ability given enough training data in each domain. In practice, if we successfully minimize the multi-domain Wasserstein distance $W(\hat{P}_s, \hat{P}_{\theta_1}, \ldots, \hat{P}_{\theta_N})$, the expected distance $W(P_s, P_{\theta_1}, \ldots, P_{\theta_N})$ will also be small.

6 Experiments
Implementation details. All experiments are conducted in PyTorch on an NVIDIA TITAN X GPU.³ We use Adam [29] with $\beta_1 = 0.5$ and $\beta_2 = 0.999$ and set the learning rate to 0.0001. We train the model for 100k iterations with batch size 16. We set $\alpha = 10$, $\tau = 10$, and $L_f$ to the number of target domains in Loss (7). Details of the loss function and the network architectures of the discriminator, generators, and classifier are given in Section P of the supplementary materials.

³ The source code of our method is available at https://github.com/caojiezhang/MWGAN.

Baselines. We adopt the following methods as baselines: (i) CycleGAN [51] is a two-domain image translation method that can be flexibly extended to the multi-domain image translation task. (ii) UFDN [30] and (iii) StarGAN [10] are multi-domain image translation methods.

Datasets. We conduct experiments on three datasets. Note that all images are resized to 128×128. (i) Toy dataset. We generate a Gaussian distribution in the source domain, and six other Gaussian or uniform distributions in the target domains. More details can be found in the supplementary materials. (ii) CelebA [33] contains 202,599 face images, where each image has 40 binary attributes. We use the following attributes: hair color (black, blond, and brown), eyeglasses, mustache, and pale skin. In the first experiment, we use black hair images as the source domain, and blond hair, eyeglasses, mustache, and pale skin images as target domains. In the second experiment, we extract 50k Canny edges from CelebA. We take edge images as the source domain and hair images as target domains. (iii) Style painting [51]. The numbers of real scene, Monet, Van Gogh, and Ukiyo-e images are 6287, 1073, 400, and 563, respectively. We take real scene images as the source domain, and the others as target domains.

Evaluation metrics. We use the following evaluation metrics: (i) Fréchet Inception Distance (FID) [24] evaluates the quality of the translated images; in general, a lower FID score means better performance. (ii) Classification accuracy, widely used in [10, 23], evaluates the probability that the generated images belong to the corresponding target domains. Specifically, we train a classifier on CelebA (90% for training and 10% for testing) using ResNet-18 [22], resulting in near-perfect accuracy, and then use the classifier to measure the classification accuracy of the generated images.
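As a sketch of the classification-accuracy protocol (the checkpoint path, the binary label convention, and the data pipeline are assumptions of this illustration):

```python
import torch
import torchvision

@torch.no_grad()
def attribute_accuracy(generator, loader, target_label=1, device="cuda"):
    """Fraction of translated images that the pretrained ResNet-18 attribute
    classifier assigns to the target domain."""
    clf = torchvision.models.resnet18(num_classes=2).to(device)
    clf.load_state_dict(torch.load("attribute_classifier.pt"))  # hypothetical file
    clf.eval(); generator.eval()
    correct = total = 0
    for x, _ in loader:                       # source-domain images
        pred = clf(generator(x.to(device))).argmax(dim=1)
        correct += (pred == target_label).sum().item()
        total += x.size(0)
    return correct / total
```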
6.1 Results on Toy Dataset
We compare MWGAN with UFDN and StarGAN on the toy dataset to verify the limitations mentioned in Section 2. Specifically, we measure the distribution matching ability and plot the value surface of the discriminator. Here, the value surface depicts the outputs of the discriminator [18, 31]. In Figure 2, MWGAN matches the target domain distributions very well, as it is able to capture the geometric information of the real distribution using a low-capacity network. Moreover, the value surface shows that the discriminator provides correct gradients to update the generators. In contrast, the baseline methods are very sensitive to the type of source and target domain distributions. With the same capacity, the baseline methods match the target domain distributions when the distributions are similar (top row), but they cannot match the target domain distributions well when the initial and target domain distributions are different (see the bottom row of Figure 2).

6.2 Results on CelebA
We compare MWGAN with several baselines on both balanced and imbalanced translation tasks.

(i) Balanced image translation task. In this experiment, we train the generators to produce single-attribute images, and then synthesize multi-attribute images using composite generators (see the sketch below). We generate attributes in the order {Blond hair, Eyeglasses, Mustache, Pale skin}. Taking two attributes as an example, let $g_1$ and $g_2$ be the generators of blond hair and eyeglasses images, respectively; then images with blond hair and eyeglasses attributes are generated by the composite generator $g_2 \circ g_1$.

Qualitative results. In Figure 3, MWGAN performs comparably to or better than the baselines on the single-attribute translation task, and achieves the highest visual quality on the multi-attribute translation results. In other words, MWGAN has good generalization performance. In contrast, CycleGAN struggles to synthesize multiple attributes. UFDN cannot guarantee the identity of the translated images and produces images with blurry structures. Moreover, StarGAN depends strongly on the number of transferred domains, and the synthesized images sometimes lack perceptual realism.

Quantitative results. We further compare FID and classification accuracy for the single-attribute results. For the multi-attribute results, we only report classification accuracy, because FID is no longer a valid measure and may give misleading results when training data are not sufficient [24]. In Table 1, MWGAN achieves the lowest FID and comparable classification accuracy, indicating that it produces realistic single-attribute images of the highest quality. In Table 2, MWGAN achieves the highest classification accuracy and thus synthesizes the most realistic multi-attribute images.
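The composite-generator construction used above ($g_2 \circ g_1$) is a one-liner; this sketch only assumes each $g_i$ is a callable image-to-image network.

```python
from functools import reduce

def compose(*generators):
    # compose(g2, g1)(x) == g2(g1(x)): the right-most generator is applied first.
    return lambda x: reduce(lambda h, g: g(h), reversed(generators), x)

# Blond hair + eyeglasses images via the composite generator g2 o g1:
# x_multi = compose(g2, g1)(x)
```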
(ii) Imbalanced image translation task. In this experiment, we compare MWGAN with baselines on the Edge→CelebA translation task. Note that this task is imbalanced because edge images carry much less information than facial attribute images.

Qualitative results. In Figure 4, MWGAN is able to generate the most natural-looking facial images with the corresponding attributes from edge images. In contrast, UFDN fails to preserve the facial texture of an edge image and generates images with very blurry and distorted structure. In addition, CycleGAN and StarGAN mostly preserve the domain information but cannot maintain the sharpness of the images and the facial structure. Moreover, this experiment also shows the superiority of our method on the imbalanced image translation task.

Quantitative results. In Table 3, MWGAN achieves the lowest FID, showing that it is able to produce the most realistic facial attributes from the edge images. In contrast, the FID values of the baselines are large because these methods struggle to generate sharp and realistic images. We also perform a perceptual evaluation with AMT for this task (see Section M in supplementary materials).

6.3 Results on Painting Translation
Finally, we train our model on the painting dataset to conduct the style transfer task [41, 44]. As suggested in [14, 15, 51], we only show qualitative results. Note that this translation task is also imbalanced because the input and target distributions are significantly different. In Figure 5, MWGAN generates painting images with higher visual quality. In contrast, UFDN fails to generate clearly structured painting images because it is hard to learn domain-invariant representations when the domains are highly imbalanced. CycleGAN cannot fully learn useful information from painting images for translating scene images. When taking a painting image as input, StarGAN may obtain misleading information for updating the generator. In this sense, when all domains are significantly different, StarGAN may not learn a single generator good enough to synthesize images of multiple domains.

7 Conclusion
In this paper, we have proposed a novel multi-marginal Wasserstein GAN (MWGAN) for the multiple marginal matching problem. Specifically, with the help of multi-marginal optimal transport theory, we develop a new dual formulation for better adversarial learning on the unsupervised multi-domain image translation task. Moreover, we theoretically define and analyze the generalization ability of the proposed method. Extensive experiments on both toy and real-world datasets demonstrate the effectiveness of the proposed method.

Acknowledgements
This work is partially funded by Guangdong Provincial Scientific and Technological Funds under Grants 2018B010107001, National Natural Science Foundation of China (NSFC) 61602185, key project of NSFC (No. 61836003), Fundamental Research Funds for the Central Universities D2191240, Program for Guangdong Introducing Innovative and Entrepreneurial Teams 2017ZT07X183, and Tencent AI Lab Rhino-Bird Focused Research Program (No. JR201902). This work is also partially funded by Microsoft Research Asia (MSRA Collaborative Research Program 2019).
1. What is the main contribution of the paper regarding optimal transportation theory?
2. What are the strengths of the paper, particularly in its theoretical analysis and experimental justification?
3. Do you have any questions or concerns regarding the assumptions made in the paper, such as the uniform potential function with different weights?
4. How does the reviewer assess the effectiveness and significance of the proposed method compared to other works, including WGAN?
5. What are the reviewer's thoughts on the discussion of generalization in the paper, especially regarding its definition and application to unseen test examples?
Review
Review

Based on optimal transportation theory, the authors have developed a new approach to tackle the multiple marginal matching problem. The authors have provided details to derive a tractable objective function, which can eventually be formulated as a GAN problem. Theoretical analysis of the generalization of the method has been conducted. Experiments on both toy and real-world data help justify the effectiveness of the method.

In Section 4.1, the authors linked potential functions in multiple domains using a uniform potential function with different weights. This assumption seems to be a bit strong. Some more explanation is necessary to justify the reasonableness of this formulation.

In Problem IV, the authors state that 1/N can be taken as a default value for lambda. If so, the problem will be nearly the same as the objective function of the classical WGAN. Further considering the generators for the different domains, the whole process of the algorithm may not show a significant difference from WGAN, except that more than one generator has been used. In addition, the correlation between multiple domains seems to be investigated in a very straightforward way by assuming a shared discriminator (as in WGAN) for multiple domains. It is therefore unclear whether this simple approach is indeed helpful for exploring the correlation.

It is very interesting to discuss generalization in the paper. According to Definition 1, generalization is defined over the training sample. How about generalization over unseen test examples?
------------------------------------------------------------------------------
The authors addressed my concerns on the technical details in the rebuttal. The proposed algorithm is theoretically motivated and has shown performance advantages in experiments. This paper is interesting to me, and I would like to vote for an acceptance.
NIPS
Title Multi-marginal Wasserstein GAN Abstract Multiple marginal matching problem aims at learning mappings to match a source domain to multiple target domains and it has attracted great attention in many applications, such as multi-domain image translation. However, addressing this problem has two critical challenges: (i) Measuring the multi-marginal distance among different domains is very intractable; (ii) It is very difficult to exploit cross-domain correlations to match the target domain distributions. In this paper, we propose a novel Multi-marginal Wasserstein GAN (MWGAN) to minimize Wasserstein distance among domains. Specifically, with the help of multi-marginal optimal transport theory, we develop a new adversarial objective function with innerand inter-domain constraints to exploit cross-domain correlations. Moreover, we theoretically analyze the generalization performance of MWGAN, and empirically evaluate it on the balanced and imbalanced translation tasks. Extensive experiments on toy and real-world datasets demonstrate the effectiveness of MWGAN. 1 Introduction Multiple marginal matching (M3) problem aims to map an input image (source domain) to multiple target domains (see Figure 1(a)), and it has been applied in computer vision, e.g., multi-domain image translation [10, 23, 25]. In practice, the unsupervised image translation [30] gains particular interest because of its label-free property. However, due to the lack of corresponding images, this task is extremely hard to learn stable mappings to match a source distribution to multiple target distributions. Recently, some methods [10, 30] address M3 problem, which, however, face two main challenges. First, existing methods often neglect to jointly optimize the multi-marginal distance among domains, which cannot guarantee the generalization performance of methods and may lead to distribution mismatching issue. Recently, CycleGAN [51] and UNIT [32] repeatedly optimize every pair of two different domains separately (see Figure 1(b)). In this sense, they are computationally expensive and may have poor generalization performance. Moreover, UFDN [30] and StarGAN [10] essentially measure the distance between an input distribution and a mixture of all target distributions (see Figure 1(b)). As a result, they may suffer from distribution mismatching issue. Therefore, it is necessary to explore a new method to measure and optimize the multi-marginal distance. Second, it is very challenging to exploit the cross-domain correlations to match target domains. Existing methods [51, 30] only focus on the correlations between the source and target domains, since they measure the distance between two distributions (see Figure 1(b)). However, these methods often ignore the correlations among target domains, and thus they are hard to fully capture information to improve the performance. Moreover, when the source and target domains are significantly different, or the number of target domains is large, the translation task turns to be difficult for existing methods to exploit the cross-domain correlations. ∗Authors contributed equally. †Corresponding author. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. In this paper, we seek to use multi-marginal Wasserstein distance to solve M3 problem, but directly optimizing it is intractable. 
Therefore, we develop a new dual formulation to make it tractable and propose a novel multi-marginal Wasserstein GAN (MWGAN) by enforcing inner- and inter-domain constraints to exploit the correlations among domains. The contributions of this paper are summarized as follows: • We propose a novel GAN method (called MWGAN) to optimize a feasible multi-marginal distance among different domains. MWGAN overcomes the limitations of existing methods by alleviating the distribution mismatching issue and exploiting cross-domain correlations. • We define and analyze the generalization of our proposed method for the multiple domain translation task, which is more important than existing generalization analyses [13, 36] studying only on two domains and non-trivial for multiple domains. • We empirically show that MWGAN is able to solve the imbalanced image translation task well when the source and target domains are significantly different. Extensive experiments on toy and real-world datasets demonstrate the effectiveness of our proposed method. 2 Related Work Generative adversarial networks (GANs). Deep neural networks have theoretical and experimental explorations [7, 21, 48, 49, 53]. In particular, GANs [17] have been successfully applied in computer vision tasks, such as image generation [3, 6, 18, 20], image translation [2, 10, 19] and video prediction [35]. Specifically, a generator tries to produce realistic samples, while a discriminator tries to distinguish between generated data and real data. Recently, some studies try to improve the quality [5, 9, 26] and diversity [43] of generated images, and improve the mechanism of GANs [1, 11, 38, 39] to deal with the unstable training and mode collapse problems. Multi-domain image translation. M3 problem can be applied in domain adaptation [45] and image translation [27, 52]. CycleGAN [51], DiscoGAN [28], DualGAN [47] and UNIT [32] are proposed to address two-domain image translation task. However, in Figure 1(b), these methods measure the distance between every pair of distributions multiple times, which is computationally expensive when applied to the multi-domain image translation task. Recently, StarGAN [10] and AttGAN [23] use a single model to perform multi-domain image translation. UFDN [30] translates images by learning domain-invariant representation for cross-domains. Essentially, the above three methods are two-domain image translation methods because they measure the distance between an input distribution and a uniform mixture of other target distributions (see Figure 1(b)). Therefore, these methods may suffer from distribution mismatching issue and obtain misleading feedback for updating models when the source and target domains are significantly different. In addition, we discuss the difference between some GAN methods in Section I in supplementary materials. 3 Problem Definition Notation. We use calligraphic letters (e.g., X ) to denote space, capital letters (e.g., X) to denote random variables, and bold lower case letter (e.g., x) to denote the corresponding values. Let D=(X ,P) be the domain, P or µ be the marginal distribution over X and P(X ) be the set of all the probability measures over X . For convenience, let X=Rd, and let I={0, ..., N} and [N ]={1, ..., N}. Multiple marginal matching (M3) problem. In this paper, M3 problem aims to learn mappings to match a source domain to multiple target domains. 
For simplicity, we consider one source domain Ds={X ,Ps} and N target domains Di={X ,Pti}, i∈[N ], where Ps is the source distribution, and Pti is the i-th real target distribution. Let gi, i∈[N ] be the generative models parameterized by θi, and Pθi be the generated distribution in the i-th target domain. In this problem, the goal is to learn multiple generative models such that each generated distribution Pθi in the i-th target domain can be close to the corresponding real target distribution Pti (see Figure 1(a)). Optimal transport (OT) theory. Recently, OT [42] theory has attracted great attention in many applications [3, 46]. Directly solving the primal formulation of OT [40] might be intractable [16]. To address this, we consider the dual formulation of the multi-marginal OT problem as follows. Problem I (Dual problem [40]) Given N+1 marginals µi∈P(X ), potential functions fi, i∈I , and a cost function c(X(0), . . . , X(N)) : Rd(N+1)→R, the dual Kantorovich problem can be defined as: W (µ0, ..., µN )= sup fi ∑ i ∫ fi ( X(i) ) dµi ( X(i) ) , s.t. ∑ i fi ( X(i) ) ≤c ( X(0), ..., X(N) ) . (1) In practice, we optimize the discrete case of Problem I. Specifically, given samples {x(0)j }j∈J0 and {x(i)j }j∈Ji drawn from source domain distribution Ps and generated target distributions Pθi , i∈[N ], respectively, where Ji is an index set and ni=|Ji| is the number of samples, we have: Problem II (Discrete dual problem) Let F={f0, . . . , fN} be the set of Kantorovich potentials, then the discrete dual problem ĥ(F ) can be defined as: max F ĥ(F )= ∑ i 1 ni ∑ j∈Ji fi ( x (i) j ) , s.t. ∑ i fi ( x (i) ki ) ≤c ( x (0) k0 , . . . ,x (N) kN ) ,∀ki∈[ni]. (2) Unfortunately, it is challenging to optimize Problem II due to the intractable inequality constraints and multiple potential functions. To address this, we seek to propose a new optimization method. 4 Multi-marginal Wasserstein GAN 4.1 A New Dual Formulation For two domains, WGAN [3] solves Problem II by setting f0=f and f1=−f . However, it is hard to extend WGAN to multiple domains. To address this, we propose a new dual formulation in order to optimize Problem II. To this end, we use a shared potential in Problem II, which is supported by empirical and theoretical evidence. In the multi-domain image translation task, the domains are often correlated, and thus share similar properties and differ only in details (see Figure 1(a)). The cross-domain correlations can be exploited by the shared potential function (see Section J in supplementary materials). More importantly, the optimal objectives of Problem II and the following problem can be equal under some conditions (see Section B in supplementary materials). Problem III Let Fλ={λ0f, . . . , λNf} be Kantorovich potentials, then we define dual problem as: max Fλ ĥ(Fλ)= ∑ i λi ni ∑ j∈Ji f ( x (i) j ) , s.t. ∑ i λif ( x (i) ki ) ≤c ( x (0) k0 , . . . ,x (N) kN ) ,∀ki∈[ni]. (3) To further build the relationship between Problem II and Problem III, we have the following theorem so that Problem III can be optimized well by GAN-based methods (see Subsection 4.2). Theorem 1 Suppose the domains are connected, the cost function c is continuously differentiable and each µi is absolutely continuous. If (f0, . . . , fN ) and (λ0f, . . . , λNf) are solutions to Problem I, then there exist some constants εi for each i ∈ I such that ∑ i εi = 0 and fi = λif + εi. 
Remark 1 From Theorem 1, if we train a shared function f to obtain a solution of Problem I, we have an equivalent Wasserstein distance, i.e., ∑ i fi= ∑ i λif regardless of whatever the value εi is. Therefore, we are able to optimize Problem III instead of intractable Problem II in practice. Algorithm 1 Multi-marginal WGAN. Input: Training data {xj}n0j=1 in the initial domain, {x̂ (i) j } ni j=1 in the i-th target domain; batch size mbs; the number of iterations of the discriminator per generator iteration ncritic; Uniform distribution U [0, 1]. Output: The discriminator f , the generators {gi}i∈[N ] and the classifier φ 1: while not converged do 2: for t = 0, . . . , ncritic do 3: Sample x∼P̂s and x̂∼P̂θi , ∀i, and x̃← ρx + (1− ρ)x̂, where ρ∼U [0, 1] 4: Update f by ascending the gradient: ∇w[Ex∼P̂s [f (x)]− ∑ i λ + i Ex̂∼P̂θi [f (x̂)]−Rτ (f)] 5: Update classifier φ by descending the gradient∇v[Cα(φ)] 6: end for 7: Update each generator gi by descending the gradient: ∇θi [−λ + i Ex̂∼P̂θi [f (x̂)]−Mα(gi)] 8: end while 4.2 Proposed Objective Function To minimize Wasserstein distance among domains, we now present a novel multi-marginal Wasserstein GAN (MWGAN) based on the proposed dual formulation in (3). Specifically, let F={f :Rd→R} be the class of discriminators parameterized by w, and G={g:Rd→Rd} be the class of generators and gi∈G is parameterized by θi. Motivated by the adversarial mechanism of WGAN, let λ0=1 and λi:=−λ+i , λ + i >0, i∈[N ], then Problem III can be rewritten as follows: Problem IV (Multi-marginal Wasserstein GAN) Given a discriminator f∈F and generators gi∈G, i∈[N ], we can define the following multi-marginal Wasserstein distance as W ( P̂s, P̂θ1 , . . . , P̂θN ) = maxf Ex∼P̂s [f(x)]− ∑ i λ+i Ex̂∼P̂θi [f (x̂)] , s.t. P̂θi∈Di, f∈Ω. (4) where P̂s is the real source distribution, and the distribution P̂θi is generated by gi in the i-th domain, Ω={f |f(x)− ∑ i∈[N ] λ + i f(x̂ (i))≤c(x, x̂(1), . . . , x̂(N)), f∈F} with x∈P̂s and x̂(i)∈P̂θi , i∈[N ]. In Problem IV, we refer to P̂θi∈Di, i∈[N ] as inner-domain constraints and f∈Ω as inter-domain constraints (See Subsections 4.3 and 4.4). The influence of these constraints are investigated in Section N of supplementary materials. Note that λ+i reflects the importance of the i-th target domain. In practice, we set λ+i =1/N, i∈[N ] when no prior knowledge is available on the target domains. To minimize Problem IV, we optimize the generators with the following update rule. Theorem 2 If each generator gi∈G, i∈[N ] is locally Lipschitz (see more details of Assumption 1 [3]), then there exists a discriminator f to Problem IV, we have the gradient ∇θiW (P̂s, P̂θ1 , . . . , P̂θN ) = −λ+i Ex∼P̂s [∇θif(gi(x))] for all θi, i∈[N ] when all terms are well-defined. Theorem 2 provides a good update rule for optimizing MWGAN. Specifically, we first train an optimal discriminator f and then update each generator along the direction of Ex∼P̂s [∇θif(gi(x))]. The detailed algorithm is shown in Algorithm 1. Specifically, the generators cooperatively exploit multi-domain correlations (see Section J in supplementary materials) and generate samples in the specific target domain to fool the discriminator; the discriminator enforces generated data in target domains to maintain the similar features from the source domain. 4.3 Inner-domain Constraints In Problem IV, the distribution Pθi generated by the generator gi should belong to the i-th domain for any i. To this end, we introduce an auxiliary domain classification loss and the mutual information. 
Domain classification loss. Given an input x:=x(0) and generator gi, we aim to translate the input x to an output x̂(i) which can be classified to the target domain Di correctly. To achieve this goal, we introduce an auxiliary classifier φ: X→Y parameterized by v to optimize the generators. Specifically, we label real data x∼P̂ti as 1, where P̂ti is an empirical distribution in the i-th target domain, and we label generated data x̂(i)∼P̂θi as 0. Then, the domain classification loss w.r.t. φ can be defined as: Cα(φ) = α · Ex′∼P̂ti∪P̂θi [` (φ (x ′) , y)] , (5) where α is a hyper-parameter, y is corresponding to x′, and `(·, ·) is a binary classification loss, such as hinge loss [50], mean square loss [34], cross-entropy loss [17] and Wasserstein loss [12]. Mutual information maximization. After learning the classifier φ, we maximize the lower bound of the mutual information [8, 23] between the generated image and the corresponding domain, i.e., Mα(gi) = α · Ex∼P̂s [ log φ ( y(i)=1 ∣∣∣ gi(x))] . (6) By maximizing the mutual information in (6), we correlate the generated image gi(x) with the i-th domain, and then we are able to translate the source image to the specified domain. 4.4 Inter-domain Constraints Then, we enforce the inter-domain constraints in Problem IV, i.e., the discriminator f∈F∩Ω. One can let discriminator be 1-Lipschitz continuous, but it may ignore the dependency among domains (see Section H in supplementary materials). Thus, we relax the constraints by the following lemma. Lemma 1 (Constraints relaxation) If the cost function c(·) is measured by `2 norm, then there exists Lf≥1 such that the constraints in Problem IV satisfy ∑ i |f(x)−f(x̂(i))|/‖x−x̂(i)‖≤Lf . Note that Lf measures the dependency among domains (see Section G in supplementary materials). In practice, Lf can be calculated with the cost function, or treated as a tuning parameter for simplicity. Inter-domain gradient penalty. In practice, directly enforcing the inequality constraints in Lemma 1 would have poor performance when generated samples are far from real data. We thus propose the following inter-domain gradient penalty. Specifically, given real data x in the source domain and generated samples x̂(i), if x̂(i) can be properly close to x, as suggested in [37], we can calculate its gradient and introduce the following regularization term into the objective of MWGAN, i.e., Rτ (f) = τ · (∑ i Ex̃(i)∼Q̂i ∥∥∥∇f (x̃(i))∥∥∥−Lf)2 + , (7) where (·)+= max{0, ·}, τ is a hyper-parameter, x̃(i) is sampled between x and x̂(i), and Q̂i, i∈[N ] is a constructed distribution relying on some sampling strategy. In practice, one can construct a distribution where samples x̃(i) can be interpolated between real data x and generated data x̂(i) for every domain [18]. Note that the gradient penalty captures the dependency of domains since the cost function in Problem IV measures the distance among all domains jointly. 5 Theoretical Analysis In this section, we provide the generalization analysis for the proposed method. Motivated by [4], we give a new definition of generalization for multiple distributions as follows. Definition 1 (Generalization) Let Ps and Pθi be the continuous real and generated distributions, and P̂s and P̂θi be the empirical real and generated distributions. The distribution distance W (·, . . . , ·) is said to generalize with n training samples and error , if for every true generated distribution Pθi , the following inequality holds with high probability,∣∣∣W (P̂s, P̂θ1 , . . . , P̂θN)−W (Ps,Pθ1 , . . . 
,PθN )∣∣∣ ≤ . (8) In Definition 1, the generalization bound measures the difference between the expected distance and the empirical distance. In practice, our goal is to train MWGAN to obtain a small empirical distance, so that the expected distance would also be small. With the help of Definition 1, we are able to analyze the generalization ability of the proposed method. Let κ be the capacity of the discriminator, and if the discriminator is L-Lipschitz continuous and bounded in [−∆,∆], then we have the following generalization bound. Theorem 3 (Generalization bound) Given the continuous real and generated distributions Ps and Pθi , i∈I, and the empirical versions P̂s and P̂θi , i∈I with at least n samples in each domain, there is a universal constant C such that n≥Cκ∆2 log(Lκ/ )/ 2 with the error , the following generalization bound is satisfied with probability at least 1−e−κ,∣∣∣W (P̂s, P̂θ1 , . . . , P̂θN)−W (Ps,Pθ1 , . . . ,PθN )∣∣∣ ≤ . (9) Theorem 3 shows that MWGAN has a good generalization ability with enough training data in each domain. In practice, if successfully minimizing the multi-domain Wasserstein distance i.e., W (P̂s, P̂θ1 , . . . , P̂θN ), the expected distance W (Ps,Pθ1 , . . . ,PθN ) can also be small. 6 Experiments Implementation details. All experiments are conducted based on PyTorch, with an NVIDIA TITAN X GPU.3 We use Adam [29] with β1=0.5 and β2=0.999 and set the learning rate as 0.0001. We train the model 100k iterations with batch size 16. We set α=10, τ=10 and Lf to be the number of target domains in Loss (7). The details of the loss function and the network architectures of the discriminator, generators and classifier can be referred to Section P in supplementary materials. Baselines. We adopt the following methods as baselines: (i) CycleGAN [51] is a two-domain image translation method which can be flexibly extended to perform the multi-domain image translation task. (ii) UFDN [30] and (iii) StarGAN [10] are multi-domain image translation methods. Datasets. We conduct experiments on three datasets. Note that all images are resized as 128×128. (i) Toy dataset. We generate a Gaussian distribution in the source domain, and other six Gaussian or Uniform distributions in the target domains. More details can be found in the supplemental materials. (ii) CelebA [33] contains 202,599 face images, where each image has 40 binary attributes. We use the following attributes: hair color (black, blond and brown), eyeglasses, mustache and pale skin. In the first experiment, we use black hair images as the source domain, and use the blond hair, eyeglasses, mustache and pale skin images as target domains. In the second experiment, we extract 50k Canny edges from CelebA. We take edge images as the source domain and hair images as target domains. (iii) Style painting [51]. The size of Real scene, Monet, Van Gogh and Ukiyo-e is 6287, 1073, 400 and 563, respectively. We take real scene images as the source domain, and others as target domains. Evaluation Metrics. We use the following evaluation metrics: (i) Fréchet Inception Distance (FID) [24] evaluates the quality of the translated images. In general, a lower FID score means better performance. (ii) Classification accuracy widely used in [10, 23] evaluates the probability that the generated images belong to corresponding target domains. 
Specifically, we train a classifier on CelebA (90% for training and 10% for testing) using ResNet-18 [22], resulting in near-perfect accuracy, and then use this classifier to measure the classification accuracy of the generated images.

6.1 Results on Toy Dataset

We compare MWGAN with UFDN and StarGAN on the toy dataset to verify the limitations mentioned in Section 2. Specifically, we measure the distribution matching ability and plot the value surface of the discriminator. Here, the value surface depicts the outputs of the discriminator [18, 31]. In Figure 2, MWGAN matches the target domain distributions very well, as it is able to capture the geometric information of the real distributions using a low-capacity network. Moreover, the value surface shows that the discriminator provides correct gradients to update the generators. The baseline methods, however, are very sensitive to the type of source and target domain distributions. With the same capacity, the baseline methods are able to match the target domain distributions when the distributions are similar (top row of Figure 2), but they cannot match the target domain distributions well when the initial and target domain distributions are different (bottom row of Figure 2).

³The source code of our method is available at https://github.com/caojiezhang/MWGAN.

6.2 Results on CelebA

We compare MWGAN with several baselines on both balanced and imbalanced translation tasks.

(i) Balanced image translation task. In this experiment, we train the generators to produce single-attribute images, and then synthesize multi-attribute images using composite generators. We generate attributes in the order {Blond hair, Eyeglasses, Mustache, Pale skin}. Taking two attributes as an example, let g_1 and g_2 be the generators of Blond hair and Eyeglasses images, respectively; then images with both Blond hair and Eyeglasses attributes are generated by the composite generator g_2 ∘ g_1.

Qualitative results. In Figure 3, MWGAN performs better than or comparably to the baselines on the single-attribute translation task, and achieves the highest visual quality on the multi-attribute translation results. In other words, MWGAN has good generalization performance. In contrast, CycleGAN struggles to synthesize multiple attributes, and UFDN cannot guarantee the identity of the translated images and produces images with blurred structures. Moreover, StarGAN depends strongly on the number of transferred domains, and its synthesized images sometimes lack perceptual realism.

Quantitative results. We further compare FID and classification accuracy for the single-attribute results. For the multi-attribute results, we report only classification accuracy, because FID is no longer a valid measure and may give misleading results when training data are not sufficient [24]. In Table 1, MWGAN achieves the lowest FID and comparable classification accuracy, indicating that it produces the most realistic single-attribute images. In Table 2, MWGAN achieves the highest classification accuracy and thus synthesizes the most realistic multi-attribute images.

(ii) Imbalanced image translation task. In this experiment, we compare MWGAN with the baselines on the Edge→CelebA translation task. Note that this task is imbalanced because edge images carry much less information than facial attribute images.

Qualitative results. In Figure 4, MWGAN is able to generate the most natural-looking facial images with the corresponding attributes from edge images.
In contrast, UFDN fails to preserve the facial texture of an edge image and generates images with blurry and distorted structures. CycleGAN and StarGAN mostly preserve the domain information, but cannot maintain the sharpness of the images and the facial structure. Moreover, this experiment also shows the superiority of our method on the imbalanced image translation task.

Quantitative results. In Table 3, MWGAN achieves the lowest FID, showing that it produces the most realistic facial attributes from the edge images. In contrast, the FID values of the baselines are large because these methods struggle to generate sharp and realistic images. We also perform a perceptual evaluation with AMT for this task (see Section M in the supplementary materials).

6.3 Results on Painting Translation

In this experiment, we finally train our model on the painting dataset to conduct the style transfer task [41, 44]. As suggested in [14, 15, 51], we only show qualitative results. Note that this translation task is also imbalanced because the input and target distributions are significantly different. In Figure 5, MWGAN generates painting images with higher visual quality. In contrast, UFDN fails to generate clearly structured painting images because it is hard to learn a domain-invariant representation when the domains are highly imbalanced. CycleGAN cannot fully learn useful information from the painting images when translating scene images. When taking a painting image as input, StarGAN may obtain misleading information for updating the generator. In this sense, when all domains are significantly different, StarGAN may not learn a single generator that synthesizes images of multiple domains well.

7 Conclusion

In this paper, we have proposed a novel multi-marginal Wasserstein GAN (MWGAN) for the multiple marginal matching problem. Specifically, with the help of multi-marginal optimal transport theory, we develop a new dual formulation for better adversarial learning on the unsupervised multi-domain image translation task. Moreover, we theoretically define and analyze the generalization ability of the proposed method. Extensive experiments on both toy and real-world datasets demonstrate the effectiveness of the proposed method.

Acknowledgements

This work is partially funded by Guangdong Provincial Scientific and Technological Funds under Grants 2018B010107001, National Natural Science Foundation of China (NSFC) 61602185, key project of NSFC (No. 61836003), Fundamental Research Funds for the Central Universities D2191240, Program for Guangdong Introducing Innovative and Entrepreneurial Teams 2017ZT07X183, and Tencent AI Lab Rhino-Bird Focused Research Program (No. JR201902). This work is also partially funded by Microsoft Research Asia (MSRA Collaborative Research Program 2019).
1. What is the focus of the paper regarding Multi Marginal Wasserstein GANs?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical motivation and training framework?
3. Do you have any concerns regarding the approximation made in the paper, and how does it impact the results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the experimental section, such as the choice of criteria or the application of multiple attributes?
Review
Review
Multi-Marginal Wasserstein GAN's goal is to match a source domain distribution to multiple target domain distributions. While dedicated GAN frameworks exist, as noted by the authors (CycleGAN, StarGAN, …), their generated samples suffer from blurriness, especially when translating to multiple targets. Moreover, the main statement of this work is that MWGAN is theoretically motivated, unlike previous works. Computing the multi-marginal Wasserstein distance between several domains is intractable in its primal form. Thus, as proposed in WGAN, the authors express their problem in the dual form, resulting in Equation 1. Since the dual formulation is a maximization problem under infinitely many constraints, which remains intractable, the authors simplify the problem by considering only its empirical version (Equation 2). Eventually, the authors argue that if the potential functions can be expressed by a unique function up to a multiplicative constant, then they can simplify Problem III, which they finally state in Problem IV as the Multi-Marginal Wasserstein GAN (MWGAN). When it comes to the training of MWGAN, the framework requires two additional terms, as presented in Algorithm 1:
- inner-domain constraints: a classifier to constrain generator number i to sample from the i-th domain;
- inter-domain constraints: when the generators have not yet converged, forcing the generated samples to follow the inequality constraints of Problem III may jeopardize the training, so the authors propose a softer version that balances the loss function.
Something that is not clear is that the authors seem to claim to solve the multi-marginal Wasserstein distance, which is theoretically wrong since they make hard approximations on the family of potential functions; this may mislead the reader. Moreover, there is no discussion of cases where this approximation may hold, or of the tightness of this approximation. Nevertheless, the authors rely on the theoretical analysis section to promote the generalization ability of their method with enough training data. Again, I am not sure that those bounds hold under their approximation, and I would appreciate some clarification. When it comes to the experimental section, experiments have been conducted thoroughly on several datasets using two criteria: FID (which is not useful in multi-target transfer) and a classifier trained on the ground-truth data to recognize the domains. An AMT perceptual evaluation, as conducted by StarGAN, would also have been interesting. Finally, the results and their illustrations are promising. I would be curious about the composite generator: when applying multiple attributes, how do you pick the order of compositions (I expect it does not commute)? In a nutshell, here are the main pros and cons of this work:
- Pros: the experiments are state of the art, with high-quality generated samples for multiple attributes.
- Cons: the authors overclaim their theoretical guarantees from the multi-marginal Wasserstein distance, without any analysis of the tightness of their approximations on the potential functions.
NIPS
Title Multi-marginal Wasserstein GAN

Abstract The multiple marginal matching problem aims at learning mappings to match a source domain to multiple target domains, and it has attracted great attention in many applications, such as multi-domain image translation. However, addressing this problem has two critical challenges: (i) measuring the multi-marginal distance among different domains is intractable; (ii) it is very difficult to exploit cross-domain correlations to match the target domain distributions. In this paper, we propose a novel Multi-marginal Wasserstein GAN (MWGAN) to minimize the Wasserstein distance among domains. Specifically, with the help of multi-marginal optimal transport theory, we develop a new adversarial objective function with inner- and inter-domain constraints to exploit cross-domain correlations. Moreover, we theoretically analyze the generalization performance of MWGAN, and empirically evaluate it on balanced and imbalanced translation tasks. Extensive experiments on toy and real-world datasets demonstrate the effectiveness of MWGAN.

1 Introduction

The multiple marginal matching (M3) problem aims to map an input image (source domain) to multiple target domains (see Figure 1(a)), and it has been applied in computer vision, e.g., multi-domain image translation [10, 23, 25]. In practice, unsupervised image translation [30] gains particular interest because of its label-free property. However, due to the lack of corresponding images, it is extremely hard in this task to learn stable mappings that match a source distribution to multiple target distributions. Recently, some methods [10, 30] have addressed the M3 problem; however, they face two main challenges.

First, existing methods often neglect to jointly optimize the multi-marginal distance among domains, which cannot guarantee the generalization performance of the methods and may lead to a distribution mismatching issue. CycleGAN [51] and UNIT [32] repeatedly optimize every pair of different domains separately (see Figure 1(b)). In this sense, they are computationally expensive and may have poor generalization performance. Moreover, UFDN [30] and StarGAN [10] essentially measure the distance between an input distribution and a mixture of all target distributions (see Figure 1(b)). As a result, they may suffer from the distribution mismatching issue. Therefore, it is necessary to explore a new method to measure and optimize the multi-marginal distance.

Second, it is very challenging to exploit cross-domain correlations to match target domains. Existing methods [51, 30] only focus on the correlations between the source and target domains, since they measure the distance between two distributions (see Figure 1(b)). However, these methods often ignore the correlations among target domains, and thus they struggle to fully capture the information needed to improve performance. Moreover, when the source and target domains are significantly different, or the number of target domains is large, the translation task becomes difficult for existing methods to exploit cross-domain correlations.

∗Authors contributed equally. †Corresponding author.

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

In this paper, we seek to use the multi-marginal Wasserstein distance to solve the M3 problem, but directly optimizing it is intractable.
Therefore, we develop a new dual formulation to make it tractable, and propose a novel multi-marginal Wasserstein GAN (MWGAN) that enforces inner- and inter-domain constraints to exploit the correlations among domains. The contributions of this paper are summarized as follows:

• We propose a novel GAN method (called MWGAN) to optimize a feasible multi-marginal distance among different domains. MWGAN overcomes the limitations of existing methods by alleviating the distribution mismatching issue and exploiting cross-domain correlations.

• We define and analyze the generalization of our proposed method for the multi-domain translation task, which goes beyond existing generalization analyses [13, 36] that study only two domains, and is non-trivial for multiple domains.

• We empirically show that MWGAN is able to solve the imbalanced image translation task well when the source and target domains are significantly different. Extensive experiments on toy and real-world datasets demonstrate the effectiveness of our proposed method.

2 Related Work

Generative adversarial networks (GANs). Deep neural networks have been explored both theoretically and experimentally [7, 21, 48, 49, 53]. In particular, GANs [17] have been successfully applied in computer vision tasks, such as image generation [3, 6, 18, 20], image translation [2, 10, 19] and video prediction [35]. Specifically, a generator tries to produce realistic samples, while a discriminator tries to distinguish between generated data and real data. Recently, some studies have tried to improve the quality [5, 9, 26] and diversity [43] of generated images, and to improve the mechanism of GANs [1, 11, 38, 39] to deal with the unstable training and mode collapse problems.

Multi-domain image translation. The M3 problem arises in domain adaptation [45] and image translation [27, 52]. CycleGAN [51], DiscoGAN [28], DualGAN [47] and UNIT [32] were proposed to address the two-domain image translation task. However, as shown in Figure 1(b), these methods measure the distance between every pair of distributions multiple times, which is computationally expensive when applied to the multi-domain image translation task. Recently, StarGAN [10] and AttGAN [23] use a single model to perform multi-domain image translation, and UFDN [30] translates images by learning a domain-invariant representation across domains. Essentially, these three methods are two-domain image translation methods, because they measure the distance between an input distribution and a uniform mixture of the other target distributions (see Figure 1(b)). Therefore, these methods may suffer from the distribution mismatching issue and obtain misleading feedback for updating their models when the source and target domains are significantly different. In addition, we discuss the differences between several GAN methods in Section I of the supplementary materials.

3 Problem Definition

Notation. We use calligraphic letters (e.g., X) to denote spaces, capital letters (e.g., X) to denote random variables, and bold lowercase letters (e.g., x) to denote the corresponding values. Let D = (X, P) be a domain, where P (or µ) is the marginal distribution over X, and let P(X) be the set of all probability measures over X. For convenience, let X = R^d, I = {0, ..., N} and [N] = {1, ..., N}.

Multiple marginal matching (M3) problem. In this paper, the M3 problem aims to learn mappings to match a source domain to multiple target domains.
For simplicity, we consider one source domain D_s = {X, P_s} and N target domains D_i = {X, P_{t_i}}, i ∈ [N], where P_s is the source distribution and P_{t_i} is the i-th real target distribution. Let g_i, i ∈ [N], be the generative models parameterized by θ_i, and let P_{θ_i} be the generated distribution in the i-th target domain. In this problem, the goal is to learn multiple generative models such that each generated distribution P_{θ_i} in the i-th target domain is close to the corresponding real target distribution P_{t_i} (see Figure 1(a)).

Optimal transport (OT) theory. Recently, OT theory [42] has attracted great attention in many applications [3, 46]. Directly solving the primal formulation of OT [40] might be intractable [16]. To address this, we consider the dual formulation of the multi-marginal OT problem as follows.

Problem I (Dual problem [40]) Given N+1 marginals µ_i ∈ P(X), potential functions f_i, i ∈ I, and a cost function c(X^{(0)}, . . . , X^{(N)}) : R^{d(N+1)} → R, the dual Kantorovich problem can be defined as:

W(µ_0, ..., µ_N) = sup_{f_i} ∑_i ∫ f_i(X^{(i)}) dµ_i(X^{(i)}), s.t. ∑_i f_i(X^{(i)}) ≤ c(X^{(0)}, ..., X^{(N)}).   (1)

In practice, we optimize the discrete case of Problem I. Specifically, given samples {x_j^{(0)}}_{j∈J_0} and {x_j^{(i)}}_{j∈J_i} drawn from the source domain distribution P_s and the generated target distributions P_{θ_i}, i ∈ [N], respectively, where J_i is an index set and n_i = |J_i| is the number of samples, we have:

Problem II (Discrete dual problem) Let F = {f_0, . . . , f_N} be the set of Kantorovich potentials; then the discrete dual problem ĥ(F) can be defined as:

max_F ĥ(F) = ∑_i (1/n_i) ∑_{j∈J_i} f_i(x_j^{(i)}), s.t. ∑_i f_i(x_{k_i}^{(i)}) ≤ c(x_{k_0}^{(0)}, . . . , x_{k_N}^{(N)}), ∀ k_i ∈ [n_i].   (2)

Unfortunately, it is challenging to optimize Problem II due to the intractable inequality constraints and the multiple potential functions. To address this, we seek to propose a new optimization method.

4 Multi-marginal Wasserstein GAN

4.1 A New Dual Formulation

For two domains, WGAN [3] solves Problem II by setting f_0 = f and f_1 = −f. However, it is hard to extend WGAN to multiple domains. To address this, we propose a new dual formulation in order to optimize Problem II. To this end, we use a shared potential in Problem II, which is supported by empirical and theoretical evidence. In the multi-domain image translation task, the domains are often correlated, and thus share similar properties and differ only in details (see Figure 1(a)). The cross-domain correlations can be exploited by the shared potential function (see Section J in the supplementary materials). More importantly, the optimal objectives of Problem II and the following problem can be equal under some conditions (see Section B in the supplementary materials).

Problem III Let F_λ = {λ_0 f, . . . , λ_N f} be Kantorovich potentials; then we define the dual problem as:

max_{F_λ} ĥ(F_λ) = ∑_i (λ_i/n_i) ∑_{j∈J_i} f(x_j^{(i)}), s.t. ∑_i λ_i f(x_{k_i}^{(i)}) ≤ c(x_{k_0}^{(0)}, . . . , x_{k_N}^{(N)}), ∀ k_i ∈ [n_i].   (3)

To further build the relationship between Problem II and Problem III, we have the following theorem, so that Problem III can be optimized well by GAN-based methods (see Subsection 4.2).

Theorem 1 Suppose the domains are connected, the cost function c is continuously differentiable, and each µ_i is absolutely continuous. If (f_0, . . . , f_N) and (λ_0 f, . . . , λ_N f) are solutions to Problem I, then there exist constants ε_i for each i ∈ I such that ∑_i ε_i = 0 and f_i = λ_i f + ε_i.
Remark 1 From Theorem 1, if we train a shared function f to obtain a solution of Problem I, we have an equivalent Wasserstein distance, i.e., ∑_i f_i = ∑_i λ_i f, regardless of the values ε_i. Therefore, we are able to optimize Problem III instead of the intractable Problem II in practice.

Algorithm 1 Multi-marginal WGAN.
Input: Training data {x_j}_{j=1}^{n_0} in the source domain, {x̂_j^{(i)}}_{j=1}^{n_i} in the i-th target domain; batch size m_bs; the number of iterations of the discriminator per generator iteration n_critic; uniform distribution U[0, 1].
Output: The discriminator f, the generators {g_i}_{i∈[N]} and the classifier φ.
1: while not converged do
2:   for t = 0, . . . , n_critic do
3:     Sample x ∼ P̂_s and x̂ ∼ P̂_{θ_i} for all i, and set x̃ ← ρx + (1−ρ)x̂, where ρ ∼ U[0, 1]
4:     Update f by ascending the gradient: ∇_w [ E_{x∼P̂_s}[f(x)] − ∑_i λ_i^+ E_{x̂∼P̂_{θ_i}}[f(x̂)] − R_τ(f) ]
5:     Update the classifier φ by descending the gradient ∇_v [ C_α(φ) ]
6:   end for
7:   Update each generator g_i by descending the gradient: ∇_{θ_i} [ −λ_i^+ E_{x̂∼P̂_{θ_i}}[f(x̂)] − M_α(g_i) ]
8: end while

4.2 Proposed Objective Function

To minimize the Wasserstein distance among domains, we now present a novel multi-marginal Wasserstein GAN (MWGAN) based on the proposed dual formulation in (3). Specifically, let F = {f : R^d → R} be the class of discriminators parameterized by w, and let G = {g : R^d → R^d} be the class of generators, where g_i ∈ G is parameterized by θ_i. Motivated by the adversarial mechanism of WGAN, let λ_0 = 1 and λ_i := −λ_i^+ with λ_i^+ > 0, i ∈ [N]; then Problem III can be rewritten as follows:

Problem IV (Multi-marginal Wasserstein GAN) Given a discriminator f ∈ F and generators g_i ∈ G, i ∈ [N], we define the following multi-marginal Wasserstein distance:

W(P̂_s, P̂_{θ_1}, . . . , P̂_{θ_N}) = max_f E_{x∼P̂_s}[f(x)] − ∑_i λ_i^+ E_{x̂∼P̂_{θ_i}}[f(x̂)], s.t. P̂_{θ_i} ∈ D_i, f ∈ Ω,   (4)

where P̂_s is the real source distribution, the distribution P̂_{θ_i} is generated by g_i in the i-th domain, and Ω = {f | f(x) − ∑_{i∈[N]} λ_i^+ f(x̂^{(i)}) ≤ c(x, x̂^{(1)}, . . . , x̂^{(N)}), f ∈ F} with x ∼ P̂_s and x̂^{(i)} ∼ P̂_{θ_i}, i ∈ [N].

In Problem IV, we refer to P̂_{θ_i} ∈ D_i, i ∈ [N], as the inner-domain constraints and f ∈ Ω as the inter-domain constraints (see Subsections 4.3 and 4.4). The influence of these constraints is investigated in Section N of the supplementary materials. Note that λ_i^+ reflects the importance of the i-th target domain. In practice, we set λ_i^+ = 1/N, i ∈ [N], when no prior knowledge is available on the target domains. To minimize Problem IV, we optimize the generators with the following update rule.

Theorem 2 If each generator g_i ∈ G, i ∈ [N], is locally Lipschitz (see Assumption 1 of [3] for more details), then there exists a discriminator f for Problem IV such that the gradient satisfies ∇_{θ_i} W(P̂_s, P̂_{θ_1}, . . . , P̂_{θ_N}) = −λ_i^+ E_{x∼P̂_s}[∇_{θ_i} f(g_i(x))] for all θ_i, i ∈ [N], when all terms are well-defined.

Theorem 2 provides a good update rule for optimizing MWGAN. Specifically, we first train an optimal discriminator f and then update each generator along the direction of E_{x∼P̂_s}[∇_{θ_i} f(g_i(x))]. The detailed algorithm is shown in Algorithm 1. Specifically, the generators cooperatively exploit multi-domain correlations (see Section J in the supplementary materials) and generate samples in the specific target domains to fool the discriminator, while the discriminator enforces the generated data in the target domains to maintain features similar to those of the source domain.

4.3 Inner-domain Constraints

In Problem IV, the distribution P_{θ_i} generated by the generator g_i should belong to the i-th domain for every i. To this end, we introduce an auxiliary domain classification loss and a mutual information term.
Domain classification loss. Given an input x := x^{(0)} and a generator g_i, we aim to translate the input x into an output x̂^{(i)} that is correctly classified into the target domain D_i. To achieve this goal, we introduce an auxiliary classifier φ: X→Y, parameterized by v, to optimize the generators. Specifically, we label real data x ∼ P̂_{t_i} as 1, where P̂_{t_i} is the empirical distribution of the i-th target domain, and we label generated data x̂^{(i)} ∼ P̂_{θ_i} as 0. Then, the domain classification loss w.r.t. φ can be defined as:

C_α(φ) = α · E_{x′∼P̂_{t_i}∪P̂_{θ_i}} [ ℓ(φ(x′), y) ],   (5)

where α is a hyper-parameter, y is the label corresponding to x′, and ℓ(·, ·) is a binary classification loss, such as the hinge loss [50], mean squared loss [34], cross-entropy loss [17] or Wasserstein loss [12].

Mutual information maximization. After learning the classifier φ, we maximize a lower bound of the mutual information [8, 23] between the generated image and the corresponding domain, i.e.,

M_α(g_i) = α · E_{x∼P̂_s} [ log φ( y^{(i)} = 1 | g_i(x) ) ].   (6)

By maximizing the mutual information in (6), we correlate the generated image g_i(x) with the i-th domain, and are then able to translate the source image to the specified domain.

4.4 Inter-domain Constraints

Next, we enforce the inter-domain constraints in Problem IV, i.e., the discriminator f ∈ F ∩ Ω. One could require the discriminator to be 1-Lipschitz continuous, but this may ignore the dependency among domains (see Section H in the supplementary materials). Thus, we relax the constraints with the following lemma.

Lemma 1 (Constraint relaxation) If the cost function c(·) is measured by the ℓ_2 norm, then there exists L_f ≥ 1 such that the constraints in Problem IV satisfy ∑_i |f(x) − f(x̂^{(i)})| / ‖x − x̂^{(i)}‖ ≤ L_f.

Note that L_f measures the dependency among domains (see Section G in the supplementary materials). In practice, L_f can be calculated from the cost function, or treated as a tuning parameter for simplicity.

Inter-domain gradient penalty. In practice, directly enforcing the inequality constraint in Lemma 1 performs poorly when generated samples are far from the real data. We thus propose the following inter-domain gradient penalty. Specifically, given real data x in the source domain and generated samples x̂^{(i)}, if x̂^{(i)} can be made suitably close to x, as suggested in [37], we can compute its gradient and introduce the following regularization term into the objective of MWGAN, i.e.,

R_τ(f) = τ · ( ∑_i E_{x̃^{(i)}∼Q̂_i} ‖∇f(x̃^{(i)})‖ − L_f )_+^2,   (7)

where (·)_+ = max{0, ·}, τ is a hyper-parameter, x̃^{(i)} is sampled between x and x̂^{(i)}, and Q̂_i, i ∈ [N], is a constructed distribution relying on some sampling strategy. In practice, one can construct a distribution whose samples x̃^{(i)} are interpolated between the real data x and the generated data x̂^{(i)} for every domain [18] (a minimal implementation sketch is given below). Note that the gradient penalty captures the dependency among domains, since the cost function in Problem IV measures the distance among all domains jointly.

5 Theoretical Analysis

In this section, we provide a generalization analysis for the proposed method. Motivated by [4], we give a new definition of generalization for multiple distributions as follows.

Definition 1 (Generalization) Let P_s and P_{θ_i} be the continuous real and generated distributions, and P̂_s and P̂_{θ_i} be the empirical real and generated distributions. The distribution distance W(·, . . . , ·) is said to generalize with n training samples and error ε if, for every true generated distribution P_{θ_i}, the following inequality holds with high probability:

| W(P̂_s, P̂_{θ_1}, . . . , P̂_{θ_N}) − W(P_s, P_{θ_1}, . . . , P_{θ_N}) | ≤ ε.   (8)
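To make the regularizer in Eq. (7) concrete, the following minimal PyTorch-style sketch shows one plausible way to compute the inter-domain gradient penalty. The function and argument names are our own, and interpolating between real and generated samples is one choice for the sampling distribution Q̂_i mentioned above, not necessarily the authors' released implementation.

```python
import torch

def inter_domain_gradient_penalty(critic, real, fakes, L_f=1.0, tau=10.0):
    """Sketch of Eq. (7): tau * ((sum_i E[||grad f(x_tilde_i)||] - L_f)_+)^2.

    critic: the shared discriminator f
    real:   a batch of source-domain samples, shape (B, ...)
    fakes:  a list of N generated batches, one per target domain
    """
    grad_norm_sum = 0.0
    for fake in fakes:
        # Sample x_tilde on the segment between real and generated data,
        # one common construction of the distribution Q_i.
        rho = torch.rand(real.size(0), *([1] * (real.dim() - 1)),
                         device=real.device)
        x_tilde = (rho * real + (1.0 - rho) * fake.detach()).requires_grad_(True)
        grads = torch.autograd.grad(outputs=critic(x_tilde).sum(),
                                    inputs=x_tilde, create_graph=True)[0]
        grad_norm_sum = grad_norm_sum + grads.flatten(1).norm(2, dim=1).mean()
    # Penalize only the part of the summed gradient norms that exceeds L_f.
    return tau * torch.clamp(grad_norm_sum - L_f, min=0.0) ** 2
```

During discriminator training, this penalty is subtracted from the discriminator objective before the ascent step (cf. Algorithm 1), so that the relaxed constraint of Lemma 1 is enforced softly.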
In Definition 1, the generalization bound measures the difference between the expected distance and the empirical distance. In practice, our goal is to train MWGAN to obtain a small empirical distance, so that the expected distance is also small. With the help of Definition 1, we can analyze the generalization ability of the proposed method. Let κ be the capacity of the discriminator; if the discriminator is L-Lipschitz continuous and bounded in [−∆, ∆], then we have the following generalization bound.

Theorem 3 (Generalization bound) Let P_s and P_{θ_i}, i ∈ I, be the continuous real and generated distributions, and let P̂_s and P̂_{θ_i}, i ∈ I, be their empirical versions with at least n samples in each domain. There is a universal constant C such that if n ≥ Cκ∆² log(Lκ/ε)/ε² with error ε, the following generalization bound is satisfied with probability at least 1 − e^{−κ}:

| W(P̂_s, P̂_{θ_1}, . . . , P̂_{θ_N}) − W(P_s, P_{θ_1}, . . . , P_{θ_N}) | ≤ ε.   (9)

Theorem 3 shows that MWGAN has good generalization ability given enough training data in each domain. In practice, if we successfully minimize the empirical multi-domain Wasserstein distance W(P̂_s, P̂_{θ_1}, . . . , P̂_{θ_N}), the expected distance W(P_s, P_{θ_1}, . . . , P_{θ_N}) will also be small.

6 Experiments

Implementation details. All experiments are conducted in PyTorch on an NVIDIA TITAN X GPU.³ We use Adam [29] with β_1 = 0.5 and β_2 = 0.999 and set the learning rate to 0.0001. We train the model for 100k iterations with batch size 16. We set α = 10, τ = 10, and L_f to the number of target domains in Loss (7). Details of the loss function and the network architectures of the discriminator, generators, and classifier can be found in Section P of the supplementary materials.

Baselines. We adopt the following methods as baselines: (i) CycleGAN [51], a two-domain image translation method that can be flexibly extended to the multi-domain image translation task; (ii) UFDN [30] and (iii) StarGAN [10], which are multi-domain image translation methods.

Datasets. We conduct experiments on three datasets. Note that all images are resized to 128×128. (i) Toy dataset. We generate a Gaussian distribution in the source domain and six other Gaussian or uniform distributions in the target domains. More details can be found in the supplementary materials. (ii) CelebA [33] contains 202,599 face images, where each image has 40 binary attributes. We use the following attributes: hair color (black, blond and brown), eyeglasses, mustache and pale skin. In the first experiment, we use black hair images as the source domain, and blond hair, eyeglasses, mustache and pale skin images as the target domains. In the second experiment, we extract 50k Canny edges from CelebA. We take edge images as the source domain and hair images as the target domains. (iii) Style painting [51]. The numbers of Real scene, Monet, Van Gogh and Ukiyo-e images are 6,287, 1,073, 400 and 563, respectively. We take real scene images as the source domain, and the others as target domains.

Evaluation metrics. We use the following evaluation metrics: (i) Fréchet Inception Distance (FID) [24] evaluates the quality of the translated images; in general, a lower FID score means better performance. (ii) Classification accuracy, widely used in [10, 23], evaluates the probability that the generated images belong to the corresponding target domains.
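For reference, the sketch below shows the standard FID computation from Inception activation statistics. It assumes that feature activations for real and translated images have already been extracted with a pretrained Inception network; this is the generic Fréchet distance formula, not this paper's specific evaluation code.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(act_real, act_fake):
    """Standard FID between two sets of Inception activations.

    act_real, act_fake: arrays of shape (num_samples, feature_dim),
    e.g. pool3 features from a pretrained Inception network.
    """
    mu_r, mu_f = act_real.mean(axis=0), act_fake.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_f = np.cov(act_fake, rowvar=False)
    # Matrix square root of the covariance product; drop the tiny
    # imaginary parts introduced by numerical error.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```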
Specifically, we train a classifier on CelebA (90% for training and 10% for testing) using ResNet-18 [22], resulting in near-perfect accuracy, and then use this classifier to measure the classification accuracy of the generated images.

6.1 Results on Toy Dataset

We compare MWGAN with UFDN and StarGAN on the toy dataset to verify the limitations mentioned in Section 2. Specifically, we measure the distribution matching ability and plot the value surface of the discriminator. Here, the value surface depicts the outputs of the discriminator [18, 31]. In Figure 2, MWGAN matches the target domain distributions very well, as it is able to capture the geometric information of the real distributions using a low-capacity network. Moreover, the value surface shows that the discriminator provides correct gradients to update the generators. The baseline methods, however, are very sensitive to the type of source and target domain distributions. With the same capacity, the baseline methods are able to match the target domain distributions when the distributions are similar (top row of Figure 2), but they cannot match the target domain distributions well when the initial and target domain distributions are different (bottom row of Figure 2).

³The source code of our method is available at https://github.com/caojiezhang/MWGAN.

6.2 Results on CelebA

We compare MWGAN with several baselines on both balanced and imbalanced translation tasks.

(i) Balanced image translation task. In this experiment, we train the generators to produce single-attribute images, and then synthesize multi-attribute images using composite generators. We generate attributes in the order {Blond hair, Eyeglasses, Mustache, Pale skin}. Taking two attributes as an example, let g_1 and g_2 be the generators of Blond hair and Eyeglasses images, respectively; then images with both Blond hair and Eyeglasses attributes are generated by the composite generator g_2 ∘ g_1.

Qualitative results. In Figure 3, MWGAN performs better than or comparably to the baselines on the single-attribute translation task, and achieves the highest visual quality on the multi-attribute translation results. In other words, MWGAN has good generalization performance. In contrast, CycleGAN struggles to synthesize multiple attributes, and UFDN cannot guarantee the identity of the translated images and produces images with blurred structures. Moreover, StarGAN depends strongly on the number of transferred domains, and its synthesized images sometimes lack perceptual realism.

Quantitative results. We further compare FID and classification accuracy for the single-attribute results. For the multi-attribute results, we report only classification accuracy, because FID is no longer a valid measure and may give misleading results when training data are not sufficient [24]. In Table 1, MWGAN achieves the lowest FID and comparable classification accuracy, indicating that it produces the most realistic single-attribute images. In Table 2, MWGAN achieves the highest classification accuracy and thus synthesizes the most realistic multi-attribute images.

(ii) Imbalanced image translation task. In this experiment, we compare MWGAN with the baselines on the Edge→CelebA translation task. Note that this task is imbalanced because edge images carry much less information than facial attribute images.

Qualitative results. In Figure 4, MWGAN is able to generate the most natural-looking facial images with the corresponding attributes from edge images.
In contrast, UFDN fails to preserve the facial texture of an edge image and generates images with blurry and distorted structures. CycleGAN and StarGAN mostly preserve the domain information, but cannot maintain the sharpness of the images and the facial structure. Moreover, this experiment also shows the superiority of our method on the imbalanced image translation task.

Quantitative results. In Table 3, MWGAN achieves the lowest FID, showing that it produces the most realistic facial attributes from the edge images. In contrast, the FID values of the baselines are large because these methods struggle to generate sharp and realistic images. We also perform a perceptual evaluation with AMT for this task (see Section M in the supplementary materials).

6.3 Results on Painting Translation

In this experiment, we finally train our model on the painting dataset to conduct the style transfer task [41, 44]. As suggested in [14, 15, 51], we only show qualitative results. Note that this translation task is also imbalanced because the input and target distributions are significantly different. In Figure 5, MWGAN generates painting images with higher visual quality. In contrast, UFDN fails to generate clearly structured painting images because it is hard to learn a domain-invariant representation when the domains are highly imbalanced. CycleGAN cannot fully learn useful information from the painting images when translating scene images. When taking a painting image as input, StarGAN may obtain misleading information for updating the generator. In this sense, when all domains are significantly different, StarGAN may not learn a single generator that synthesizes images of multiple domains well.

7 Conclusion

In this paper, we have proposed a novel multi-marginal Wasserstein GAN (MWGAN) for the multiple marginal matching problem. Specifically, with the help of multi-marginal optimal transport theory, we develop a new dual formulation for better adversarial learning on the unsupervised multi-domain image translation task. Moreover, we theoretically define and analyze the generalization ability of the proposed method. Extensive experiments on both toy and real-world datasets demonstrate the effectiveness of the proposed method.

Acknowledgements

This work is partially funded by Guangdong Provincial Scientific and Technological Funds under Grants 2018B010107001, National Natural Science Foundation of China (NSFC) 61602185, key project of NSFC (No. 61836003), Fundamental Research Funds for the Central Universities D2191240, Program for Guangdong Introducing Innovative and Entrepreneurial Teams 2017ZT07X183, and Tencent AI Lab Rhino-Bird Focused Research Program (No. JR201902). This work is also partially funded by Microsoft Research Asia (MSRA Collaborative Research Program 2019).
1. What is the main contribution of the paper regarding the multiple marginal matching problem?
2. What are the strengths of the paper in terms of technical analysis, generalization analysis, and empirical experiments?
3. What are the concerns regarding the condition required for the proposed algorithm, specifically in real-world tasks?
4. How does the reviewer interpret the key assumption in Theorem 1, and what is their query regarding the replacement of N different functions with a unified function f and N constant factors?
5. What questions does the reviewer have regarding the toy data experiment, particularly in understanding the value surface of the discriminator in Figure 2?
Review
Review
Originality: This paper is the first to solve the multiple marginal matching problem by defining a multi-marginal Wasserstein algorithm.
Quality: The overall structure of this work is consistent. Under a specific condition, the paper gives a technically sound analysis, including the equivalence of solutions and a generalization analysis, and the theoretical analysis and the empirical experimental results support the proposed algorithms.
Clarity: The paper is written clearly and is easy to follow.
Significance: This work makes a moderate advance on the M3 problem. Under a specific and rigorous condition, the authors have done adequate work both theoretically and experimentally. The only problem is how real problems can satisfy the condition.
Some concerns:
1) The whole work stands on the condition that a shared potential function is sufficient for Problem I in the paper, but the authors only use the experimental results in Appendix I to show its practicability in some real-world tasks. This seems weak.
2) In Theorem 1, there is a key assumption which states "if (f_0, \cdots, f_N) and (\lambda_0 f, \cdots, \lambda_N f) are solutions to Problem I". How can this assumption be verified? The key question I want to know is: why can you replace N different functions with a unified function f and N constant factors? Or, within what distance among the multiple domains can such a replacement be made?
3) Toy data experiment: can you explain in more detail how to understand Figure 2, especially the value surface of the discriminator?
NIPS
Title VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain

Abstract Self- and semi-supervised learning frameworks have made significant progress in training machine learning models with limited labeled data in the image and language domains. These methods rely heavily on the unique structure of the data in those domains (such as spatial relationships in images or semantic relationships in language). They are not adaptable to general tabular data, which does not have the same explicit structure as image and language data. In this paper, we fill this gap by proposing novel self- and semi-supervised learning frameworks for tabular data, which we refer to collectively as VIME (Value Imputation and Mask Estimation). We create a novel pretext task of estimating mask vectors from corrupted tabular data, in addition to the reconstruction pretext task for self-supervised learning. We also introduce a novel tabular data augmentation method for self- and semi-supervised learning frameworks. In experiments, we evaluate the proposed framework on multiple tabular datasets from various application domains, such as genomics and clinical data. VIME exceeds state-of-the-art performance in comparison to the existing baseline methods.

1 Introduction

Tremendous successes have been achieved in a variety of applications (such as image classification [1], object detection [2], and language translation [3]) with deep learning models via supervised learning on large labeled datasets such as ImageNet [4]. Unfortunately, collecting sufficiently large labeled datasets is expensive and even impossible in several domains (such as medical datasets concerned with a particularly rare disease). In these settings, however, there is often a wealth of unlabeled data available: datasets are often collected from a large population, but target labels are only available for a small group of people. The 100,000 Genomes project [5], for instance, sequenced 100,000 genomes from around 85,000 NHS patients affected by a rare disease, such as cancer; by definition, rare diseases occur in (fewer than) 1 in 2,000 people. Datasets like these present huge opportunities for self- and semi-supervised learning algorithms, which can leverage the unlabeled data to further improve the performance of a predictive model.

Unfortunately, existing self- and semi-supervised learning algorithms are not effective for tabular data¹ because they rely heavily on the spatial or semantic structure of image or language data. A standard self-supervised learning framework designs a (set of) pretext task(s) to learn informative representations from the raw input features. For the language domain, BERT introduces 4 different pretext tasks (e.g. predicting future words from previous words) to learn representations of the language data [6]. In the image domain, rotation [7], jigsaw puzzle [8], and colorization [9] can be utilized as pretext tasks to learn representations of the images. Standard semi-supervised learning methods also suffer from the same problem, since the regularizers they use for the predictive model are based on some prior knowledge of these data structures.

¹Tabular data is a database structured in tabular form. It arranges data elements in vertical columns (features) and horizontal rows (samples).

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
For example, the consistency regularizer encourages the predictive model to produce the same output distribution on a sample and its augmented variants, e.g. an image and its rotated variants [7], or two images and their convex combination(s) [10]. The notion of rotation simply does not exist in tabular data. Moreover, in many settings, variables are often categorical and do not admit meaningful convex combinations. Even in a setting where all variables are continuous, there is no guarantee that the data manifold is convex, and as such taking convex combinations will either generate out-of-distribution samples (thereby degrading model performance) or be restricted to generating samples that are very close to real samples (limiting the effectiveness of the data augmentation); for more details, see the Supplementary Materials (Section 4).

Contribution: In this paper, we propose novel self- and semi-supervised learning frameworks for tabular data. For self-supervised learning, we introduce a novel pretext task, mask vector estimation, in addition to feature vector estimation. To solve these pretext tasks, an encoder function learns to construct informative representations from the raw features in the unlabeled data. For semi-supervised learning, we introduce a novel tabular data augmentation scheme. We use the trained encoder to generate multiple augmented samples for each data point by masking each point using several different masks and then imputing the corrupted values for each masked data point. Finally, we propose a systematic self- and semi-supervised learning framework for tabular data, VIME (Value Imputation and Mask Estimation), that combines our ideas to produce state-of-the-art performance on several tabular datasets, from various domains, with few labeled samples.

2 Related Works

Self-supervised learning (Self-SL) frameworks are representation learning methods that use unlabeled data. They can be categorized into two types: methods using pretext task(s) and contrastive learning. Most existing works with pretext tasks are appropriate only for images or natural language: (i) surrogate class prediction (scaling and translation) [11], (ii) rotation degree prediction [7], (iii) colorization [9], (iv) relative position of patches estimation [12], (v) jigsaw puzzle solving [8], (vi) image denoising [13], (vii) partial-to-partial registration [14], and (viii) next and previous word prediction [6]. Most existing works with contrastive learning are also applicable only to images or natural language, due to their data augmentation schemes and the temporal and spatial relationships used to define similarity: (i) contrastive predictive coding [15, 16], (ii) contrastive multi-view coding [17], (iii) SimCLR [18], (iv) momentum contrast [19, 20]. There is some existing work on self-supervised learning that can be applied to tabular data. In the denoising auto-encoder [21], the pretext task is to recover the original sample from a corrupted sample. In the Context Encoder [22], the pretext task is to reconstruct the original sample from both the corrupted sample and the mask vector. The pretext task for self-supervised learning in TabNet [23] and TaBERT [24] is also recovering corrupted tabular data. In this paper, we propose a new pretext task: recovering the mask vector in addition to the original sample, with a novel corrupted-sample generation scheme.
Also, we propose a novel tabular data augmentation scheme that can be combined with various contrastive learning frameworks to extend self-supervised learning to tabular domains.

Semi-supervised learning (Semi-SL) frameworks can be categorized into two types: entropy minimization and consistency regularization. Entropy minimization encourages a classifier to output low-entropy predictions on unlabeled data. For instance, [25] constructs hard labels from high-confidence predictions on unlabeled data, and trains the network using these pseudo-labels together with labeled data in a supervised way. Consistency regularization encourages some sort of consistency between a sample and some stochastically altered version of itself. The Π-model [26] uses an L2 loss to encourage consistency between predictions. Mean teacher [27] uses an L2 loss to encourage consistency between the intermediate representations. Virtual Adversarial Training (VAT) [28] encourages prediction consistency by minimizing the maximum difference in predictions between a sample and multiple augmented versions. MixMatch [29] and ReMixMatch [30] combine entropy minimization with consistency regularization in one unified framework, with MixUp [10] as the data augmentation method. There is a series of interesting works on graph-based semi-supervised learning [31, 32, 33] which consider a special case of network data where samples are connected by given edges, e.g. a citation network where an article is connected with its citations. Here, we introduce a novel data augmentation method for general tabular data which can be combined with various semi-supervised learning frameworks to train a predictive model in a semi-supervised way.

3 Problem Formulation

In this section, we introduce the general formulation of self- and semi-supervised learning. Suppose we have a small labeled dataset D_l = {(x_i, y_i)}_{i=1}^{N_l} and a large unlabeled dataset D_u = {x_i}_{i=N_l+1}^{N_l+N_u}, where N_u ≫ N_l, x_i ∈ X ⊆ R^d and y_i ∈ Y. The label y_i is a scalar in single-task learning, while it can be given as a multi-dimensional vector in multi-task learning. We assume every input feature x_i in D_l and D_u is sampled i.i.d. from a feature distribution p_X, and the labeled data pairs (x_i, y_i) in D_l are drawn from a joint distribution p_{X,Y}. When only limited labeled samples from p_{X,Y} are available, a predictive model f : X → Y trained solely by supervised learning is likely to overfit the training samples, since the empirical supervised loss ∑_{i=1}^{N_l} l(f(x_i), y_i) that we minimize deviates significantly from the expected supervised loss E_{(x,y)∼p_{X,Y}}[l(f(x), y)], where l(·, ·) is some standard supervised loss function (e.g. cross-entropy).

3.1 Self-supervised learning

Self-supervised learning aims to learn informative representations from unlabeled data. In this subsection, we focus on self-supervised learning with various self-supervised/pretext tasks for a pretext model to solve. These tasks are set to be challenging but highly relevant to the downstream tasks that we attempt to solve. Ideally, the pretext model will extract some useful information from the raw data in the process of solving the pretext tasks; the extracted information can then be utilized by the predictive model f in the downstream tasks. In general, self-supervised learning constructs an encoder function e : X → Z that takes a sample x ∈ X and returns an informative representation z = e(x) ∈ Z.
The representation z is optimized to solve a pretext task defined with a pseudo-label y_s ∈ Y_s and a self-supervised loss function l_ss. For example, the pretext task can be predicting the rotation degree of some rotated image in the raw dataset, where y_s is the true rotation degree and l_ss is the squared difference between the predicted rotation degree and y_s. We define the pretext predictive model as h : Z → Y_s, which is trained jointly with the encoder function e by minimizing the expected self-supervised loss l_ss as follows:

min_{e,h} E_{(x_s, y_s)∼p_{X_s,Y_s}} [ l_ss(y_s, (h ∘ e)(x_s)) ]   (1)

where p_{X_s,Y_s} is a pretext distribution that generates pseudo-labeled samples (x_s, y_s) for training the encoder e and the pretext predictive model h. Note that we have sufficient samples to approximate the objective function above, since for each input sample in D_u we can generate a pretext sample (x_s, y_s) for free, e.g. rotating an image x_i to create x_s and taking the rotation degree as the label y_s. After training, the encoder function e can be used to extract better data representations from raw data for solving various downstream tasks. Note that in settings where the downstream task (and a loss for it) is known in advance, the encoder can be trained jointly with the downstream task's model.

3.2 Semi-supervised learning

Semi-supervised learning optimizes the predictive model f by minimizing the supervised loss function jointly with some unsupervised loss function defined over the output space Y. Formally, semi-supervised learning is formulated as the following optimization problem:

min_f E_{(x,y)∼p_{X,Y}} [ l(y, f(x)) ] + β · E_{x∼p_X, x′∼p̃_X(x′|x)} [ l_u(f(x), f(x′)) ]   (2)

where l_u : Y × Y → R is an unsupervised loss function, and the hyperparameter β ≥ 0 controls the trade-off between the supervised and unsupervised losses. x′ is a perturbed version of x, assumed to be drawn from a conditional distribution p̃_X(x′|x). The first term is estimated using the small labeled dataset D_l, while the second term is estimated using all input features in D_u. The unsupervised loss function l_u is often inspired by some prior knowledge of the downstream task. For example, consistency regularization encourages the model f to produce the same output distribution when its inputs are perturbed (x′).

4 Proposed Model: VIME

In this section, we describe VIME, our systematic approach for self- and semi-supervised learning for tabular data (a block diagram can be found in the Supplementary Materials). We first propose two pretext tasks for self-supervised learning; then we develop an unsupervised loss function for semi-supervised learning using the encoder learned from the pretext tasks via self-supervised learning.

4.1 Self-supervised learning for tabular data

We introduce two pretext tasks: feature vector estimation and mask vector estimation. Our goal is to optimize a pretext model to recover an input sample (a feature vector) from its corrupted variant, at the same time as estimating the mask vector that has been applied to the sample. In our framework, the two pretext tasks share a single pretext distribution p_{X_s,Y_s}. First, a mask vector generator outputs a binary mask vector m = [m_1, ..., m_d]^⊤ ∈ {0, 1}^d, where each m_j is randomly sampled from a Bernoulli distribution with probability p_m (i.e., p_m(m) = ∏_{j=1}^d Bern(m_j | p_m)). Then a pretext generator g_m : X × {0, 1}^d → X takes a sample x from D_u and a mask vector m as input, and generates a masked sample x̃.
The generating process of x̃ is given by

x̃ = g_m(x, m) = m ⊙ x̄ + (1 − m) ⊙ x   (3)

where the j-th feature of x̄ is sampled from the empirical marginal distribution p̂_{X_j} = (1/N_u) ∑_{i=N_l+1}^{N_l+N_u} δ(x_j = x_{i,j}), with x_{i,j} the j-th feature of the i-th sample in D_u (see Figure 3 in the Supplementary Materials for further details). The generating process in Equation (3) ensures the corrupted sample x̃ is not only tabular but also similar to the samples in D_u. Compared with standard sample corruption approaches, e.g. adding Gaussian noise to the features or replacing masked features with zeros, our approach generates an x̃ that is more difficult to distinguish from x. This difficulty is crucial for self-supervised learning, which we elaborate on in the following sections.

There are two sources of randomness imposed in our pretext distribution p_{X_s,Y_s}. Explicitly, m is a random vector sampled from a Bernoulli distribution. Implicitly, the pretext generator g_m is also a stochastic function whose randomness comes from x̄. Together, this randomness increases the difficulty of reconstructing x from x̃. The level of difficulty can be adjusted by changing the hyperparameter p_m, the probability in Bern(·|p_m), which controls the proportion of features that will be masked and corrupted.

Following the convention of self-supervised learning, the encoder e first transforms the masked and corrupted sample x̃ into a representation z; then a pretext predictive model is introduced to recover the original sample x from z. Arguably, this is a more challenging task than existing pretext tasks, such as correcting the rotation of images or recolorizing a grayscale image. A rotated or grayscale image still contains some information about the original features. In contrast, masking completely removes some of the features from x and replaces them with a noise sample x̄, each feature of which may come from a different random sample in D_u. The resulting sample x̃ may not contain any information about the missing features, and it may even be hard to identify which features are missing. To solve such a challenging task, we divide it into two sub-tasks (pretext tasks):

(1) Mask vector estimation: predict which features have been masked;
(2) Feature vector estimation: predict the values of the features that have been corrupted.

We introduce a separate pretext predictive model for each pretext task. Both models operate on top of the representation z given by the encoder e and try to estimate m and x collaboratively. The two models and their functions are:

• Mask vector estimator, s_m : Z → [0, 1]^d, takes z as input and outputs a vector m̂ that predicts which features of x̃ have been replaced by a noisy counterpart (i.e., m);
• Feature vector estimator, s_r : Z → X, takes z as input and returns x̂, an estimate of the original sample x.

The encoder e and the pretext predictive models (in our case, the two estimators s_m and s_r) are trained jointly by solving the following optimization problem:

min_{e, s_m, s_r} E_{x∼p_X, m∼p_m, x̃∼g_m(x,m)} [ l_m(m, m̂) + α · l_r(x, x̂) ]   (4)

where m̂ = (s_m ∘ e)(x̃) and x̂ = (s_r ∘ e)(x̃). The first loss function l_m is the sum of the binary cross-entropy losses for each dimension of the mask vector²:

l_m(m, m̂) = −(1/d) ∑_{j=1}^{d} [ m_j log((s_m ∘ e)_j(x̃)) + (1 − m_j) log(1 − (s_m ∘ e)_j(x̃)) ],   (5)

and the second loss function l_r is the reconstruction loss,

l_r(x, x̂) = (1/d) ∑_{j=1}^{d} (x_j − (s_r ∘ e)_j(x̃))².   (6)

α adjusts the trade-off between the two losses (a minimal implementation sketch of this pretext construction is given below).
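The following PyTorch-style sketch shows one plausible implementation of Equations (3)-(6). The module layout, the one-layer MLP encoder, and the defaults p_m = 0.3 and alpha = 2.0 are our own illustrative choices rather than the authors' released architecture; sampling x̄ by shuffling each column within a batch is a common approximation of the empirical marginals.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pretext_generator(x, p_m):
    """Eq. (3): corrupt a batch x of shape (B, d). Each entry is masked with
    probability p_m and refilled from the empirical marginal of its feature,
    approximated by independently shuffling each column within the batch."""
    m = torch.bernoulli(torch.full_like(x, p_m))
    idx = torch.argsort(torch.rand_like(x), dim=0)  # random row order per column
    x_bar = torch.gather(x, 0, idx)                 # x_bar ~ empirical marginals
    x_tilde = m * x_bar + (1.0 - m) * x
    return m, x_tilde

class VIMESelf(nn.Module):
    """Encoder e with a mask estimator s_m and a feature estimator s_r."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.mask_head = nn.Linear(hidden, dim)  # s_m: logits for Eq. (5)
        self.feat_head = nn.Linear(hidden, dim)  # s_r: reconstruction, Eq. (6)

    def pretext_loss(self, x, p_m=0.3, alpha=2.0):
        m, x_tilde = pretext_generator(x, p_m)
        z = self.encoder(x_tilde)
        l_m = F.binary_cross_entropy_with_logits(self.mask_head(z), m)
        # Features are min-max scaled to [0, 1], so a sigmoid output is used.
        l_r = F.mse_loss(torch.sigmoid(self.feat_head(z)), x)
        return l_m + alpha * l_r
```

After minimizing this loss over D_u, only the trained encoder is kept and reused in the downstream and semi-supervised stages.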
For categorical variables, we modify Equation (6) to a cross-entropy loss. Figure 1 illustrates our entire self-supervised learning framework.

What has the encoder learned? The two loss functions share the encoder e, which is the only part we will utilize in the downstream tasks. To understand how the encoder is going to benefit these downstream tasks, we consider what the encoder must be able to do to solve our pretext tasks. We make the following intuitive observation: it is important for e to capture the correlation among the features of x and to output latent representations z that can recover x. In this case, s_m can identify the masked features from the inconsistency between feature values, and s_r can impute the masked features by learning from the correlated non-masked features. For instance, if the value of a feature is very different from those of its correlated features, this feature is likely masked and corrupted. We note that correlations are also learned in other self-supervised learning frameworks, e.g. spatial correlations in rotated images and autocorrelations between future and previous words. Our framework is novel in learning the correlations for tabular data, whose correlation structure is less obvious than in images or language. The learned representation that captures the correlation across different parts of the object, regardless of the object type (e.g. language, image or tabular data), is an informative input for the various downstream tasks.

²Subscript j represents the j-th element of the vector.

4.2 Semi-supervised learning for tabular data

We now show how the encoder function e from the previous subsection can be used in semi-supervised learning. Our framework for semi-supervised learning follows the structure given in Section 3. Let f_e = f ∘ e and ŷ = f_e(x). We train the predictive model f by minimizing the objective function

L_final = L_s + β · L_u.   (7)

The supervised loss L_s is given by

L_s = E_{(x,y)∼p_{X,Y}} [ l_s(y, f_e(x)) ],   (8)

where l_s is the standard supervised loss function, e.g. mean squared error for regression or categorical cross-entropy for classification. The unsupervised (consistency) loss L_u is defined between original samples x and their reconstructions from corrupted and masked samples x̃:

L_u = E_{x∼p_X, m∼p_m, x̃∼g_m(x,m)} [ (f_e(x̃) − f_e(x))² ].   (9)

Our consistency loss is inspired by the idea of consistency regularization: encouraging the predictive model f to return a similar output distribution when its inputs are perturbed. However, the perturbation in our framework is learned through our self-supervised framework, whereas in previous works the perturbation comes from a manually chosen distribution, such as rotation. For a fixed sample x, the inner expectation in Equation (9) is taken with respect to p_m and g_m(x, m), and can be interpreted as the variance of the predictions on corrupted and masked samples. β is another hyper-parameter that balances the supervised loss L_s and the consistency loss L_u.

In each iteration of training, for each sample x ∈ D_u in the batch, we create K augmented samples x̃_1, ..., x̃_K by repeating the operation in Equation (3) K times. Every time the sample x ∈ D_u is used in a batch, we recreate these augmented samples. The stochastic approximation of L_u is given by

L̂_u = (1/(N_b K)) ∑_{i=1}^{N_b} ∑_{k=1}^{K} (f_e(x̃_{i,k}) − f_e(x_i))² = (1/(N_b K)) ∑_{i=1}^{N_b} ∑_{k=1}^{K} (f(z_{i,k}) − f(z_i))²   (10)

where N_b is the batch size. During training, the predictive model f is regularized to make similar predictions on z_i and z_{i,k}, k = 1, ..., K (a sketch of this consistency term follows below).
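Continuing the sketch above, a stochastic estimate of the consistency term in Eq. (10) could be written as follows. It reuses pretext_generator from the previous sketch, takes the pre-trained encoder, and omits batching and optimizer boilerplate; the default K = 3 is illustrative.

```python
def consistency_loss(predictor, encoder, x_unlab, p_m=0.3, K=3):
    """Eq. (10): the predictor f should give similar outputs on z_i and on
    the representations z_{i,k} of K masked-and-imputed augmentations."""
    y_clean = predictor(encoder(x_unlab))
    loss = 0.0
    for _ in range(K):
        _, x_tilde = pretext_generator(x_unlab, p_m)  # augmentation via Eq. (3)
        loss = loss + ((predictor(encoder(x_tilde)) - y_clean) ** 2).mean()
    return loss / K
```

A full training step would then minimize the supervised loss on a labeled batch plus beta times this term on an unlabeled batch, matching Eq. (7).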
After training f, the output for a new test sample xt is given by ŷ = fe(xt). Figure 2 illustrates the entire procedure of the proposed semi-supervised framework on tabular data with a pre-trained encoder.

5 Experiments

In this section, we conduct a series of experiments to demonstrate the efficacy of our framework (VIME) on several tabular datasets from different application domains, including genomics and clinical data. We use a min-max scaler to normalize the data between 0 and 1. For self-supervised learning, we compare VIME against two benchmarks, Denoising auto-encoder (DAE) [21] and Context Encoder [22]. For semi-supervised learning, we use the data augmentation method MixUp [10] as the main benchmark. We exclude self- and semi-supervised learning benchmarks that are applicable only to image or language data. As a baseline, we also include supervised learning benchmarks. Additional results with more baselines can be found in the Supplementary Materials. In the experiments, the self- and semi-supervised learning methods use both labeled and unlabeled data, while the supervised learning methods use only the labeled data. Implementation details and sensitivity analyses on the three hyperparameters (pm, α, β) can be found in the Supplementary Materials (Sections 5 and 6). The implementation of VIME can be found at https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/alg/vime/ and at https://github.com/jsyoon0823/VIME.

5.1 Genomics data: Genome-wide polygenic scoring

In this subsection, we evaluate the methods on a large genomics dataset from UK Biobank consisting of around 400,000 individuals' genomic information (SNPs) and 6 corresponding blood cell traits: (1) Mean Reticulocyte Volume (MRV), (2) Mean Platelet Volume (MPV), (3) Mean Cell Hemoglobin (MCH), (4) Reticulocyte Fraction of Red Cells (RET), (5) Plateletcrit (PCT), and (6) Monocyte Percentage of White Cells (MONO). The features of the dataset consist of around 700 SNPs (after the standard p-value filtering process), where each SNP, taking a value in {0, 1, 2}, is treated as a categorical variable (with three categories). Here, we have 6 different blood cell traits to predict, and we treat each of them as an independent prediction task (the selected SNPs differ across blood cell traits). Detailed data descriptions are provided in the Supplementary Materials (Section 2). Note that all the variables are categorical features. To test the effectiveness of self- and semi-supervised learning in the small labeled data setting, VIME and the benchmarks are tasked to predict the 6 blood cell traits while we gradually increase the number of labeled data points from 1,000 to 100,000 samples, using the remaining data (more than 300,000 samples) as unlabeled data. We use a linear model (Elastic Net [34]) as the predictive model due to its superior performance in comparison to non-linear models such as multi-layer perceptrons and random forests [35] on genomics datasets. In Figure 3, we show the MSE performance (y-axis) against the number of labeled data points (x-axis, in log scale) increasing from 1,000 to 10,000 (the performances for the 10,000 to 100,000 range can be found in the Supplementary Materials, Section 3). The proposed model (VIME) outperforms all the benchmarks, including the purely supervised method ElasticNet, the self-supervised method Context Encoder and the semi-supervised method MixUp.
In fact, in many cases VIME shows performances similar to the benchmarks even when it has access to only half as many labeled data points as the benchmarks.

5.2 Clinical data: Patient treatment prediction

In this subsection, we evaluate the methods on clinical data, using UK and US prostate cancer datasets (from Prostate Cancer UK and SEER, respectively). The features consist of patients' clinical information (e.g. age, grade, stage, Gleason scores), 28 features in total. We predict 2 possible treatments of UK prostate cancer patients: (1) Hormone therapy (whether the patient received hormone therapy), and (2) Radical therapy (whether the patient received radical therapy). Both tasks are binary classification. In the UK prostate cancer dataset, we only have around 10,000 labeled patient samples. The US prostate cancer dataset contains more than 200,000 unlabeled patient samples, twenty times larger than the labeled UK dataset. We use 50% of the UK dataset (as the labeled data) and the entire US dataset (as the unlabeled data) for training, with the remainder of the UK data being used as the testing set. We also test three popular supervised learning models: Logistic Regression, a 2-layer Multi-layer Perceptron, and XGBoost. Table 1 shows that VIME achieves the best prediction performance, outperforming the benchmarks. More importantly, VIME is the only self- or semi-supervised learning framework that significantly outperforms the supervised learning models. These results shed light on the unique advantage of VIME in leveraging a large unlabeled tabular dataset (e.g. the US dataset) to strengthen a model's predictive power. Here we also demonstrate that VIME can perform well even when there exists a distribution shift between the UK labeled data and the US unlabeled data (see the Supplementary Materials, Section 2, for further details).

5.3 Public tabular data

To further verify the generalizability of our results and allow for their reproducibility, we compare VIME with the benchmarks using three public tabular datasets: MNIST (interpreted as tabular data with 784 features), UCI Income and UCI Blog. We use 10% of the data as labeled data and the remaining 90% as unlabeled data. Prediction accuracy on a separate testing set is used as the metric for all three datasets. As shown in Table 2 (Type: Supervised models, Self-supervised models, Semi-supervised models, and VIME), VIME achieves the best accuracy regardless of the application domain. These results further confirm the superiority of VIME across a diverse range of tabular datasets.

5.4 Ablation study

In this section, we conduct an ablation study to analyze the performance gain of each component in VIME on the tabular datasets introduced in Section 5.3. We define three variants of VIME (schematically mapped onto configuration switches in the sketch below):

• Supervised only: Exclude both the self- and semi-supervised learning parts (i.e. a plain 2-layer perceptron);
• Semi-SL only: Exclude the self-supervised learning part (i.e. remove the encoder in Figure 2);
• Self-SL only: Exclude the semi-supervised learning part (i.e. β = 0). More specifically, we first train the encoder via self-supervised learning; then, we train the predictive model with the loss function in Equation (7) with β = 0 (utilizing only the labeled data).

Table 2 (Type: Variants of VIME, and VIME) shows that both Self-SL only and Semi-SL only yield performance gains compared with Supervised only, and VIME is always better than its variants.
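One way to make the three variants precise is as switches on the full pipeline. The snippet below is our own schematic; the flag names and the β value shown are illustrative, not from the paper.

```python
# Schematic configuration of the ablation variants (flag names are ours;
# beta is shown as 1.0 purely for illustration, in practice it is tuned):
VARIANTS = {
    "Supervised only": dict(pretrained_encoder=False, beta=0.0),  # plain 2-layer perceptron
    "Semi-SL only":    dict(pretrained_encoder=False, beta=1.0),  # consistency loss, no encoder
    "Self-SL only":    dict(pretrained_encoder=True,  beta=0.0),  # encoder + labeled data only
    "VIME":            dict(pretrained_encoder=True,  beta=1.0),  # full framework, Eq. (7)
}
```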
Every component in VIME can improve the performance of a predictive model, and the best performance is achieved when they work collaboratively in our unified framework. We note that Self-SL only leads to a larger performance drop than Semi-SL only because in the former the predictive model is trained solely on a small labeled dataset without the unsupervised loss function Lu, while in the latter the predictive model is trained by minimizing both losses but without the encoder. An additional ablation study can be found in the Supplementary Materials.

6 Discussion: Why is the proposed model (VIME) needed for tabular data?

Image and tabular data are very different. The spatial correlations between pixels in images or the sequential correlations between words in text data are well-known and consistent across different datasets. By contrast, the correlation structure among features in tabular data is unknown and varies across datasets. In other words, there is no "common" correlation structure in tabular data (unlike in image and text data). This makes self- and semi-supervised learning on tabular data more challenging. Note that methods that are promising in the image domain do not guarantee favorable results in the tabular domain (and vice versa). Also, most augmentations and pretext tasks used on image data are not applicable to tabular data, because they directly utilize the spatial relationships of the image for augmentation (e.g. rotation) and pretext tasks (e.g. jigsaw puzzle and colorization). To transfer the successes of self- and semi-supervised learning from the image to the tabular domain, proposing applicable and proper pretext tasks and augmentations for tabular data (our main novelty) is critical. Note that better augmentations and pretext tasks can significantly improve self- and semi-supervised learning performance.

Broader Impact

Tabular data is the most common data type in the real world. Most databases include tabular data, such as demographic information in medical and finance datasets and SNPs in genomic datasets. However, the tremendous successes of deep learning (especially in the image and language domains) have not yet been fully extended to the tabular domain; in the tabular domain, ensembles of decision trees still achieve state-of-the-art performance. If we can efficiently extend successful deep learning methodologies from images and language to tabular data, the application of machine learning in the real world can be greatly extended. This paper takes a step in this direction for self- and semi-supervised learning frameworks, which have recently achieved significant successes in images and language. In addition, the proposed tabular data augmentation and representation learning methodologies can be utilized in various areas, such as tabular data encoding, balancing the labels of tabular data, and missing data imputation.

Acknowledgements and Funding Sources

The authors would like to thank the reviewers for their helpful comments. This work was supported by the National Science Foundation (NSF grant 1722516), the US Office of Naval Research (ONR), and GlaxoSmithKline (GSK).
1. What is the main contribution of the paper regarding self-supervised representation learning with tabular data?
2. What are the strengths of the proposed approach, particularly in its applicability and empirical evaluations?
3. Do you have any concerns about the methodology and its relation to previous works in NLP?
4. How does the reviewer assess the effectiveness of the multi-headed reconstruction pretext task?
5. Are there any questions or suggestions regarding the experimental design and ablations?
6. How does the reviewer evaluate the novelty and focus of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: This work proposes a self-supervised framework for representation learning with tabular data. The authors propose a multi-headed self-supervised training model that first corrupts (augments) the input tabular data using a binary mask; one head then reconstructs the mask while the other head reconstructs the uncorrupted data. In addition, the authors use a standard supervised loss function for data that contain labels, an addition that makes this model applicable to semi-supervised learning. The authors demonstrate the effectiveness of their multi-headed reconstruction pretext task on a genomics dataset, a patient treatment dataset, and two tabular benchmark datasets (UCI Income & Blog) as well as MNIST treated as tabular data.

Strengths: Overall, machine learning on tabular data is an understudied problem, and this paper lays out a clear and justifiable explanation for the development of their self-supervised pretraining approach for tabular data. The paper proposes a novel 2-part reconstruction task for masked tabular data, where both reconstructing the mask itself and reconstructing the unmasked input data serve as the two feedback mechanisms for the self-supervised learning. The paper studies a unique set of genomics and patient treatment datasets that tie in nicely with the original motivation of the paper. The experimental results look promising, and the authors include a few ablations to better understand the benefit of the semi-supervised learning component. The applicability to tabular data and the empirical evaluations are the primary strengths of this work.

Weaknesses: My central concern for this paper is the misalignment between the motivation and methodology. As motivation, the authors argue that self-supervised CV and **NLP** "algorithms are not effective for tabular data." The proposed model, though, is effectively the binary masked language model whose variants pervade self-supervised NLP research (e.g. WordNet, BERT, etc). Granted, instead of masking words, the proposed models are masking tabular values, but this is performing a very similar pretext task. In fact, there is concurrent work that learns tabular representations using a BERT model [1]. At the very least, I think it's worth a discussion of how this masked entry model is similar to a masked language model. I believe this paper also overlooks [2] as related research. Line 167-168: The justification for using the two-component pretext task is that it is a difficult individual task. Did you explore using only one of the two components? Line 195: Is it true that the correlation structure is less obvious in tabular data than in images or text? The semi-supervised learning aspect of this paper described in S4.2 (using a weighted combination of an unsupervised loss function and a supervised loss function) is well established, e.g. [3], and I think this paper could focus more on the novelty of the pretext tasks for tabular data. It would be interesting to experiment with and measure the performance of alternative corruption (augmentation) models and their impact on different kinds of tabular data.

[1] TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data, https://arxiv.org/abs/2005.08314
[2] TabNet, https://arxiv.org/abs/1908.07442
[3] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, 2005.
NIPS
1. What is the focus and contribution of the paper regarding tabular data?
2. What are the strengths of the proposed approach, particularly in terms of the reconstruction loss and semi-supervised setting?
3. What are the weaknesses of the paper, especially regarding its limitations in tackling categorical entries and experimental validation?
4. Do you have any concerns or suggestions regarding the baseline methods used in the experiments?
5. Are there any questions regarding the representation learning approach or its application to genomic and clinical data?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: This manuscript contributes self- and semi-supervised approaches well suited to tabular data, the point being that tabular data does not come with obvious invariants and corresponding transformations that can be used to create self-supervision. The contributed method relies on creating representations that facilitate learning.

Strengths: The work contributes a new reconstruction loss for unsupervised training of representations. This loss extends auto-encoder practice with a pretext task that uses the marginal distribution of features. It can then be used to help train intermediate representations in a semi-supervised setting to improve prediction, adapting existing frameworks. The manuscript contributes empirical benchmarks on a genomics dataset as well as clinical data and a few UCI tabular datasets, demonstrating some increase in performance.

Weaknesses: The manuscript is tackling tabular data; however, it avoids the problem of categorical entries, which are frequent in such data. In particular, the squared loss is used (Eq. 6), which is not very relevant for categorical data. Likewise, in the experimental validation, the data used do not seem to have categorical features, although the UK Biobank does have categorical features beyond genomics. As a baseline for the genomics experiments, it would have been interesting to use a PCA to learn representations; in genomics, such a simple model often performs well. With regard to the encoder baseline: were the data centered and normed before fitting an auto-encoder? Indeed, in the absence of standardization, the reconstruction loss is brittle.
NIPS
Title VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain Abstract Selfand semi-supervised learning frameworks have made significant progress in training machine learning models with limited labeled data in image and language domains. These methods heavily rely on the unique structure in the domain datasets (such as spatial relationships in images or semantic relationships in language). They are not adaptable to general tabular data which does not have the same explicit structure as image and language data. In this paper, we fill this gap by proposing novel selfand semi-supervised learning frameworks for tabular data, which we refer to collectively as VIME (Value Imputation and Mask Estimation). We create a novel pretext task of estimating mask vectors from corrupted tabular data in addition to the reconstruction pretext task for self-supervised learning. We also introduce a novel tabular data augmentation method for selfand semi-supervised learning frameworks. In experiments, we evaluate the proposed framework in multiple tabular datasets from various application domains, such as genomics and clinical data. VIME exceeds state-of-the-art performance in comparison to the existing baseline methods. 1 Introduction Tremendous successes have been achieved in a variety of applications (such as image classification [1], object detection [2], and language translation [3]) with deep learning models via supervised learning on large labeled datasets such as ImageNet [4]. Unfortunately, collecting sufficiently large labeled datasets is expensive and even impossible in several domains (such as medical datasets concerned with a particularly rare disease). In these settings, however, there is often a wealth of unlabeled data available - datasets are often collected from a large population, but target labels are only available for a small group of people. The 100,000 Genomes project [5], for instance, sequenced 100,000 genomes from around 85,000 NHS patients affected by a rare disease, such as cancer. By definition rare diseases occur in (less than) 1 in 2000 people. Datasets like these present huge opportunities for self- and semi-supervised learning algorithms, which can leverage the unlabeled data to further improve the performance of a predictive model. Unfortunately, existing self- and semi-supervised learning algorithms are not effective for tabular data1 because they heavily rely on the spatial or semantic structure of image or language data. A 1Tabular data is a database that is structured in a tabular form. It arranges data elements in vertical columns (features) and horizontal rows (samples). 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. standard self-supervised leaning framework designs a (set of) pretext task(s) to learn informative representations from the raw input features. For the language domain, BERT introduces 4 different pretext tasks (e.g. predicting future words from previous words) to learn representations of the language data [6]. In the image domain, rotation [7], jigsaw puzzle [8], and colorization [9] can be utilized as pretext tasks to learn representations of the images. Standard semi-supervised learning methods also suffer from the same problem, since the regularizers they use for the predictive model are based on some prior knowledge of these data structures. 
For example, the consistency regularizer encourages the predictive model to have the same output distribution on a sample and its augmented variants, e.g. an image and its rotated variants [7], or two images and their convex combination(s) [10]. The notion of rotation simply does not exist in tabular data. Moreover, in many settings, variables are often categorical, and do not admit meaningful convex combinations. Even in a setting where all variables are continuous, there is no guarantee that the data manifold is convex and as such taking convex combinations will either generate out-of-distribution samples (therefore degrading model performances) or be restricted to generating samples that are very close to real samples (limiting the effectiveness of the data augmentation), for more details see the Supplementary Materials (Section 4). Contribution: In this paper, we propose novel self- and semi-supervised learning frameworks for tabular data. For self-supervised learning, we introduce a novel pretext task, mask vector estimation in addition to feature vector estimation. To solve those pretext tasks, an encoder function learns to construct informative representations from the raw features in the unlabeled data. For semi-supervised learning, we introduce a novel tabular data augmentation scheme. We use the trained encoder to generate multiple augmented samples for each data point by masking each point using several different masks and then imputing the corrupted values for each masked data point. Finally, we propose a systematic self- and semi-supervised learning framework for tabular data, VIME (Value Imputation and Mask Estimation), that combines our ideas to produce state-of-the-art performances on several tabular datasets with a few labeled samples, from various domains. 2 Related Works Self-supervised learning (Self-SL) frameworks are representation learning methods using unlabeled data. It can be categorized into two types: using pretext task(s) and contrastive learning. Most existing works with pretext tasks are appropriate only for images or natural language: (i) surrogate classes prediction (scaling and translation) [11], (ii) rotation degree predictions [7], (iii) colorization [9], (iv) relative position of patches estimation [12], (v) jigsaw puzzle solving [8], (vi) image denoising [13], (vii) partial-to-partial registration [14], and (viii) next words and previous words predictions [6]. Most existing works with contrastive learning are also applicable only for image or natural languages due to their data augmentation scheme, and temporal and spatial relationships for defining the similarity: (i) contrastive predictive coding [15, 16], (ii) contrastive multi-view coding [17], (iii) SimCLR [18], (iv) momentum contrast [19, 20]. There is some existing work on self-supervised learning which can be applied to tabular data. In Denoising auto-encoder [21], the pretext task is to recover the original sample from a corrupted sample. In Context Encoder [22], the pretext task is to reconstruct the original sample from both the corrupted sample and the mask vector. The pretext task for self-supervised learning in TabNet [23] and TaBERT [24] is also recovering corrupted tabular data. In this paper, we propose a new pretext task: to recover the mask vector, in addition to the original sample with a novel corrupted sample generation scheme. 
Also, we propose a novel tabular data augmentation scheme that can be combined with various contrastive learning frameworks to extend the self-supervised learning to tabular domains. Semi-supervised learning (Semi-SL) frameworks can be categorized into two types: entropy minimization and consistency regularization. Entropy minimization encourages a classifier to output low entropy predictions on unlabeled data. For instance, [25] constructs hard labels from high-confidence predictions on unlabeled data, and train the network using these pseudo-labels together with labeled data in a supervised way. Consistency regularization encourages some sort of consistency between a sample and some stochastically altered version of itself. Π-model [26] uses an L2 loss to encourage consistency between predictions. Mean teacher [27] uses anL2 loss to encourage consistency between the intermediate representations. Virtual Adversarial Training (VAT) [28] encourages prediction consistency by minimizing the maximum difference in predictions between a sample and multiple augmented versions. MixMatch [29] and ReMixMatch [30] combine entropy minimization with consistency regularization in one unified framework with MixUp [10] as the data augmentation method. There is a series of interesting works on graph-based semi-supervised learning [31, 32, 33] which consider a special case of network data where samples are connected by a given edge, i.e. a citation network where an article is connected with its citations. Here, we introduce a novel data augmentation method for general tabular data which can be combined with various semi-supervised learning frameworks to train a predictive model in a semi-supervised way. 3 Problem Formulation In this section, we introduce the general formulation of self- and semi-supervised learning. Suppose we have a small labeled dataset Dl = {xi, yi}Nli=1 and a large unlabeled dataset Du = {xi} Nl+Nu i=Nl+1 , where Nu Nl, xi ∈ X ⊆ Rd and yi ∈ Y . The label yi is a scalar in single-task learning while it can be given as a multi-dimensional vector in multi-task learning. We assume every input feature xi in Dl and Du is sampled i.i.d. from a feature distribution pX , and the labeled data pairs (xi, yi) in Dl are drawn from a joint distribution pX,Y . When only limited labeled samples from pX,Y are available, a predictive model f : X → Y solely trained by supervised learning is likely to overfit the training samples since the empirical supervised loss ∑Nl i=1 l ( f(xi), yi ) we minimize deviates significantly from the expected supervised loss E(x,y)∼pX,Y [ l ( f(x), y )] , where l(·, ·) is some standard supervised loss function (e.g. cross-entropy). 3.1 Self-supervised learning Self-supervised learning aims to learn informative representations from unlabeled data. In this subsection, we focus on self-supervised learning with various self-supervised/pretext tasks for a pretext model to solve. These tasks are set to be challenging but highly relevant to the downstream tasks that we attempt to solve. Ideally, the pretext model will extract some useful information from the raw data in the process of solving the pretext tasks. Then the extracted information can be utilized by the predictive model f in the downstream tasks. In general, self-supervised learning constructs an encoder function e : X → Z that takes a sample x ∈ X and returns an informative representation z = e(x) ∈ Z . 
The representation z is optimized to solve a pretext task defined with a pseudo-label y_s ∈ Y_s and a self-supervised loss function l_ss. For example, the pretext task can be predicting the rotation degree of some rotated image in the raw dataset, where y_s is the true rotation degree and l_ss is the squared difference between the predicted rotation degree and y_s. We define the pretext predictive model as h : Z → Y_s, which is trained jointly with the encoder function e by minimizing the expected self-supervised loss function l_ss as follows,

min_{e,h} E_{(x_s, y_s) ∼ p_{X_s, Y_s}} [ l_ss( y_s, (h ∘ e)(x_s) ) ]   (1)

where p_{X_s, Y_s} is a pretext distribution that generates pseudo-labeled samples (x_s, y_s) for training the encoder e and the pretext predictive model h. Note that we have sufficient samples to approximate the objective function above, since for each input sample in D_u we can generate a pretext sample (x_s, y_s) for free, e.g. rotating an image x_i to create x_s and taking the rotation degree as the label y_s. After training, the encoder function e can be used to extract better data representations from raw data for solving various downstream tasks. Note that in settings where the downstream task (and a loss for it) are known in advance, the encoder can be trained jointly with the downstream task's model. 3.2 Semi-supervised learning Semi-supervised learning optimizes the predictive model f by minimizing the supervised loss function jointly with some unsupervised loss function defined over the output space Y. Formally, semi-supervised learning is formulated as the following optimization problem,

min_f E_{(x,y) ∼ p_{X,Y}} [ l( y, f(x) ) ] + β · E_{x ∼ p_X, x′ ∼ p̃_X(x′|x)} [ l_u( f(x), f(x′) ) ]   (2)

where l_u : Y × Y → R is an unsupervised loss function, and the hyperparameter β ≥ 0 controls the trade-off between the supervised and unsupervised losses. x′ is a perturbed version of x, assumed to be drawn from a conditional distribution p̃_X(x′|x). The first term is estimated using the small labeled dataset D_l, while the second term is estimated using all input features in D_u. The unsupervised loss function l_u is often inspired by some prior knowledge of the downstream task. For example, consistency regularization encourages the model f to produce the same output distribution when its inputs are perturbed (x′). 4 Proposed Model: VIME In this section, we describe VIME, our systematic approach for self- and semi-supervised learning on tabular data (a block diagram can be found in the Supplementary Materials). We first propose two pretext tasks for self-supervised learning; we then develop an unsupervised loss function for semi-supervised learning, using the encoder learned from the pretext tasks via self-supervised learning. 4.1 Self-supervised learning for tabular data We introduce two pretext tasks: feature vector estimation and mask vector estimation. Our goal is to optimize a pretext model to recover an input sample (a feature vector) from its corrupted variant, while at the same time estimating the mask vector that has been applied to the sample. In our framework, the two pretext tasks share a single pretext distribution p_{X_s, Y_s}. First, a mask vector generator outputs a binary mask vector m = [m_1, ..., m_d]^⊤ ∈ {0, 1}^d, where each m_j is sampled independently from a Bernoulli distribution with probability p_m (i.e. p(m) = ∏_{j=1}^{d} Bern(m_j | p_m)). Then a pretext generator g_m : X × {0, 1}^d → X takes a sample x from D_u and a mask vector m as input, and generates a masked sample x̃.
The generating process of x̃ is given by

x̃ = g_m(x, m) = m ⊙ x̄ + (1 − m) ⊙ x   (3)

where ⊙ denotes element-wise multiplication and the j-th feature of x̄ is sampled from the empirical marginal distribution p̂_{X_j} = (1/N_u) ∑_{i=N_l+1}^{N_l+N_u} δ(x_j = x_{i,j}), where x_{i,j} is the j-th feature of the i-th sample in D_u; see Figure 3 in the Supplementary Materials for further details. The generating process in Equation (3) ensures that the corrupted sample x̃ is not only tabular but also similar to the samples in D_u. Compared with standard sample corruption approaches, e.g. adding Gaussian noise or replacing masked features with zeros, our approach generates x̃ that is more difficult to distinguish from x. This difficulty is crucial for self-supervised learning, as we elaborate in the following sections. Two sources of randomness are imposed in our pretext distribution p_{X_s, Y_s}. Explicitly, m is a random vector sampled from a Bernoulli distribution. Implicitly, the pretext generator g_m is also a stochastic function whose randomness comes from x̄. Together, this randomness increases the difficulty of reconstructing x from x̃. The level of difficulty can be adjusted by changing the hyperparameter p_m, the probability in Bern(·|p_m), which controls the proportion of features that will be masked and corrupted. Following the convention of self-supervised learning, the encoder e first transforms the masked and corrupted sample x̃ to a representation z; pretext predictive models are then introduced to recover the original sample x from z. Arguably, this is a more challenging task than existing pretext tasks, such as correcting the rotation of images or recolorizing a grayscale image. A rotated or grayscale image still contains some information about the original features. In contrast, masking completely removes some of the features from x and replaces them with a noise sample x̄, each feature of which may come from a different random sample in D_u. The resulting sample x̃ may not contain any information about the missing features, and it may even be hard to identify which features are missing. To solve this challenging task, we first divide it into two sub-tasks (pretext tasks): (1) Mask vector estimation: predict which features have been masked; (2) Feature vector estimation: predict the values of the features that have been corrupted. We introduce a separate pretext predictive model for each pretext task. Both models operate on top of the representation z given by the encoder e and try to estimate m and x collaboratively. The two models and their functions are: • Mask vector estimator, s_m : Z → [0, 1]^d, takes z as input and outputs a vector m̂ that predicts which features of x̃ have been replaced by a noisy counterpart (i.e., m); • Feature vector estimator, s_r : Z → X, takes z as input and returns x̂, an estimate of the original sample x. The encoder e and the pretext predictive models (in our case, the two estimators s_m and s_r) are trained jointly by solving the following optimization problem,

min_{e, s_m, s_r} E_{x ∼ p_X, m ∼ p_m, x̃ ∼ g_m(x, m)} [ l_m(m, m̂) + α · l_r(x, x̂) ]   (4)

where m̂ = (s_m ∘ e)(x̃) and x̂ = (s_r ∘ e)(x̃). The first loss function l_m is the mean of the binary cross-entropy losses over the dimensions of the mask vector (the subscript j denotes the j-th element of a vector):

l_m(m, m̂) = −(1/d) ∑_{j=1}^{d} [ m_j log( (s_m ∘ e)_j(x̃) ) + (1 − m_j) log( 1 − (s_m ∘ e)_j(x̃) ) ],   (5)

and the second loss function l_r is the reconstruction loss,

l_r(x, x̂) = (1/d) ∑_{j=1}^{d} ( x_j − (s_r ∘ e)_j(x̃) )².   (6)

α adjusts the trade-off between the two losses. For categorical variables, we replace Equation (6) with a cross-entropy loss.
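To make the pretext distribution concrete, below is a minimal NumPy sketch of the mask generator and pretext generator of Equation (3), together with the losses of Equations (5) and (6). The function names (`pretext_generator`, `pretext_losses`) are our own, and we approximate sampling from the empirical marginal p̂_{X_j} by independently shuffling each column of the batch, which is one simple way to realize it; this is an illustrative sketch, not the reference implementation.

```python
import numpy as np

def pretext_generator(x_batch, p_m, rng):
    """Sample mask vectors m ~ Bern(p_m) and build corrupted samples
    x_tilde = m * x_bar + (1 - m) * x   (Eq. 3),
    where x_bar is drawn from the empirical marginal of each feature
    (approximated here by shuffling each column independently)."""
    n, d = x_batch.shape
    m = rng.binomial(1, p_m, size=(n, d)).astype(float)
    x_bar = np.stack([rng.permutation(x_batch[:, j]) for j in range(d)], axis=1)
    x_tilde = m * x_bar + (1.0 - m) * x_batch
    return m, x_tilde

def pretext_losses(m, m_hat, x, x_hat, alpha, eps=1e-8):
    """Mask-estimation loss (Eq. 5) plus alpha times the reconstruction loss
    (Eq. 6), averaged over the batch; m_hat and x_hat are the outputs of
    s_m o e and s_r o e evaluated on x_tilde."""
    l_m = -np.mean(m * np.log(m_hat + eps) + (1.0 - m) * np.log(1.0 - m_hat + eps))
    l_r = np.mean((x - x_hat) ** 2)
    return l_m + alpha * l_r
```

In an actual training loop, e, s_m and s_r would be parameterized models (e.g. neural networks) and the combined loss would be minimized jointly by stochastic gradient descent, as in Equation (4).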
Figure 1 illustrates our entire self-supervised learning framework. What has the encoder learned? The two loss functions share the encoder e, which is the only part we will utilize in the downstream tasks. To understand how the encoder benefits these downstream tasks, we consider what the encoder must be able to do to solve our pretext tasks. We make the following intuitive observation: e must capture the correlation among the features of x and output latent representations z from which x can be recovered. In this case, s_m can identify the masked features from the inconsistency between feature values, and s_r can impute the masked features by learning from the correlated non-masked features. For instance, if the value of a feature is very different from those of its correlated features, this feature is likely masked and corrupted. We note that correlations are also learned in other self-supervised learning frameworks, e.g. spatial correlations in rotated images and autocorrelations between future and previous words. Our framework is novel in learning the correlations for tabular data, whose correlation structure is less obvious than in images or language. The learned representation that captures the correlation across different parts of the object, regardless of the object type (e.g. language, image or tabular data), is an informative input for the various downstream tasks. 4.2 Semi-supervised learning for tabular data We now show how the encoder function e from the previous subsection can be used in semi-supervised learning. Our semi-supervised learning framework follows the structure given in Section 3. Let f_e = f ∘ e and ŷ = f_e(x). We train the predictive model f by minimizing the objective function

L_final = L_s + β · L_u.   (7)

The supervised loss L_s is given by

L_s = E_{(x,y) ∼ p_{X,Y}} [ l_s( y, f_e(x) ) ],   (8)

where l_s is the standard supervised loss function, e.g. mean squared error for regression or categorical cross-entropy for classification. The unsupervised (consistency) loss L_u is defined between original samples x and their corrupted and masked versions x̃,

L_u = E_{x ∼ p_X, m ∼ p_m, x̃ ∼ g_m(x, m)} [ ( f_e(x̃) − f_e(x) )² ].   (9)

Our consistency loss is inspired by the idea behind consistency regularizers: encouraging the predictive model f to return similar outputs when its inputs are perturbed. However, the perturbation in our framework is learned through our self-supervised framework, whereas in previous works the perturbation comes from a manually chosen distribution, such as rotation. For a fixed sample x, the inner expectation in Equation (9) is taken with respect to p_m and g_m(x, m), and can be interpreted as the variance of the predictions over corrupted and masked samples. β is another hyperparameter that balances the supervised loss L_s and the consistency loss L_u. In each iteration of training, for each sample x ∈ D_u in the batch, we create K augmented samples x̃_1, ..., x̃_K by repeating the operation in Equation (3) K times. Every time the sample x ∈ D_u is used in a batch, we recreate these augmented samples. The stochastic approximation of L_u is given by

L̂_u = (1/(N_b K)) ∑_{i=1}^{N_b} ∑_{k=1}^{K} ( f_e(x̃_{i,k}) − f_e(x_i) )² = (1/(N_b K)) ∑_{i=1}^{N_b} ∑_{k=1}^{K} ( f(z_{i,k}) − f(z_i) )²   (10)

where N_b is the batch size. During training, the predictive model f is regularized to make similar predictions on z_i and z_{i,k}, k = 1, ..., K.
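As a hedged illustration of the consistency term, the sketch below estimates L̂_u in Equation (10) for one unlabeled batch, reusing `pretext_generator` from the earlier sketch; `encoder_fn` (the pre-trained e) and `predict_fn` (the predictive model f) are placeholders for whatever models are actually used.

```python
import numpy as np

def consistency_loss(predict_fn, encoder_fn, x_batch, p_m, K, rng):
    """Stochastic estimate of L_u (Eq. 10): the mean squared difference
    between predictions on each sample and on K corrupted/masked copies."""
    y_orig = predict_fn(encoder_fn(x_batch))               # f_e(x_i)
    total = 0.0
    for _ in range(K):
        _, x_tilde = pretext_generator(x_batch, p_m, rng)  # Eq. (3), resampled
        y_aug = predict_fn(encoder_fn(x_tilde))            # f_e(x_tilde_{i,k})
        total += np.mean((y_aug - y_orig) ** 2)
    return total / K

# Per batch, the full objective would be L_s + beta * consistency_loss(...),
# matching Equation (7).
```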
After training f, the output for a new test sample x_t is given by ŷ = f_e(x_t). Figure 2 illustrates the entire procedure of the proposed semi-supervised framework on tabular data with a pre-trained encoder. 5 Experiments In this section, we conduct a series of experiments to demonstrate the efficacy of our framework (VIME) on several tabular datasets from different application domains, including genomics and clinical data. We use a min-max scaler to normalize the data between 0 and 1. For self-supervised learning, we compare VIME against two benchmarks, the denoising auto-encoder (DAE) [21] and Context Encoder [22]. For semi-supervised learning, we use the data augmentation method MixUp [10] as the main benchmark. We exclude self- and semi-supervised learning benchmarks that are applicable only to image or language data. As a baseline, we also include supervised learning benchmarks. Additional results with more baselines can be found in the Supplementary Materials. In the experiments, the self- and semi-supervised learning methods use both labeled and unlabeled data, while the supervised learning methods use only the labeled data. Implementation details and sensitivity analyses on the three hyperparameters (p_m, α, β) can be found in the Supplementary Materials (Sections 5 & 6). The implementation of VIME can be found at https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/alg/vime/ and at https://github.com/jsyoon0823/VIME. 5.1 Genomics data: Genome-wide polygenic scoring In this subsection, we evaluate the methods on a large genomics dataset from UK Biobank, consisting of around 400,000 individuals' genomic information (SNPs) and 6 corresponding blood cell traits: (1) Mean Reticulocyte Volume (MRV), (2) Mean Platelet Volume (MPV), (3) Mean Cell Hemoglobin (MCH), (4) Reticulocyte Fraction of Red Cells (RET), (5) Plateletcrit (PCT), and (6) Monocyte Percentage of White Cells (MONO). The features of the dataset consist of around 700 SNPs (after the standard p-value filtering process), where each SNP, taking a value in {0, 1, 2}, is treated as a categorical variable (with three categories). Here, we have 6 different blood cell traits to predict, and we treat each of them as an independent prediction task (the selected SNPs differ across blood cell traits). Detailed data descriptions are provided in the Supplementary Materials (Section 2). Note that all the variables are categorical features. To test the effectiveness of self- and semi-supervised learning in the small labeled data setting, VIME and the benchmarks are tasked to predict the 6 blood cell traits while we gradually increase the number of labeled data points from 1,000 to 100,000, using the remaining data (more than 300,000 samples) as unlabeled data. We use a linear model (Elastic Net [34]) as the predictive model, due to its superior performance compared with non-linear models such as multi-layer perceptrons and random forests [35] on genomics datasets. In Figure 3, we show the MSE performance (y-axis) against the number of labeled data points (x-axis, in log scale) increasing from 1,000 to 10,000 (the performances for the 10,000 to 100,000 range can be found in the Supplementary Materials, Section 3). The proposed model (VIME) outperforms all the benchmarks, including the purely supervised method ElasticNet, the self-supervised method Context Encoder and the semi-supervised method MixUp.
In fact, in many cases VIME shows performance similar to the benchmarks even when it has access to only half as many labeled data points as the benchmarks. 5.2 Clinical data: Patient treatment prediction In this subsection, we evaluate the methods on clinical data, using the UK and US prostate cancer datasets (from Prostate Cancer UK and SEER, respectively). The features consist of patients' clinical information (e.g. age, grade, stage, Gleason scores), 28 features in total. We predict two possible treatments for UK prostate cancer patients: (1) hormone therapy (whether the patient received hormone therapy), and (2) radical therapy (whether the patient received radical therapy). Both tasks are binary classification. In the UK prostate cancer dataset, we only have around 10,000 labeled patient samples. The US prostate cancer dataset contains more than 200,000 unlabeled patient samples, twenty times larger than the labeled UK dataset. We use 50% of the UK dataset (as the labeled data) and the entire US dataset (as the unlabeled data) for training, with the remainder of the UK data used as the testing set. We also test three popular supervised learning models: Logistic Regression, a 2-layer Multi-layer Perceptron and XGBoost. Table 1 shows that VIME achieves the best prediction performance, outperforming the benchmarks. More importantly, VIME is the only self- or semi-supervised learning framework that significantly outperforms the supervised learning models. These results shed light on the unique advantage of using VIME to leverage a large unlabeled tabular dataset (e.g. the US dataset) to strengthen a model's predictive power. Here we also demonstrate that VIME can perform well even when there exists a distribution shift between the UK labeled data and the US unlabeled data (see the Supplementary Materials, Section 2, for further details). 5.3 Public tabular data To further verify the generalizability of our results and allow for reproducibility, we compare VIME with the benchmarks on three public tabular datasets: MNIST (interpreted as tabular data with 784 features), UCI Income and UCI Blog. We use 10% of the data as labeled data, and the remaining 90% as unlabeled data. Prediction accuracy on a separate testing set is used as the metric for all three datasets. As shown in Table 2 (Type - Supervised models, Self-supervised models, Semi-supervised models and VIME), VIME achieves the best accuracy regardless of the application domain. These results further confirm the superiority of VIME on a diverse range of tabular datasets. 5.4 Ablation study In this section, we conduct an ablation study to analyze the performance gain of each component of VIME on the tabular datasets introduced in Section 5.3. We define three variants of VIME: • Supervised only: Exclude both the self- and semi-supervised learning parts (i.e. a plain 2-layer perceptron); • Semi-SL only: Exclude the self-supervised learning part (i.e. remove the encoder in Figure 2); • Self-SL only: Exclude the semi-supervised learning part (i.e. β = 0). More specifically, in the last variant we first train the encoder via self-supervised learning; then we train the predictive model with the loss function in Equation (7) with β = 0 (only utilizing the labeled data). Table 2 (Type - Variants of VIME and VIME) shows that both Self-SL only and Semi-SL only yield performance gains over Supervised only, and VIME is always better than its variants.
Every component of VIME can improve the performance of a predictive model, and the best performance is achieved when they work collaboratively in our unified framework. We note that Self-SL only leads to a larger performance drop than Semi-SL only, because in the former the predictive model is trained solely on a small labeled dataset without the unsupervised loss function L_u, while in the latter the predictive model is trained by minimizing both losses but without the encoder. An additional ablation study can be found in the Supplementary Materials. 6 Discussion: Why is the proposed model (VIME) needed for tabular data? Image and tabular data are very different. The spatial correlations between pixels in images or the sequential correlations between words in text data are well-known and consistent across different datasets. By contrast, the correlation structure among features in tabular data is unknown and varies across different datasets. In other words, there is no "common" correlation structure in tabular data (unlike in image and text data). This makes self- and semi-supervised learning on tabular data more challenging. Note that methods that are promising in the image domain do not guarantee favorable results in the tabular domain (and vice versa). Also, most augmentations and pretext tasks used on image data are not applicable to tabular data, because they directly utilize the spatial structure of the image for augmentation (e.g., rotation) and for pretext tasks (e.g., jigsaw puzzle solving and colorization). To transfer the successes of self- and semi-supervised learning from the image to the tabular domain, proposing applicable and proper pretext tasks and augmentations for tabular data (our main novelty) is critical. Note that better augmentations and pretext tasks can significantly improve self- and semi-supervised learning performance. Broader Impact Tabular data is the most common data type in the real world. Most databases include tabular data, such as demographic information in medical and finance datasets and SNPs in genomic datasets. However, the tremendous successes of deep learning (especially in the image and language domains) have not yet been fully extended to the tabular domain; there, ensembles of decision trees still achieve state-of-the-art performance. If we can efficiently extend successful deep learning methodologies from images and language to tabular data, the range of real-world applications of machine learning can be greatly extended. This paper takes a step in this direction for self- and semi-supervised learning frameworks, which have recently achieved significant successes on images and language. In addition, the proposed tabular data augmentation and representation learning methodologies can be utilized in various settings, such as tabular data encoding, balancing the labels of tabular data, and missing data imputation. Acknowledgements and Funding Sources The authors would like to thank the reviewers for their helpful comments. This work was supported by the National Science Foundation (NSF grant 1722516), the US Office of Naval Research (ONR), and GlaxoSmithKline (GSK).
1. What is the focus and contribution of the paper on tabular data? 2. What are the strengths of the proposed approach, particularly in terms of its extension to the tabular domain and mask estimation? 3. What are the weaknesses of the paper, especially regarding its novelty and limitations in the proposed mask estimation? 4. Do you have any concerns about the motivation behind generating masked samples using Equation (3)? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper extends self/semi-supervised learning to the tabular domain. VIME is proposed, with mask estimation as a pretext task. Experiments on related datasets show its superiority. After reading the response, I find the authors resolved part of my concerns. I revise the overall score to 6. Strengths (1) The extension to the tabular domain with mask estimation is interesting and useful. (2) The authors conduct extensive experiments. Experimental results look good compared with Mix-up. Weaknesses (1) I think the novelty is limited. As introduced in the paper, self/semi-supervised learning has already been thoroughly investigated in other domains, including images and language. The tabular domain aside, feature vector estimation is common in auto-encoders, and the novelty of the proposed mask estimation is not strong enough. I think the existing Gaussian-noise-based augmentation and estimation is very similar, except for the difference in distribution. (2) The motivation for generating the masked samples by Eq. (3) is unclear. Why do you add the first term, especially after the shuffle operation? (3) I think the mask m does not need to be binary. Have you ever tried other distributions, such as the Gaussian distribution? You only claim that your approach is more difficult; I would like to see a more detailed analysis as well as an experimental comparison. (4) For the results in Table 2, I wonder how you compute the accuracy for the method 'Self-SL only'. I'm afraid that there is no classification module in the self-supervised learning framework.
NIPS
Title Random Projections with Asymmetric Quantization Abstract The method of random projection has been a popular tool for data compression, similarity search, and machine learning. In many practical scenarios, applying quantization on randomly projected data can be very helpful to further reduce storage cost and facilitate more efficient retrieval, while suffering only a little loss in accuracy. In real-world applications, however, data collected from different sources may be quantized under different schemes, which calls for a need to study the asymmetric quantization problem. In this paper, we investigate the cosine similarity estimators derived in this setting under the Lloyd-Max (LM) quantization scheme. We thoroughly analyze the biases and variances of a series of estimators, including the basic simple estimators, their normalized versions, and their debiased versions. Furthermore, by studying monotonicity, we show that the expectation of the proposed estimators increases with the true cosine similarity, for a broader family of stair-shaped quantizers. Experiments on nearest neighbor search justify the theory and illustrate the effectiveness of our proposed estimators. 1 Introduction The method of random projections (RP) [35] has become a popular technique to reduce data dimensionality while preserving distances between data points, as guaranteed by the celebrated Johnson-Lindenstrauss (J-L) Lemma and its variants [24, 12, 1]. Given a high dimensional dataset, the algorithm projects each data point onto a lower-dimensional random subspace. There is a very rich literature on the theory and applications of random projections, such as clustering, classification, near neighbor search, bio-informatics, compressed sensing, etc. [22, 10, 4, 6, 8, 17, 18, 28, 15, 7, 19, 11, 9]. In recent years, "random projections + quantization" has been an active research topic. That is, the projected data, which are in general real-valued (i.e., infinite precision), are quantized into integers with a small number of bits. Applying quantization on top of random projections has at least two major advantages: (i) the storage cost is further (substantially) reduced; and (ii) some important applications, such as hashing-table-based near neighbor search, require using quantized data for indexing purposes. The pioneering example of quantized random projections is the so-called "1-bit" (sign) random projection, initially used for analyzing the MaxCut problem [20] and later adopted for near neighbor search [8] and compressed sensing [5, 23, 25]. As one would expect, storing merely 1 bit per projected data value may in many situations suffer a substantial loss of accuracy, compared to using random projections with full (infinite) precision. There have been various studies on (symmetrically) quantized random projections beyond the 1-bit scheme, e.g., [13, 37, 26, 29, 31]. In this paper, we move further to studying "asymmetric quantization" of random projections, a relatively new problem arising from practical scenarios that is also mathematically very interesting. Every day, data collection takes place in every possible setting one can think of, but it is often impractical to impose a universal encoding strategy on data storage across all of them. As a consequence, it becomes a meaningful task to look into estimation problems with data encoded by different algorithms, namely, the asymmetric case.
In this paper, we provide some insights into this type of problem; in particular, we consider recovering inner products from asymmetrically quantized random projections, arising from the following two practical scenarios. • Scenario 1: quantization vs. full-precision. Consider, for example, a retrieval system which uses random projections to process every data vector. To save storage, the projected data stored in the repository are quantized into a small number of bits. When a query data vector arrives, it is first processed by random projections. We then have the option of quantizing the projected query vector before conducting the similarity search (with vectors in the repository); but we do not have to do the quantization step, since we still have the projected query vector in full precision (why waste it?). This situation hence creates the "quantization vs. full-precision" estimation problem. This setting is natural and practical, and the estimation problem has been studied in the literature, for example [14, 21, 27]. • Scenario 2: quantization with different bits. In applications such as large ad hoc networks [36, 30], data are collected and processed by different nodes (e.g., sensors or mobile devices) at different locations before being sent to the central unit or cloud server. However, distinct nodes may use different quantization methods (or different numbers of bits) for many possible reasons, e.g., memory capacity or purpose of data usage. In this situation, information retrieval among data sources using different quantization schemes could be on the cards. As a tightly related topic, asymmetric distributed source coding (with different bits from different sources) has also been considered in [3, 34], among others, for sensor networks. Scenario 1 is in fact an important special case of Scenario 2, in which one source of data is quantized with infinitely many bits. In this paper, we provide a thorough statistical analysis of the above two scenarios. Our contributions. The major contributions of this paper include the following: • In Section 3, we provide the bias and variance of the linear and normalized inner product estimators in Scenario 1. We reveal an interesting connection between the variance of the debiased inner product estimator and similarity search, which is very helpful in practice. • In Sections 4 and 5, we conduct a statistical analysis of Scenario 2, and prove the monotonicity of a large family of asymmetric quantized inner product estimators, which assures their validity for practical use. A new bound on the bias is also derived for the symmetric case. • In Section 6, an empirical study on various real-world datasets confirms the theoretical findings and illustrates the effectiveness of the proposed quantization schemes. 2 Preliminaries Random Projections. Let U = [u_1, ..., u_n]^⊤ ∈ R^{n×d} be the original data matrix (with d possibly being large). Random projections are realized by Z = [z_1, ..., z_n]^⊤ = U × R, where R ∈ R^{d×k}, k ≪ d, is a random matrix with i.i.d. standard Gaussian entries. Let ‖·‖_2 denote the l_2 Euclidean norm. Throughout this paper, we assume that every data point is normalized to unit norm, i.e., ‖u_i‖_2 = 1, 1 ≤ i ≤ n. (Normalizing each data vector to unit norm is a standard preprocessing procedure for many applications such as clustering and classification; we adopt this assumption merely for convenience. When the data are not normalized, our results still hold, although we will need to store the values of the norms.) We will hence use the terms "inner product" and "cosine similarity" interchangeably. For convenience of presentation, our results (estimators and properties) will be given for a pair of data vectors, u_i and u_j (and correspondingly z_i and z_j).
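As a quick illustration of this setup, the following NumPy sketch (with illustrative dimensions of our own choosing) generates random projections of unit-norm data and checks that coordinate-wise products of the projected vectors recover inner products on average, which is precisely the estimation problem formalized below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 1000, 256                         # k << d
U = rng.standard_normal((n, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)    # unit-norm data points

R = rng.standard_normal((d, k))                  # i.i.d. standard Gaussian entries
Z = U @ R                                        # projected data, one row per point

# For any pair (i, j), each coordinate pair (Z[i, s], Z[j, s]) is bivariate
# normal with correlation rho = <u_i, u_j>, so the average of coordinate
# products recovers the inner product:
i, j = 0, 1
print(U[i] @ U[j], (Z[i] @ Z[j]) / k)            # true rho vs. its RP estimate
```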
Let ρ = ⟨u_i, u_j⟩ be the inner product between u_i and u_j. We also denote x = z_i and y = z_j. It is then easy to verify that (x, y) is bivariate normal:

(x, y)^⊤ ∼ N( (0, 0)^⊤, Σ ),  with  Σ = [[1, ρ], [ρ, 1]].   (1)

Lloyd-Max (LM) quantization [33, 32]. Assume a random signal model with signals generated from a probability distribution with density f, X ∼ f. An M-level scalar quantizer q_M(·) is specified by M + 1 decision borders t_0 < t_1 < ··· < t_M and M reconstruction levels (or codes) µ_i, i = 1, ..., M, with the quantizing operator defined as

q_M(x) = µ_{i*},  i* = {i : t_{i−1} < x ≤ t_i, 1 ≤ i ≤ M}.   (2)

The "distortion" is an important quantity that measures how much information is lost from the original signal due to quantization. In this paper, we also assume M = 2^b, with b = 1, 2, ..., being the number of bits used by the quantizer; thus, we will write q_b(·) instead of q_M(·). Definition 1. The distortion of a b-bit quantizer q_b(·) with respect to distribution f is defined as

E( (X − q_b(X))² ) = ∫ (x − q_b(x))² f(x) dx = ∑_{i=1}^{2^b} ∫_{t_{i−1}}^{t_i} (x − µ_i)² f(x) dx.   (3)

In this paper, f is the standard normal density, i.e., f(x) = φ(x) = (1/√(2π)) e^{−x²/2} in the conventional Gaussian notation. Also, we will use Q_b to denote the Lloyd-Max (LM) quantizer, which minimizes the distortion, and D_b to denote the corresponding value of the distortion:

Q_b = argmin_q E( (X − q_b(X))² ),  D_b = E( (X − Q_b(X))² ).   (4)

A basic identity of the LM quantizer is that E(Q_b(X)²) = E(Q_b(X) X). In practice, Lloyd's algorithm [32] is used to find the solution; it alternates between updating the borders and the reconstruction points until convergence (and the convergence is guaranteed). Estimates using full-precision RPs. Consider observations (x_i, y_i)^⊤, i.i.d. ∼ N( (0, 0)^⊤, Σ ), 1 ≤ i ≤ k, with Σ as in (1). The task is to estimate ρ. One can use the usual simple estimator

ρ̂_f = (1/k) ∑_{i=1}^{k} x_i y_i,  with  E(ρ̂_f) = ρ,  Var(ρ̂_f) = (1 + ρ²)/k,   (5)

where E(ρ̂) denotes the expectation and Var(ρ̂) the variance. Note that the variance grows as |ρ| increases. One can take advantage of the following so-called "normalized estimator":

ρ̂_{f,n} = ∑_{i=1}^{k} x_i y_i / ( √(∑_{i=1}^{k} x_i²) √(∑_{i=1}^{k} y_i²) ),  E(ρ̂_{f,n}) = ρ + O(1/k),  Var(ρ̂_{f,n}) = (1 − ρ²)²/k + O(1/k²).   (6)

ρ̂_{f,n} is nearly unbiased, and it substantially reduces the variance, especially near the two extreme points ρ = ±1. We refer readers to the classical textbook [2] and recent papers [28, 27] for more details. Estimates using symmetric LM quantized RPs. [29] studies the inner product estimator under the LM quantization scheme, analyzing the biases and variances of the estimators in the symmetric case, where the observations x_i and y_i are quantized by the same LM scheme with the same number of bits (b). In this paper, we study the asymmetric setting, using b_1 bits for quantizing x_i and b_2 bits for y_i. Clearly, the work of [29] is a special case of our results (i.e., b_1 = b_2). Interestingly, our analysis also leads to a more refined bound on the estimation bias in the symmetric case compared to the corresponding bound in [29]. See Section 4 for the detailed results. 3 Scenario 1: Quantization vs. Full-precision
Recall that we have i.i.d. observations {(x_i, y_i)}, i = 1, 2, ..., k, from a standard bivariate normal with x_i ∼ N(0, 1), y_i ∼ N(0, 1), and E(x_i y_i) = ρ. In this section, we study Scenario 1: quantization vs. full-precision. That is, we quantize x_i with b bits and leave y_i intact. The task is to estimate ρ from {(Q_b(x_i), y_i)}, i = 1, 2, ..., k. We can still try to use a simple estimator similar to (5):

ρ̂_{b,f} = (1/k) ∑_{i=1}^{k} Q_b(x_i) y_i.   (7)

As one would expect, this estimator ρ̂_{b,f} is no longer unbiased. We can show that E(ρ̂_{b,f}) = ξ_{1,1} ρ. Hence, we can attempt to remove the bias by using the following "debiased estimator",

ρ̂^{db}_{b,f} = ρ̂_{b,f} / ξ_{1,1} = (1/(k ξ_{1,1})) ∑_{i=1}^{k} Q_b(x_i) y_i.   (8)

We still need to define ξ_{1,1}. More generally, and analogous to the notation in [29], we define

γ_{α,β} = E( Q_b(x)^α y^β ),  ξ_{α,β} = E( Q_b(x)^α x^β ).   (9)

That is, ξ_{1,1} = E(Q_b(x) x). Note that ξ_{α,β} can be represented by γ_{α,β}, but we use both for convenience. Also note that ξ_{1,1} = ξ_{2,0} = 1 − D_b by the definitions. For b = 1, 2, 3, 4, ∞, we can compute ξ_{1,1} = 0.6366, 0.8825, 0.9655, 0.9905, 1, respectively (keeping four decimal places). In fact, it is also known that D_b = (3^{3/2} · 2π / 12) · 2^{−2b} asymptotically, i.e., the bias decays at the rate O(2^{−2b}). In the following, Theorem 1 summarizes the expectations and variances of the two estimators ρ̂_{b,f} and ρ̂^{db}_{b,f}. Theorem 1.

E(ρ̂_{b,f}) = ξ_{1,1} ρ,  E(ρ̂^{db}_{b,f}) = ρ,   (10)

Var(ρ̂_{b,f}) = V_{b,f}/k,  with  V_{b,f} = (ξ_{2,2} − ξ_{2,0} − ξ_{1,1}²) ρ² + ξ_{2,0},   (11)

Var(ρ̂^{db}_{b,f}) = V^{db}_{b,f}/k,  with  V^{db}_{b,f} = [ (ξ_{2,2} − ξ_{2,0} − ξ_{1,1}²) ρ² + ξ_{2,0} ] / ξ_{1,1}².   (12)

Normalized Estimator. We also attempt to take advantage of the (beneficial) effect of normalization by defining two normalized estimators; their expectations and variances are summarized in Theorem 2. Theorem 2. As k → ∞, we have

ρ̂_{b,f,n} = ∑_{i=1}^{k} Q_b(x_i) y_i / ( √(∑_{i=1}^{k} Q_b(x_i)²) √(∑_{i=1}^{k} y_i²) ),  E(ρ̂_{b,f,n}) = √(ξ_{1,1}) ρ + O(1/k),   (13)

ρ̂^{db}_{b,f,n} = ρ̂_{b,f,n} / √(ξ_{1,1}),  E(ρ̂^{db}_{b,f,n}) = ρ + O(1/k),   (14)

Var(ρ̂_{b,f,n}) = V_{b,f,n}/k + O(1/k²),  Var(ρ̂^{db}_{b,f,n}) = V^{db}_{b,f,n}/k + O(1/k²),   (15)

V_{b,f,n} = ( γ_{4,0}/(4γ_{2,0}) + (3/4)γ_{2,0} + (1/2)γ_{2,2} ) ρ² − ( γ_{3,1}/γ_{2,0} + γ_{1,3} ) ρ + γ_{2,2}/γ_{2,0},  V^{db}_{b,f,n} = V_{b,f,n}/ξ_{1,1}.   (16)

3.1 Benefits of normalized estimators and debiased estimators Figure 1 plots (in the left two panels) the variance factors of the two debiased estimators ρ̂^{db}_{b,f} and ρ̂^{db}_{b,f,n}, to illustrate the benefits of normalization. (Figure 1. Left panel: the variance factor V^{db}_{b,f}; middle panel: the variance factor V^{db}_{b,f,n} (for the normalized estimator); right panel: the variance ratio V^{db}_{b,f} / V^{db}_{b,f,n}.) The right panel of Figure 1 demonstrates that the variance of the normalized estimator is always smaller, and substantially so as ρ moves away from zero. To elaborate on the benefit of the debiased estimators, we evaluate the mean square errors (MSE): bias² + variance. Given the benefit of normalization, we consider the two normalized estimators:

MSE(ρ̂_{b,f,n}) = (1 − √(ξ_{1,1}))² ρ² + V_{b,f,n}/k + O(1/k²),  MSE(ρ̂^{db}_{b,f,n}) = V_{b,f,n}/(ξ_{1,1} k) + O(1/k²).

Thus, to compare their mean square errors, we can examine the ratio

ξ_{1,1} + k ρ² ξ_{1,1} (1 − √(ξ_{1,1}))² / V_{b,f,n},

which quickly becomes larger than 1 as k increases. Note that ξ_{1,1} ≤ 1, but it is very close to 1 when b ≥ 3. In summary, the MSE of the debiased estimator quickly becomes smaller as k increases. 3.2 Analysis of mis-ordering probabilities in similarity search In similarity search, the estimates of inner products are subsequently used for ordering data vectors to identify the nearest neighbor of a given query. Intuitively, a more accurate estimator should provide a more accurate ordering, but a precise analysis is needed for the "mis-ordering" probabilities. Definition 2.
Suppose u_1, u_2, u_3 ∈ R^d are three data points (with u_1 being a query) with unit norm and pairwise cosine similarities ρ_{12}, ρ_{13} and ρ_{23}, respectively. For an estimator ρ̂, the probability of mis-ordering is defined as P_M(u_1; u_2, u_3) = Pr( ρ̂_{12} > ρ̂_{13} | ρ_{12} < ρ_{13} ). Consider a case where u_3 is the nearest point to u_1 in the data space (which implies ρ_{12} < ρ_{13}). If the estimation gives ρ̂_{12} > ρ̂_{13}, we then make the wrong decision that u_3 is not the nearest neighbor of u_1. Theorem 3. (Asymptotic mis-ordering) Suppose u_1, u_2, u_3 ∈ R^d are three data points (with u_1 being the query) on the unit sphere with pairwise inner products ρ_{12}, ρ_{13} and ρ_{23}, respectively. Consider two estimators ρ̂ and ρ̂′ based on k random projections such that, as k → ∞, the normality ρ̂ ∼ N(αρ, σ̂_ρ²) and ρ̂′ ∼ N(α′ρ, σ̂′_ρ²) holds, with constants α, α′ > 0. Denote δ_ρ² = σ̂_ρ²/α², δ′_ρ² = σ̂′_ρ²/α′², and the correlations C = corr(ρ̂_{12}, ρ̂_{13}), C′ = corr(ρ̂′_{12}, ρ̂′_{13}), respectively. If

δ′_{ρ_{12}} = a δ_{ρ_{12}},  δ′_{ρ_{13}} = a′ δ_{ρ_{13}},  C − a a′ C′ < [ (1 − a²) δ_{ρ_{12}}² + (1 − a′²) δ_{ρ_{13}}² ] / ( 2 δ_{ρ_{12}} δ_{ρ_{13}} ),   (17)

for some 0 < a < 1, 0 < a′ < 1, then as k → ∞ we have P̂_M(u_1; u_2, u_3) > P̂′_M(u_1; u_2, u_3), where P̂_M(u_1; u_2, u_3) and P̂′_M(u_1; u_2, u_3) are the mis-ordering probabilities of ρ̂ and ρ̂′, respectively. Remark. There is an interesting connection with the variances of the aforementioned "debiased estimators". Condition (17) basically assumes that the variance of the debiased ρ̂′ is smaller than that of the debiased ρ̂ at ρ_{12} and ρ_{13}, by factors a and a′ respectively. In the special case where a = a′ and C = C′, the last constraint in (17) reduces to C < (δ_{ρ_{12}}² + δ_{ρ_{13}}²) / (2 δ_{ρ_{12}} δ_{ρ_{13}}), which always holds since the right-hand side is greater than 1. Also, note that, by the Central Limit Theorem, the normality assumption is true for all the estimators discussed in this paper. Although Theorem 3 is asymptotic, it provides valuable insights in the finite-sample case, since statistically a sufficiently large k already gives a good approximation to the normal distribution. The important message of Theorem 3 is that estimators with lower "debiased variance" (δ) tend to have lower mis-ordering probability, which leads to more accurate identification of nearest neighbors in the original data space. This insight is valuable in numerous real-world applications. 4 Scenario 2: Quantization with Different Bits We now consider the more general case (Scenario 2), where the data vectors are LM quantized with different numbers of bits. That is, given observations {(x_i, y_i)}, 1 ≤ i ≤ k, we quantize x_i using b_1 bits and y_i using b_2 bits. Without loss of generality, we assume b_1 < b_2. Furthermore, we denote the two Lloyd-Max quantizers by Q_{b_1} and Q_{b_2}, with distortions D_{b_1} and D_{b_2}, respectively. Similarly to Scenario 1, we define the asymmetric estimator and the corresponding normalized estimator as

ρ̂_{b_1,b_2} = (1/k) ∑_{i=1}^{k} Q_{b_1}(x_i) Q_{b_2}(y_i),  ρ̂_{b_1,b_2,n} = ∑_{i=1}^{k} Q_{b_1}(x_i) Q_{b_2}(y_i) / ( √(∑_{i=1}^{k} Q_{b_1}(x_i)²) √(∑_{i=1}^{k} Q_{b_2}(y_i)²) ).   (18)

As one might expect, the analysis becomes somewhat more difficult. Similarly to Scenario 1, in this section we use the following notations:

ξ_{α,β} = E( Q_{b_1}(x)^α x^β ),  γ_{α,β} = E( Q_{b_2}(x)^α x^β ),  ζ_{α,β} = E( Q_{b_1}(x)^α Q_{b_2}(y)^β ),   (19)

which allow us to express the expectation and variance of ρ̂_{b_1,b_2} as follows:

E(ρ̂_{b_1,b_2}) = ζ_{1,1},  Var(ρ̂_{b_1,b_2}) = V_{b_1,b_2}/k,  V_{b_1,b_2} = ζ_{2,2} − ζ_{1,1}².   (20)

ζ_{1,1} can be expressed as an infinite sum, but it appears difficult to simplify further. Nevertheless, we are able to quantify the expectation of ρ̂_{b_1,b_2} in Theorem 4 below.
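Before stating the theorem, a short numerical sketch may help fix ideas. Below we fit Lloyd-Max quantizers with a sample-based version of Lloyd's algorithm, compute ρ̂_{b_1,b_2} by Monte Carlo, and compare it with the debiased surrogate ρ̂_{b_1,b_2} / ((1 − D_{b_1})(1 − D_{b_2})) suggested by Theorem 4 below. The code is an illustrative sketch under our own choices (function names, quantile initialization, sample sizes) and does not handle degenerate empty quantizer cells.

```python
import numpy as np

def lloyd_max(samples, b, iters=100):
    """Sample-based Lloyd's algorithm: alternate between setting the borders
    to midpoints of the levels and each level to its cell's conditional mean."""
    M = 2 ** b
    mu = np.quantile(samples, (np.arange(M) + 0.5) / M)   # initial levels
    for _ in range(iters):
        t = (mu[:-1] + mu[1:]) / 2                        # decision borders
        idx = np.searchsorted(t, samples)                 # cell index per sample
        mu = np.array([samples[idx == i].mean() for i in range(M)])
    return t, mu

def quantize(x, t, mu):
    return mu[np.searchsorted(t, x)]

rng = np.random.default_rng(0)
train = rng.standard_normal(200_000)                      # f = standard normal
t1, mu1 = lloyd_max(train, b=1)
t2, mu2 = lloyd_max(train, b=3)
D1 = np.mean((train - quantize(train, t1, mu1)) ** 2)     # distortion D_{b1}
D2 = np.mean((train - quantize(train, t2, mu2)) ** 2)     # distortion D_{b2}

rho, k = 0.7, 2_000_000                                   # Monte Carlo check
x = rng.standard_normal(k)
y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(k)
est = np.mean(quantize(x, t1, mu1) * quantize(y, t2, mu2))   # rho_hat_{b1,b2}
print(est, est / ((1 - D1) * (1 - D2)))                   # raw vs. debiased surrogate
```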
Theorem 4. The following two bounds hold for ρ ∈ [−1, 1]:

| E(ρ̂_{b_1,b_2}) − (1 − D_{b_1})(1 − D_{b_2}) ρ | ≤ Δ_1,   (21)

Δ_2 − Δ_1 ≤ | E(ρ̂_{b_1,b_2}) − ρ | ≤ Δ_1 + Δ_2,   (22)

where Δ_1 = √(D_{b_1} D_{b_2}) √(1 − D_{b_1}) √(1 − D_{b_2}) |ρ|³ and Δ_2 = (D_{b_1} + D_{b_2} − D_{b_1} D_{b_2}) |ρ|. Remark. When b_2 → ∞ (i.e., Scenario 1), we have D_{b_2} → 0 and the bound reduces to the equality E(ρ̂_{b_1,∞}) = (1 − D_{b_1}) ρ, which matches the result in Section 3. Eq. (22) provides upper and lower bounds on the absolute bias of ρ̂_{b_1,b_2}. When b_1 = b_2 (i.e., the symmetric quantization case), Theorem 5 presents more refined bounds on the bias of ρ̂_{b_1,b_2}. Theorem 5. (Symmetric quantization) Suppose b_1 = b_2 = b. For ρ ∈ [−1, 1], we have

(2D_b − D_b²)|ρ| − D_b(1 − D_b)|ρ|³ ≤ | E(ρ̂_{b,b}) − ρ | ≤ (2D_b − D_b²)|ρ|.   (23)

Remark. Compared to [29], which derived |E(ρ̂_{b,b}) − ρ| ≤ 2D_b|ρ|, our bounds are tighter. What about the debiased estimator of ρ̂_{b_1,b_2}? This is slightly tricky, because E(ρ̂_{b_1,b_2}) = ζ_{1,1} cannot be explicitly expressed as cρ for some constant c (otherwise the debiased estimator would simply be ρ̂_{b_1,b_2}/c). In Theorem 4, Eq. (21) implies that the expectation of ρ̂_{b_1,b_2} is close to (1 − D_{b_1})(1 − D_{b_2}) ρ. Thus, we recommend ρ̂_{b_1,b_2} / ((1 − D_{b_1})(1 − D_{b_2})) as a surrogate for the debiased estimator. Next, we provide the expectation and variance of the normalized estimator in Theorem 6. Theorem 6. (Normalized estimator) As k → ∞, we have

E(ρ̂_{b_1,b_2,n}) = ζ_{1,1} / √(ξ_{2,0} γ_{2,0}) + O(1/k),  Var(ρ̂_{b_1,b_2,n}) = V_{b_1,b_2,n}/k + O(1/k²),   (24)

V_{b_1,b_2,n} = (ζ_{2,2} − ζ_{1,1}²)/(ξ_{2,0} γ_{2,0}) − (ζ_{1,1} ζ_{3,1} − ζ_{1,1}² ξ_{2,0})/(ξ_{2,0}² γ_{2,0}) − (ζ_{1,1} ζ_{1,3} − ζ_{1,1}² γ_{2,0})/(ξ_{2,0} γ_{2,0}²) + (ζ_{1,1}² ζ_{2,2} − ζ_{1,1}² ξ_{2,0} γ_{2,0})/(2 ξ_{2,0}² γ_{2,0}²) + (ζ_{1,1}² ξ_{4,0} − ζ_{1,1}² ξ_{2,0}²)/(4 ξ_{2,0}³ γ_{2,0}) + (ζ_{1,1}² γ_{4,0} − ζ_{1,1}² γ_{2,0}²)/(4 ξ_{2,0} γ_{2,0}³).   (25)

Remark. When b_2 = ∞, the expected value of ρ̂_{b_1,b_2,n} reduces to that of ρ̂_{b_1,f,n} in Scenario 1. Additionally, we have ζ_{1,1} = ζ_{2,0} ρ, γ_{2,0} = 1, and γ_{4,0} = 3; it is easy to check that the expression for the variance reduces to the corresponding formula in Theorem 2. Also, note that ξ_{2,0} = 1 − D_{b_1}, γ_{2,0} = 1 − D_{b_2}, and ζ_{1,1} ≈ (1 − D_{b_1})(1 − D_{b_2}) ρ. This means that in practice we can use ρ̂_{b_1,b_2,n} / √((1 − D_{b_1})(1 − D_{b_2})) as a surrogate for the debiased version of ρ̂_{b_1,b_2,n}. We plot the related results in Figure 2, which verifies the theory in Theorems 4, 5 and 6. 5 Monotonicity of Inner Product Estimates In applications such as nearest neighbor retrieval, the order of distances tends to matter more than their exact values. Given an estimator ρ̂, one would hope that E(ρ̂) is monotone in ρ. This is indeed the case in the full-precision situation: recall from Section 2 that, given i.i.d. observations {(x_i, y_i)}, i = 1, 2, ..., k, the full-precision estimator ρ̂_f = (1/k) ∑_{i=1}^{k} x_i y_i is monotone in ρ in expectation, because E(ρ̂_f) = ρ. Naturally, one may ask whether the expectations of our quantized estimators, e.g., ρ̂_{b_1,b_2} = (1/k) ∑_{i=1}^{k} Q_{b_1}(x_i) Q_{b_2}(y_i), are also monotone in ρ. This turns out to be a non-trivial question, which we solve rigorously in several stages. Our analysis is not restricted to LM quantizers; to carry it out, we need the following definition of an "increasing quantizer". Definition 3. (Increasing quantizer) Let Q be an M-level quantizer with borders t_0 < ··· < t_M and reconstruction levels µ_1, ..., µ_M. We say that Q is an increasing quantizer if µ_1 < ··· < µ_M. To proceed, we prove the following three lemmas for increasing quantizers. Lemma 1. (1-bit vs. others) Suppose Q_{b_1}, Q_{b_2} are increasing quantizers symmetric about 0, with b_1 ≥ 1 and b_2 = 1. Then E(Q_{b_1}(x) Q_{b_2}(y)) is strictly increasing in ρ on [−1, 1]. Lemma 2. (2-bit vs.
2-bit) Suppose Q_{b_1}, Q_{b_2} are any two increasing quantizers symmetric about 0, with b_1 = b_2 = 2. Then E(Q_{b_1}(x) Q_{b_2}(y)) is strictly increasing in ρ on [−1, 1]. Lemma 3. (Universal decomposition) For any increasing discrete quantizer Q_b, b ≥ 3, which is symmetric about 0, there exist a 2-bit symmetric increasing quantizer Q_2 and a (b−1)-bit symmetric increasing quantizer Q_{b−1} such that Q_b = Q_{b−1} + Q_2. Once we have the above lemmas, we are ready to prove the monotonicity of E(Q_{b_1}(x) Q_{b_2}(y)). Theorem 7. (Monotonicity) For any increasing quantizers Q_{b_1} and Q_{b_2} symmetric about 0 with b_1 ≥ 1 and b_2 ≥ 1 bits, the function E(Q_{b_1}(x) Q_{b_2}(y)) is increasing in ρ. Proof. By Lemma 1, we know that the statement is valid for b_1 = 1 and arbitrary b_2. Now we look at the case where b_1 ≥ 2, b_2 ≥ 2. By (repeated application of) Lemma 3, we can always write

Q_{b_1}(x) = ∑_{i=1}^{b_1−1} Q̃_2^{(i)}(x),  Q_{b_2}(y) = ∑_{j=1}^{b_2−1} Q̂_2^{(j)}(y),

where Q̃_2^{(1)}, ..., Q̃_2^{(b_1−1)} and Q̂_2^{(1)}, ..., Q̂_2^{(b_2−1)} are two sets of symmetric increasing 2-bit quantizers. Thus,

∂E(Q_{b_1}(x) Q_{b_2}(y))/∂ρ = ∂E( ∑_{i=1}^{b_1−1} Q̃_2^{(i)}(x) ∑_{j=1}^{b_2−1} Q̂_2^{(j)}(y) )/∂ρ = ∑_{i=1}^{b_1−1} ∑_{j=1}^{b_2−1} ∂E( Q̃_2^{(i)}(x) Q̂_2^{(j)}(y) )/∂ρ > 0,

where the second equality is due to the linearity of expectation and differentiation, and the inequality holds because of Lemma 2. Therefore, E(Q_{b_1}(x) Q_{b_2}(y)) is increasing in ρ for any b_1 ≥ 1 and b_2 ≥ 1. Recall that, in Section 3.2, we proved the result on the mis-ordering probability, Theorem 3, which assumes estimators whose expectations are monotone in ρ. Theorem 7 therefore provides the necessary support for the assumption in Theorem 3. 6 Empirical Study: Similarity Search In this section, we test the proposed estimators on 3 datasets from the UCI repository (Table 1) [16]. The experiments clearly confirm that the normalization step uniformly improves the search accuracy. The results also, to an extent, illustrate the influence of the mis-ordering probability studied in Theorem 3. For each dataset, all the examples are preprocessed to have unit norm. The evaluation metric we adopt is the 1-NN precision, i.e., the proportion of nearest neighbors (NN) we can recover from the nearest neighbors estimated using quantized random projections, averaged over all the examples. We summarize the results in Figure 3. First of all, we can see that, as the number of bits increases, the performance of the quantized estimators converges to that of the full-precision estimator, as expected. Importantly, the normalization step substantially improves the performance of the estimators, as seen by comparing Column 2 with Column 1 (for Scenario 1), and Column 4 with Column 3 (for Scenario 2). In addition, we can to an extent validate the assertions of Theorem 3, which states that a smaller variance of the debiased estimators can improve the NN recovery precision. • In Figure 1 (left panel), we see that the variance of the debiased estimate ρ̂^{db}_{b,f} with b = 1 is much smaller than with b ≥ 2 in the high similarity region (e.g. |ρ| > 0.8), and roughly the same at ρ = 0.6. Since Arcene and COIL20 have high mean 1-NN ρ (0.86 and 0.93, respectively), Theorem 3 suggests that the cosine estimate ρ̂^{db}_{1,f} should (in general) have a smaller mis-ordering probability than b ≥ 2, implying higher 1-NN precision. On the other hand, the average 1-NN ρ of BASEHOCK is 0.59, so ρ̂^{db}_{b,f} with any b = 1, 2, ..., ∞ would likely give similar performance. These claims are consistent with Column 1 of Figure 3.
• The variance of the debiased normalized estimator ρ̂^{db}_{b,f,n} (Figure 1, middle panel) decreases as b increases, uniformly for any ρ. Hence, by Theorem 3, we expect the 1-NN precision to increase with larger b on all 3 datasets, as confirmed by Column 2 of Figure 3. 7 Conclusion In this paper, we conduct a comprehensive study of estimating inner product similarities from random projections followed by asymmetric quantization. This setting is theoretically interesting and also has many practical applications. For example, in a retrieval system, data vectors (after random projections) in the repository are quantized to reduce storage and communication; when a new query vector arrives, it does not have to be quantized. Another example of asymmetric quantization arises when data are collected from different sources, each with its own quantization strategy. In this study, we propose a series of estimators for asymmetric quantization, starting with the simple basic estimator, then the normalized estimator, and then the debiased estimators. We provide a thorough analysis of the estimation errors. Furthermore, we analyze the "mis-ordering" probabilities and the monotonicity properties of the estimators. While our methods and analyses are largely based on the classical Lloyd-Max (LM) method, they can be extended to other, more general quantization schemes.
1. What are the significant contributions of the paper, particularly in terms of methodology? 2. What are the weaknesses of the paper regarding its readability, organization, and cohesiveness? 3. How does the reviewer assess the relevance and connection between the different results presented in the paper? 4. What is the main topic of the paper, and how does it relate to the presented results? 5. Does the paper provide sufficient formal definitions and explanations of the key concepts?
Review
Review This is a tricky article to review. It clearly makes some significant contributions, especially when it comes to a methodology for obtaining a quantization scheme that minimizes the debiased variance. On the other hand, it is very hard to read, with multiple grammatical and syntactic mistakes. The main topic of the paper, asymmetric quantization, is not formally defined until page 6 and then is discussed for less than half a page. Meanwhile other results, mostly having to do with debiased estimator variance for the symmetric case, cover the lion's share of the paper. As far as I can tell these results end up having no connection with the asymmetric case. In the end this paper feels like a collection of cool results that were rushed into a paper with no common thread; it doesn't form a cohesive unit. I don't like the idea of turning down a valuable paper for structure alone, but this paper just lacks the necessary polish to be published (which it ultimately should be).
NIPS
Title Random Projections with Asymmetric Quantization Abstract The method of random projection has been a popular tool for data compression, similarity search, and machine learning. In many practical scenarios, applying quantization on randomly projected data can be very helpful to further reduce storage cost and facilitate more efficient retrieval, while suffering only a little loss in accuracy. In real-world applications, however, data collected from different sources may be quantized under different schemes, which calls for a need to study the asymmetric quantization problem. In this paper, we investigate the cosine similarity estimators derived in this setting under the Lloyd-Max (LM) quantization scheme. We thoroughly analyze the biases and variances of a series of estimators, including the basic simple estimators, their normalized versions, and their debiased versions. Furthermore, by studying monotonicity, we show that the expectation of the proposed estimators increases with the true cosine similarity, for a broader family of stair-shaped quantizers. Experiments on nearest neighbor search justify the theory and illustrate the effectiveness of our proposed estimators. 1 Introduction The method of random projections (RP) [35] has become a popular technique to reduce data dimensionality while preserving distances between data points, as guaranteed by the celebrated Johnson-Lindenstrauss (J-L) Lemma and its variants [24, 12, 1]. Given a high dimensional dataset, the algorithm projects each data point onto a lower-dimensional random subspace. There is a very rich literature on the theory and applications of random projections, such as clustering, classification, near neighbor search, bio-informatics, compressed sensing, etc. [22, 10, 4, 6, 8, 17, 18, 28, 15, 7, 19, 11, 9]. In recent years, "random projections + quantization" has been an active research topic. That is, the projected data, which are in general real-valued (i.e., infinite precision), are quantized into integers with a small number of bits. Applying quantization on top of random projections has at least two major advantages: (i) the storage cost is further (substantially) reduced; and (ii) some important applications, such as hashing-table-based near neighbor search, require using quantized data for indexing purposes. The pioneering example of quantized random projections is the so-called "1-bit" (sign) random projection, initially used for analyzing the MaxCut problem [20] and later adopted for near neighbor search [8] and compressed sensing [5, 23, 25]. As one would expect, storing merely 1 bit per projected data value may in many situations suffer a substantial loss of accuracy, compared to using random projections with full (infinite) precision. There have been various studies on (symmetrically) quantized random projections beyond the 1-bit scheme, e.g., [13, 37, 26, 29, 31]. In this paper, we move further to studying "asymmetric quantization" of random projections, a relatively new problem arising from practical scenarios that is also mathematically very interesting. Every day, data collection takes place in every possible setting one can think of, but it is often impractical to impose a universal encoding strategy on data storage across all of them. As a consequence, it becomes a meaningful task to look into estimation problems with data encoded by different algorithms, namely, the asymmetric case.
In this paper, we provide some insights into this type of problem; in particular, we consider recovering inner products from asymmetrically quantized random projections, arising from the following two practical scenarios. • Scenario 1: quantization vs. full-precision. Consider, for example, a retrieval system which uses random projections to process every data vector. To save storage, the projected data stored in the repository are quantized into a small number of bits. When a query data vector arrives, it is first processed by random projections. We then have the option of quantizing the projected query vector before conducting the similarity search (with vectors in the repository); but we do not have to do the quantization step, since we still have the projected query vector in full precision (why waste it?). This situation hence creates the "quantization vs. full-precision" estimation problem. This setting is natural and practical, and the estimation problem has been studied in the literature, for example [14, 21, 27]. • Scenario 2: quantization with different bits. In applications such as large ad hoc networks [36, 30], data are collected and processed by different nodes (e.g., sensors or mobile devices) at different locations before being sent to the central unit or cloud server. However, distinct nodes may use different quantization methods (or different numbers of bits) for many possible reasons, e.g., memory capacity or purpose of data usage. In this situation, information retrieval among data sources using different quantization schemes could be on the cards. As a tightly related topic, asymmetric distributed source coding (with different bits from different sources) has also been considered in [3, 34], among others, for sensor networks. Scenario 1 is in fact an important special case of Scenario 2, in which one source of data is quantized with infinitely many bits. In this paper, we provide a thorough statistical analysis of the above two scenarios. Our contributions. The major contributions of this paper include the following: • In Section 3, we provide the bias and variance of the linear and normalized inner product estimators in Scenario 1. We reveal an interesting connection between the variance of the debiased inner product estimator and similarity search, which is very helpful in practice. • In Sections 4 and 5, we conduct a statistical analysis of Scenario 2, and prove the monotonicity of a large family of asymmetric quantized inner product estimators, which assures their validity for practical use. A new bound on the bias is also derived for the symmetric case. • In Section 6, an empirical study on various real-world datasets confirms the theoretical findings and illustrates the effectiveness of the proposed quantization schemes. 2 Preliminaries Random Projections. Let U = [u_1, ..., u_n]^⊤ ∈ R^{n×d} be the original data matrix (with d possibly being large). Random projections are realized by Z = [z_1, ..., z_n]^⊤ = U × R, where R ∈ R^{d×k}, k ≪ d, is a random matrix with i.i.d. standard Gaussian entries. Let ‖·‖_2 denote the l_2 Euclidean norm. Throughout this paper, we assume that every data point is normalized to unit norm, i.e., ‖u_i‖_2 = 1, 1 ≤ i ≤ n. (Normalizing each data vector to unit norm is a standard preprocessing procedure for many applications such as clustering and classification; we adopt this assumption merely for convenience. When the data are not normalized, our results still hold, although we will need to store the values of the norms.) We will hence use the terms "inner product" and "cosine similarity" interchangeably. For convenience of presentation, our results (estimators and properties) will be given for a pair of data vectors, u_i and u_j (and correspondingly z_i and z_j).
Let $\rho = \langle u_i, u_j\rangle$ be the inner product between $u_i$ and $u_j$. We also denote $x = z_i$ and $y = z_j$. It is then easy to verify that $(x, y)$ is bivariate normal:

$$\begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right). \quad (1)$$

Lloyd-Max (LM) quantization [33, 32]. Assume a random signal model with signals generated from a probability distribution with density $f$, i.e., $X \sim f$. An $M$-level scalar quantizer $q_M(\cdot)$ is specified by $M+1$ decision borders $t_0 < t_1 < \cdots < t_M$ and $M$ reconstruction levels (or codes) $\mu_i$, $i = 1, \dots, M$, with the quantizing operator defined as

$$q_M(x) = \mu_{i^*}, \qquad i^* = \{\, i : t_{i-1} < x \le t_i,\ 1 \le i \le M \,\}. \quad (2)$$

The "distortion" is an important quantity that measures how much information is lost from the original signal due to quantization. In this paper, we will also assume $M = 2^b$, with $b = 1, 2, \dots$ being the number of bits used by the quantizer; thus, we will write $q_b(\cdot)$ instead of $q_M(\cdot)$.

Definition 1. The distortion of a $b$-bit quantizer $q_b(\cdot)$ with respect to a distribution $f$ is defined as

$$E\left( (X - q_b(X))^2 \right) = \int (x - q_b(x))^2 f(x)\,dx = \sum_{i=1}^{2^b} \int_{t_{i-1}}^{t_i} (x - \mu_i)^2 f(x)\,dx. \quad (3)$$

In this paper, $f$ is the standard normal density, i.e., $f(x) = \phi(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ in the conventional Gaussian notation. Also, we will use $Q_b$ to denote the Lloyd-Max (LM) quantizer which minimizes the distortion, and $D_b$ to denote the corresponding value of the distortion:

$$Q_b = \arg\min_q E\left( (X - q_b(X))^2 \right), \qquad D_b = E\left( (X - Q_b(X))^2 \right). \quad (4)$$

A basic identity of the LM quantizer is that $E(Q_b^2(X)) = E(Q_b(X)X)$. In practice, Lloyd's algorithm [32] is used to find the solution; it alternates between updating the borders and the reconstruction points until convergence (and the convergence is guaranteed).

Estimates using full-precision RPs. Consider observations $(x_i, y_i) \overset{\text{i.i.d.}}{\sim} N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right)$, $1 \le i \le k$, as in (1). The task is to estimate $\rho$. One can use the usual simple estimator

$$\hat\rho_f = \frac{1}{k}\sum_{i=1}^k x_i y_i, \qquad E(\hat\rho_f) = \rho, \qquad Var(\hat\rho_f) = \frac{1+\rho^2}{k}, \quad (5)$$

where $E(\hat\rho)$ denotes the expectation and $Var(\hat\rho)$ the variance. Note that the variance grows as $|\rho|$ increases. One can instead take advantage of the following so-called "normalized estimator":

$$\hat\rho_{f,n} = \frac{\sum_{i=1}^k x_i y_i}{\sqrt{\sum_{i=1}^k x_i^2}\,\sqrt{\sum_{i=1}^k y_i^2}}, \qquad E(\hat\rho_{f,n}) = \rho + O\!\left(\tfrac{1}{k}\right), \qquad Var(\hat\rho_{f,n}) = \frac{(1-\rho^2)^2}{k} + O\!\left(\tfrac{1}{k^2}\right). \quad (6)$$

$\hat\rho_{f,n}$ is nearly unbiased and substantially reduces the variance, especially near the two extreme points $\rho = \pm 1$. We refer readers to the classical textbook [2] and recent papers [28, 27] for more details.

Estimates using symmetric LM quantized RPs. [29] studied the inner product estimator under the LM quantization scheme, analyzing the biases and variances of estimators in the symmetric case, that is, when the observations $x_i$ and $y_i$ are quantized by the same LM scheme with the same number of bits ($b$). In this paper, we study the asymmetric setting, using $b_1$ bits for quantizing $x_i$ and $b_2$ bits for $y_i$. The work of [29] is thus a special case of our results (i.e., $b_1 = b_2$). Interestingly, our analysis also leads to a more refined bound on the estimation bias in the symmetric case compared to the corresponding bound in [29]. See Section 4 for the detailed results.
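The LM quantizer has no closed form in general, but Lloyd's algorithm as described above is straightforward to sketch for the standard normal. The following is a minimal illustration in Python (function names are ours, not the paper's); the level update uses the standard truncated-Gaussian conditional-mean formula. As a sanity check, for $b = 1$ it recovers the levels $\pm\sqrt{2/\pi} \approx \pm 0.7979$, consistent with $\xi_{1,1} = 1 - D_1 = 2/\pi \approx 0.6366$ quoted in the next section.

```python
import numpy as np
from scipy.stats import norm

def lloyd_max_gaussian(b, iters=200):
    """Lloyd's algorithm for a b-bit Lloyd-Max quantizer of N(0,1).

    Alternates between (i) setting each border to the midpoint of adjacent
    reconstruction levels and (ii) setting each level to the conditional
    mean of N(0,1) on its cell. Returns (borders incl. +/-inf, levels).
    """
    M = 2 ** b
    # Initialize levels at equally spaced Gaussian quantiles.
    levels = norm.ppf((np.arange(M) + 0.5) / M)
    for _ in range(iters):
        borders = (levels[:-1] + levels[1:]) / 2.0
        t = np.concatenate(([-np.inf], borders, [np.inf]))
        # E[X | t_i < X <= t_{i+1}] = (phi(t_i) - phi(t_{i+1})) / (Phi(t_{i+1}) - Phi(t_i))
        num = norm.pdf(t[:-1]) - norm.pdf(t[1:])
        den = norm.cdf(t[1:]) - norm.cdf(t[:-1])
        levels = num / den
    return t, levels

def quantize(x, t, levels):
    """Apply q(x) = mu_{i*} where i* satisfies t_{i*-1} < x <= t_{i*}."""
    idx = np.searchsorted(t[1:-1], x)  # cell index for each entry of x
    return levels[idx]
```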
3 Scenario 1: Quantization vs. Full-precision

Recall that we have i.i.d. observations $\{x_i, y_i\}$, $i = 1, 2, \dots, k$, from a standard bivariate normal with $x_i \sim N(0,1)$, $y_i \sim N(0,1)$, and $E(x_i y_i) = \rho$. In this section, we study Scenario 1: quantization vs. full-precision. That is, we quantize $x_i$ with $b$ bits and leave $y_i$ intact. The task is to estimate $\rho$ from $\{Q_b(x_i), y_i\}$, $i = 1, 2, \dots, k$. We can still try a simple estimator similar to (5):

$$\hat\rho_{b,f} = \frac{1}{k}\sum_{i=1}^k Q_b(x_i)\, y_i. \quad (7)$$

As one would expect, the estimator $\hat\rho_{b,f}$ is no longer unbiased. We can show that $E(\hat\rho_{b,f}) = \xi_{1,1}\rho$. Hence, we can attempt to remove the bias by using the following "debiased estimator":

$$\hat\rho^{db}_{b,f} = \frac{\hat\rho_{b,f}}{\xi_{1,1}} = \frac{1}{k}\,\frac{1}{\xi_{1,1}}\sum_{i=1}^k Q_b(x_i)\, y_i. \quad (8)$$

We still need to define $\xi_{1,1}$. More generally, and analogous to the notation in [29], we define

$$\gamma_{\alpha,\beta} = E\left( Q_b(x)^\alpha y^\beta \right), \qquad \xi_{\alpha,\beta} = E\left( Q_b(x)^\alpha x^\beta \right). \quad (9)$$

That is, $\xi_{1,1} = E(Q_b(x)x)$. Note that $\xi_{\alpha,\beta}$ can be represented in terms of $\gamma_{\alpha,\beta}$, but we use both for convenience. Also note that $\xi_{1,1} = \xi_{2,0} = 1 - D_b$ from the definitions. For $b = 1, 2, 3, 4, \infty$, we can compute $\xi_{1,1} = 0.6366,\ 0.8825,\ 0.9655,\ 0.9905,\ 1$, respectively (to four decimal places). In fact, it is also known that $D_b \approx \frac{\sqrt{3}\,\pi}{2}\,2^{-2b}$, i.e., the bias decays at the rate $O(2^{-2b})$. In the following, Theorem 1 summarizes the expectations and variances of the two estimators $\hat\rho_{b,f}$ and $\hat\rho^{db}_{b,f}$.

Theorem 1.
$$E(\hat\rho_{b,f}) = \xi_{1,1}\rho, \qquad E\left(\hat\rho^{db}_{b,f}\right) = \rho, \quad (10)$$
$$Var(\hat\rho_{b,f}) = \frac{V_{b,f}}{k}, \quad \text{with } V_{b,f} = (\xi_{2,2} - \xi_{2,0} - \xi_{1,1}^2)\rho^2 + \xi_{2,0}, \quad (11)$$
$$Var\left(\hat\rho^{db}_{b,f}\right) = \frac{V^{db}_{b,f}}{k}, \quad \text{with } V^{db}_{b,f} = \frac{(\xi_{2,2} - \xi_{2,0} - \xi_{1,1}^2)\rho^2 + \xi_{2,0}}{\xi_{1,1}^2}. \quad (12)$$

Normalized estimator. We also take advantage of the (beneficial) effect of normalization by defining two normalized estimators; their expectations and variances are summarized in Theorem 2.

Theorem 2. As $k \to \infty$, we have
$$\hat\rho_{b,f,n} = \frac{\sum_{i=1}^k Q_b(x_i) y_i}{\sqrt{\sum_{i=1}^k Q_b^2(x_i)}\,\sqrt{\sum_{i=1}^k y_i^2}}, \qquad E(\hat\rho_{b,f,n}) = \sqrt{\xi_{1,1}}\,\rho + O\!\left(\tfrac{1}{k}\right), \quad (13)$$
$$\hat\rho^{db}_{b,f,n} = \frac{\hat\rho_{b,f,n}}{\sqrt{\xi_{1,1}}}, \qquad E(\hat\rho^{db}_{b,f,n}) = \rho + O\!\left(\tfrac{1}{k}\right), \quad (14)$$
$$Var(\hat\rho_{b,f,n}) = \frac{V_{b,f,n}}{k} + O\!\left(\tfrac{1}{k^2}\right), \qquad Var(\hat\rho^{db}_{b,f,n}) = \frac{V^{db}_{b,f,n}}{k} + O\!\left(\tfrac{1}{k^2}\right), \quad (15)$$
$$V_{b,f,n} = \left( \frac{\gamma_{4,0}}{4\gamma_{2,0}} + \frac{3}{4}\gamma_{2,0} + \frac{1}{2}\gamma_{2,2} \right)\rho^2 - \left( \frac{\gamma_{3,1}}{\gamma_{2,0}} + \gamma_{1,3} \right)\rho + \frac{\gamma_{2,2}}{\gamma_{2,0}}, \qquad V^{db}_{b,f,n} = \frac{V_{b,f,n}}{\xi_{1,1}}. \quad (16)$$

3.1 Benefits of normalized estimators and debiased estimators

Figure 1 plots (in the left two panels) the variance factors of the two debiased estimators $\hat\rho^{db}_{b,f}$ and $\hat\rho^{db}_{b,f,n}$, to illustrate the benefit of normalization. The right panel of Figure 1 demonstrates that the variance of the normalized estimator is always smaller, and substantially so as $\rho$ moves away from zero.

[Figure 1. Left panel: the variance factor $V^{db}_{b,f}$; middle panel: the variance factor $V^{db}_{b,f,n}$ (for the normalized estimator); right panel: the variance ratio $V^{db}_{b,f}/V^{db}_{b,f,n}$.]

To elaborate on the benefit of debiased estimators, we evaluate the mean square errors (MSE): bias$^2$ + variance. Given the benefit of normalization, we consider the two normalized estimators:
$$\mathrm{MSE}(\hat\rho_{b,f,n}) = \left(1 - \sqrt{\xi_{1,1}}\right)^2 \rho^2 + \frac{V_{b,f,n}}{k} + O\!\left(\tfrac{1}{k^2}\right), \qquad \mathrm{MSE}\left(\hat\rho^{db}_{b,f,n}\right) = \frac{V_{b,f,n}}{\xi_{1,1}\, k} + O\!\left(\tfrac{1}{k^2}\right).$$
Thus, to compare their mean square errors, we can examine the ratio
$$\xi_{1,1} + k\rho^2\,\frac{\xi_{1,1}\left(1 - \sqrt{\xi_{1,1}}\right)^2}{V_{b,f,n}},$$
which quickly becomes larger than 1 as $k$ increases. Note that $\xi_{1,1} \le 1$, but it is very close to 1 when $b \ge 3$. In summary, the MSE of the debiased estimator quickly becomes the smaller one as $k$ increases.

3.2 Analysis of mis-ordering probabilities in similarity search

In similarity search, the estimates of inner products are subsequently used for ordering data vectors so as to identify the nearest neighbor of a given query. Intuitively, a more accurate estimator should provide a more accurate ordering, but a precise analysis is needed for the "mis-ordering" probabilities.
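Before turning to the mis-ordering analysis, Theorems 1 and 2 are easy to check numerically. The sketch below is our own illustration (not code from the paper) and reuses the hypothetical `lloyd_max_gaussian`/`quantize` helpers from the earlier sketch; both debiased estimators, (8) and (14), should concentrate around the true $\rho$.

```python
import numpy as np

def simulate_scenario1(rho, k, b, trials=2000, seed=0):
    """Monte Carlo check of the Scenario-1 debiased estimators.

    x is quantized with b bits, y is kept in full precision.
    Returns (mean, std) for the debiased estimator (8) and its
    normalized counterpart (14).
    """
    rng = np.random.default_rng(seed)
    t, levels = lloyd_max_gaussian(b)
    g = rng.standard_normal(10**6)
    xi11 = np.mean(quantize(g, t, levels) * g)   # xi_{1,1} = 1 - D_b
    db, dbn = np.empty(trials), np.empty(trials)
    for j in range(trials):
        x = rng.standard_normal(k)
        y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(k)
        qx = quantize(x, t, levels)
        db[j] = np.mean(qx * y) / xi11                                   # eq. (8)
        dbn[j] = (qx @ y) / (np.linalg.norm(qx) * np.linalg.norm(y)
                             * np.sqrt(xi11))                            # eq. (14)
    return (db.mean(), db.std()), (dbn.mean(), dbn.std())

# Example: both means should be close to 0.8, with the normalized
# version exhibiting a visibly smaller standard deviation.
# print(simulate_scenario1(rho=0.8, k=256, b=2))
```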
Definition 2. Suppose $u_1, u_2, u_3 \in \mathbb{R}^d$ are three data points (with $u_1$ being a query) with unit norm and pairwise cosine similarities $\rho_{12}$, $\rho_{13}$ and $\rho_{23}$, respectively. For an estimator $\hat\rho$, the probability of mis-ordering is defined as
$$P_M(u_1; u_2, u_3) = \Pr\left( \hat\rho_{12} > \hat\rho_{13} \,\middle|\, \rho_{12} < \rho_{13} \right).$$

Consider a case where $u_3$ is the nearest point to $u_1$ in the data space (which implies $\rho_{12} < \rho_{13}$). If the estimation gives $\hat\rho_{12} > \hat\rho_{13}$, we then make the wrong decision that $u_3$ is not the nearest neighbor of $u_1$.

Theorem 3. (Asymptotic mis-ordering) Suppose $u_1, u_2, u_3 \in \mathbb{R}^d$ are three data points (with $u_1$ being the query) on the unit sphere with pairwise inner products $\rho_{12}$, $\rho_{13}$ and $\rho_{23}$, respectively. Denote by $\hat\rho$ and $\hat\rho'$ two estimators based on $k$ random projections such that, as $k \to \infty$, the normality $\hat\rho \sim N(\alpha\rho, \hat\sigma_\rho^2)$ and $\hat\rho' \sim N(\alpha'\rho, \hat\sigma'^2_\rho)$ holds, with constants $\alpha, \alpha' > 0$. Denote $\delta_\rho^2 = \frac{\hat\sigma_\rho^2}{\alpha^2}$, $\delta'^2_\rho = \frac{\hat\sigma'^2_\rho}{\alpha'^2}$, and the correlations $C = \mathrm{corr}(\hat\rho_{12}, \hat\rho_{13})$, $C' = \mathrm{corr}(\hat\rho'_{12}, \hat\rho'_{13})$, respectively. If
$$\delta'_{\rho_{12}} = a\,\delta_{\rho_{12}}, \qquad \delta'_{\rho_{13}} = a'\,\delta_{\rho_{13}}, \qquad C - aa'C' < \frac{(1-a^2)\,\delta_{\rho_{12}}^2 + (1-a'^2)\,\delta_{\rho_{13}}^2}{2\,\delta_{\rho_{12}}\delta_{\rho_{13}}}, \quad (17)$$
with some $0 < a < 1$, $0 < a' < 1$, then as $k \to \infty$ we have $\hat P_M(u_1; u_2, u_3) > \hat P'_M(u_1; u_2, u_3)$, where $\hat P_M(u_1; u_2, u_3)$ and $\hat P'_M(u_1; u_2, u_3)$ are the mis-ordering probabilities of $\hat\rho$ and $\hat\rho'$, respectively.

Remark. There is an interesting connection with the variances of the aforementioned "debiased estimators". Condition (17) basically assumes that the variance of the debiased $\hat\rho'$ is smaller than that of the debiased $\hat\rho$ at $\rho_{12}$ and $\rho_{13}$, by factors $a$ and $a'$ respectively. In the special case where $a = a'$ and $C = C'$, the last constraint in (17) reduces to $C < \frac{\delta_{\rho_{12}}^2 + \delta_{\rho_{13}}^2}{2\delta_{\rho_{12}}\delta_{\rho_{13}}}$, which always holds since the right-hand side is greater than 1. Also, note that, by the Central Limit Theorem, the normality assumption holds for all the estimators discussed in this paper.

Although Theorem 3 is asymptotic, it provides valuable insights in the finite-sample case as well, since statistically a moderately large $k$ already yields a good approximation to the normal distribution. The important message of Theorem 3 is that estimators with lower "debiased variance" ($\delta$) tend to have lower mis-ordering probability, which leads to more accurate identification of nearest neighbors in the original data space. This is highly relevant in numerous real-world applications.

4 Scenario 2: Quantization with Different Bits

We now consider the more general case (Scenario 2), where the data vectors are LM quantized with different numbers of bits. That is, given observations $\{x_i, y_i\}$, $1 \le i \le k$, we quantize $x_i$ using $b_1$ bits and $y_i$ using $b_2$ bits. Without loss of generality, we assume $b_1 < b_2$. Furthermore, we denote the two Lloyd-Max quantizers by $Q_{b_1}$ and $Q_{b_2}$, with distortions $D_{b_1}$ and $D_{b_2}$, respectively. Similar to Scenario 1, we define the asymmetric estimator and the corresponding normalized estimator as

$$\hat\rho_{b_1,b_2} = \frac{1}{k}\sum_{i=1}^k Q_{b_1}(x_i)\, Q_{b_2}(y_i), \qquad \hat\rho_{b_1,b_2,n} = \frac{\sum_{i=1}^k Q_{b_1}(x_i)\, Q_{b_2}(y_i)}{\sqrt{\sum_{i=1}^k Q_{b_1}^2(x_i)}\,\sqrt{\sum_{i=1}^k Q_{b_2}^2(y_i)}}. \quad (18)$$

As one might expect, the analysis becomes somewhat more difficult. Similar to the analysis for Scenario 1, in this section we will use the following notation:
$$\xi_{\alpha,\beta} = E\left( Q_{b_1}(x)^\alpha x^\beta \right), \qquad \gamma_{\alpha,\beta} = E\left( Q_{b_2}(x)^\alpha x^\beta \right), \qquad \zeta_{\alpha,\beta} = E\left( Q_{b_1}(x)^\alpha Q_{b_2}(y)^\beta \right), \quad (19)$$
which allows us to express the expectation and variance of $\hat\rho_{b_1,b_2}$ as follows:
$$E(\hat\rho_{b_1,b_2}) = \zeta_{1,1}, \qquad Var(\hat\rho_{b_1,b_2}) = \frac{V_{b_1,b_2}}{k}, \qquad V_{b_1,b_2} = \zeta_{2,2} - \zeta_{1,1}^2. \quad (20)$$

$\zeta_{1,1}$ can be expressed as an infinite sum, but it appears difficult to simplify it further.
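For concreteness, here is a minimal sketch of the two Scenario-2 estimators in (18). This is our own illustration (not code from the paper), again reusing the hypothetical `quantize` helper from the earlier Lloyd-Max sketch; `(t1, lv1)` and `(t2, lv2)` denote the borders/levels of the $b_1$-bit and $b_2$-bit quantizers.

```python
import numpy as np

def scenario2_estimators(x, y, t1, lv1, t2, lv2):
    """Asymmetric estimators of eq. (18): x quantized with b1 bits,
    y quantized with b2 bits. Returns (simple, normalized)."""
    qx = quantize(x, t1, lv1)
    qy = quantize(y, t2, lv2)
    simple = np.mean(qx * qy)
    normalized = (qx @ qy) / (np.linalg.norm(qx) * np.linalg.norm(qy))
    return simple, normalized

# Usage sketch: with (t1, lv1) = lloyd_max_gaussian(1) and
# (t2, lv2) = lloyd_max_gaussian(3), the simple estimator is biased
# toward zero roughly by the factor (1 - D_{b1})(1 - D_{b2}), which
# motivates the debiased surrogate discussed after Theorem 4.
```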
Nevertheless, we are able to quantify the expectation of $\hat\rho_{b_1,b_2}$ in Theorem 4.

Theorem 4. The following two bounds hold for $\rho \in [-1, 1]$:
$$\left| E(\hat\rho_{b_1,b_2}) - (1-D_{b_1})(1-D_{b_2})\,\rho \right| \le \Delta_1, \quad \text{and} \quad (21)$$
$$\Delta_2 - \Delta_1 \;\le\; \left| E(\hat\rho_{b_1,b_2}) - \rho \right| \;\le\; \Delta_1 + \Delta_2, \quad \text{where} \quad (22)$$
$$\Delta_1 = \sqrt{D_{b_1} D_{b_2}}\,\sqrt{1-D_{b_1}}\,\sqrt{1-D_{b_2}}\;|\rho|^3, \qquad \Delta_2 = (D_{b_1} + D_{b_2} - D_{b_1} D_{b_2})\,|\rho|.$$

Remark. When $b_2 \to \infty$ (i.e., Scenario 1), we have $D_{b_2} \to 0$ and the bound reduces to the equality $E(\hat\rho_{b_1,\infty}) = (1-D_{b_1})\rho$, which matches the result in Section 3. Eq. (22) provides upper and lower bounds on the absolute bias of $\hat\rho_{b_1,b_2}$. When $b_1 = b_2$ (i.e., the symmetric quantization case), Theorem 5 presents more refined bounds on the bias of $\hat\rho_{b_1,b_2}$.

Theorem 5. (Symmetric quantization) Suppose $b_1 = b_2 = b$. For $\rho \in [-1, 1]$, we have
$$(2D_b - D_b^2)\,|\rho| - D_b(1-D_b)\,|\rho|^3 \;\le\; \left| E(\hat\rho_{b,b}) - \rho \right| \;\le\; (2D_b - D_b^2)\,|\rho|. \quad (23)$$

Remark. Compared to [29], which derived $|E(\hat\rho_{b,b}) - \rho| \le 2D_b|\rho|$, our bounds are tighter.

What about the debiased version of $\hat\rho_{b_1,b_2}$? This is slightly tricky, because $E(\hat\rho_{b_1,b_2}) = \zeta_{1,1}$ cannot be explicitly expressed as $c\rho$ for some constant $c$ (otherwise the debiased estimator would simply be $\hat\rho_{b_1,b_2}/c$). In Theorem 4, Eq. (21) implies that the expectation of $\hat\rho_{b_1,b_2}$ is close to $(1-D_{b_1})(1-D_{b_2})\rho$. Thus, we recommend $\frac{\hat\rho_{b_1,b_2}}{(1-D_{b_1})(1-D_{b_2})}$ as a surrogate for the debiased estimator. Next, we provide the expectation and variance of the normalized estimator in Theorem 6.

Theorem 6. (Normalized estimator) As $k \to \infty$, we have
$$E(\hat\rho_{b_1,b_2,n}) = \frac{\zeta_{1,1}}{\sqrt{\xi_{2,0}\gamma_{2,0}}} + O\!\left(\tfrac{1}{k}\right), \qquad Var(\hat\rho_{b_1,b_2,n}) = \frac{V_{b_1,b_2,n}}{k} + O\!\left(\tfrac{1}{k^2}\right), \quad (24)$$
$$V_{b_1,b_2,n} = \frac{\zeta_{2,2} - \zeta_{1,1}^2}{\xi_{2,0}\gamma_{2,0}} - \frac{\zeta_{1,1}\zeta_{3,1} - \zeta_{1,1}^2\xi_{2,0}}{\xi_{2,0}^2\gamma_{2,0}} - \frac{\zeta_{1,1}\zeta_{1,3} - \zeta_{1,1}^2\gamma_{2,0}}{\xi_{2,0}\gamma_{2,0}^2} + \frac{\zeta_{1,1}^2\zeta_{2,2} - \zeta_{1,1}^2\xi_{2,0}\gamma_{2,0}}{2\xi_{2,0}^2\gamma_{2,0}^2} + \frac{\zeta_{1,1}^2\xi_{4,0} - \zeta_{1,1}^2\xi_{2,0}^2}{4\xi_{2,0}^3\gamma_{2,0}} + \frac{\zeta_{1,1}^2\gamma_{4,0} - \zeta_{1,1}^2\gamma_{2,0}^2}{4\xi_{2,0}\gamma_{2,0}^3}. \quad (25)$$

Remark. When $b_2 = \infty$, the expected value of $\hat\rho_{b_1,b_2,n}$ reduces to that of $\hat\rho_{b_1,f,n}$ in Scenario 1. Additionally, we then have $\zeta_{1,1} = \zeta_{2,0}\rho$, $\gamma_{2,0} = 1$, and $\gamma_{4,0} = 3$, and it is easy to check that the expression for the variance reduces to the corresponding formula in Theorem 2. Also, note that $\xi_{2,0} = 1 - D_{b_1}$, $\gamma_{2,0} = 1 - D_{b_2}$, and $\zeta_{1,1} \approx (1-D_{b_1})(1-D_{b_2})\rho$. This means that in practice we can use $\frac{\hat\rho_{b_1,b_2,n}}{\sqrt{(1-D_{b_1})(1-D_{b_2})}}$ as a surrogate for the debiased version of $\hat\rho_{b_1,b_2,n}$. We plot the related results in Figure 2, which supports the theory in Theorems 4, 5 and 6.

5 Monotonicity of Inner Product Estimates

In applications such as nearest neighbor retrieval, the order of distances tends to matter more than their exact values. Given an estimator $\hat\rho$, one would hope that $E(\hat\rho)$ is monotone in $\rho$. This is indeed the case in the full-precision situation: recall that, in Section 2, given i.i.d. observations $\{x_i, y_i\}$, $i = 1, 2, \dots, k$, the full-precision estimator $\hat\rho_f = \frac{1}{k}\sum_{i=1}^k x_i y_i$ is monotone in $\rho$ in expectation, because $E(\hat\rho_f) = \rho$. Naturally, one may ask whether the expectations of our quantized estimators, e.g., $\hat\rho_{b_1,b_2} = \frac{1}{k}\sum_{i=1}^k Q_{b_1}(x_i) Q_{b_2}(y_i)$, are also monotone in $\rho$. This turns out to be a non-trivial question. We solve this problem rigorously in several stages, and our analysis is not restricted to LM quantizers. To proceed, we need the following definition of an "increasing quantizer".

Definition 3. (Increasing quantizer) Let $Q$ be an $M$-level quantizer with borders $t_0 < \cdots < t_M$ and reconstruction levels $\mu_1, \dots, \mu_M$. We say that $Q$ is an increasing quantizer if $\mu_1 < \cdots < \mu_M$.

We first prove the following three lemmas for increasing quantizers.

Lemma 1. (1-bit vs. others) Suppose $Q_{b_1}, Q_{b_2}$ are increasing quantizers symmetric about 0, with $b_1 \ge 1$ and $b_2 = 1$. Then $E(Q_{b_1}(x)\, Q_{b_2}(y))$ is strictly increasing in $\rho$ on $[-1, 1]$.

Lemma 2. (2-bit vs. 2-bit) Suppose $Q_{b_1}, Q_{b_2}$ are any two increasing quantizers symmetric about 0, with $b_1 = b_2 = 2$. Then $E(Q_{b_1}(x)\, Q_{b_2}(y))$ is strictly increasing in $\rho$ on $[-1, 1]$.

Lemma 3. (Universal decomposition) For any increasing discrete quantizer $Q_b$, $b \ge 3$, which is symmetric about 0, there exist a 2-bit symmetric increasing quantizer $Q_2$ and a $(b{-}1)$-bit symmetric increasing quantizer $Q_{b-1}$ such that $Q_b = Q_{b-1} + Q_2$.

With the above lemmas, we are ready to prove the monotonicity of $E(Q_{b_1}(x)\, Q_{b_2}(y))$.

Theorem 7. (Monotonicity) For any increasing quantizers $Q_{b_1}$ and $Q_{b_2}$ symmetric about 0, with bits $b_1 \ge 1$ and $b_2 \ge 1$, the function $E(Q_{b_1}(x)\, Q_{b_2}(y))$ is increasing in $\rho$.

Proof. By Lemma 1, the statement is valid for $b_1 = 1$ and arbitrary $b_2$. Now consider the case $b_1 \ge 2$, $b_2 \ge 2$. By Lemma 3 (applied repeatedly), we can always write
$$Q_{b_1}(x) = \sum_{i=1}^{b_1-1} \tilde Q_2^{(i)}(x), \qquad Q_{b_2}(y) = \sum_{j=1}^{b_2-1} \hat Q_2^{(j)}(y),$$
where $\tilde Q_2^{(1)}, \dots, \tilde Q_2^{(b_1-1)}$ and $\hat Q_2^{(1)}, \dots, \hat Q_2^{(b_2-1)}$ are two sets of symmetric increasing 2-bit quantizers. Thus,
$$\frac{\partial E(Q_{b_1}(x) Q_{b_2}(y))}{\partial \rho} = \frac{\partial E\left( \sum_{i=1}^{b_1-1}\tilde Q_2^{(i)}(x) \sum_{j=1}^{b_2-1}\hat Q_2^{(j)}(y) \right)}{\partial \rho} = \sum_{i=1}^{b_1-1}\sum_{j=1}^{b_2-1} \frac{\partial E\left( \tilde Q_2^{(i)}(x)\, \hat Q_2^{(j)}(y) \right)}{\partial \rho} > 0,$$
where the last equality is due to the linearity of expectation and differentiation, and the inequality holds because of Lemma 2. Therefore, $E(Q_{b_1}(x)\, Q_{b_2}(y))$ is increasing in $\rho$ for any $b_1 \ge 1$ and $b_2 \ge 1$.

Recall that, in Section 3.2, we proved the result on the mis-ordering probability, Theorem 3, which assumes estimators whose expectations are monotone in $\rho$. Theorem 7 therefore provides the proof needed to support that assumption.
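Theorem 7 is easy to probe numerically. The sketch below is our own illustration (not code from the paper) and reuses the hypothetical `quantize` helper from the earlier sketch; it estimates $E(Q_{b_1}(x)\,Q_{b_2}(y))$ by Monte Carlo, and sweeping $\rho$ over $[-1, 1]$ should produce an increasing sequence of values.

```python
import numpy as np

def expected_product(rho, t1, lv1, t2, lv2, n=10**6, seed=0):
    """Monte Carlo estimate of E[Q_{b1}(x) Q_{b2}(y)] under the
    bivariate normal model (1) with correlation rho."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    z = rng.standard_normal(n)
    y = rho * x + np.sqrt(1.0 - rho**2) * z   # corr(x, y) = rho
    return np.mean(quantize(x, t1, lv1) * quantize(y, t2, lv2))

# Example sweep with b1 = 1 and b2 = 3; the list should be monotone:
# t1, lv1 = lloyd_max_gaussian(1); t3, lv3 = lloyd_max_gaussian(3)
# vals = [expected_product(r, t1, lv1, t3, lv3) for r in np.linspace(-1, 1, 9)]
```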
6 Empirical Study: Similarity Search

In this section, we test the proposed estimators on 3 datasets from the UCI repository (Table 1) [16]. The experiments clearly confirm that the normalization step uniformly improves the search accuracy. The results also, to an extent, illustrate the influence of the mis-ordering probability studied in Theorem 3. For each dataset, all examples are preprocessed to have unit norm. The evaluation metric we adopt is the 1-NN precision: the proportion of examples whose true nearest neighbor (NN) is recovered by the nearest neighbor estimated from quantized random projections, averaged over all examples.

We summarize the results in Figure 3. First of all, as the number of bits increases, the performance of the quantized estimators converges to that of the full-precision estimator, as expected. Importantly, the normalization step substantially improves the performance, as can be seen by comparing Column 2 with Column 1 (for Scenario 1) and Column 4 with Column 3 (for Scenario 2). In addition, we can to an extent validate the assertion of Theorem 3 that a smaller variance of the debiased estimators improves the NN recovery precision:

• In Figure 1 (left panel), we see that the variance of the debiased estimate $\hat\rho^{db}_{b,f}$ with $b = 1$ is much smaller than with $b \ge 2$ in the high-similarity region (e.g., $|\rho| > 0.8$), and roughly the same at $\rho = 0.6$. Since Arcene and COIL20 have high mean 1-NN $\rho$ (0.86 and 0.93, respectively), Theorem 3 suggests that the cosine estimate $\hat\rho^{db}_{1,f}$ should (in general) have a smaller mis-ordering probability than $b \ge 2$, implying higher 1-NN precision. On the other hand, the average 1-NN $\rho$ of BASEHOCK is 0.59, so $\hat\rho^{db}_{b,f}$ with all $b = 1, 2, \dots, \infty$ would likely give similar performance. These claims are consistent with Column 1 of Figure 3.
• The variance of the debiased normalized estimator $\hat\rho^{db}_{b,f,n}$ (Figure 1, middle panel) decreases as $b$ increases, uniformly over $\rho$. Hence, by Theorem 3, we expect the 1-NN precision to increase with larger $b$ on all 3 datasets, as confirmed by Column 2 of Figure 3.

7 Conclusion

In this paper, we conduct a comprehensive study of estimating inner product similarities from random projections followed by asymmetric quantization. This setting is theoretically interesting and has many practical applications. For example, in a retrieval system, the data vectors (after random projections) in the repository are quantized to reduce storage and communication; when a new query vector arrives, it does not have to be quantized. Another example of asymmetric quantization arises when data are collected from different sources, each with its own quantization strategy. In this study, we propose a series of estimators for asymmetric quantization, starting with the simple basic estimator, then the normalized estimator, and then the debiased estimators, and we provide a thorough analysis of the estimation errors. Furthermore, we analyze the "mis-ordering" probabilities and the monotonicity properties of the estimators. While our methods and analyses are largely based on the classical Lloyd-Max (LM) method, they can be extended to other, more general quantization schemes.
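For readers who want to reproduce the evaluation protocol of Section 6, here is a minimal sketch of the 1-NN precision metric as described there. This is our own illustration (function name and array layout are assumptions), not code from the paper.

```python
import numpy as np

def one_nn_precision(U, estimates):
    """1-NN precision: the fraction of points whose estimated nearest
    neighbor (by estimated similarity) matches the true nearest neighbor
    (by exact cosine similarity), averaged over all points.

    U: (n, d) array of unit-norm data vectors.
    estimates: (n, n) array of estimated pairwise similarities, e.g.,
    computed from quantized random projections with the estimators above.
    """
    true_sim = U @ U.T
    np.fill_diagonal(true_sim, -np.inf)   # exclude self-matches
    est = estimates.copy()
    np.fill_diagonal(est, -np.inf)
    return np.mean(true_sim.argmax(axis=1) == est.argmax(axis=1))
```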
1. What is the focus of the paper, and what are the reviewer's thoughts on its contributions? 2. What is the most interesting part of the paper according to the reviewer? 3. Does the reviewer have concerns or suggestions for improving the paper? 4. How does the reviewer assess the significance of the paper's findings, particularly regarding the ordering question? 5. Are there any limitations or potential biases in the paper that the reviewer wants to highlight?
Review
Review Overall an interesting paper, with useful results. I would consider the ordering question to be the most interesting contribution. However, the ordering matters when the nearest neighbors are of different classes (i.e., if the ordering of distances changes after quantization, it doesn't matter when both neighbors are in the same class). It is not clear how to properly model and analyze that, but it is worth some discussion in the paper. ==== I've seen and taken into account the authors' response, and it does not change my score.
NIPS
1. What is the focus of the paper, and how does it contribute to the field of neural networks? 2. What are the strengths of the paper, particularly in terms of its theoretical analysis and experimental studies? 3. Are there any weaknesses or areas for improvement in the paper, such as the lack of discussion of previous work or misaligned graphs? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any specific suggestions for improving the paper's organization and presentation?
Review
Review I think this paper is of good quality. The problem is not very complicated, but the authors studied it thoroughly, from theoretical analysis of means and variances in different scenarios, to refined proposed methods and experimental studies. I think this paper meets the standard of NeurIPS. Here are some minor suggestions. 1. I wish to see more discussion of previous work on quantized random projections, especially for cosine-similarity or inner-product estimation. The authors provide some references in the introduction, but I wish to see more detailed comparisons between the results of previous work and the current work. This will make the main contribution of this work clearer. 2. In Figure 1, please align the y-axes of the first two panels. The same applies to Figure 2. 3. In the supplemental materials, the section numbers are wrong. (It should be Section 3 in the title of Section A, etc.) ============================================================== I'm satisfied with the authors' feedback and I would like to increase my score from 6 to 7. I vote to accept this paper because of its high technical quality. I wish the authors could improve the paper's organization and presentation to match its technical quality.
NIPS
Title SGD Learns the Conjugate Kernel Class of the Network

Abstract We show that the standard stochastic gradient descent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely et al. [2016]. The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth more than two. As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results.

1 Introduction

While stochastic gradient descent (SGD) from a random initialization is probably the most popular supervised learning algorithm today, we have very few results that depict conditions guaranteeing its success. Indeed, to the best of our knowledge, Andoni et al. [2014] provides the only known result of this form, and it is valid in a rather restricted setting. Namely, for depth-2 networks, where the underlying distribution is Gaussian, the algorithm is full gradient descent (rather than SGD), and the task is regression in which the learnt function is a constant degree polynomial.

We build on the framework of Daniely et al. [2016] to establish guarantees on SGD in a rather general setting. Daniely et al. [2016] defined a framework that associates a reproducing kernel to a network architecture. They also connected the kernel to the network via the random initialization. Namely, they showed that right after the random initialization, any function in the kernel space can be approximated by changing the weights of the last layer. The quality of the approximation depends on the size of the network and the norm of the function in the kernel space. As optimizing the last layer is a convex procedure, the result of Daniely et al. [2016] intuitively shows that the optimization process starts from a favourable point for learning a function in the conjugate kernel space. In this paper we verify this intuition. Namely, for a fairly general family of architectures (containing fully connected networks and convolutional networks) and supervised learning tasks, we show that if the network is large enough, the learning rate is small enough, and the number of SGD steps is large enough as well, then SGD is guaranteed to learn any function in the corresponding kernel space. We emphasize that the number of steps and the size of the network are only required to be polynomial (which is best possible) in the relevant parameters: the norm of the function, the required accuracy parameter (ε), and the dimensions of the input and the output of the network. Likewise, the result holds for any input distribution.

To evaluate our result, one should understand which functions it guarantees that SGD will learn.
Namely, what functions reside in the conjugate kernel space, how rich it is, and how good those functions are as predictors. From an empirical perspective, it is shown in [Daniely et al., 2017] that for standard convolutional networks the conjugate class contains functions whose performance is close to the performance of the function that is actually learned by the network. This is based on experiments on the standard CIFAR-10 dataset. From a theoretical perspective, we list below a few implications that demonstrate the richness of the conjugate kernel space. These implications are valid for fully connected networks of any depth between 2 and log(n), where n is the input dimension. Likewise, they are also valid for convolutional networks of any depth between 2 and log(n) with constantly many convolutional layers.

• SGD is guaranteed to learn in polynomial time constant degree polynomials with polynomially bounded coefficients. As a corollary, SGD is guaranteed to learn in polynomial time conjunctions, DNF and CNF formulas with constantly many terms, and DNF and CNF formulas with constantly many literals in each term. These function classes comprise a considerable fraction of the function classes that are known to be poly-time (PAC) learnable by any method. Exceptions include constant degree polynomial thresholds with no restriction on the coefficients, decision lists, and parities.

• SGD is guaranteed to learn, though not necessarily in polynomial time, any continuous function. This complements classical universal approximation results showing that neural networks can (approximately) express any continuous function (see Scarselli and Tsoi [1998] for a survey). Our results strengthen those results: networks are not only able to express those functions, but are actually guaranteed to learn them.

1.1 Related work

Guarantees on SGD. As noted above, there are very few results that provide polynomial-time guarantees for SGD on neural networks. One notable exception is the work of Andoni et al. [2014], which proves a result similar to ours, but in a substantially more restricted setting. Concretely, their result holds for depth-2 fully connected networks, as opposed to a rather general family of architectures of constant or logarithmic depth in our case. Likewise, the marginal distribution on the instance space is assumed to be Gaussian or uniform, as opposed to arbitrary in our case. In addition, the algorithm they consider is full gradient descent, which corresponds to SGD with an infinitely large mini-batch, as opposed to SGD with arbitrary mini-batch size in our case. Finally, the underlying task is regression in which the target function is a constant degree polynomial, whereas we consider a rather general supervised learning setting.

Other polynomial-time guarantees on learning deep architectures. Various recent papers show that poly-time learning is possible in the case that the learnt function can be realized by a neural network with certain (usually fairly strong) restrictions on the weights [Livni et al., 2014, Zhang et al., 2016a, 2015, 2016b], or under the assumption that the data is generated by a generative model derived from the network architecture [Arora et al., 2014, 2016]. We emphasize that the main difference between those results and ours (and those of Andoni et al. [2014]) is that they do not provide guarantees on the standard SGD learning algorithm.
Rather, they show that under the aforementioned conditions there are some algorithms, usually very different from SGD on the network, that are able to learn in polynomial time.

Connection to kernels. As mentioned earlier, our paper builds on Daniely et al. [2016], who developed the association of kernels to neural networks that we rely on. Several previous papers [Mairal et al., 2014, Cho and Saul, 2009, Rahimi and Recht, 2009, 2007, Neal, 2012, Williams, 1997, Kar and Karnick, 2012, Pennington et al., 2015, Bach, 2015, 2014, Hazan and Jaakkola, 2015, Anselmi et al., 2015] investigated such associations, but in more restricted settings (i.e., for fewer architectures). Some of those papers [Rahimi and Recht, 2009, 2007, Daniely et al., 2016, Kar and Karnick, 2012, Bach, 2015, 2014] also provide concentration-of-measure results, showing that w.h.p. the random initialization of the network's weights is rich enough to approximate the functions in the corresponding kernel space. As a result, these papers provide polynomial-time guarantees on the variant of SGD where only the last layer is trained. We remark that, with the exception of Daniely et al. [2016], those results apply only to depth-2 networks.

1.2 Discussion and future directions

We next want to place this work in the appropriate learning-theoretic context, and to elaborate further on this paper's approach to investigating neural networks. For the sake of concreteness, let us restrict the discussion to binary classification over the Boolean cube. Namely, given examples from a distribution $D$ on $\{\pm 1\}^n \times \{0,1\}$, the goal is to learn a function $h : \{\pm 1\}^n \to \{0,1\}$ whose 0-1 error, $L_D^{0\text{-}1}(h) = \Pr_{(x,y)\sim D}\left(h(x) \ne y\right)$, is as small as possible.

We will use a bit of terminology. A model is a distribution $D$ on $\{\pm 1\}^n \times \{0,1\}$, and a model class is a collection $\mathcal{M}$ of models. We note that any function class $\mathcal{H} \subset \{0,1\}^{\{\pm 1\}^n}$ defines a model class, $\mathcal{M}(\mathcal{H})$, consisting of all models $D$ such that $L_D^{0\text{-}1}(h) = 0$ for some $h \in \mathcal{H}$. We define the capacity of a model class as the minimal number $m$ for which there is an algorithm such that for every $D \in \mathcal{M}$ the following holds: given $m$ samples from $D$, the algorithm is guaranteed to return, w.p. $\ge \frac{9}{10}$ over the samples and its internal randomness, a function $h : \{\pm 1\}^n \to \{0,1\}$ with 0-1 error $\le \frac{1}{10}$. We note that for function classes the capacity is the VC dimension, up to a constant factor.

Learning theory analyzes learning algorithms via model classes. Concretely, one fixes some model class $\mathcal{M}$ and shows that the algorithm is guaranteed to succeed whenever the underlying model is from $\mathcal{M}$. Often, the connection between the algorithm and the class at hand is very clear. For example, in the case that the model is derived from a function class $\mathcal{H}$, the algorithm might simply be one that finds a function in $\mathcal{H}$ that makes no mistake on the given sample. The natural choice of a model class for analyzing SGD on neural networks would be the class of all functions that can be realized by the network, possibly with some reasonable restrictions on the weights. Unfortunately, this approach is probably doomed to fail, as implied by various computational hardness results [Blum and Rivest, 1989, Kearns and Valiant, 1994, Blum et al., 1994, Kharitonov, 1993, Klivans and Sherstov, 2006, 2007, Daniely et al., 2014, Daniely and Shalev-Shwartz, 2016]. So, what model classes should we consider?
[1998]) all known efficiently learnable model classes are either a linear model class, or contained in an efficiently learnable linear model class. Namely, function classes composed of compositions of some predefined embedding with linear threshold functions, or linear functions over some finite field. Coming up with new tractable models would be fascinating progress. Still, as linear function classes are the main tool that learning theory currently has for providing guarantees on learning, it seems natural to try to analyze SGD via linear model classes. Our work follows this line of thought, and we believe that there is much more to achieve via this approach. Concretely, while our bounds are polynomial, the degree of the polynomials is rather large, and possibly much better quantitative bounds can be achieved. To be more concrete, suppose that we consider a simple fully connected architecture, with 2 layers, ReLU activation, and $n$ hidden neurons. In this case, the capacity of the model class that our results guarantee that SGD will learn is $\Theta(n^{1/3})$. For comparison, the capacity of the class of all functions that are realized by this network is $\Theta(n^2)$. As a challenge, we encourage the reader to prove that with this architecture (possibly with an activation that is different from the ReLU), SGD is guaranteed to learn some model class of capacity that is super-linear in $n$.

2 Preliminaries

Notation. We denote vectors by bold-face letters (e.g. $\mathbf{x}$), matrices by upper case letters (e.g. $W$), and collections of matrices by bold-face upper case letters (e.g. $\mathbf{W}$). The $p$-norm of $\mathbf{x} \in \mathbb{R}^d$ is denoted by $\|\mathbf{x}\|_p = \left(\sum_{i=1}^d |x_i|^p\right)^{1/p}$. We will also use the convention that $\|\mathbf{x}\| = \|\mathbf{x}\|_2$. For functions $\sigma : \mathbb{R} \to \mathbb{R}$ we let
$$\|\sigma\| := \sqrt{\mathbb{E}_{X \sim \mathcal{N}(0,1)}\, \sigma^2(X)} = \sqrt{\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \sigma^2(x)\, e^{-\frac{x^2}{2}}\, dx}\,.$$
Let $G = (V, E)$ be a directed acyclic graph. The set of neighbors incoming to a vertex $v$ is denoted $\mathrm{in}(v) := \{u \in V \mid uv \in E\}$. We also denote $\deg(v) = |\mathrm{in}(v)|$. Given a weight function $\delta : V \to [0,\infty)$ and $U \subseteq V$ we let $\delta(U) = \sum_{u \in U} \delta(u)$. The $(d-1)$-dimensional sphere is denoted $S^{d-1} = \{\mathbf{x} \in \mathbb{R}^d \mid \|\mathbf{x}\| = 1\}$. We use $[x]_+$ to denote $\max(x, 0)$.

Input space. Throughout the paper we assume that each example is a sequence of $n$ elements, each of which is represented as a unit vector. Namely, we fix $n$ and take the input space to be $\mathcal{X} = \mathcal{X}_{n,d} = \left(S^{d-1}\right)^n$. Each input example is denoted
$$\mathbf{x} = (\mathbf{x}_1, \ldots, \mathbf{x}_n), \quad \text{where } \mathbf{x}_i \in S^{d-1}. \quad (1)$$
While this notation is slightly non-standard, it unifies input types seen in various domains (see Daniely et al. [2016]).

Supervised learning. The goal in supervised learning is to devise a mapping from the input space $\mathcal{X}$ to an output space $\mathcal{Y}$ based on a sample $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$, where $(\mathbf{x}_i, y_i) \in \mathcal{X} \times \mathcal{Y}$ are drawn i.i.d. from a distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$. A supervised learning problem is further specified by an output length $k$ and a loss function $\ell : \mathbb{R}^k \times \mathcal{Y} \to [0,\infty)$, and the goal is to find a predictor $h : \mathcal{X} \to \mathbb{R}^k$ whose loss, $L_{\mathcal{D}}(h) := \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}\, \ell(h(\mathbf{x}), y)$, is small. The empirical loss $L_S(h) := \frac{1}{m}\sum_{i=1}^m \ell(h(\mathbf{x}_i), y_i)$ is commonly used as a proxy for the loss $L_{\mathcal{D}}$. When $h$ is defined by a vector $\mathbf{w}$ of parameters, we will use the notations $L_{\mathcal{D}}(\mathbf{w}) = L_{\mathcal{D}}(h)$, $L_S(\mathbf{w}) = L_S(h)$ and $\ell_{(\mathbf{x},y)}(\mathbf{w}) = \ell(h(\mathbf{x}), y)$. Regression problems correspond to $k = 1$, $\mathcal{Y} = \mathbb{R}$ and, for instance, the squared loss $\ell_{\mathrm{square}}(\hat{y}, y) = (\hat{y} - y)^2$. Binary classification is captured by $k = 1$, $\mathcal{Y} = \{\pm 1\}$ and, say, the zero-one loss $\ell_{0-1}(\hat{y}, y) = \mathbb{1}[\hat{y}y \le 0]$ or the hinge loss $\ell_{\mathrm{hinge}}(\hat{y}, y) = [1 - \hat{y}y]_+$. Multiclass classification is captured by $k$ being the number of classes, $\mathcal{Y} = [k]$, and, say, the zero-one loss $\ell_{0-1}(\hat{\mathbf{y}}, y) = \mathbb{1}[\hat{y}_y \le \max_{y' \ne y} \hat{y}_{y'}]$ or the logistic loss $\ell_{\log}(\hat{\mathbf{y}}, y) = -\log(p_y(\hat{\mathbf{y}}))$, where $p : \mathbb{R}^k \to \Delta^{k-1}$ is given by $p_i(\hat{\mathbf{y}}) = \frac{e^{\hat{y}_i}}{\sum_{j=1}^k e^{\hat{y}_j}}$. A loss $\ell$ is $L$-Lipschitz if for all $y \in \mathcal{Y}$, the function $\ell_y(\hat{\mathbf{y}}) := \ell(\hat{\mathbf{y}}, y)$ is $L$-Lipschitz. Likewise, it is convex if $\ell_y$ is convex for every $y \in \mathcal{Y}$.
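The normalization convention $\|\sigma\| = 1$ above is easy to check numerically. Below is a minimal Monte Carlo sketch (ours, not from the paper); it uses the fact that the scaled ReLU $\sigma(x) = \sqrt{2}\,[x]_+$ is normalized, since $\mathbb{E}_{X \sim \mathcal{N}(0,1)}[X]_+^2 = \frac{1}{2}$ by symmetry.

```python
import numpy as np

def gaussian_norm(sigma, num_samples=1_000_000, seed=0):
    """Monte Carlo estimate of ||sigma|| = sqrt(E_{X ~ N(0,1)} sigma(X)^2)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(num_samples)
    return np.sqrt(np.mean(sigma(x) ** 2))

# The scaled ReLU sqrt(2) * [x]_+ is normalized: E[max(X, 0)^2] = 1/2.
normalized_relu = lambda x: np.sqrt(2.0) * np.maximum(x, 0.0)
print(gaussian_norm(normalized_relu))  # ~1.0, up to Monte Carlo error
```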
Neural network learning. We define a neural network $\mathcal{N}$ to be a vertex-weighted directed acyclic graph (DAG) whose nodes are denoted $V(\mathcal{N})$ and edges $E(\mathcal{N})$. The weight function will be denoted by $\delta : V(\mathcal{N}) \to [0,\infty)$, and its sole role is to dictate the distribution of the initial weights. We will refer to $\mathcal{N}$'s nodes as neurons. Each non-input neuron, i.e. each neuron with incoming edges, is associated with an activation function $\sigma_v : \mathbb{R} \to \mathbb{R}$. In this paper, an activation can be any function $\sigma : \mathbb{R} \to \mathbb{R}$ that is right and left differentiable, square integrable with respect to the Gaussian measure on $\mathbb{R}$, and is normalized in the sense that $\|\sigma\| = 1$. The set of neurons having only incoming edges is called the output neurons. To match the setup of supervised learning defined above, a network $\mathcal{N}$ has $nd$ input neurons and $k$ output neurons, denoted $o_1, \ldots, o_k$. A network $\mathcal{N}$ together with a weight vector $\mathbf{w} = \{w_{uv} \mid uv \in E\} \cup \{b_v \mid v \in V \text{ is an internal neuron}\}$ defines a predictor $h_{\mathcal{N},\mathbf{w}} : \mathcal{X} \to \mathbb{R}^k$ whose prediction is given by "propagating" $\mathbf{x}$ forward through the network. Concretely, we define $h_{v,\mathbf{w}}(\cdot)$ to be the output of the subgraph of the neuron $v$ as follows: for an input neuron $v$, $h_{v,\mathbf{w}}$ outputs the corresponding coordinate in $\mathbf{x}$, and for internal neurons we define $h_{v,\mathbf{w}}$ recursively as
$$h_{v,\mathbf{w}}(\mathbf{x}) = \sigma_v\Big(\sum_{u \in \mathrm{in}(v)} w_{uv}\, h_{u,\mathbf{w}}(\mathbf{x}) + b_v\Big)\,.$$
For output neurons, we define $h_{v,\mathbf{w}}$ as
$$h_{v,\mathbf{w}}(\mathbf{x}) = \sum_{u \in \mathrm{in}(v)} w_{uv}\, h_{u,\mathbf{w}}(\mathbf{x})\,.$$
Finally, we let $h_{\mathcal{N},\mathbf{w}}(\mathbf{x}) = (h_{o_1,\mathbf{w}}(\mathbf{x}), \ldots, h_{o_k,\mathbf{w}}(\mathbf{x}))$.

We next describe the learning algorithm that we analyze in this paper. While there is no standard training algorithm for neural networks, the algorithms used in practice are usually quite similar to the one we describe, both in the way the weights are initialized and the way they are updated. We will use the popular Xavier initialization [Glorot and Bengio, 2010] for the network weights. Fix $0 \le \beta \le 1$. We say that $\mathbf{w}_0 = \{w^0_{uv}\}_{uv \in E} \cup \{b_v \mid v \in V \text{ is an internal neuron}\}$ are $\beta$-biased random weights (or, a $\beta$-biased random initialization) if each weight $w_{uv}$ is sampled independently from a normal distribution with mean 0 and variance $(1-\beta)\, d\, \delta(u)/\delta(\mathrm{in}(v))$ if $u$ is an input neuron, and $(1-\beta)\, \delta(u)/\delta(\mathrm{in}(v))$ otherwise. Finally, each bias term $b_v$ is sampled independently from a normal distribution with mean 0 and variance $\beta$. We note that the rationale behind this initialization scheme is that for every example $\mathbf{x}$ and every neuron $v$ we have $\mathbb{E}_{\mathbf{w}_0} \left(h_{v,\mathbf{w}_0}(\mathbf{x})\right)^2 = 1$ (see Glorot and Bengio [2010]).

Kernel classes. A function $\kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a reproducing kernel, or simply a kernel, if for every $\mathbf{x}_1, \ldots, \mathbf{x}_r \in \mathcal{X}$, the $r \times r$ matrix $\Gamma_{i,j} = \kappa(\mathbf{x}_i, \mathbf{x}_j)$ is positive semi-definite. Each kernel induces a Hilbert space $\mathcal{H}_\kappa$ of functions from $\mathcal{X}$ to $\mathbb{R}$ with a corresponding norm $\|\cdot\|_\kappa$. For $h \in \mathcal{H}^k_\kappa$ we denote $\|h\|_\kappa = \sqrt{\sum_{i=1}^k \|h_i\|^2_\kappa}$. A kernel and its corresponding space are normalized if $\kappa(\mathbf{x},\mathbf{x}) = 1$ for all $\mathbf{x} \in \mathcal{X}$.

Algorithm 1 Generic Neural Network Training
Input: Network $\mathcal{N}$, learning rate $\eta > 0$, batch size $m$, number of steps $T > 0$, bias parameter $0 \le \beta \le 1$, flag zero prediction layer $\in \{\text{True}, \text{False}\}$.
Let $\mathbf{w}_0$ be $\beta$-biased random weights
if zero prediction layer then
  Set $w^0_{uv} = 0$ whenever $v$ is an output neuron
end if
for $t = 1, \ldots, T$ do
  Obtain a mini-batch $S_t = \{(\mathbf{x}^t_i, y^t_i)\}_{i=1}^m \sim \mathcal{D}^m$
  Using back-propagation, calculate a stochastic gradient $\mathbf{v}_t = \nabla L_{S_t}(\mathbf{w}_t)$
  Update $\mathbf{w}_{t+1} = \mathbf{w}_t - \eta \mathbf{v}_t$
end for
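As an illustration of Algorithm 1, here is a minimal NumPy sketch (ours, not the paper's code), specialized to a depth-2 fully connected realization with $k = 1$, the normalized ReLU activation, the hinge loss, and $d = 1$ input coordinates (e.g. $\mathcal{X} = \{\pm 1\}^n$); the variances follow the $\beta$-biased scheme above specialized to this case, and the names train and sample_batch are our own.

```python
import numpy as np

def train(sample_batch, input_dim, r, eta, T, m, beta=0.1,
          zero_prediction_layer=True, seed=0):
    """Algorithm 1 for a depth-2 fully connected net, k = 1, hinge loss.

    sample_batch(m) must return a fresh mini-batch (X, y), with X of
    shape (m, input_dim) (unit-norm coordinates) and labels y in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    sqrt2 = np.sqrt(2.0)

    # beta-biased random initialization, specialized to d = 1 inputs:
    # hidden weights ~ N(0, (1 - beta)/input_dim), biases ~ N(0, beta),
    # prediction layer ~ N(0, (1 - beta)/r) or zero.
    W1 = rng.normal(0.0, np.sqrt((1 - beta) / input_dim), size=(input_dim, r))
    b1 = rng.normal(0.0, np.sqrt(beta), size=r)
    if zero_prediction_layer:
        w2 = np.zeros(r)
    else:
        w2 = rng.normal(0.0, np.sqrt((1 - beta) / r), size=r)

    for _ in range(T):
        X, y = sample_batch(m)                      # fresh mini-batch
        z = X @ W1 + b1                             # pre-activations
        h = sqrt2 * np.maximum(z, 0.0)              # normalized ReLU
        yhat = h @ w2
        # hinge loss [1 - y*yhat]_+ ; subgradient wrt yhat is -y on violations
        g = -(y * (y * yhat < 1.0)).astype(float)
        gw2 = h.T @ g / m
        gz = np.outer(g, w2) * sqrt2 * (z > 0.0)    # back through the ReLU
        gW1 = X.T @ gz / m
        gb1 = gz.mean(axis=0)
        W1 -= eta * gW1; b1 -= eta * gb1; w2 -= eta * gw2
    return W1, b1, w2
```

For instance, sample_batch(m) could draw $m$ sign vectors uniformly and label them by a fixed target function; the theorems below then assert that, for suitable $\eta$, $T$, and $r$, some iterate is competitive with the best bounded-norm predictor in the conjugate kernel space.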
Kernels give rise to popular benchmarks for learning algorithms. Fix a normalized kernel $\kappa$ and $M > 0$. It is well known that for an $L$-Lipschitz loss $\ell$, the SGD algorithm is guaranteed to return a function $h$ such that $\mathbb{E}\, L_{\mathcal{D}}(h) \le \min_{h' \in \mathcal{H}^k_\kappa,\, \|h'\|_\kappa \le M} L_{\mathcal{D}}(h') + \epsilon$ using $\left(\frac{LM}{\epsilon}\right)^2$ examples. In the context of multiclass classification, for $\gamma > 0$ we define $\ell_\gamma : \mathbb{R}^k \times [k] \to \mathbb{R}$ by $\ell_\gamma(\hat{\mathbf{y}}, y) = \mathbb{1}[\hat{y}_y \le \gamma + \max_{y' \ne y} \hat{y}_{y'}]$. We say that a distribution $\mathcal{D}$ on $\mathcal{X} \times [k]$ is $M$-separable w.r.t. $\kappa$ if there is $h^* \in \mathcal{H}^k_\kappa$ such that $\|h^*\|_\kappa \le M$ and $L^{\ell_1}_{\mathcal{D}}(h^*) = 0$. In this case, the perceptron algorithm is guaranteed to return a function $h$ such that $\mathbb{E}\, L^{0-1}_{\mathcal{D}}(h) \le \epsilon$ using $\frac{2M^2}{\epsilon}$ examples. We note that both for the perceptron and for SGD, the above mentioned results are best possible, in the sense that any algorithm with the same guarantees will have to use at least the same number of examples, up to a constant factor.

Computation skeletons [Daniely et al., 2016]. In this section we define a simple structure which we term a computation skeleton. The purpose of a computation skeleton is to compactly describe a feed-forward computation from an input to an output. A single skeleton encompasses a family of neural networks that share the same skeletal structure. Likewise, it defines a corresponding normalized kernel.

Definition 1. A computation skeleton $S$ is a DAG with $n$ inputs, whose non-input nodes are labeled by activations, and which has a single output node $\mathrm{out}(S)$.

Figure 1 shows four example skeletons, omitting the designation of the activation functions. We denote by $|S|$ the number of non-input nodes of $S$. The following definition shows how a skeleton, accompanied by a replication parameter $r \ge 1$ and a number of output nodes $k$, induces a neural network architecture.

Definition 2 (Realization of a skeleton). Let $S$ be a computation skeleton and consider input coordinates in $S^{d-1}$ as in (1). For $r, k \ge 1$ we define the following neural network $\mathcal{N} = \mathcal{N}(S, r, k)$. For each input node in $S$, $\mathcal{N}$ has $d$ corresponding input neurons with weight $1/d$. For each internal node $v \in S$ labelled by an activation $\sigma$, $\mathcal{N}$ has $r$ neurons $v^1, \ldots, v^r$, each with activation $\sigma$ and weight $1/r$. In addition, $\mathcal{N}$ has $k$ output neurons $o_1, \ldots, o_k$ with the identity activation $\sigma(x) = x$ and weight 1. There is an edge $u^i v^j \in E(\mathcal{N})$ whenever $uv \in E(S)$. For every output node $v$ in $S$, each neuron $v^j$ is connected to all output neurons $o_1, \ldots, o_k$. We term $\mathcal{N}$ the $(r, k)$-fold realization of $S$.

Note that the replication parameter $r$ corresponds, in the terminology of convolutional networks, to the number of channels taken in a convolutional layer, and to the number of hidden neurons taken in a fully-connected layer.

In addition to networks' architectures, a computation skeleton $S$ also defines a normalized kernel $\kappa_S : \mathcal{X} \times \mathcal{X} \to [-1, 1]$. To define the kernel, we use the notion of a conjugate activation. For $\rho \in [-1, 1]$, we denote by $\mathcal{N}_\rho$ the multivariate Gaussian distribution on $\mathbb{R}^2$ with mean 0 and covariance matrix $\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$.

Definition 3 (Conjugate activation). The conjugate activation of an activation $\sigma$ is the function $\hat{\sigma} : [-1, 1] \to \mathbb{R}$ defined as $\hat{\sigma}(\rho) = \mathbb{E}_{(X,Y) \sim \mathcal{N}_\rho}\, \sigma(X)\sigma(Y)$.
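Definition 3 is straightforward to approximate numerically. The sketch below (ours, for illustration) estimates $\hat{\sigma}(\rho)$ by sampling $(X, Y) \sim \mathcal{N}_\rho$ via $Y = \rho X + \sqrt{1 - \rho^2}\, Z$ and, for the normalized ReLU, compares against the known closed form $\hat{\sigma}(\rho) = \frac{\sqrt{1-\rho^2} + (\pi - \arccos\rho)\,\rho}{\pi}$ from Daniely et al. [2016].

```python
import numpy as np

def conjugate_activation(sigma, rho, num_samples=1_000_000, seed=0):
    """Monte Carlo estimate of sigma_hat(rho) = E_{(X,Y)~N_rho} sigma(X) sigma(Y)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(num_samples)
    z = rng.standard_normal(num_samples)
    y = rho * x + np.sqrt(1.0 - rho ** 2) * z   # (x, y) ~ N_rho
    return np.mean(sigma(x) * sigma(y))

normalized_relu = lambda t: np.sqrt(2.0) * np.maximum(t, 0.0)

def relu_hat(rho):
    # Closed-form conjugate of the normalized ReLU (Daniely et al. [2016]).
    return (np.sqrt(1.0 - rho ** 2) + (np.pi - np.arccos(rho)) * rho) / np.pi

for rho in (-0.5, 0.0, 0.7):
    print(rho, conjugate_activation(normalized_relu, rho), relu_hat(rho))
```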
The following definition gives the kernel corresponding to a skeleton.

Definition 4 (Compositional kernels). Let $S$ be a computation skeleton and let $0 \le \beta \le 1$. For every node $v$, inductively define a kernel $\kappa^\beta_v : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ as follows. For an input node $v$ corresponding to the $i$th coordinate, define $\kappa^\beta_v(\mathbf{x}, \mathbf{y}) = \langle \mathbf{x}_i, \mathbf{y}_i \rangle$. For a non-input node $v$, define
$$\kappa^\beta_v(\mathbf{x}, \mathbf{y}) = \hat{\sigma}_v\Big( (1-\beta)\, \frac{\sum_{u \in \mathrm{in}(v)} \kappa^\beta_u(\mathbf{x}, \mathbf{y})}{|\mathrm{in}(v)|} + \beta \Big)\,.$$
The final kernel $\kappa^\beta_S$ is $\kappa^\beta_{\mathrm{out}(S)}$. The resulting Hilbert space and norm are denoted $\mathcal{H}_{S,\beta}$ and $\|\cdot\|_{S,\beta}$ respectively.

3 Main results

An activation $\sigma : \mathbb{R} \to \mathbb{R}$ is called $C$-bounded if $\|\sigma\|_\infty, \|\sigma'\|_\infty, \|\sigma''\|_\infty \le C$. Fix a skeleton $S$ and a 1-Lipschitz¹ convex loss $\ell$. Define $\mathrm{comp}(S) = \prod_{i=1}^{\mathrm{depth}(S)} \max_{v \in S,\, \mathrm{depth}(v) = i} (\deg(v) + 1)$ and $C(S) = (8C)^{\mathrm{depth}(S)} \sqrt{\mathrm{comp}(S)}$, where $C$ is the minimal number for which all the activations in $S$ are $C$-bounded, and $\mathrm{depth}(v)$ is the maximal length of a path from an input node to $v$. We also define $C'(S) = (4C)^{\mathrm{depth}(S)} \sqrt{\mathrm{comp}(S)}$, where $C$ is the minimal number for which all the activations in $S$ are $C$-Lipschitz and satisfy $|\sigma(0)| \le C$. Throughout this and the remaining sections we use $\lesssim$ and $\gtrsim$ to hide universal constants. Likewise, we fix the bias parameter $\beta$ and therefore omit it from the relevant notation.

¹If $\ell$ is $L$-Lipschitz, we can replace $\ell$ by $\frac{1}{L}\ell$ and the learning rate $\eta$ by $L\eta$. The operation of Algorithm 1 will be identical to its operation before the modification. Given this observation, it is very easy to derive results for general $L$ from our results. Hence, to save one parameter, we will assume that $L = 1$.

We note that for constant depth skeletons with maximal degree that is polynomial in $n$, $C(S)$ and $C'(S)$ are polynomial in $n$. These quantities are polynomial in $n$ also for various log-depth skeletons. For example, this is true for fully connected skeletons, or more generally, layered skeletons with constantly many layers that are not fully connected.

Theorem 1. Suppose that all activations are $C$-bounded. Let $M, \epsilon > 0$. Suppose that we run Algorithm 1 on the network $\mathcal{N}(S, r, k)$ with the following parameters:
• $\eta = \frac{\eta'}{r}$ for $\eta' \lesssim \frac{\epsilon}{(C'(S))^2}$
• $T \gtrsim \frac{M^2}{\eta' \epsilon}$
• $r \gtrsim \frac{C^4 (T\eta')^2 M^2 (C'(S))^4 \log\left(\frac{C|S|}{\epsilon\delta}\right)}{\epsilon^2} + d$
• Zero initialized prediction layer
• Arbitrary $m$
Then, w.p. $\ge 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $\mathbb{E}\, L_{\mathcal{D}}(\mathbf{w}_t) \le \min_{h \in \mathcal{H}^k_S,\, \|h\|_S \le M} L_{\mathcal{D}}(h) + \epsilon$. Here, the expectation is over the training examples.

We next consider ReLU activations. Here, $C'(S) = (\sqrt{32})^{\mathrm{depth}(S)} \sqrt{\mathrm{comp}(S)}$.

Theorem 2. Suppose that all activations are the ReLU. Let $M, \epsilon > 0$. Suppose that we run Algorithm 1 on the network $\mathcal{N}(S, r, k)$ with the following parameters:
• $\eta = \frac{\eta'}{r}$ for $\eta' \lesssim \frac{\epsilon}{(C'(S))^2}$
• $T \gtrsim \frac{M^2}{\eta' \epsilon}$
• $r \gtrsim \frac{(T\eta')^2 M^2 (C'(S))^4 \log\left(\frac{|S|}{\epsilon\delta}\right)}{\epsilon^2} + d$
• Zero initialized prediction layer
• Arbitrary $m$
Then, w.p. $\ge 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $\mathbb{E}\, L_{\mathcal{D}}(\mathbf{w}_t) \le \min_{h \in \mathcal{H}^k_S,\, \|h\|_S \le M} L_{\mathcal{D}}(h) + \epsilon$. Here, the expectation is over the training examples.

Finally, we consider the case in which the last layer is also initialized randomly. Here, we provide guarantees in a more restricted setting of supervised learning. Concretely, we consider multiclass classification, when $\mathcal{D}$ is separable with margin, and $\ell$ is the logistic loss.

Theorem 3. Suppose that all activations are $C$-bounded, that $\mathcal{D}$ is $M$-separable w.r.t. $\kappa_S$, and let $\epsilon > 0$. Suppose we run Algorithm 1 on $\mathcal{N}(S, r, k)$ with the following parameters:
• $\eta = \frac{\eta'}{r}$ for $\eta' \lesssim \frac{\epsilon^2}{M^2 (C(S))^4}$
• $T \gtrsim \frac{\log(k)\, M^2}{\eta' \epsilon^2}$
• $r \gtrsim C^4 (C(S))^4 M^2 (T\eta')^2 \log\left(\frac{C|S|}{\epsilon}\right) + k + d$
• Randomly initialized prediction layer
• Arbitrary $m$
Then, w.p. $\ge \frac{1}{4}$ over the choice of the initial weights and the training examples, there is $t \in [T]$ such that $L^{0-1}_{\mathcal{D}}(\mathbf{w}_t) \le \epsilon$.
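Theorems 1–3 benchmark SGD against the best bounded-norm predictor in $\mathcal{H}^k_S$. For intuition about that benchmark, the following sketch (ours, under the stated assumptions) computes $\kappa^\beta_S$ from Definition 4 for a layered fully connected skeleton with normalized-ReLU activations; since every node of a fully connected layer has the entire previous layer as in(v), by symmetry the recursion collapses to a single scalar per layer. relu_hat is the closed-form conjugate from the previous snippet.

```python
import numpy as np

def relu_hat(rho):
    # Conjugate of the normalized ReLU (Daniely et al. [2016]).
    rho = np.clip(rho, -1.0, 1.0)  # guard arccos against round-off
    return (np.sqrt(1.0 - rho ** 2) + (np.pi - np.arccos(rho)) * rho) / np.pi

def compositional_kernel(x, y, depth, beta=0.1):
    """kappa_S^beta(x, y) for a fully connected skeleton of the given depth.

    x, y: arrays of shape (n, d), each row a unit vector in S^{d-1}.
    Input nodes contribute <x_i, y_i>; each of the `depth` non-input
    layers applies sigma_hat to a (1 - beta)-shrunk average, which by
    symmetry is the same scalar for every node in the layer.
    """
    kappa = np.mean(np.sum(x * y, axis=1))      # average of input kernels
    for _ in range(depth):
        kappa = relu_hat((1.0 - beta) * kappa + beta)
    return kappa
```

As a sanity check, $\kappa^\beta_S(\mathbf{x}, \mathbf{x}) = 1$ for unit inputs: the recursion starts at 1, $(1-\beta)\cdot 1 + \beta = 1$, and $\hat{\sigma}(1) = \|\sigma\|^2 = 1$.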
3.1 Implications

To demonstrate our results, let us elaborate on a few implications for specific network architectures. To this end, let us fix the instance space $\mathcal{X}$ to be either $\{\pm 1\}^n$ or $S^{n-1}$. Also fix a bias parameter $1 \ge \beta > 0$, a batch size $m$, and a skeleton $S$ that is a skeleton of a fully connected network of depth between 2 and $\log(n)$. Finally, we also fix the activation function to be either the ReLU or a $C$-bounded activation, assume that the prediction layer is initialized to 0, and fix the loss function to be some convex and Lipschitz loss function. Very similar results are valid for convolutional networks with constantly many convolutional layers. We however omit the details for brevity.

Our first implication shows that SGD is guaranteed to efficiently learn constant degree polynomials with polynomially bounded weights. To this end, let us denote by $\mathcal{P}_t$ the collection of degree $t$ polynomials. Furthermore, for any polynomial $p$ we denote by $\|p\|$ the $\ell_2$ norm of its coefficients.

Corollary 4. Fix any positive integers $t_0, t_1$. Suppose that we run Algorithm 1 on the network $\mathcal{N}(S, r, 1)$ with the following parameters:
• $\eta \lesssim \mathrm{poly}\left(\frac{\epsilon}{n}\right)$
• $T, r \lesssim \mathrm{poly}\left(\frac{n}{\epsilon}, \log(1/\delta)\right)$
Then, w.p. $\ge 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $\mathbb{E}\, L_{\mathcal{D}}(\mathbf{w}_t) \le \min_{p \in \mathcal{P}_{t_0},\, \|p\| \le n^{t_1}} L_{\mathcal{D}}(p) + \epsilon$. Here, the expectation is over the training examples.

We note that several hypothesis classes that were studied in PAC learning can be realized by polynomial threshold functions with polynomially bounded coefficients. This includes conjunctions, DNF and CNF formulas with constantly many terms, and DNF and CNF formulas with constantly many literals in each term. For example, over $\{\pm 1\}^n$ the conjunction $x_{i_1} \wedge \cdots \wedge x_{i_j}$ equals $\mathrm{sign}\left(x_{i_1} + \cdots + x_{i_j} - j + \frac{1}{2}\right)$, a degree-1 polynomial threshold with coefficients bounded by $n$. If we take the loss function to be the logistic loss or the hinge loss, Corollary 4 implies that SGD efficiently learns these hypothesis classes as well.

Our second implication shows that any continuous function is learnable (not necessarily in polynomial time) by SGD.

Corollary 5. Fix a continuous function $h^* : S^{n-1} \to \mathbb{R}$ and $\epsilon, \delta > 0$. Assume that $\mathcal{D}$ is realized² by $h^*$. Assume that we run Algorithm 1 on the network $\mathcal{N}(S, r, 1)$. If $\eta > 0$ is sufficiently small and $T$ and $r$ are sufficiently large, then, w.p. $\ge 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $\mathbb{E}\, L_{\mathcal{D}}(\mathbf{w}_t) \le \epsilon$.

3.2 Extensions

We next remark on two extensions of our main results. The extended results can be proved in a similar fashion to our results. To avoid cumbersome notation, we restrict the proofs to the main theorems as stated, and will elaborate on the extended results in an extended version of this manuscript. First, we assume that the replication parameter is the same for all nodes. In practice, replication parameters for different nodes are different. This can be captured by a vector $\{r_v\}_{v \in \mathrm{Int}(S)}$. Our main results can be extended to this case if for all $v$, $r_v \le \sum_{u \in \mathrm{in}(v)} r_u$ (a requirement that usually holds in practice). Second, we assume that there is no weight sharing, which is standard in convolutional networks. Our results can be extended to convolutional networks with weight sharing.

We also note that we assume that in each step of Algorithm 1, a fresh batch of examples is given. In practice this is often not the case. Rather, the algorithm is given a training set of examples, and at each step it samples from that set. In this case, our results provide guarantees on the training loss. If the training set is large enough, this also implies guarantees on the population loss via standard sample complexity results.

Acknowledgments

The author thanks Roy Frostig, Yoram Singer and Kunal Talwar for valuable discussions and comments.
²That is, if $(\mathbf{x}, y) \sim \mathcal{D}$ then $y = h^*(\mathbf{x})$ with probability 1.
1. What is the focus of the paper in deep learning? 2. What are the strengths and weaknesses of the proposed approach in the graphical framework? 3. What are the limitations of the presented results regarding the learnability of the stochastic gradient descent algorithm?
Review
Review Proving the learnability of the stochastic gradient descent algorithm is an important task in deep learning. The authors consider this problem in a graphical framework in terms of computation skeletons and provide a PAC-learning-type analysis. Since the problem itself is very difficult, the presented results are acceptable. The work is not at the top level to this reviewer, for the following two reasons: 1. The skeleton with the same number r of incoming neurons and the homogeneous linear kernel defined for the input node are pretty special. This leads to the proof of learnability of polynomials only, though Corollary 5 gives a special case of approximating a continuous function when it is realizable by the distribution. 2. The lower bound for r of order O(\epsilon^{-2} \log (1/\epsilon)) required for accuracy \epsilon in Theorems 1 and 2 is very demanding.
NIPS
Title SGD Learns the Conjugate Kernel Class of the Network

Abstract We show that the standard stochastic gradient descent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely et al. [2016]. The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth more than two. As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results.

1 Introduction

While stochastic gradient descent (SGD) from a random initialization is probably the most popular supervised learning algorithm today, we have very few results that depict conditions that guarantee its success. Indeed, to the best of our knowledge, Andoni et al. [2014] provides the only known result of this form, and it is valid in a rather restricted setting. Namely, for depth-2 networks, where the underlying distribution is Gaussian, the algorithm is full gradient descent (rather than SGD), and the task is regression when the learnt function is a constant degree polynomial. We build on the framework of Daniely et al. [2016] to establish guarantees on SGD in a rather general setting. Daniely et al. [2016] defined a framework that associates a reproducing kernel to a network architecture. They also connected the kernel to the network via the random initialization. Namely, they showed that right after the random initialization, any function in the kernel space can be approximated by changing the weights of the last layer. The quality of the approximation depends on the size of the network and the norm of the function in the kernel space. As optimizing the last layer is a convex procedure, the result of Daniely et al. [2016] intuitively shows that the optimization process starts from a favourable point for learning a function in the conjugate kernel space. In this paper we verify this intuition. Namely, for a fairly general family of architectures (that contains fully connected networks and convolutional networks) and supervised learning tasks, we show that if the network is large enough, the learning rate is small enough, and the number of SGD steps is large enough as well, SGD is guaranteed to learn any function in the corresponding kernel space. We emphasize that the number of steps and the size of the network are only required to be polynomial (which is best possible) in the relevant parameters – the norm of the function, the required accuracy parameter (ε), and the dimension of the input and the output of the network. Likewise, the result holds for any input distribution. To evaluate our result, one should understand which functions it guarantees that SGD will learn.
1. What is the focus of the paper regarding neural networks and SGD? 2. What are the strengths of the proposed approach, particularly in its applicability to various architectures and activation functions? 3. What are the weaknesses of the paper, especially regarding the definition of the kernel and the limitation of the result in practical scenarios? 4. How does the reviewer assess the novelty and significance of the paper's contribution to the field of neural networks and machine learning?
Review
Review This paper studies the problem of learning a class of functions related to neural networks using SGD. The class of functions can be defined using a kernel that is related to the neural network structure (which was defined in [Daniely et al. 2016]). The result shows that if SGD is applied to a neural network with a similar structure, but with many duplicated nodes, then the result can be competitive with the best function in the class.
Pros:
- This is one of the first analyses of SGD on multi-layer, non-linear neural networks.
- The result applies to various architectures and activation functions.
Cons:
- The definition of the kernel is not very intuitive, and it would be good to discuss the relationship between functions representable using a neural network and functions representable using this kernel.
- The same result seems to be easier to prove if all previous layers have random weights and SGD is applied only to the last layer. It's certainly good that SGD in this paper runs on all layers (which is similar to what's done in practice). However, for most practical architectures just training the last layer is not going to get good performance (when the task is difficult enough). The paper does not help in explaining why training all layers is more powerful than training the last layer. Overall this is still an interesting result.
NIPS
Title SGD Learns the Conjugate Kernel Class of the Network Abstract We show that the standard stochastic gradient decent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely et al. [2016]. The result holds for logdepth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth more that two. As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results. N/A As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results. 1 Introduction While stochastic gradient decent (SGD) from a random initialization is probably the most popular supervised learning algorithm today, we have very few results that depicts conditions that guarantee its success. Indeed, to the best of our knowledge, Andoni et al. [2014] provides the only known result of this form, and it is valid in a rather restricted setting. Namely, for depth-2 networks, where the underlying distribution is Gaussian, the algorithm is full gradient decent (rather than SGD), and the task is regression when the learnt function is a constant degree polynomial. We build on the framework of Daniely et al. [2016] to establish guarantees on SGD in a rather general setting. Daniely et al. [2016] defined a framework that associates a reproducing kernel to a network architecture. They also connected the kernel to the network via the random initialization. Namely, they showed that right after the random initialization, any function in the kernel space can be approximated by changing the weights of the last layer. The quality of the approximation depends on the size of the network and the norm of the function in the kernel space. As optimizing the last layer is a convex procedure, the result of Daniely et al. [2016] intuitively shows that the optimization process starts from a favourable point for learning a function in the conjugate kernel space. In this paper we verify this intuition. Namely, for a fairly general family of architectures (that contains fully connected networks and convolutional networks) and supervised learning tasks, we show that if the network is large enough, the learning rate is small enough, and the number of SGD steps is large enough as well, SGD is guaranteed to learn any function in the corresponding kernel space. We emphasize that the number of steps and the size of the network are only required to be polynomial (which is best possible) in the relevant parameters – the norm of the function, the required accuracy parameter (�), and the dimension of the input and the output of the network. Likewise, the result holds for any input distribution. To evaluate our result, one should understand which functions it guarantee that SGD will learn. 
Namely, what functions reside in the conjugate kernel space, how rich it is, and how good those functions are as predictors. From an empirical perspective, in [Daniely et al., 2017], it is shown that for standard convolutional networks the conjugate class contains functions whose performance is 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. close to the performance of the function that is actually learned by the network. This is based on experiments on the standard CIFAR-10 dataset. From a theoretical perspective, we list below a few implications that demonstrate the richness of the conjugate kernel space. These implications are valid for fully connected networks of any depth between 2 and log(n), where n is the input dimension. Likewise, they are also valid for convolutional networks of any depth between 2 and log(n), and with constantly many convolutional layers. • SGD is guaranteed to learn in polynomial time constant degree polynomials with polynomially bounded coefficients. As a corollary, SGD is guaranteed to learn in polynomial time conjunctions, DNF and CNF formulas with constantly many terms, and DNF and CNF formulas with constantly many literals in each term. These function classes comprise a considerable fraction of the function classes that are known to be poly-time (PAC) learnable by any method. Exceptions include constant degree polynomial thresholds with no restriction on the coefficients, decision lists and parities. • SGD is guaranteed to learn, not necessarily in polynomial time, any continuous function. This complements classical universal approximation results that show that neural networks can (approximately) express any continuous function (see Scarselli and Tsoi [1998] for a survey). Our results strengthen those results and show that networks are not only able to express those functions, but actually guaranteed to learn them. 1.1 Related work Guarantees on SGD. As noted above, there are very few results that provide polynomial time guarantees for SGD on NN. One notable exception is the work of Andoni et al. [2014], that proves a result that is similar to ours, but in a substantially more restricted setting. Concretely, their result holds for depth-2 fully connected networks, as opposed to rather general architecture and constant or logarithmic depth in our case. Likewise, the marginal distribution on the instance space is assumed to be Gaussian or uniform, as opposed to arbitrary in our case. In addition, the algorithm they consider is full gradient decent, which corresponds to SGD with infinitely large mini-batch, as opposed to SGD with arbitrary mini-batch size in our case. Finally, the underlying task is regression in which the target function is a constant degree polynomial, whereas we consider rather general supervised learning setting. Other polynomial time guarantees on learning deep architectures. Various recent papers show that poly-time learning is possible in the case that the the learnt function can be realized by a neural network with certain (usually fairly strong) restrictions on the weights [Livni et al., 2014, Zhang et al., 2016a, 2015, 2016b], or under the assumption that the data is generated by a generative model that is derived from the network architecture [Arora et al., 2014, 2016]. We emphasize that the main difference of those results from our results and the results of Andoni et al. [2014] is that they do not provide guarantees on the standard SGD learning algorithm. 
Rather, they show that under those aforementioned conditions, there are some algorithms, usually very different from SGD on the network, that are able to learn in polynomial time. Connection to kernels. As mentioned earlier, our paper builds on Daniely et al. [2016], who developed the association of kernels to NN which we rely on. Several previous papers [Mairal et al., 2014, Cho and Saul, 2009, Rahimi and Recht, 2009, 2007, Neal, 2012, Williams, 1997, Kar and Karnick, 2012, Pennington et al., 2015, Bach, 2015, 2014, Hazan and Jaakkola, 2015, Anselmi et al., 2015] investigated such associations, but in a more restricted settings (i.e., for less architectures). Some of those papers [Rahimi and Recht, 2009, 2007, Daniely et al., 2016, Kar and Karnick, 2012, Bach, 2015, 2014] also provide measure of concentration results, that show that w.h.p. the random initialization of the network’s weights is reach enough to approximate the functions in the corresponding kernel space. As a result, these papers provide polynomial time guarantees on the variant of SGD, where only the last layer is trained. We remark that with the exception of Daniely et al. [2016], those results apply just to depth-2 networks. 1.2 Discussion and future directions We next want to place this work in the appropriate learning theoretic context, and to elaborate further on this paper’s approach for investigating neural networks. For the sake of concreteness, let us restrict the discussion to binary classification over the Boolean cube. Namely, given examples from a distribution D on {±1}n × {0, 1}, the goal is to learn a function h : {±1}n → {0, 1} whose 0-1 error, L0−1D (h) = Pr(x,y)∼D (h(x) �= y), is as small as possible. We will use a bit of terminology. A model is a distribution D on {±1}n × {0, 1} and a model class is a collection M of models. We note that any function class H ⊂ {0, 1}{±1}n defines a model class, M(H), consisting of all models D such that L0−1D (h) = 0 for some h ∈ H. We define the capacity of a model class as the minimal number m for which there is an algorithm such that for every D ∈ M the following holds. Given m samples from D, the algorithm is guaranteed to return, w.p. ≥ 910 over the samples and its internal randomness, a function h : {±1}n → {0, 1} with 0-1 error ≤ 110 . We note that for function classes the capacity is the VC dimension, up to a constant factor. Learning theory analyses learning algorithms via model classes. Concretely, one fixes some model class M and show that the algorithm is guaranteed to succeed whenever the underlying model is from M. Often, the connection between the algorithm and the class at hand is very clear. For example, in the case that the model is derived from a function class H, the algorithm might simply be one that finds a function in H that makes no mistake on the given sample. The natural choice for a model class for analyzing SGD on NN would be the class of all functions that can be realized by the network, possibly with some reasonable restrictions on the weights. Unfortunately, this approach it is probably doomed to fail, as implied by various computational hardness results [Blum and Rivest, 1989, Kearns and Valiant, 1994, Blum et al., 1994, Kharitonov, 1993, Klivans and Sherstov, 2006, 2007, Daniely et al., 2014, Daniely and Shalev-Shwartz, 2016]. So, what model classes should we consider? With a few isolated exceptions (e.g. Bshouty et al. 
[1998]), all known efficiently learnable model classes are either linear model classes, or contained in an efficiently learnable linear model class: namely, function classes composed of compositions of some predefined embedding with linear threshold functions, or with linear functions over some finite field. Coming up with new tractable model classes would be fascinating progress. Still, as linear function classes are the main tool that learning theory currently has for providing guarantees on learning, it seems natural to try to analyze SGD via linear model classes. Our work follows this line of thought, and we believe that there is much more to achieve via this approach. Concretely, while our bounds are polynomial, the degree of the polynomials is rather large, and possibly much better quantitative bounds can be achieved. To be more concrete, suppose that we consider a simple fully connected architecture with 2 layers, ReLU activation, and $n$ hidden neurons. In this case, the capacity of the model class that our results guarantee SGD will learn is $\Theta(n^{1/3})$. For comparison, the capacity of the class of all functions that are realized by this network is $\Theta(n^2)$. As a challenge, we encourage the reader to prove that with this architecture (possibly with an activation different from the ReLU), SGD is guaranteed to learn some model class of capacity that is super-linear in $n$.

2 Preliminaries

Notation. We denote vectors by bold-face letters (e.g. $\mathbf{x}$), matrices by upper case letters (e.g. $W$), and collections of matrices by bold-face upper case letters (e.g. $\mathbf{W}$). The $p$-norm of $x \in \mathbb{R}^d$ is denoted by $\|x\|_p = \left(\sum_{i=1}^d |x_i|^p\right)^{1/p}$. We will also use the convention that $\|x\| = \|x\|_2$. For functions $\sigma : \mathbb{R} \to \mathbb{R}$ we let
$$\|\sigma\| := \sqrt{\mathbb{E}_{X \sim \mathcal{N}(0,1)}\, \sigma^2(X)} = \sqrt{\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \sigma^2(x)\, e^{-\frac{x^2}{2}}\, dx}\,.$$
Let $G = (V, E)$ be a directed acyclic graph. The set of neighbors incoming to a vertex $v$ is denoted $\mathrm{in}(v) := \{u \in V \mid uv \in E\}$. We also denote $\deg(v) = |\mathrm{in}(v)|$. Given a weight function $\delta : V \to [0,\infty)$ and $U \subseteq V$ we let $\delta(U) = \sum_{u \in U} \delta(u)$. The $(d-1)$-dimensional sphere is denoted $\mathbb{S}^{d-1} = \{x \in \mathbb{R}^d \mid \|x\| = 1\}$. We use $[x]_+$ to denote $\max(x, 0)$.

Input space. Throughout the paper we assume that each example is a sequence of $n$ elements, each of which is represented as a unit vector. Namely, we fix $n$ and take the input space to be $\mathcal{X} = \mathcal{X}_{n,d} = \left(\mathbb{S}^{d-1}\right)^n$. Each input example is denoted
$$\mathbf{x} = (x_1, \ldots, x_n), \quad \text{where } x_i \in \mathbb{S}^{d-1}. \quad (1)$$
While this notation is slightly non-standard, it unifies input types seen in various domains (see Daniely et al. [2016]).

Supervised learning. The goal in supervised learning is to devise a mapping from the input space $\mathcal{X}$ to an output space $\mathcal{Y}$ based on a sample $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$, where the pairs $(\mathbf{x}_i, y_i) \in \mathcal{X} \times \mathcal{Y}$ are drawn i.i.d. from a distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$. A supervised learning problem is further specified by an output length $k$ and a loss function $\ell : \mathbb{R}^k \times \mathcal{Y} \to [0,\infty)$, and the goal is to find a predictor $h : \mathcal{X} \to \mathbb{R}^k$ whose loss, $L_{\mathcal{D}}(h) := \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}\, \ell(h(\mathbf{x}), y)$, is small. The empirical loss $L_S(h) := \frac{1}{m}\sum_{i=1}^m \ell(h(\mathbf{x}_i), y_i)$ is commonly used as a proxy for the loss $L_{\mathcal{D}}$. When $h$ is defined by a vector $\mathbf{w}$ of parameters, we will use the notations $L_{\mathcal{D}}(\mathbf{w}) = L_{\mathcal{D}}(h)$, $L_S(\mathbf{w}) = L_S(h)$ and $\ell_{(\mathbf{x},y)}(\mathbf{w}) = \ell(h(\mathbf{x}), y)$. Regression problems correspond to $k = 1$, $\mathcal{Y} = \mathbb{R}$ and, for instance, the squared loss $\ell_{\mathrm{square}}(\hat y, y) = (\hat y - y)^2$. Binary classification is captured by $k = 1$, $\mathcal{Y} = \{\pm 1\}$ and, say, the zero-one loss $\ell_{0-1}(\hat y, y) = \mathbb{1}[\hat y y \leq 0]$ or the hinge loss $\ell_{\mathrm{hinge}}(\hat y, y) = [1 - \hat y y]_+$.
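Since the analysis requires activations normalized so that $\|\sigma\| = 1$, it can be handy to check or enforce this numerically. Below is a minimal Python sketch (ours, not from the paper) that estimates $\|\sigma\|$ by Monte Carlo sampling from $\mathcal{N}(0,1)$ and rescales an activation accordingly; for the ReLU, $\mathbb{E}[\max(X,0)^2] = 1/2$, so the normalized ReLU is $\sqrt{2}\,[x]_+$.

```python
import numpy as np

def activation_norm(sigma, n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of ||sigma|| = sqrt(E_{X~N(0,1)} sigma(X)^2)."""
    x = np.random.default_rng(seed).standard_normal(n_samples)
    return np.sqrt(np.mean(sigma(x) ** 2))

def normalize(sigma):
    """Rescale sigma so the returned activation has (estimated) unit norm."""
    c = activation_norm(sigma)
    return lambda x: sigma(x) / c

relu = lambda x: np.maximum(x, 0.0)
print(activation_norm(relu))             # ~ 1/sqrt(2) ~ 0.7071
relu_normalized = normalize(relu)        # ~ sqrt(2) * ReLU
print(activation_norm(relu_normalized))  # ~ 1.0
```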
Multiclass classification is captured by $k$ being the number of classes, $\mathcal{Y} = [k]$, and, say, the zero-one loss $\ell_{0-1}(\hat{\mathbf{y}}, y) = \mathbb{1}[\hat y_y \leq \max_{y' \neq y} \hat y_{y'}]$ or the logistic loss $\ell_{\log}(\hat{\mathbf{y}}, y) = -\log\left(p_y(\hat{\mathbf{y}})\right)$, where $p : \mathbb{R}^k \to \Delta^{k-1}$ is given by $p_i(\hat{\mathbf{y}}) = \frac{e^{\hat y_i}}{\sum_{j=1}^k e^{\hat y_j}}$. A loss $\ell$ is $L$-Lipschitz if for all $y \in \mathcal{Y}$, the function $\ell_y(\hat{\mathbf{y}}) := \ell(\hat{\mathbf{y}}, y)$ is $L$-Lipschitz. Likewise, it is convex if $\ell_y$ is convex for every $y \in \mathcal{Y}$.

Neural network learning. We define a neural network $\mathcal{N}$ to be a vertex-weighted directed acyclic graph (DAG) whose nodes are denoted $V(\mathcal{N})$ and edges $E(\mathcal{N})$. The weight function will be denoted by $\delta : V(\mathcal{N}) \to [0,\infty)$, and its sole role is to dictate the distribution of the initial weights. We will refer to $\mathcal{N}$'s nodes as neurons. Each non-input neuron, i.e. each neuron with incoming edges, is associated with an activation function $\sigma_v : \mathbb{R} \to \mathbb{R}$. In this paper, an activation can be any function $\sigma : \mathbb{R} \to \mathbb{R}$ that is right and left differentiable, square integrable with respect to the Gaussian measure on $\mathbb{R}$, and normalized in the sense that $\|\sigma\| = 1$. The neurons having only incoming edges are called the output neurons. To match the setup of supervised learning defined above, a network $\mathcal{N}$ has $nd$ input neurons and $k$ output neurons, denoted $o_1, \ldots, o_k$. A network $\mathcal{N}$ together with a weight vector $\mathbf{w} = \{w_{uv} \mid uv \in E\} \cup \{b_v \mid v \in V \text{ is an internal neuron}\}$ defines a predictor $h_{\mathcal{N},\mathbf{w}} : \mathcal{X} \to \mathbb{R}^k$ whose prediction is given by "propagating" $\mathbf{x}$ forward through the network. Concretely, we define $h_{v,\mathbf{w}}(\cdot)$ to be the output of the subgraph of the neuron $v$ as follows: for an input neuron $v$, $h_{v,\mathbf{w}}$ outputs the corresponding coordinate in $\mathbf{x}$; for internal neurons, we define $h_{v,\mathbf{w}}$ recursively as
$$h_{v,\mathbf{w}}(\mathbf{x}) = \sigma_v\Big(\sum_{u \in \mathrm{in}(v)} w_{uv}\, h_{u,\mathbf{w}}(\mathbf{x}) + b_v\Big)\,.$$
For output neurons, we define $h_{v,\mathbf{w}}$ as
$$h_{v,\mathbf{w}}(\mathbf{x}) = \sum_{u \in \mathrm{in}(v)} w_{uv}\, h_{u,\mathbf{w}}(\mathbf{x})\,.$$
Finally, we let $h_{\mathcal{N},\mathbf{w}}(\mathbf{x}) = (h_{o_1,\mathbf{w}}(\mathbf{x}), \ldots, h_{o_k,\mathbf{w}}(\mathbf{x}))$.

We next describe the learning algorithm that we analyze in this paper. While there is no standard training algorithm for neural networks, the algorithms used in practice are usually quite similar to the one we describe, both in the way the weights are initialized and the way they are updated. We will use the popular Xavier initialization [Glorot and Bengio, 2010] for the network weights. Fix $0 \leq \beta \leq 1$. We say that $\mathbf{w}^0 = \{w^0_{uv}\}_{uv \in E} \cup \{b_v \mid v \in V \text{ is an internal neuron}\}$ are $\beta$-biased random weights (or a $\beta$-biased random initialization) if each weight $w_{uv}$ is sampled independently from a normal distribution with mean 0 and variance $(1-\beta)\, d\, \delta(u)/\delta(\mathrm{in}(v))$ if $u$ is an input neuron, and $(1-\beta)\, \delta(u)/\delta(\mathrm{in}(v))$ otherwise. Finally, each bias term $b_v$ is sampled independently from a normal distribution with mean 0 and variance $\beta$. We note that the rationale behind this initialization scheme is that for every example $\mathbf{x}$ and every neuron $v$ we have $\mathbb{E}_{\mathbf{w}^0}\left(h_{v,\mathbf{w}^0}(\mathbf{x})\right)^2 = 1$ (see Glorot and Bengio [2010]).

Kernel classes. A function $\kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a reproducing kernel, or simply a kernel, if for every $\mathbf{x}_1, \ldots, \mathbf{x}_r \in \mathcal{X}$, the $r \times r$ matrix $\Gamma_{i,j} = \kappa(\mathbf{x}_i, \mathbf{x}_j)$ is positive semi-definite. Each kernel induces a Hilbert space $\mathcal{H}_\kappa$ of functions from $\mathcal{X}$ to $\mathbb{R}$ with a corresponding norm $\|\cdot\|_\kappa$. For $h \in \mathcal{H}^k_\kappa$ we denote $\|h\|_\kappa = \sqrt{\sum_{i=1}^k \|h_i\|^2_\kappa}$. A kernel and its corresponding space are normalized if $\kappa(\mathbf{x},\mathbf{x}) = 1$ for all $\mathbf{x} \in \mathcal{X}$.

Algorithm 1 Generic Neural Network Training
Input: Network $\mathcal{N}$, learning rate $\eta > 0$, batch size $m$, number of steps $T > 0$, bias parameter $0 \leq \beta \leq 1$, flag zero prediction layer $\in$ {True, False}.
Let $\mathbf{w}^0$ be $\beta$-biased random weights
if zero prediction layer then
  Set $w^0_{uv} = 0$ whenever $v$ is an output neuron
end if
for $t = 1, \ldots, T$ do
  Obtain a mini-batch $S_t = \{(\mathbf{x}^t_i, y^t_i)\}_{i=1}^m \sim \mathcal{D}^m$
  Using back-propagation, calculate a stochastic gradient $\mathbf{v}^t = \nabla L_{S_t}(\mathbf{w}^t)$
  Update $\mathbf{w}^{t+1} = \mathbf{w}^t - \eta\, \mathbf{v}^t$
end for
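To make the $\beta$-biased initialization concrete, here is a short Python sketch (our illustration, not the paper's code) of the sampling rule for a single internal neuron $v$, in the common special case of uniform vertex weights $\delta \equiv 1$, where $\delta(u)/\delta(\mathrm{in}(v))$ reduces to $1/\deg(v)$.

```python
import numpy as np

def beta_biased_init(deg_v, beta, d=None, rng=None):
    """Sample incoming weights and the bias for one internal neuron v.

    Assumes uniform vertex weights (delta = 1), so delta(u)/delta(in(v)) = 1/deg_v.
    Pass d when the incoming neurons are input neurons, to include the extra factor d.
    """
    rng = rng or np.random.default_rng(0)
    scale = (1.0 - beta) / deg_v
    if d is not None:                                # edges from input neurons
        scale *= d
    w = rng.normal(0.0, np.sqrt(scale), size=deg_v)  # w_uv ~ N(0, (1-beta)*[d]/deg_v)
    b = rng.normal(0.0, np.sqrt(beta))               # b_v ~ N(0, beta)
    return w, b

w, b = beta_biased_init(deg_v=128, beta=0.1)
print(w.std(), b)  # weight std ~ sqrt(0.9/128)
```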
Kernels give rise to popular benchmarks for learning algorithms. Fix a normalized kernel $\kappa$ and $M > 0$. It is well known that for an $L$-Lipschitz loss $\ell$, the SGD algorithm is guaranteed to return a function $h$ such that $\mathbb{E}\, L_{\mathcal{D}}(h) \leq \min_{h' \in \mathcal{H}^k_\kappa,\, \|h'\|_\kappa \leq M} L_{\mathcal{D}}(h') + \epsilon$ using $\left(\frac{LM}{\epsilon}\right)^2$ examples. In the context of multiclass classification, for $\gamma > 0$ we define $\ell_\gamma : \mathbb{R}^k \times [k] \to \mathbb{R}$ by $\ell_\gamma(\hat{\mathbf{y}}, y) = \mathbb{1}[\hat y_y \leq \gamma + \max_{y' \neq y} \hat y_{y'}]$. We say that a distribution $\mathcal{D}$ on $\mathcal{X} \times [k]$ is $M$-separable w.r.t. $\kappa$ if there is $h^* \in \mathcal{H}^k_\kappa$ such that $\|h^*\|_\kappa \leq M$ and $L^{\ell_1}_{\mathcal{D}}(h^*) = 0$. In this case, the perceptron algorithm is guaranteed to return a function $h$ such that $\mathbb{E}\, L^{0-1}_{\mathcal{D}}(h) \leq \epsilon$ using $\frac{2M^2}{\epsilon}$ examples. We note that both for the perceptron and SGD, the above mentioned results are best possible, in the sense that any algorithm with the same guarantees will have to use at least the same number of examples, up to a constant factor.

Computation skeletons [Daniely et al., 2016]. In this section we define a simple structure which we term a computation skeleton. The purpose of a computation skeleton is to compactly describe a feed-forward computation from an input to an output. A single skeleton encompasses a family of neural networks that share the same skeletal structure. Likewise, it defines a corresponding normalized kernel.

Definition 1. A computation skeleton $\mathcal{S}$ is a DAG with $n$ inputs, whose non-input nodes are labeled by activations, and which has a single output node $\mathrm{out}(\mathcal{S})$.

Figure 1 shows four example skeletons, omitting the designation of the activation functions. We denote by $|\mathcal{S}|$ the number of non-input nodes of $\mathcal{S}$. The following definition shows how a skeleton, accompanied by a replication parameter $r \geq 1$ and a number of output nodes $k$, induces a neural network architecture.

Definition 2 (Realization of a skeleton). Let $\mathcal{S}$ be a computation skeleton and consider input coordinates in $\mathbb{S}^{d-1}$ as in (1). For $r, k \geq 1$ we define the following neural network $\mathcal{N} = \mathcal{N}(\mathcal{S}, r, k)$. For each input node of $\mathcal{S}$, $\mathcal{N}$ has $d$ corresponding input neurons with weight $1/d$. For each internal node $v \in \mathcal{S}$ labelled by an activation $\sigma$, $\mathcal{N}$ has $r$ neurons $v^1, \ldots, v^r$, each with activation $\sigma$ and weight $1/r$. In addition, $\mathcal{N}$ has $k$ output neurons $o_1, \ldots, o_k$ with the identity activation $\sigma(x) = x$ and weight 1. There is an edge $u^i v^j \in E(\mathcal{N})$ whenever $uv \in E(\mathcal{S})$. For every output node $v$ in $\mathcal{S}$, each neuron $v^j$ is connected to all output neurons $o_1, \ldots, o_k$. We term $\mathcal{N}$ the $(r, k)$-fold realization of $\mathcal{S}$.

Note that the replication parameter $r$ corresponds, in the terminology of convolutional networks, to the number of channels taken in a convolutional layer, and to the number of hidden neurons taken in a fully-connected layer.

In addition to networks' architectures, a computation skeleton $\mathcal{S}$ also defines a normalized kernel $\kappa_{\mathcal{S}} : \mathcal{X} \times \mathcal{X} \to [-1, 1]$. To define the kernel, we use the notion of a conjugate activation. For $\rho \in [-1, 1]$, we denote by $N_\rho$ the multivariate Gaussian distribution on $\mathbb{R}^2$ with mean 0 and covariance matrix $\left(\begin{smallmatrix} 1 & \rho \\ \rho & 1 \end{smallmatrix}\right)$.

Definition 3 (Conjugate activation). The conjugate activation of an activation $\sigma$ is the function $\hat\sigma : [-1, 1] \to \mathbb{R}$ defined as $\hat\sigma(\rho) = \mathbb{E}_{(X,Y)\sim N_\rho}\, \sigma(X)\sigma(Y)$.
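As an illustration (ours, not from the paper), the conjugate activation of Definition 3 can be estimated by Monte Carlo: sample Gaussian pairs with correlation $\rho$ and average $\sigma(X)\sigma(Y)$. For the normalized ReLU $\sigma(x) = \sqrt{2}\,[x]_+$, the estimate can be checked against the closed form $\hat\sigma(\rho) = \frac{\sqrt{1-\rho^2} + (\pi - \cos^{-1}\rho)\,\rho}{\pi}$ known from the arc-cosine kernel literature [Cho and Saul, 2009].

```python
import numpy as np

def conjugate_activation(sigma, rho, n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of sigma_hat(rho) = E_{(X,Y)~N_rho} sigma(X) sigma(Y)."""
    rng = rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    z = rng.standard_normal(n_samples)
    y = rho * x + np.sqrt(1.0 - rho**2) * z   # (x, y) has correlation rho
    return np.mean(sigma(x) * sigma(y))

relu_n = lambda x: np.sqrt(2.0) * np.maximum(x, 0.0)   # normalized ReLU, ||sigma|| = 1
closed_form = lambda rho: (np.sqrt(1 - rho**2) + (np.pi - np.arccos(rho)) * rho) / np.pi

for rho in [0.0, 0.5, 0.9]:
    print(rho, conjugate_activation(relu_n, rho), closed_form(rho))
```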
The following definition gives the kernel corresponding to a skeleton.

Definition 4 (Compositional kernels). Let $\mathcal{S}$ be a computation skeleton and let $0 \leq \beta \leq 1$. For every node $v$, inductively define a kernel $\kappa^\beta_v : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ as follows. For an input node $v$ corresponding to the $i$th coordinate, define $\kappa^\beta_v(\mathbf{x}, \mathbf{y}) = \langle x_i, y_i \rangle$. For a non-input node $v$, define
$$\kappa^\beta_v(\mathbf{x}, \mathbf{y}) = \hat\sigma_v\Bigg( (1-\beta)\, \frac{\sum_{u \in \mathrm{in}(v)} \kappa^\beta_u(\mathbf{x}, \mathbf{y})}{|\mathrm{in}(v)|} + \beta \Bigg)\,.$$
The final kernel $\kappa^\beta_{\mathcal{S}}$ is $\kappa^\beta_{\mathrm{out}(\mathcal{S})}$. The resulting Hilbert space and norm are denoted $\mathcal{H}_{\mathcal{S},\beta}$ and $\|\cdot\|_{\mathcal{S},\beta}$ respectively.

3 Main results

An activation $\sigma : \mathbb{R} \to \mathbb{R}$ is called $C$-bounded if $\|\sigma\|_\infty, \|\sigma'\|_\infty, \|\sigma''\|_\infty \leq C$. Fix a skeleton $\mathcal{S}$ and a 1-Lipschitz¹ convex loss $\ell$. Define $\mathrm{comp}(\mathcal{S}) = \prod_{i=1}^{\mathrm{depth}(\mathcal{S})} \max_{v \in \mathcal{S},\, \mathrm{depth}(v) = i} (\deg(v) + 1)$ and $C(\mathcal{S}) = (8C)^{\mathrm{depth}(\mathcal{S})} \sqrt{\mathrm{comp}(\mathcal{S})}$, where $C$ is the minimal number for which all the activations in $\mathcal{S}$ are $C$-bounded, and $\mathrm{depth}(v)$ is the maximal length of a path from an input node to $v$. We also define $C'(\mathcal{S}) = (4C)^{\mathrm{depth}(\mathcal{S})} \sqrt{\mathrm{comp}(\mathcal{S})}$, where $C$ is the minimal number for which all the activations in $\mathcal{S}$ are $C$-Lipschitz and satisfy $|\sigma(0)| \leq C$. Throughout this and the remaining sections we use $\lesssim$ and $\gtrsim$ to hide universal constants. Likewise, we fix the bias parameter $\beta$ and therefore omit it from the relevant notation.

¹If $\ell$ is $L$-Lipschitz, we can replace $\ell$ by $\frac{1}{L}\ell$ and the learning rate $\eta$ by $L\eta$. The operation of Algorithm 1 will be identical to its operation before the modification. Given this observation, it is easy to derive results for general $L$ from our results. Hence, to save one parameter, we will assume that $L = 1$.

We note that for constant depth skeletons with maximal degree that is polynomial in $n$, $C(\mathcal{S})$ and $C'(\mathcal{S})$ are polynomial in $n$. These quantities are polynomial in $n$ also for various log-depth skeletons. For example, this is true for fully connected skeletons, or more generally, layered skeletons with constantly many layers that are not fully connected.

Theorem 1. Suppose that all activations are $C$-bounded. Let $M, \epsilon > 0$. Suppose that we run Algorithm 1 on the network $\mathcal{N}(\mathcal{S}, r, k)$ with the following parameters:
• $\eta = \eta'/r$ for $\eta' \lesssim \frac{\epsilon}{(C'(\mathcal{S}))^2}$
• $T \gtrsim \frac{M^2}{\eta' \epsilon}$
• $r \gtrsim \frac{C^4 (T\eta')^2 M^2 (C'(\mathcal{S}))^4 \log\left(\frac{C|\mathcal{S}|}{\epsilon\delta}\right)}{\epsilon^2} + d$
• Zero initialized prediction layer
• Arbitrary $m$
Then, w.p. $\geq 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $\mathbb{E}\, L_{\mathcal{D}}(\mathbf{w}^t) \leq \min_{h \in \mathcal{H}^k_{\mathcal{S}},\, \|h\|_{\mathcal{S}} \leq M} L_{\mathcal{D}}(h) + \epsilon$. Here, the expectation is over the training examples.

We next consider ReLU activations. Here, $C'(\mathcal{S}) = (\sqrt{32})^{\mathrm{depth}(\mathcal{S})} \sqrt{\mathrm{comp}(\mathcal{S})}$.

Theorem 2. Suppose that all activations are the ReLU. Let $M, \epsilon > 0$. Suppose that we run Algorithm 1 on the network $\mathcal{N}(\mathcal{S}, r, k)$ with the following parameters:
• $\eta = \eta'/r$ for $\eta' \lesssim \frac{\epsilon}{(C'(\mathcal{S}))^2}$
• $T \gtrsim \frac{M^2}{\eta' \epsilon}$
• $r \gtrsim \frac{(T\eta')^2 M^2 (C'(\mathcal{S}))^4 \log\left(\frac{|\mathcal{S}|}{\epsilon\delta}\right)}{\epsilon^2} + d$
• Zero initialized prediction layer
• Arbitrary $m$
Then, w.p. $\geq 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $\mathbb{E}\, L_{\mathcal{D}}(\mathbf{w}^t) \leq \min_{h \in \mathcal{H}^k_{\mathcal{S}},\, \|h\|_{\mathcal{S}} \leq M} L_{\mathcal{D}}(h) + \epsilon$. Here, the expectation is over the training examples.

Finally, we consider the case in which the last layer is also initialized randomly. Here, we provide guarantees in a more restricted setting of supervised learning. Concretely, we consider multiclass classification, when $\mathcal{D}$ is separable with margin, and $\ell$ is the logistic loss.

Theorem 3. Suppose that all activations are $C$-bounded, that $\mathcal{D}$ is $M$-separable w.r.t. $\kappa_{\mathcal{S}}$, and let $\epsilon > 0$. Suppose we run Algorithm 1 on $\mathcal{N}(\mathcal{S}, r, k)$ with the following parameters:
• $\eta = \eta'/r$ for $\eta' \lesssim \frac{\epsilon^2}{M^2 (C(\mathcal{S}))^4}$
• $T \gtrsim \frac{\log(k)\, M^2}{\eta' \epsilon^2}$
• $r \gtrsim C^4 (C(\mathcal{S}))^4 M^2 (T\eta')^2 \log\left(\frac{C|\mathcal{S}|}{\epsilon}\right) + k + d$
• Randomly initialized prediction layer
• Arbitrary $m$
Then, w.p. $\geq \frac{1}{4}$ over the choice of the initial weights and the training examples, there is $t \in [T]$ such that $L^{0-1}_{\mathcal{D}}(\mathbf{w}^t) \leq \epsilon$.
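As a concrete illustration of Definition 4 (ours, with a hypothetical edge-list encoding of the skeleton), the compositional kernel can be evaluated by a single pass over the skeleton's nodes in topological order, given the conjugate activation of each non-input node:

```python
import numpy as np

def compositional_kernel(x, y, n_inputs, in_edges, sigma_hat, beta):
    """Evaluate kappa^beta_S(x, y) for a skeleton S (Definition 4).

    x, y:      arrays of shape (n_inputs, d); rows are unit vectors.
    in_edges:  dict mapping each non-input node id (>= n_inputs, ids in
               topological order, largest id = out(S)) to its incoming nodes.
    sigma_hat: dict mapping each non-input node id to its conjugate activation.
    """
    kappa = {i: float(x[i] @ y[i]) for i in range(n_inputs)}  # input nodes
    for v in sorted(in_edges):                                # topological order
        mean_in = np.mean([kappa[u] for u in in_edges[v]])
        kappa[v] = sigma_hat[v]((1.0 - beta) * mean_in + beta)
    return kappa[max(in_edges)]                               # kernel at out(S)

# Example: a depth-2 tree skeleton with normalized-ReLU conjugate activations.
relu_hat = lambda r: (np.sqrt(1 - r**2) + (np.pi - np.arccos(r)) * r) / np.pi
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3)); x /= np.linalg.norm(x, axis=1, keepdims=True)
y = rng.standard_normal((4, 3)); y /= np.linalg.norm(y, axis=1, keepdims=True)
edges = {4: [0, 1], 5: [2, 3], 6: [4, 5]}   # two hidden nodes feeding out(S)
print(compositional_kernel(x, y, 4, edges, {v: relu_hat for v in edges}, beta=0.1))
```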
3.1 Implications

To demonstrate our results, let us elaborate on a few implications for specific network architectures. To this end, let us fix the instance space $\mathcal{X}$ to be either $\{\pm 1\}^n$ or $\mathbb{S}^{n-1}$. Also, fix a bias parameter $1 \geq \beta > 0$, a batch size $m$, and a skeleton $\mathcal{S}$ that is a skeleton of a fully connected network of depth between 2 and $\log(n)$. Finally, we also fix the activation function to be either the ReLU or a $C$-bounded activation, assume that the prediction layer is initialized to 0, and fix the loss function to be some convex and Lipschitz loss function. Very similar results are valid for convolutional networks with constantly many convolutional layers; we omit the details for brevity.

Our first implication shows that SGD is guaranteed to efficiently learn constant-degree polynomials with polynomially bounded weights. To this end, let us denote by $\mathcal{P}_t$ the collection of degree-$t$ polynomials. Furthermore, for any polynomial $p$ we denote by $\|p\|$ the $\ell_2$ norm of its coefficients.

Corollary 4. Fix any positive integers $t_0, t_1$. Suppose that we run Algorithm 1 on the network $\mathcal{N}(\mathcal{S}, r, 1)$ with the following parameters:
• $\eta = \mathrm{poly}\left(\frac{\epsilon}{n}\right)$
• $T, r = \mathrm{poly}\left(\frac{n}{\epsilon}, \log\left(\frac{1}{\delta}\right)\right)$
Then, w.p. $\geq 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $\mathbb{E}\, L_{\mathcal{D}}(\mathbf{w}^t) \leq \min_{p \in \mathcal{P}_{t_0},\, \|p\| \leq n^{t_1}} L_{\mathcal{D}}(p) + \epsilon$. Here, the expectation is over the training examples.

We note that several hypothesis classes that were studied in PAC learning can be realized by polynomial threshold functions with polynomially bounded coefficients. These include conjunctions, DNF and CNF formulas with constantly many terms, and DNF and CNF formulas with constantly many literals in each term. If we take the loss function to be the logistic loss or the hinge loss, Corollary 4 implies that SGD efficiently learns these hypothesis classes as well.

Our second implication shows that any continuous function is learnable (not necessarily in polynomial time) by SGD.

Corollary 5. Fix a continuous function $h^* : \mathbb{S}^{n-1} \to \mathbb{R}$ and $\epsilon, \delta > 0$. Assume that $\mathcal{D}$ is realized² by $h^*$. Assume that we run Algorithm 1 on the network $\mathcal{N}(\mathcal{S}, r, 1)$. If $\eta > 0$ is sufficiently small and $T$ and $r$ are sufficiently large, then, w.p. $\geq 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $\mathbb{E}\, L_{\mathcal{D}}(\mathbf{w}^t) \leq \epsilon$.

3.2 Extensions

We next remark on two extensions of our main results. The extended results can be proved in a similar fashion to our results. To avoid cumbersome notation, we restrict the proofs to the main theorems as stated, and will elaborate on the extended results in an extended version of this manuscript. First, we assume that the replication parameter is the same for all nodes. In practice, replication parameters for different nodes are different. This can be captured by a vector $\{r_v\}_{v \in \mathrm{Int}(\mathcal{S})}$. Our main results can be extended to this case provided that $r_v \leq \sum_{u \in \mathrm{in}(v)} r_u$ for all $v$ (a requirement that usually holds in practice). Second, we assume that there is no weight sharing of the kind that is standard in convolutional networks. Our results can be extended to convolutional networks with weight sharing.

We also note that we assume that in each step of Algorithm 1 a fresh batch of examples is given. In practice this is often not the case; rather, the algorithm is given a training set of examples, and at each step it samples from that set. In this case, our results provide guarantees on the training loss. If the training set is large enough, this also implies guarantees on the population loss via standard sample complexity results.

Acknowledgments

The author thanks Roy Frostig, Yoram Singer and Kunal Talwar for valuable discussions and comments.
²That is, if $(\mathbf{x}, y) \sim \mathcal{D}$ then $y = h^*(\mathbf{x})$ with probability 1.
1. What is the focus of the paper in terms of theoretical problems related to neural network learning? 2. What is the main contribution of the paper, particularly in regard to the approximation of the best function in the conjugate kernel space of the network? 3. How does the reviewer assess the clarity and quality of the paper's content, specifically regarding the introduction and proof of main theorems? 4. Are there any minor issues or typos mentioned by the reviewer that could be improved upon?
Review
Review This paper addresses an important theoretical problem: how a standard stochastic gradient descent algorithm can be guaranteed to learn, in polynomial time, an approximation of the best function in the conjugate kernel space of the network. The authors claim that this is the first polynomial time guarantee for neural network learning with depth more than two. In general, the paper is clearly written, but the authors use too many pages for background introduction (such as kernel classes and neural network learning), leaving little room for the proofs of the main theorems and thus making it quite challenging to fully understand all technical details. Another minor issue is the spelling of "descent": the authors repeatedly write "decend" in the paper. Overall, I think this paper addresses an important theoretical problem on neural network learning.
NIPS
Title Value Function Decomposition for Iterative Design of Reinforcement Learning Agents

Abstract Designing reinforcement learning (RL) agents is typically a difficult process that requires numerous design iterations. Learning can fail for a multitude of reasons, and standard RL methods provide too few tools to provide insight into the exact cause. In this paper, we show how to integrate value decomposition into a broad class of actor-critic algorithms and use it to assist in the iterative agent-design process. Value decomposition separates a reward function into distinct components and learns value estimates for each. These value estimates provide insight into an agent's learning and decision-making process and enable new training methods to mitigate common problems. As a demonstration, we introduce SAC-D, a variant of soft actor-critic (SAC) adapted for value decomposition. SAC-D maintains similar performance to SAC, while learning a larger set of value predictions. We also introduce decomposition-based tools that exploit this information, including a new reward influence metric, which measures each reward component's effect on agent decision-making. Using these tools, we provide several demonstrations of decomposition's use in identifying and addressing problems in the design of both environments and agents. Value decomposition is broadly applicable and easy to incorporate into existing algorithms and workflows, making it a powerful tool in an RL practitioner's toolbox.

1 Introduction

Deep reinforcement-learning (RL) approaches have achieved successes in a range of application areas such as gaming [5, 32, 36, 38], robotics [22], and the natural sciences [25, 35]. Despite these successes, applying RL techniques to complex control problems remains a daunting undertaking, where initial attempts often result in underwhelming performance. Unfortunately, there are many reasons why an agent may fail to learn a good policy, making it difficult to diagnose which reason(s) caused a particular agent to fail. For example: the state features may be insufficient to make accurate predictions; different task objectives defining the reward function may be imbalanced; the agent may fail to sufficiently explore the state-action space; values may not accurately propagate to more distant states; the neural network may not have sufficient capacity to approximate the policy or value function(s); or there may be subtle differences between training and evaluation environments. Without a way to diagnose the causes of poor performance or to recognize when a problem has been remedied, practitioners typically engage in a long trial-and-error design process until an agent reaches a desired level of performance. Frustrations with this trial-and-error process have been expressed in other work [16].

We describe how value decomposition, a simple, broadly-applicable technique, can address these application challenges. In RL, the agent receives a reward that is often a sum of many reward components, each designed to encode some aspect of the desired agent behavior. From this composite reward, it learns a single composite value function. Using value decomposition, an agent learns a component value function for each reward component.
To perform policy optimization, the composite value function is recovered by taking a weighted sum of the component value functions. While prior work has proposed value decomposition methods for discrete-action Q-learning [19, 29, 31], we show how value decomposition can be incorporated into a broad class of actor-critic (AC) methods. In addition, we introduce SAC-D, a version of soft actor-critic (SAC) [13, 14] with value decomposition, and explore its use in multi-dimensional continuous-action environments. We also introduce the influence metric, which measures how much an agent's decisions are affected by each reward component. While earlier work focuses on its use in reward design [16, 19], value decomposition can facilitate diagnosis of a wide range of issues and enable new training methodologies. To demonstrate its utility, in Sec. 6 we show how to use it to: (1) diagnose insufficient state features; (2) diagnose value prediction errors and exploit the decomposed structure to inject background knowledge; and (3) identify reward components that are inhibiting exploration and mitigate the effect by gradually incorporating component predictions into policy optimization.

Value decomposition's additional diagnostic and training capabilities come at the cost of a more challenging prediction problem: instead of learning a single value function, many must be learned. To investigate whether this difficulty negatively impacts agent performance, we compare the average performance of SAC-D to SAC on benchmark environments. We find that a naive implementation of SAC-D underperforms SAC, and then show how to improve SAC-D so that it matches and sometimes exceeds SAC's performance. These improvements may also be applied to value decomposition for other AC algorithms.

While variations of value decomposition have been explored extensively in past work (see Sec. 7), in this paper we make the following contributions. (1) We show how to integrate value decomposition into a broad class of actor-critic algorithms. (2) We analyze the performance of different implementations of value decomposition for SAC on a range of benchmark continuous-action environments. (3) We introduce the influence metric: a novel value decomposition metric for measuring how much each reward component affects decision-making. (4) We provide a set of illustrative examples of how value decomposition and influence can be used to diagnose various kinds of learning challenges. (5) We describe new training methods that exploit the value decomposition structure and can be used to mitigate different learning challenges.

2 Background

2.1 MDPs and Q-functions

In RL, an agent's interaction with the environment is modeled as a Markov Decision Process (MDP): $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$, where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of actions, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is a state transition probability function $P(s, a, s') = \Pr(S_{t+1} = s' \mid S_t = s, A_t = a)$, $R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is a reward function $R(s, a) = \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a]$, and $\gamma \in [0, 1]$ discounts future rewards.⁴ The goal of an agent is to learn a policy $\pi(a|s)$ that maps states to an action probability distribution that maximizes the sum of future rewards. The agent is trained to maximize the discounted return $\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\right]$. The Q-function maps state-action pairs to the expected cumulative discounted reward when starting in state $s$, taking action $a$, and then following policy $\pi$ thereafter:
$$Q^\pi(s, a) \triangleq \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\middle|\, \pi, s_0 = s, a_0 = a\right]. \quad (1)$$

⁴For continuing tasks, $\gamma$ must be $< 1$, and we only consider algorithms for $\gamma < 1$.
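As a quick illustration of Eq. 1 (ours, not from the paper), $Q^\pi(s, a)$ can be estimated by averaging discounted returns over rollouts that start from $(s, a)$ and then follow $\pi$; applying the same estimator to each reward component separately yields the component Q-functions used later in the paper. The `env.step` and `pi` interfaces below are hypothetical stand-ins.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """sum_t gamma^t r_t for one rollout."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

def mc_q_estimate(env, pi, s, a, gamma=0.99, n_rollouts=100, horizon=1000):
    """Monte Carlo estimate of Q^pi(s, a).

    Hypothetical interfaces: env.step(state, action) -> (next_state, reward, done),
    and pi(state) -> action sampled from the policy.
    """
    returns = []
    for _ in range(n_rollouts):
        state, action, rewards = s, a, []
        for _ in range(horizon):
            state, r, done = env.step(state, action)
            rewards.append(r)
            if done:
                break
            action = pi(state)
        returns.append(discounted_return(rewards, gamma))
    return np.mean(returns)
```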
2.2 Soft actor-critic

Soft actor-critic (SAC) [13, 14] is an off-policy actor-critic algorithm parameterized with five neural networks: a stochastic policy network $\pi$ with parameters $\phi$, and two pairs of Q-functions and target Q-functions with parameters $(\theta_1, \theta_2)$ and $(\bar\theta_1, \bar\theta_2)$, respectively. As with other actor-critic algorithms, SAC has two main steps: policy evaluation (in which it estimates the Q-function for policy $\pi$), and policy improvement (in which it optimizes the policy to maximize its Q-function estimates). Unlike other actor-critic algorithms, SAC optimizes a maximum entropy formulation of the MDP, in which rewards are augmented with policy entropy bonuses that prevent premature policy collapse. To perform policy evaluation and improvement, SAC minimizes the following loss functions simultaneously:
$$L_{Q_i} = \mathbb{E}\left[\tfrac{1}{2}\left(Q(s, a; \theta_i) - y\right)^2\right] \quad \text{for } i \in \{1, 2\}, \quad (2a)$$
$$L_\pi = \mathbb{E}\left[\alpha \log \pi(u|s; \phi) - \min_{j \in \{1,2\}} Q(s, u; \theta_j)\right], \quad (2b)$$
where $(s, a, r, s')$ transitions are drawn from an experience replay buffer, $y := r + \gamma\left(\min_{j \in \{1,2\}} Q(s', a'; \bar\theta_j) - \alpha \log \pi(a'|s'; \phi)\right)$, $a' \sim \pi(\cdot|s'; \phi)$, $u \sim \pi(\cdot|s; \phi)$, and $\alpha$ is an (optionally learned) entropy regularization parameter. The min of Q-function pairs addresses overestimation bias in value function estimation [11, 15]. The parameters $\bar\theta_1$ and $\bar\theta_2$ are updated toward $\theta_1$ and $\theta_2$ via an exponentially moving average each step.

2.3 Environments

Throughout the paper, we italicize descriptive names for the components of the continuous-action Lunar Lander (LL), Bipedal Walker (BW) and Bipedal Walker Hardcore (BWH) environments. In LL, an agent must land a spacecraft in the center of a landing zone using as little fuel as possible. The reward components include: a reward for successful landing; penalties for crashing (crash) and engine usage (main, side); and shaping rewards used to encourage the agent to stay upright (angle), move towards the center of the landing pad (position) with low velocity (velocity), and land with both legs (right leg, left leg). In BW, an agent learns to make a 2-legged robot walk. The reward components include: a reward for forward progress, a penalty for falling (failure), a cost for actions (control), and a shaping reward to discourage head movement (head). BWH is identical to BW, but adds additional obstacles for the agent to navigate.

3 Value decomposition for actor-critic methods

Most RL algorithms estimate the value function and use it to improve the policy. Unfortunately, value functions and policies provide little insight into the agent's decision-making. However, reward functions are often composite functions of multiple component state-action signals. By learning a value function estimate for the current policy for each component, practitioners gain insight into what the agent expects to happen and how these reward components interact. Naturally, policy improvement still requires the composite value function. In Sec. 3.1 we show how the composite Q-function can be recovered from the component Q-functions. From this property, a range of actor-critic algorithms can be adapted to use value decomposition by following the below template⁵ (a minimal code sketch follows the list).
1. Alter Q-function networks to have $m$ outputs instead of 1, where $m$ is the number of reward components.
2. Use the base algorithm's Q-function update for each of the $m$ components, replacing the composite reward term with the respective component reward term.
3. Apply the base algorithm's policy improvement step by first recovering the composite Q-function.
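Here is a minimal numpy sketch (ours, illustrative only) of steps 1–3 for a TD(0)-style critic: a Q-network with $m$ output heads, per-component TD targets, and composite recovery via the weighted sum $Q^\pi = \sum_i w_i Q_i^\pi$ (Eq. 3 below).

```python
import numpy as np

# Illustrative pseudocode: arrays stand in for the outputs of a critic network
# with m heads, i.e. q(s, a) -> np.ndarray of shape (m,).

def component_td_targets(r_vec, q_next, gamma, done):
    """Step 2: one TD(0) target per reward component.
    r_vec, q_next: shape (m,) component rewards and component Q-values at (s', a')."""
    return r_vec + gamma * (1.0 - done) * q_next

def composite_q(q_components, w):
    """Step 3: recover the composite Q-value as a weighted sum of components."""
    return np.dot(w, q_components)

# Example: m = 3 components with unit weights.
w = np.ones(3)
r_vec = np.array([1.0, -0.1, 0.0])       # e.g. forward, control, failure
q_next = np.array([10.0, -1.0, -0.5])    # component Q-values at the next state
y = component_td_targets(r_vec, q_next, gamma=0.99, done=0.0)  # per-head targets
print(y, composite_q(q_next, w))         # policy improvement uses the composite
```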
For example, this template can be applied to algorithms that use TD(0) [33], or ones that use Retrace [27]. It works with algorithms that improve the policy by differentiating through the Q-function [11, 14, 23], and ones that fit it toward non-parametric target action distributions [1].

⁵Composite state value functions can similarly be recovered; however, Q-functions allow for deeper introspection, so we focus on that setting in this work.

Algorithm 1 SAC-D and SAC-D-CAGrad Update
Require: Experience replay buffer $\mathcal{B}$; twin Q-function parameters $\theta_1, \theta_2$ (with $\Theta = \theta_1 \cup \theta_2$) and target parameters $\bar\theta_1, \bar\theta_2$; policy parameters $\phi$; discount factor $\gamma$; entropy parameter $\alpha$; reward weights $w \in \mathbb{R}^{m+1}$; learning rates $\lambda_q, \lambda_\pi$; target network step size $\eta$; Boolean use_cagrad selecting SAC-D-CAGrad or SAC-D.
1: Sample transition (minibatch) $(s, a, r, s') \sim \mathcal{B}$ ▷ $r \in \mathbb{R}^m$ is a vector of $m$ reward components
2: Sample policy actions $a' \sim \pi(\cdot|s'; \phi)$ and $u \sim \pi(\cdot|s; \phi)$
3: $r_{m+1} \leftarrow -\alpha \log \pi(a'|s'; \phi)$ ▷ Extend reward vector to include entropy reward
4: $j \leftarrow \arg\min_{j \in \{1,2\}} \sum_i^{m+1} w_i Q_i(s', a'; \bar\theta_j)$ ▷ Find target network by minimum composite Q-value
5: $y_i \leftarrow r_i + \gamma Q_i(s', a'; \bar\theta_j)$
6: $L_{Q_i} \leftarrow \sum_{j=1}^{2} \tfrac{1}{2}\left(Q_i(s, a; \theta_j) - y_i\right)^2$
7: $L_\pi \leftarrow \alpha \log \pi(u|s; \phi) - \min_{j \in \{1,2\}} \sum_i^{m+1} w_i Q_i(s, u; \theta_j)$
8: if use_cagrad then
9:   $\Theta \leftarrow \Theta - \lambda_q\, \mathrm{CAGrad}(J_{L_Q}, \Theta)$
10: else
11:   $\Theta \leftarrow \Theta - \lambda_q \nabla_\Theta \frac{1}{m+1} \sum_i^{m+1} L_{Q_i}$
12: end if
13: $\phi \leftarrow \phi - \lambda_\pi \nabla_\phi L_\pi$
14: Update target networks $\bar\Theta \leftarrow (1 - \eta)\bar\Theta + \eta\Theta$

Although this template is conceptually simple, learning component Q-functions poses a more difficult prediction problem: multiple predictions must be learned instead of one composite prediction. Ideally, this increased difficulty would not negatively impact agent performance. In Sec. 3.2 we introduce SAC-D, an adaptation of SAC to use value decomposition, and describe additions we made to the above template to maintain performance parity with conventional SAC. Although these additions are contextualised to SAC, they are general and can be used when adapting other actor-critic algorithms.

3.1 Recovering the composite Q-function

We assume the environment's reward function is a linear combination of $m$ components: $R(s, a) \triangleq \sum_i^m w_i R_i(s, a)$, where $w_i \in \mathbb{R}$ is a scalar component weight for the $i$th component and $R_i : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function of the $i$th component for state-action pair $(s, a)$. Applying the linearity of expectation, we find the Q-function inherits the linear structure from the reward⁶:
$$Q^\pi(s, a) = \sum_i^m w_i Q_i^\pi(s, a), \quad (3)$$
where we define the $i$th component Q-function as $Q_i^\pi(s, a) \triangleq \mathbb{E}\left[\sum_t \gamma^t R_i(s_t, a_t) \mid \pi, s_0 = s, a_0 = a\right]$. Unless otherwise specified, we assume $w_i = 1$ for all $i$. Because the component weights are factored out of the component Q-functions, they may be varied without changing the component prediction target, allowing the policy to be evaluated for any weight combination. Although the assumption of linearity may seem restrictive, note that each reward component may be a non-linear function of state variables, allowing for very expressive environment rewards. Furthermore, many environments, including all the environments we investigate in this paper, are naturally structured as a sum of (non-linear) reward components.

⁶See Theorem A.1 for the proof. This linear decomposition property of value functions has been explored elsewhere [3, 9], but in different contexts and with different motivations. See Sec. 7 for more information.

3.2 SAC with value decomposition

Here we introduce SAC-D (Alg. 1), an adaptation of SAC to use value decomposition. Adapting SAC only requires one additional consideration beyond our template: the entropy bonus reward term SAC adds is treated as an $(m+1)$th reward component, $R_{m+1}(s') \triangleq -\alpha \log \pi(a'|s'; \phi)$ (line 3). However, this approach, which we refer to as SAC-D-Naive, underperforms SAC in many settings.
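The key departure from a purely per-component SAC update is in lines 4–5 of Algorithm 1: rather than taking an element-wise minimum per component, SAC-D selects the single target network whose composite Q-value is smallest and takes all component targets from it. A minimal numpy sketch of that selection (our illustration, omitting termination handling):

```python
import numpy as np

def sacd_targets(r, q1_next, q2_next, w, gamma):
    """Component TD targets with the composite twin-network minimum (Alg. 1, lines 4-5).

    r:       (batch, m+1) component rewards (entropy bonus already appended)
    q*_next: (batch, m+1) component Q-values of each target network at (s', a')
    w:       (m+1,) reward weights
    """
    comp1, comp2 = q1_next @ w, q2_next @ w          # composite Q per target network
    take_first = (comp1 <= comp2)[:, None]           # which network has the min composite
    q_min = np.where(take_first, q1_next, q2_next)   # all components from that network
    return r + gamma * q_min

# Toy batch: m = 2 reward components + 1 entropy component, unit weights.
w = np.ones(3)
r = np.array([[1.0, -0.1, 0.05]])
q1 = np.array([[5.0, -2.0, 0.3]])   # composite 3.3
q2 = np.array([[6.0, -4.0, 0.2]])   # composite 2.2 -> selected whole
print(sacd_targets(r, q1, q2, w, gamma=0.99))
# Note: an element-wise min would mix (5.0, -4.0, 0.2), composite 1.2,
# i.e. more underestimation than either network's own composite estimate.
```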
We found two additional modifications essential to match the performance of SAC. The first concerns how we apply the twin-network minimum of Eq. 2 in the context of value decomposition. The second is to use Conflict-Averse Gradient descent (CAGrad) [24] to address optimization problems that arise when training multi-headed neural networks. We refer to SAC with value decomposition and the twin-network correction as SAC-D, and the variant with the twin-network correction and CAGrad as SAC-D-CAGrad.

Twin-network minimums in value decomposition: In SAC, the Q-value target is the minimum of two Q-function networks (Eq. 2). Using the same Q-value update rule for each component, as described in our template, suggests using a minimum for each component target: $q_i := \min_{j \in \{1,2\}} Q_i(s, a; \bar\theta_j)$. However, this is not a good choice in practice. The purpose of the twin-network minimum is to mitigate overestimation bias arising from the feedback loop of the policy optimizing the Q-function. Because the policy optimizes the composite Q-function, a better approach is to use all the predictions from the network with the minimum composite Q-function (Alg. 1, lines 5–6). This approach reduces underestimation bias and improves performance compared to an element-wise minimum (see Sec. 4).

Mediating the difficulty of multi-objective optimization: Even though the scalar values of the composite Q-function are identical to those used in SAC, simultaneous optimization of all $Q_i^\pi$ components may introduce training problems common in multi-objective optimization: conflicting gradients, high curvature, and large differences in gradient magnitudes [39].⁷ The CAGrad method, designed for the multi-task RL setting, addresses these issues by replacing the gradient of a multi-task objective with a weighted sum of per-task loss gradients. This updated gradient step maximizes the improvement of the worst-performing task on each optimization step, and still converges to a minimum of the unmodified loss. We incorporate CAGrad into SAC-D by treating each component as a "task", and update the gradient vector accordingly (Alg. 1, line 9).

4 Robustness experiments

We benchmark SAC-D, SAC-D-CAGrad and SAC-D-Naive against SAC on a selection of continuous-action Gym [7] environments. For each environment, we exposed existing additive reward components without altering the behavior of the environments or their composite rewards. That is, these environments already implemented their reward functions as a linear combination of separate reward components, and we simply exposed that information to the algorithm (for details, see App. B). As outlined in App. C, we used hyperparameters previously published for use with SAC [14] for all experiments. We tied SAC-D's hyperparameters to SAC's because our goal is for value decomposition to be a drop-in addition without significant loss in agent performance. However, it is possible better performance could be reached with tuning.

Figure 1(a) shows the performance of each algorithm aggregated across all environments and all experimental runs. Figure 1(b) shows the same information, but highlights the distribution of scores across experimental runs. In aggregate, SAC-D-CAGrad slightly outperforms SAC, although it has a broader range of performance scores. SAC-D-Naive significantly underperforms SAC.
⁷Multi-headed prediction can also improve representation learning, as in work on auxiliary tasks [18, 26].

We provide training curves for the 8 environments investigated in App. D, but highlight the atypical training curves for BWH (Fig. 1(c)) and Ant (Fig. 1(d)). In the case of BWH, SAC-D-CAGrad significantly outperforms SAC. It was not the goal of this work for SAC-D to improve on SAC; rather, we sought to provide more insights into the learning process. As such, we make no strong claims about when SAC-D can be expected to outperform SAC. Nevertheless, this result does suggest that SAC-D may sometimes benefit from auxiliary task learning. We leave this question as a subject for future investigation. In Ant, SAC-D (without CAGrad) underperforms all other methods. We found that infrequent environment termination causes large Q-function errors and leads to catastrophic gradient conflicts. Further analysis of termination issues and CAGrad behavior is described in Appendices E and F, respectively.

5 Reward component influence

It can be difficult to understand how an agent's predictions interact to affect decision-making. We now introduce the reward influence metric, which indicates how much each component contributes to an agent's decision. Intuitively, low influence means that removing a component would not alter decision-making; high influence means that removing it would significantly alter decision-making. For multi-dimensional continuous actions, we define the optimal influence of component $i$ in state $s$ by how much the optimal policy $\pi^*$ in state $s$ differs from the optimal policy when component $i$ is removed: $I^*_i(s) \triangleq \|\pi^*(s) - \pi^*_{\neg i}(s)\|_2$, where $\pi^*_{\neg i} \triangleq \arg\max_\pi \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t \sum_{j \neq i} w_j R_j(s_t, a_t) \mid \pi\right]$.⁸

⁸Discrete-action spaces could use a probability distance measure, but we focus on continuous-action spaces.

In practice, we apply two approximations to $I^*_i(s)$ for computational efficiency. First, rather than compare the difference of optimal policies, we compare the difference between one step of policy improvement: $I^\pi_i(s) \triangleq \|\arg\max_a Q^\pi(s, a) - \arg\max_a Q^\pi_{\neg i}(s, a)\|_2$, where $Q^\pi_{\neg i}(s, a) \triangleq \sum_{j \neq i} w_j Q^\pi_j(s, a)$. Second, since the $\arg\max$ is computationally demanding and sensitive to statistical noise, we replace the $\arg\max_a Q(s, a)$ policy improvement operator with a policy gradient-step operator (typical in RL algorithms like SAC): $\bar a + \lambda \nabla_{\bar a} Q(s, \bar a)$, where $\bar a$ is a deterministic policy action selection (such as the mode) and $\lambda \in (0, 1)$ is a step size. When taking the difference of the gradient-step operator applied to the $Q^\pi$ and $Q^\pi_{\neg i}$ surfaces, the $\bar a$ terms cancel, and $\lambda$ can be factored out of the norm. The result is the influence metric (Fig. 2):
$$I^\pi_i(s; \theta) \triangleq \left\|\nabla_{\bar a} Q^\pi(s, \bar a; \theta) - \nabla_{\bar a} Q^\pi_{\neg i}(s, \bar a; \theta)\right\|_2. \quad (4)$$
The raw magnitudes of component influence can be informative by themselves; for example, a sharper Q-function surface leads to larger influence values. However, to compare influence values across components, we typically compute the fractional influence by normalizing the (always non-negative) influence: $\hat I^\pi_i(s; \theta) \triangleq \frac{I^\pi_i(s; \theta)}{\sum_j^m I^\pi_j(s; \theta)}$. We use several techniques to visualize the fractional influence. For trajectories, plotting influences over timesteps may help identify and explain key decision points (Fig. 3(a)). When studying an agent's behavior across training, we maintain summary statistics of fractional influence. We visualize mean fractional influence across all components as a stack plot, sorted so that the component with the maximum influence at the end of training is at the bottom. Figure 3(b) shows such a diagram for the Lunar Lander environment; we provide figures for all environments in App. G.
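Note that, under the linear decomposition of Eq. 3, the difference in Eq. 4 simplifies: $\nabla_{\bar a} Q^\pi - \nabla_{\bar a} Q^\pi_{\neg i} = w_i \nabla_{\bar a} Q^\pi_i$, so the influence of component $i$ is just $|w_i|$ times the norm of that component's action-gradient. A minimal numpy sketch (ours) computing influence and fractional influence from per-component action-gradients:

```python
import numpy as np

def influence(grad_q_components, w):
    """Eq. 4 per component. Since Q = sum_j w_j Q_j, the gradient difference for
    component i reduces to w_i * grad_a Q_i.

    grad_q_components: (m, action_dim) gradients of each Q_i w.r.t. the action
                       at (s, a_bar), e.g. from autograd or finite differences.
    w:                 (m,) component weights.
    """
    return np.abs(w) * np.linalg.norm(grad_q_components, axis=1)

def fractional_influence(grad_q_components, w):
    """Normalized influence, summing to 1 across components."""
    I = influence(grad_q_components, w)
    return I / I.sum()

# Toy example with m = 3 components in a 2-D action space.
grads = np.array([[0.5, 0.0],    # e.g. forward
                  [0.1, 0.1],    # control
                  [0.0, 2.0]])   # failure dominates this decision
print(fractional_influence(grads, w=np.ones(3)))
```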
6 Value decomposition: strategies for agent design

Agent design is often a brute-force process of trial and error: when an agent doesn't perform as expected, we choose some aspect of the agent's design to vary, and then we train again. Although this approach can succeed, it can be expensive in both time and computation. In this section, we illustrate a different approach, showing how a decomposed reward helps break from trial-and-error by encouraging the designer to consider an agent's point of view. In three examples, each using either the LL or BWH environments (Sec. 2.3), we draw on value decomposition tools to: (1) identify learning problems by comparing components' empirical returns to their predictions; (2) constrain component value estimation; (3) identify adverse reward interactions with the influence metric; and (4) dynamically re-weight reward components. These examples are not just meant to demonstrate these specific techniques, nor to demonstrate significant performance improvements on these well-tested benchmark environments (in fact, SAC-D eventually produces good policies for both environments without additional tuning). Rather, the goal is to showcase a general approach to agent design for novel applications in which iteration and failure are costly. Combined with statistical tools that allow us to reason with small-sample statistics, value decomposition provides a vocabulary to describe an agent's behavior and interpretable tools for targeted interventions. Even these small examples are more intuitive, and less computationally demanding, than they would be with a trial-and-error approach. For all examples, training parameters are identical to the robustness results of Sec. 4.

Diagnosing and improving insufficient features: We analyze the behavior of an agent trained on Lunar Lander by comparing each component's empirical return to the agent's value predictions, $Q_i$. The agent is trained to land successfully, and generally the component value predictions match their empirical returns well. Curiously, all component predictions are flat near the end of the episode. For most components, these flat predictions are a good match for their returns, but not for landing. Investigating the landing dynamics, we found the simulator waits many steps after touchdown before producing a landing reward. During this period, the observations are constant, suggesting the features are inadequate to represent landing's return. To make the observations Markov, we introduce a new feature to the agent's observations that indicates the duration since the agent's velocity (horizontal, vertical and angular) went to zero: $V_0^{\mathrm{trace}}(t) = V_0^{\mathrm{steps}}(t)/c$, where $V_0^{\mathrm{steps}}(t)$ is the number of time steps since all the velocities dropped below a threshold and $c$ is a fixed normalizing constant. With this feature, post-landing predictions show a marked improvement (see Figure 4(a) and Appendices C, H).

Diagnosing and mitigating value errors using domain knowledge: Lunar Lander's design makes clear that certain reward components are always non-positive (crash, main, side) while others are always non-negative (landing). However, we observe that the agent's decomposed value predictions do not always match these bounds. In particular, value predictions of crash have a tendency to oscillate about 0 after the agent learns to land.
Value decomposition allows us to explicitly enforce a sign constraint on crash (Figure 4(b); see App. I for details). In this particular example, the constraints do not alter policy learning performance, but the resulting predictions are easier to interpret, and the same technique may improve performance in more complex environments.

Mitigating an adverse reward with component weight scheduling: Under the BWH reward function, a random policy is far more likely to experience an unsuccessful outcome (falling over) than a successful one (walking forward). This bias can inhibit agent exploration early in training, causing an agent's policy to fall into a local minimum (the agent stands still). Here, we diagnose this dynamic using component predictions and influence metrics, and remedy it by varying a single component weight during training. We find that the failure component's fractional influence dominates the forward component's fractional influence early in training, and that this relationship reverses as agent performance improves (Fig. 5(a)). The forward component's near-zero value predictions (Fig. 5(b)) early in learning indicate that in many episodes, the agent neither moves nor expects to move (Fig. 5(c)). Early dominance by the easy-to-observe failure suggests that it is inhibiting exploration. To mitigate this problem, we vary the failure component weight, $w_{\mathrm{failure}}$, from 0.01 to 1 over the course of learning according to the schedule described in App. J. This schedule significantly increases the agents' forward progress (Fig. 5(c)), and accelerates learning.

7 Related work

Our work builds upon earlier studies of value decomposition in the explainable reinforcement learning (XRL) literature: DrQ [19], DrSARSA [29], and HRA [31].⁹ Like our approach, these methods learn separate value function estimates for each term of a linear reward function. DrQ and HRA are off-policy Q-learning-like methods, while DrSARSA is an on-policy method. HRA does not converge to a globally optimal policy, as each value function is only locally optimal for the reward component it measures. The RDX and MSX metrics proposed by DrQ could be adopted in our setting, but the influence metric is easier to use with continuous actions, and easier to aggregate with summary statistics. Our approach improves upon these prior contributions by: (1) working in continuous-action and discrete-action environments; (2) allowing for and demonstrating dynamic re-weighting of reward components during training; and (3) being applicable to a family of actor-critic methods. While these approaches and ours explore using value decomposition for explainability, those works focus on describing to users why an agent took certain actions, whereas we focus on how to use value decomposition to diagnose and remedy learning challenges.

The Horde architecture [34] and UVFA [30], methods for multi-goal learning, also employ multiple value functions (one for each goal). UVFAs use a parameterized continuous space of goals, while Horde makes multiple discrete value function predictions. The value functions in our work are conditioned on a policy that optimizes the global reward, whereas in Horde and UVFAs the value functions are conditioned on independent policies that greedily optimize local goals (similar to HRA). Additionally, in our approach, the composite value function can be recovered from the components. Other value decomposition work includes Empathic Q-learning [21] and Orchestrated Value Mapping [10].
The primary difference between our work and these other approaches regards the motivation and application of value decomposition. Those works focus on how value decomposition can improve sample efficiency, generalization, or other aspects of the core learning problem, rather than how to diagnose and remedy problems. Mathematically, value decomposition bears resemblance to work on successor features, which has focused primarily on state representation and transfer learning [3, 4, 6, 12]. Methods for multi-objective RL [16, 28] learn sets of policies, each with a distinct linear weight over multiple reward objectives. Our work also recovers the value function for different linear reward preferences, but uses this capability to diagnose behavior and learning problems rather than to learn multiple policies.

⁹For a broad overview of XRL, we direct the reader to Heuillet et al. [17].

8 Concluding remarks

We have argued that the iterative design of reinforcement learning agents can be improved through the use of value decomposition, in which we keep individual reward components separate and learn value estimates of each. We provided a simple prescription for deriving value decomposition algorithms from actor-critic methods, and applied it to SAC to derive the SAC-D algorithm. Combined with the CAGrad method, SAC-D meets or exceeds SAC's performance in all environments we tested. We introduced the influence metric, and demonstrated its use in measuring each reward component's effect on an agent's decisions. Finally, we provided several examples of how value decomposition can diagnose and remedy agent learning problems.

Although value decomposition is a simple and broadly applicable tool, we note the following limitations. (1) Our method requires a composite reward function of multiple components. (2) We only study linear reward decomposition. (3) Component predictions only tell you the agent's expectations under the single learned policy; changing the weights doesn't tell you what to expect after re-optimizing the policy for them. (4) Our approach benefits most with methods that learn Q-functions. Methods that optimize the policy with empirical returns have a weaker link between agent expectations (component Q-functions) and policy decisions. The influence metric also requires a Q-function model.

Our method presents the same societal benefits and risks as other RL methods. However, we believe this technology has a net beneficial impact, because making agent decisions more introspectable enables developers to catch problematic behavior before deploying the technology. One particular concern, however, is that value estimates represent an agent's beliefs, not ground truth; this should be kept in mind when such predictions are used in real-world decision-making, as they may reinforce biases or lead to incorrect conclusions.
1. What is the focus and contribution of the paper on integrating value decomposition into actor-critic algorithms? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its originality and quality? 3. Do you have any concerns or questions about the paper's experiments and their limitations? 4. How does the reviewer assess the clarity and significance of the paper's content? 5. Are there any limitations or potential drawbacks of the proposed method that the reviewer would like to highlight?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper provides a framework to integrate value decomposition into a broad class of actor-critic algorithms, and applies it to SAC to derive the SAC-D algorithm. Extensive experiments are conducted to verify SAC-D's performance relative to SAC. The paper introduces the influence metric and demonstrates its use in measuring each reward component's effect on an agent's decisions. Finally, the authors provide several examples of how value decomposition can diagnose and remedy agent learning problems. Strengths And Weaknesses Originality: The operation of integrating value decomposition into reinforcement learning algorithms seems to be common sense and has been proposed in much existing literature, and the performance of value decomposition has been explored in a large body of work. The work of this paper seems to be just another confirmation of previous findings. Therefore, this paper is less original. Quality: This paper is well written and easy to understand, but I have two concerns, as follows. (1) Since SAC-D is developed by incremental, heuristic refinement of SAC, it is not clear whether the performance gain obtained by this refinement is limited to the few experiments in this paper. (2) The iterative design of RL agents shown in Section 6 is quite interesting; however, it still seems to require strong task-specific priors from experts. Clarity: This paper is well written and easy to understand. Significance: The work is somewhat significant, but does not give new insights into value decomposition. Questions (1) Since SAC-D is developed by incremental, heuristic refinement of SAC, it is not clear whether the performance gain obtained by this refinement is limited to the few experiments in this paper. (2) The value decomposition in this paper is specified by hand. How do different decompositions affect the performance of the algorithm? Limitations Yes.
NIPS
Title Value Function Decomposition for Iterative Design of Reinforcement Learning Agents Abstract Designing reinforcement learning (RL) agents is typically a difficult process that requires numerous design iterations. Learning can fail for a multitude of reasons, and standard RL methods provide too few tools to provide insight into the exact cause. In this paper, we show how to integrate value decomposition into a broad class of actor-critic algorithms and use it to assist in the iterative agent-design process. Value decomposition separates a reward function into distinct components and learns value estimates for each. These value estimates provide insight into an agent’s learning and decision-making process and enable new training methods to mitigate common problems. As a demonstration, we introduce SAC-D, a variant of soft actor-critic (SAC) adapted for value decomposition. SAC-D maintains similar performance to SAC, while learning a larger set of value predictions. We also introduce decomposition-based tools that exploit this information, including a new reward influence metric, which measures each reward component’s effect on agent decision-making. Using these tools, we provide several demonstrations of decomposition’s use in identifying and addressing problems in the design of both environments and agents. Value decomposition is broadly applicable and easy to incorporate into existing algorithms and workflows, making it a powerful tool in an RL practitioner’s toolbox. 1 Introduction Deep reinforcement-learning (RL) approaches have achieved successes in a range of application areas such as gaming ([5, 32, 36, 38]), robotics ([22]), and the natural sciences ([25, 35]). Despite these successes, applying RL techniques to complex control problems remains a daunting undertaking, where initial attempts often result in underwhelming performance. Unfortunately, there are many reasons why an agent may fail to learn a good policy, making it difficult to diagnose which reason(s) caused a particular agent to fail. For example: an agent may fail because the state features were insufficient to make accurate predictions, different task objectives defining the reward function may be imbalanced, the agent may fail to sufficiently explore the state-action space, values may not accurately propagate to more distant states, the neural network may not have sufficient capacity to †Sony AI ⇤Equal contribution ‡The University of Texas at Austin 36th Conference on Neural Information Processing Systems (NeurIPS 2022). approximate the policy or value function(s), or, there may be subtle differences between training and evaluation environments. Without a way to diagnose the causes of poor performance or to recognize when a problem has been remedied, practitioners typically engage in a long trial-and-error design process until an agent reaches a desired level of performance. Frustrations with this trial-and-error process have been expressed in other work [16]. We describe how value decomposition, a simple, broadly-applicable technique, can address these application challenges. In RL, the agent receives a reward that is often a sum of many reward components, each designed to encode some aspect of the desired agent behavior. From this composite reward, it learns a single composite value function. Using value decomposition, an agent learns a component value function for each reward component. 
To perform policy optimization, the composite value function is recovered by taking a weighted sum of the component value functions. While prior work has proposed value decomposition methods for discrete-action Q-learning [19, 29, 31], we show how value decomposition can be incorporated into a broad class of actor-critic (AC) methods. In addition, we introduce SAC-D, a version of soft actor-critic (SAC) [13, 14] with value decomposition, and explore its use in multi-dimensional continuous-action environments. We also introduce the influence metric, which measures how much an agent’s decisions are affected by each reward component. While earlier work focuses on its use in reward design [16, 19], value decomposition can facilitate diagnosis of a wide range of issues and enable new training methodologies. To demonstrate its utility, in Sec. 6 we show how to use it to: (1) diagnose insufficient state features; (2) diagnose value prediction errors and exploit the decomposed structure to inject background-knowledge; and (3) identify reward components that are inhibiting exploration and mitigate the effect by gradually incorporating component predictions into policy optimization. Value decomposition’s additional diagnostic and training capabilities come at the cost of a morechallenging prediction problem: instead of learning a single value function, many must be learned. To investigate if this difficulty negatively impacts agent performance, we compare the average performance of SAC-D to SAC on benchmark environments. We find that a naive implementation of SAC-D underperforms SAC and then show how to improve SAC-D so that it matches and sometimes exceeds SAC’s performance. These improvements may also be applied to value decomposition for other AC algorithms. While variations of value decomposition has been explored extensively in past work (see Sec. 7), in this paper, we make the following contributions. (1) We show how to integrate value decomposition into a broad class of actor-critic algorithms. (2) We analyze the performance of different implementations of value decomposition for SAC on a range of benchmark continuous-action environments. (3) We introduce the influence metric: a novel value decomposition metric for measuring how much each reward component affects decision-making. (4) We provide a set of illustrative examples of how value decomposition and influence can be used to diagnose various kinds of learning challenges. (5) We describe new training methods that exploit the value decomposition structure and can be used to mitigate different learning challenges. 2 Background 2.1 MDPs and Q-functions In RL, an agent’s interaction with the environment is modeled as a Markov Decision Process (MDP): (S,A, P,R, ), where S is a set of states, A is a set of actions, P : S ⇥ A ⇥ S ! R is a state transition probability function P (s, a, s0) = Pr(St+1 = s0|St = s,At = a), R : S ⇥ A ! R is a reward function R(s, a) = E[Rt+1|St = s,At = a], and 2 [0, 1] discounts future rewards.4 The goal of an agent is to learn a policy ⇡(a|s) that maps states to an action probability distribution that maximizes the sum of future rewards. The agent is trained to maximize the discounted return E [ P1 t tR(st, at)]. The Q-function maps state-action pairs to the expected cumulative discounted reward when starting in state s, taking action a, and then following policy ⇡ thereafter: Q⇡(s, a) , E " 1X t=0 tR(st, at)|⇡, s0 = s, a0 = a # . (1) 4For continuing tasks, must be < 1, and we only consider algorithms for < 1. 
2.2 Soft actor-critic Soft actor-critic (SAC) [13, 14] is an off-policy actor-critic algorithm parameterized with five neural networks: a stochastic policy network $\pi$ with parameters $\phi$, and two pairs of Q-functions and target Q-functions with parameters $(\theta_1, \theta_2)$ and $(\bar\theta_1, \bar\theta_2)$, respectively. As with other actor-critic algorithms, SAC has two main steps: policy evaluation (in which it estimates the Q-function for policy $\pi$), and policy improvement (in which it optimizes the policy to maximize its Q-function estimates). Unlike other actor-critic algorithms, SAC optimizes a maximum entropy formulation of the MDP, in which rewards are augmented with policy entropy bonuses that prevent premature policy collapse. To perform policy evaluation and improvement, SAC minimizes the following loss functions simultaneously: $L_{Q_i} = \mathbb{E}\left[\tfrac{1}{2}\left(Q(s, a; \theta_i) - y\right)^2\right]$ for $i \in \{1, 2\}$, (2a) $L_\pi = \mathbb{E}\left[\alpha \log \pi(u|s; \phi) - \min_{j \in \{1,2\}} Q(s, u; \theta_j)\right]$, (2b) where $(s, a, r, s')$ transitions are drawn from an experience replay buffer, $y := r + \gamma\left(\min_{j \in \{1,2\}} Q(s', a'; \bar\theta_j) - \alpha \log \pi(a'|s'; \phi)\right)$, $a' \sim \pi(\cdot|s'; \phi)$, $u \sim \pi(\cdot|s; \phi)$, and $\alpha$ is an (optionally learned) entropy regularization parameter. The min over the Q-function pair addresses overestimation bias in value function estimation [11, 15]. The parameters $\bar\theta_1$ and $\bar\theta_2$ are updated toward $\theta_1$ and $\theta_2$ via an exponential moving average each step. 2.3 Environments Throughout the paper, we italicize descriptive names for the components of the continuous-action Lunar Lander (LL), Bipedal Walker (BW) and Bipedal Walker Hardcore (BWH) environments. In LL, an agent must land a spacecraft in the center of a landing zone using as little fuel as possible. The reward components include: a reward for successful landing; penalties for crashing (crash) and engine usage (main, side); and shaping rewards used to encourage the agent to stay upright (angle), move towards the center of the landing pad (position) with low velocity (velocity), and land with both legs (right leg, left leg). In BW, an agent learns to make a two-legged robot walk. The reward components include: a reward for forward progress, a penalty for falling (failure), a cost for actions (control), and a shaping reward to discourage head movement (head). BWH is identical to BW, but adds additional obstacles for the agent to navigate. 3 Value decomposition for actor-critic methods Most RL algorithms estimate a value function and use it to improve the policy. Unfortunately, value functions and policies provide little insight into the agent’s decision-making. However, reward functions are often composite functions of multiple component state-action signals. By learning a value function estimate for the current policy for each component, practitioners gain insight into what the agent expects to happen and how these reward components interact. Naturally, policy improvement still requires the composite value function. In Sec. 3.1 we show how the composite Q-function can be recovered from the component Q-functions. From this property, a range of actor-critic algorithms can be adapted to use value decomposition by following the template below (a code sketch of steps 1 and 3 follows the list).⁵ 1. Alter Q-function networks to have $m$ outputs instead of 1, where $m$ is the number of reward components. 2. Use the base algorithm’s Q-function update for each of the $m$ components, replacing the composite reward term with the respective component reward term. 3. Apply the base algorithm’s policy improvement step by first recovering the composite Q-function.
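The following is a minimal sketch of template steps 1 and 3, assuming a PyTorch implementation; the names (DecomposedQNetwork, composite_q) and the network sizes are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn

class DecomposedQNetwork(nn.Module):
    """Q-network with m output heads, one per reward component (template step 1)."""
    def __init__(self, obs_dim, act_dim, m, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.Linear(hidden, m)  # outputs Q_1(s,a), ..., Q_m(s,a)

    def forward(self, obs, act):
        return self.heads(self.body(torch.cat([obs, act], dim=-1)))  # (batch, m)

def composite_q(q_components, weights):
    """Template step 3: recover Q(s,a) = sum_i w_i * Q_i(s,a) (Eq. 3 below)."""
    return (q_components * weights).sum(dim=-1)  # (batch,)
```

Sharing a single trunk with $m$ linear heads keeps the parameter count close to the original single-output critic, at the cost of the multi-objective optimization issues discussed in Sec. 3.2.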
For example, this template can be applied to algorithms that use TD(0) [33], or ones that use Retrace [27]. It works with algorithms that improve the policy by differentiating through the Q-function [11, 14, 23], and ones that fit it toward non-parametric target action distributions [1]. ⁵Composite state value functions can similarly be recovered; however, Q-functions allow for deeper introspection, so we focus on that setting in this work.
Algorithm 1 SAC-D and SAC-D-CAGrad Update
Require: Experience replay buffer $\mathcal{B}$; twin Q-function parameters $\theta_1, \theta_2$ (with $\Theta = \theta_1 \cup \theta_2$) and target parameters $\bar\theta_1, \bar\theta_2$; policy parameters $\phi$; discount factor $\gamma$; entropy parameter $\alpha$; reward weights $w \in \mathbb{R}^{m+1}$; learning rates $\lambda_q, \lambda_\pi$; target network step size $\eta$; Boolean use_cagrad selecting SAC-D-CAGrad or SAC-D.
1: Sample transition (minibatch) $(s, a, r, s') \sim \mathcal{B}$ ▷ $r \in \mathbb{R}^m$ is a vector of $m$ reward components
2: Sample policy actions $a' \sim \pi(\cdot|s'; \phi)$ and $u \sim \pi(\cdot|s; \phi)$
3: $r_{m+1} \leftarrow -\alpha \log \pi(a'|s'; \phi)$ ▷ Extend reward vector to include entropy reward
4: $j^* \leftarrow \arg\min_{j \in \{1,2\}} \sum_i^{m+1} w_i Q_i(s', a'; \bar\theta_j)$ ▷ Select the target network with the minimum composite Q-value
5: $y_i \leftarrow r_i + \gamma Q_i(s', a'; \bar\theta_{j^*})$
6: $L_{Q_i} \leftarrow \sum_{j=1}^{2} \tfrac{1}{2}\left(Q_i(s, a; \theta_j) - y_i\right)^2$
7: $L_\pi \leftarrow \alpha \log \pi(u|s; \phi) - \min_{j \in \{1,2\}} \sum_i^{m+1} w_i Q_i(s, u; \theta_j)$
8: if use_cagrad then
9: $\Theta \leftarrow \Theta - \lambda_q \,\mathrm{CAGrad}(J_{L_Q}, \Theta)$
10: else
11: $\Theta \leftarrow \Theta - \lambda_q \nabla_\Theta \frac{1}{m+1} \sum_i^{m+1} L_{Q_i}$
12: end if
13: $\phi \leftarrow \phi - \lambda_\pi \nabla_\phi L_\pi$
14: Update target networks $\bar\Theta \leftarrow (1 - \eta)\bar\Theta + \eta\Theta$
Although this template is conceptually simple, learning component Q-functions poses a more difficult prediction problem: multiple predictions must be learned instead of one composite prediction. Ideally, this increased difficulty would not negatively impact agent performance. In Sec. 3.2 we introduce SAC-D, an adaptation of SAC to use value decomposition, and describe additions we made to the above template to maintain performance parity with conventional SAC. Although these additions are contextualized to SAC, they are general and can be used when adapting other actor-critic algorithms. 3.1 Recovering the composite Q-function We assume the environment’s reward function is a linear combination of $m$ components: $R(s, a) \triangleq \sum_{i}^{m} w_i R_i(s, a)$, where $w_i \in \mathbb{R}$ is a scalar component weight for the $i$th component and $R_i : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function of the $i$th component for state-action pair $(s, a)$. Applying the linearity of expectation, we find the Q-function inherits the linear structure from the reward⁶: $Q^\pi(s, a) = \sum_{i}^{m} w_i Q_i^\pi(s, a)$, (3) where we define the $i$th component Q-function as $Q_i^\pi(s, a) \triangleq \mathbb{E}\left[\sum_t \gamma^t R_i(s_t, a_t) \mid \pi, s_0 = s, a_0 = a\right]$. Unless otherwise specified, we assume $w_i = 1$ for all $i$. Because the component weights are factored out of the component Q-functions, they may be varied without changing the component prediction target, allowing the policy to be evaluated for any weight combination (a toy illustration follows below). Although the assumption of linearity may seem restrictive, note that each reward component may be a non-linear function of state variables, allowing for very expressive environment rewards. Furthermore, many environments, including all the environments we investigate in this paper, are naturally structured as a sum of (non-linear) reward components. 3.2 SAC with value decomposition Here we introduce SAC-D (Alg. 1), an adaptation of SAC to use value decomposition. Adapting SAC only requires one additional consideration beyond our template: the entropy bonus reward term SAC adds is treated as an $(m+1)$th reward component: $R_{m+1}(s') \triangleq -\alpha \log \pi(a'|s'; \phi)$ (line 3). However, this approach, which we refer to as SAC-D-Naive, underperforms SAC in many settings.
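As a toy illustration of the weight re-evaluation property from Sec. 3.1, the component values below are made up and the component names are borrowed from BW purely for flavor:

```python
import numpy as np

# Component Q-values for one (s, a) pair, m = 3 (say: forward, failure, control).
q_components = np.array([40.0, -15.0, -2.0])

# Composite Q under the training weights (all ones)...
w_train = np.array([1.0, 1.0, 1.0])
print(w_train @ q_components)   # 23.0

# ...and re-evaluated under a different preference, with no re-learning needed,
# because the weights are factored out of the component prediction targets.
w_alt = np.array([1.0, 0.1, 1.0])
print(w_alt @ q_components)     # 36.5
```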
We found two additional modifications essential to match the performance of SAC. The first concerns how we apply the twin-network minimum of Eq. 2 in the context of value decomposition. The second is to use Conflict-Averse Gradient descent (CAGrad) [24] to address optimization problems that arise when training multi-headed neural networks. We refer to SAC with value decomposition and the twin-network correction as SAC-D, and the variant with twin-network correction and CAGrad as SAC-D-CAGrad. ⁶See Theorem A.1 for the proof. This linear decomposition property of value functions has been explored elsewhere [3, 9], but in different contexts and with different motivations. See Sec. 7 for more information. Twin-network minimums in value decomposition: In SAC, the Q-value target is the minimum of two Q-function networks (Eq. 2). Using the same Q-value update rule for each component, as described in our template, suggests using a minimum for each component target: $q_i := \min_{j \in \{1,2\}} Q_i(s, a; \bar\theta_j)$. However, this is not a good choice in practice. The purpose of the twin-network minimum is to mitigate overestimation bias from the feedback loop of the policy optimizing the Q-function. Because the policy optimizes the composite Q-function, a better approach is to use all the predictions from the network with the minimum composite Q-function (Alg. 1, lines 4-5). This approach reduces underestimation bias and improves performance compared to an element-wise minimum (see Sec. 4). Mediating the difficulty of multi-objective optimization: Even though the scalar values of the composite Q-function are identical to those used in SAC, simultaneous optimization of all $Q_i^\pi$ components may introduce training problems common in multi-objective optimization: conflicting gradients, high curvature, and large differences in gradient magnitudes [39].⁷ The CAGrad method, designed for the multi-task RL setting, addresses these issues by replacing the gradient of a multi-task objective with a weighted sum of per-task loss gradients. This updated gradient step maximizes the improvement of the worst-performing task on each optimization step, and still converges to a minimum of the unmodified loss. We incorporate CAGrad into SAC-D by treating each component as a “task”, and update the gradient vector accordingly (Alg. 1, line 9). 4 Robustness experiments We benchmark SAC-D, SAC-D-CAGrad and SAC-D-Naive against SAC on a selection of continuous-action Gym [7] environments. For each environment, we exposed existing additive reward components without altering the behavior of the environments or their composite rewards. That is, these environments already implemented their reward functions as a linear combination of separate reward components, and we simply exposed that information to the algorithm (for details, see App. B). As outlined in App. C, we used hyperparameters previously published for use with SAC [14] for all experiments. We tied SAC-D’s hyperparameters to SAC’s because our goal is for value decomposition to be a drop-in addition without significant loss in agent performance. However, it is possible that better performance could be reached with tuning. Figure 1(a) shows the performance of each algorithm aggregated across all environments and all experimental runs. Figure 1(b) shows the same information, but highlights the distribution of scores across experimental runs. In aggregate, SAC-D-CAGrad slightly outperforms SAC, although it has a broader range of performance scores. SAC-D-Naive significantly underperforms SAC.
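To make the twin-network correction concrete, here is a minimal PyTorch sketch of the SAC-D target computation (Alg. 1, lines 3-5); the function name and tensor layout are illustrative assumptions, not the authors' implementation:

```python
import torch

def sacd_targets(q1_next, q2_next, r_components, entropy_reward, weights, gamma):
    """Per-component TD targets with the composite twin-network minimum.

    q1_next, q2_next: (batch, m+1) target-network component Q-values at (s', a').
    r_components:     (batch, m)   environment reward components.
    entropy_reward:   (batch,)     -alpha * log pi(a'|s'), the (m+1)th component.
    weights:          (m+1,)       component weights w.
    """
    r = torch.cat([r_components, entropy_reward.unsqueeze(-1)], dim=-1)  # (batch, m+1)
    # Alg. 1, line 4: pick the *whole network* whose composite Q-value is smaller,
    # rather than taking an element-wise minimum over each component head.
    comp1 = (q1_next * weights).sum(-1)
    comp2 = (q2_next * weights).sum(-1)
    use_q1 = (comp1 <= comp2).unsqueeze(-1)
    q_next = torch.where(use_q1, q1_next, q2_next)
    # Alg. 1, line 5: y_i = r_i + gamma * Q_i(s', a'; theta_bar_{j*})
    return r + gamma * q_next
```

An element-wise minimum would mix heads from both networks and can drive the composite target below either network's composite estimate, which is the underestimation bias the composite-minimum selection avoids.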
⁷Multi-headed prediction can also improve representation learning, as in work on auxiliary tasks [18, 26]. We provide training curves for the 8 environments investigated in App. D, but highlight the atypical training curves for BWH (Fig. 1(c)) and Ant (Fig. 1(d)). In the case of BWH, SAC-D-CAGrad significantly outperforms SAC. It was not the goal of this work for SAC-D to improve on SAC. Rather, we sought to provide more insights into the learning process. As such, we make no strong claims about when SAC-D can be expected to outperform SAC. Nevertheless, this result does suggest that SAC-D may sometimes benefit from auxiliary task learning. We leave this question as a subject for future investigation. In Ant, SAC-D (without CAGrad) underperforms all other methods. We found that infrequent environment termination causes large Q-function errors and leads to catastrophic gradient conflicts. Further analysis of termination issues and CAGrad behavior is described in Appendices E and F, respectively. 5 Reward component influence It can be difficult to understand how an agent’s predictions interact to affect decision-making. We now introduce the reward influence metric, which indicates how much each component contributes to an agent’s decision. Intuitively, low influence means that removing a component would not alter decision-making; high influence means that removing it would significantly alter decision-making. For multi-dimensional continuous actions, we define the optimal influence of component $i$ in state $s$ by how much the optimal policy $\pi^*$ in state $s$ differs from the optimal policy when component $i$ is removed: $I_i^*(s) \triangleq \|\pi^*(s) - \pi^*_{\neg i}(s)\|_2$, where $\pi^*_{\neg i} \triangleq \arg\max_\pi \mathbb{E}\left[\sum_{t}^{\infty} \gamma^t \sum_{j \neq i} w_j R_j(s_t, a_t) \mid \pi\right]$.⁸ In practice, we apply two approximations to $I_i^*(s)$ for computational efficiency. First, rather than compare the difference of optimal policies, we compare the difference between one step of policy improvement: $I_i^\pi(s) \triangleq \|\arg\max_a Q^\pi(s, a) - \arg\max_a Q^\pi_{\neg i}(s, a)\|_2$, where $Q^\pi_{\neg i}(s, a) \triangleq \sum_{j \neq i} w_j Q_j^\pi(s, a)$. Second, since the argmax is computationally demanding and sensitive to statistical noise, we replace the $\arg\max_a Q(s, a)$ policy improvement operator with a policy gradient-step operator (typical in RL algorithms like SAC): $\bar{a} + \lambda \nabla_{\bar{a}} Q(s, \bar{a})$, where $\bar{a}$ is a deterministic policy action selection (such as the mode) and $\lambda \in (0, 1)$ is a step size. ⁸Discrete-action spaces could use a probability distance measure, but we focus on continuous-action spaces. When taking the difference of the gradient-step operator applied to the $Q^\pi$ and $Q^\pi_{\neg i}$ surfaces, the $\bar{a}$ terms cancel and the step size can be factored out of the norm. The result is the influence metric (Fig. 2): $I_i^\pi(s; \theta) \triangleq \|\nabla_{\bar{a}} Q^\pi(s, \bar{a}; \theta) - \nabla_{\bar{a}} Q^\pi_{\neg i}(s, \bar{a}; \theta)\|_2$. (4) The raw magnitudes of component influence can be informative by themselves; for example, a sharper Q-function surface leads to larger influence values. However, to compare influence values across components, we typically compute the fractional influence by normalizing the (always non-negative) influence: $\hat{I}_i^\pi(s; \theta) \triangleq I_i^\pi(s; \theta) / \sum_{j}^{m} I_j^\pi(s; \theta)$. We use several techniques to visualize the fractional influence. For trajectories, plotting influences over timesteps may help identify and explain key decision points (Fig. 3(a)). When studying an agent’s behavior across training, we maintain summary statistics of fractional influence. We visualize mean fractional influence across all components as a stack plot, sorted so that the component with the maximum influence at the end of training is at the bottom.
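Note that by the linearity of Eq. 3, $\nabla_{\bar{a}} Q^\pi - \nabla_{\bar{a}} Q^\pi_{\neg i} = w_i \nabla_{\bar{a}} Q_i^\pi$, so Eq. 4 reduces to a per-component gradient norm. A hedged PyTorch sketch of this computation follows; the function and argument names are illustrative assumptions:

```python
import torch

def influence(q_components, weights, a_bar):
    """Approximate influence (Eq. 4) and fractional influence at one state.

    q_components: callable a -> (m,) tensor of component Q-values Q_i(s, a)
                  for a fixed state s (e.g., a slice of a multi-headed critic).
    weights:      (m,) tensor of component weights w_i.
    a_bar:        deterministic policy action with requires_grad=True.
    """
    q = q_components(a_bar)  # (m,)
    # grad_a Q - grad_a Q_{-i} = w_i * grad_a Q_i, by linearity (Eq. 3).
    grads = torch.stack([
        torch.autograd.grad(weights[i] * q[i], a_bar, retain_graph=True)[0]
        for i in range(len(weights))
    ])                                          # (m, act_dim)
    infl = torch.linalg.norm(grads, dim=-1)     # I_i(s; theta), Eq. 4
    return infl, infl / infl.sum()              # raw and fractional influence
```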
Figure 3(b) shows such a diagram for the Lunar Lander environment; we provide figures for all environments in App. G. 6 Value decomposition: strategies for agent design Agent design is often a brute-force process of trial and error: when an agent doesn’t perform as expected, we choose some aspect of the agent’s design to vary, and then we train again. Although this approach can succeed, it can be expensive in both time and computation. In this section, we illustrate a different approach, showing how a decomposed reward helps break from trial and error by encouraging the designer to consider an agent’s point of view. In three examples, each using either the LL or BWH environments (Sec. 2.3), we draw on value decomposition tools to: (1) identify learning problems by comparing components’ empirical returns to their predictions; (2) constrain component value estimation; (3) identify adverse reward interactions with the influence metric; and (4) dynamically re-weight reward components. These examples are not just meant to demonstrate these specific techniques, nor to demonstrate significant performance improvements on these well-tested benchmark environments (in fact, SAC-D eventually produces good policies for both environments without additional tuning). Rather, the goal is to showcase a general approach to agent design for novel applications in which iteration and failure are costly. Combined with statistical tools that allow us to reason with small-sample statistics, value decomposition provides a vocabulary to describe an agent’s behavior and interpretable tools for targeted interventions. Even these small examples are more intuitive, and less computationally demanding, than they would be with a trial-and-error approach. For all examples, training parameters are identical to those of the robustness results of Sec. 4. Diagnosing and improving insufficient features: We analyze the behavior of an agent trained on Lunar Lander by comparing each component’s empirical return to the agent’s value predictions, $Q_i$. The agent is trained to land successfully, and generally the component value predictions match their empirical returns well. Curiously, all component predictions are flat near the end of the episode. For most components, these flat predictions are a good match for their returns, but not for landing. Investigating the landing dynamics, we found the simulator waits many steps after touchdown before producing a landing reward. During this period, the observations are constant, suggesting the features are inadequate to represent landing’s return. To make the observations Markov, we introduce a new feature to the agent’s observations that indicates the duration since the agent’s velocity (horizontal, vertical and angular) went to zero: $V_0^{\mathrm{trace}}(t) = V_0^{\mathrm{steps}}(t)/c$, where $V_0^{\mathrm{steps}}(t)$ is the number of time steps since all the velocities dropped below a threshold and $c$ is a fixed normalizing constant. With this feature, post-landing predictions show a marked improvement (see Figure 4(a) and Appendices C, H). Diagnosing and mitigating value errors using domain knowledge: Lunar Lander’s design makes clear that certain reward components are always non-positive (crash, main, side) while others are always non-negative (landing). However, we observe that the agent’s decomposed value predictions do not always match these bounds. In particular, value predictions of crash have a tendency to oscillate about 0 after the agent learns to land.
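Before turning to the remedy for crash, here is a minimal sketch of how the $V_0^{\mathrm{trace}}$ feature from the insufficient-features example might be computed; the threshold and normalizer below are illustrative placeholders, not the paper's values (those are in App. C and H):

```python
def v_trace_feature(velocities, threshold=1e-3, c=100.0):
    """Steps since all velocities went to ~0, normalized: V_0^trace(t) = V_0^steps(t) / c.

    velocities: per-step tuples (v_x, v_y, v_angular) from the episode so far.
    """
    steps_still = 0
    features = []
    for v in velocities:
        if all(abs(x) < threshold for x in v):
            steps_still += 1   # still counts up while the lander sits motionless
        else:
            steps_still = 0    # any movement resets the counter
        features.append(steps_still / c)
    return features
```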
Value decomposition allows us to explicitly enforce a sign constraint on crash (Figure 4(b); see App. I for details). In this particular example, constraints do not alter policy learning performance, but the resulting predictions are easier to interpret, and the same technique may improve performance in more complex environments. Mitigating an adverse reward with component weight scheduling: Under the BWH reward function, a random policy is far more likely to experience an unsuccessful outcome (falling over) than a successful one (walking forward). This bias can inhibit agent exploration early in training, causing an agent’s policy to fall into a local minimum (the agent stands still). Here, we diagnose this dynamic using component predictions and influence metrics, and remedy it by varying a single component weight during training. We find that the failure component’s fractional influence dominates the forward component’s fractional influence early in training, and that this relationship reverses as agent performance improves (Fig. 5(a)). The forward component’s near-zero value predictions (Fig. 5(b)) early in learning indicate that in many episodes, the agent neither moves nor expects to move (Fig. 5(c)). Early dominance by the easy-to-observe failure suggests that it is inhibiting exploration. To mitigate this problem, we vary the failure component weight, $w_{\mathrm{failure}}$, from 0.01 to 1 over the course of learning according to the schedule described in App. J. This schedule significantly increases the agents’ forward progress (Fig. 5(c)) and accelerates learning. 7 Related work Our work builds upon earlier studies of value decomposition in the explainable reinforcement learning (XRL) literature: DrQ [19], DrSARSA [29], and HRA [31].⁹ Like our approach, these methods learn separate value function estimates for each term of a linear reward function. DrQ and HRA are off-policy Q-learning-like methods, while DrSARSA is an on-policy method. HRA does not converge to a globally optimal policy, as each value function is only locally optimal for the reward component it measures. The RDX and MSX metrics proposed by DrQ could be adopted in our setting, but the influence metric is easier to use with continuous actions and to aggregate with summary statistics. Our approach improves upon these prior contributions by: (1) working in continuous-action and discrete-action environments; (2) allowing for and demonstrating dynamic re-weighting of reward components during training; and (3) being applicable to a family of actor-critic methods. While these approaches and ours explore using value decomposition for explainability, these works focus on describing to users why an agent took certain actions, whereas we focus on how to use value decomposition to diagnose and remedy learning challenges. The Horde architecture [34] and UVFA [30], methods for multi-goal learning, also employ multiple value functions (one for each goal). UVFAs use a parameterized continuous space of goals, while Horde makes multiple discrete value function predictions. The value functions in our work are conditioned on a policy that optimizes the global reward, whereas in Horde and UVFAs the value functions are conditioned on independent policies that greedily optimize local goals (similar to HRA). Additionally, in our approach, the composite value function can be recovered from the components. Other value decomposition work includes Empathic Q-learning [21] and Orchestrated Value Mapping [10].
The primary difference between our work and these other approaches concerns the motivation and application of value decomposition. These works focus on how value decomposition can improve sample efficiency, generalization, or other aspects of the core learning problem, rather than how to diagnose and remedy problems. Mathematically, value decomposition bears resemblance to work on successor features, which has focused primarily on state representation and transfer learning [3, 4, 6, 12]. Methods for multi-objective RL [16, 28] learn sets of policies, each with a distinct linear weight over multiple reward objectives. Our work also recovers the value function for different linear reward preferences, but uses this capability to diagnose behavior and learning problems rather than to learn multiple policies. ⁹For a broad overview of XRL, we direct the reader to Heuillet et al. [17]. 8 Concluding remarks We have argued that the iterative design of reinforcement learning agents can be improved through the use of value decomposition, in which we keep individual reward components separate and learn value estimates of each. We provided a simple prescription for deriving value decomposition algorithms from actor-critic methods, and applied it to SAC to derive the SAC-D algorithm. Combined with the CAGrad method, SAC-D meets or exceeds SAC’s performance in all environments we tested. We introduced the influence metric, and demonstrated its use in measuring each reward component’s effect on an agent’s decisions. Finally, we provided several examples of how value decomposition can diagnose and remedy agent learning problems. Although value decomposition is a simple and broadly applicable tool, we note the following limitations. (1) Our method requires a composite reward function of multiple components. (2) We only study linear reward decomposition. (3) Component predictions only reflect the agent’s expectations under the single learned policy; changing the weights doesn’t tell you what to expect after re-optimizing the policy for them. (4) Our approach benefits most when used with methods that learn Q-functions. Methods that optimize the policy with empirical returns have a weaker link between agent expectations (component Q-functions) and policy decisions. The influence metric also requires a Q-function model. Our method presents the same societal benefits and risks as other RL methods. However, we believe this technology has a net beneficial impact, because making agent decisions more introspectable enables developers to catch problematic behavior before deploying the technology. One particular concern, however, is that value estimates represent an agent’s beliefs, not ground truth; this should be kept in mind when such predictions are used in real-world decision-making, as they may reinforce biases or lead to incorrect conclusions.
1. What is the main contribution of the paper regarding value decomposition for diagnosing and designing actor-critic algorithms? 2. What are the strengths and weaknesses of the proposed method, particularly in its simplicity, ease of understanding, and potential applications? 3. Why does the reviewer wonder if SAC is necessary for the rest of the story, and how does this relate to Q-learning based value decomposition? 4. Does the reviewer anticipate any limitations or challenges in applying the pragmatic suggestions from Sections 5 and 6 to Q-learning based value decomposition? 5. How does the reviewer view the difference between value decomposition and linear RL or linear function approximation in Jin et al. (2020), and what are the implications for interpretability?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes value decomposition for diagnosing and iteratively designing actor-critic algorithms. The key idea of value decomposition is to treat the reward signal as constituted by a number of distinct components that together yield value. However, rather than just take some simple operation of these distinct signals, value decomposition proposes to learn a Q function for each separate component of the reward: a Q-network, then, outputs m predictions (for m reward components) rather than just 1. This relatively simple idea is explored as a mechanism for diagnosing a variety of characteristics of an RL algorithm, ultimately giving rise to the design and improvement of new algorithms. As the paper points out, the core idea of value decomposition is not new, and has been applied to Q-learning variants in the past. Indeed, value decomposition is conceptually similar to typical linear assumptions on the reward or value, but encourages algorithm design to explicitly learn these distinct value components. Moreover, to my knowledge, this is the first extension to actor-critic algorithms: the paper proposes SAC-D, a version of Soft Actor-Critic with value decomposition in the mix. Section 4 presents findings from experiments contrasting various forms of SAC-D in different continuous Gym environments, such as Bipedal Walker or Ant. Arguably the most significant aspect of value decomposition is that it unlocks new perspectives for diagnosing agent failures, and for iteratively improving the design of these agents in light of these failures. Section 5 presents a series of experiments and insights showcasing how dropping out one of the reward components can impact aspects of the agent. Figure 2, for example, illustrates the influence each reward component exerts on the Q function through contour maps over different mixes of the reward components Q_1, Q_2 and Q_3, while Figure 3 explores the fractional influence of each component. I found Figure 3b to be particularly illuminating. These insights culminate in Section 6, which provides a pragmatic view on how to incrementally improve agent design using the diagnostic tool of value decomposition. For instance, in Lunar Lander, certain reward components should be negative, while others should be non-negative. Observing that an agent is incorrectly predicting certain values to be outside their possible bound allows for incremental adjustment of the agent. Strengths And Weaknesses [STRENGTHS] Overall, this paper proposes a novel perspective for diagnosing agent failures and iteratively improving agents as a result. As the introduction of the paper points out, this new viewpoint directly engages with the practical difficulties of deploying RL agents: they often fail for unknown reasons, and staring at a monolithic neural network and chaotic learning curves can often be daunting to debug and react to. The method of value decomposition is simple, easy to understand, and can be broadly applicable. The paper is well written and takes care in motivating many of its claims. It also simply lays out its core contributions, and limitations. The experiments are broad, interesting, and reveal the considerable potential behind the proposed method. I believe this paper possesses many virtues, and is sure to be of interest to the community. For this reason, pending any issues uncovered by other reviewers, I recommend accepting this paper. [WEAKNESSES] There are several small things that could improve the paper, but none are major.
First, the scope of the paper is quite ambitious, but I believe ultimately that sections 5 and 6 and the perspectives therein really constitute the core contributions. In this sense, it is not clear the new algorithm SAC-D or the experiments in Figure 1 are strictly necessary to the core of the work. A mild suggestion, but perhaps they can be moved to the appendix. Otherwise, I have a few minor writing suggestions and typos to fix (see below). Writing Suggestions: In Algorithm 1, consider using the \texttt{} wrapper for the boolean "use_cagrad". I believe in Equation 2b Q_i should be changed to Q_j. The pseudocode for Algorithm 1 might be cleaned up. For instance, I find the added text to detract from the readability of the algorithm. There are a few too many comments as well. Another minor point, but I found the y-axis of Figure 1b to be counterintuitive. The first few times I read this plot, I assumed the "Normalized Expected Reward" in Figure 1a was applying its y-axis to both plots. I wonder if there is some visual adjustment to be made that can prevent this collision. Small point of consistency: the subfigure labels differ throughout the paper in style. Questions Q1: At first I found the extension to SAC quite natural, but reading it through a few times, I find it less clear why SAC is needed for the rest of the story. To me the main claim of the work is that value decomposition can be a powerful diagnostic tool for understanding agent limitations, and fixing them. This perspective applies just as easily to the existing Q-learning approaches to value decomposition. So, I am left wondering: why is SAC the focus, rather than Q-learning? Another way to put it: do you anticipate that the pragmatic suggestions from Sections 5 and 6 extend to Q-learning based value decomposition as well? Q2: Do you anticipate the linearity of the reward composition to be problematic or overly limiting? Q3: One reaction I have to the work is that linear functions (and approaches focused on linearity) in general are praised for their interpretability. In this way, the idea of value decomposition is very nearly just making a linearity assumption. How, in general, should we think about this approach as different from linear RL, or just assuming that the reward admits a linear decomposition with some unknown weights as in Jin et al. (2020)? Jin, Chi, et al. "Provably efficient reinforcement learning with linear function approximation." Conference on Learning Theory. PMLR, 2020. Limitations N/A
NIPS
1. What is the focus and contribution of the paper regarding reward functions in reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its originality and significance? 3. What are the weaknesses of the paper, especially regarding its limitations? 4. Do you have any concerns or suggestions regarding the extension of the proposed method to non-linear rewards? 5. How do the diagnosis case studies demonstrate the effectiveness and practicality of the proposed tool for reward function design?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a tool to diagnose and design the reward function used to train reinforcement learning (RL) agents. In practice, the reward function in RL consists of multiple terms (e.g., a collision penalty and a velocity bonus). The proposed method assumes the reward is a linear combination of these terms and learns a Q-function for each reward term. Such a decomposition enables the reward-function designer to inspect the influence of each reward term on the agent's decisions, thus reducing the effort of tuning the reward function without guidance. Several diagnosis case studies showcase how RL practitioners can use this tool to design the reward function.
Strengths And Weaknesses
Originality: Though value decomposition is not a new idea, using value decomposition to guide reward-function design and the proposed influence metric are an original contribution, to the best of my knowledge.
Quality and clarity: This paper is easy to follow. The experiments are well-designed.
Significance: The proposed reward design scheme is important in RL.
Questions
The current formulation needs the reward function to be a linear combination of multiple rewards. Can the authors comment on the possibility of extending the proposed method to non-linear rewards?
In Section 6, the results of some diagnoses (e.g., weight scheduling, Markov features) can be applied to SAC without value decomposition. Have the authors tried applying weight scheduling and Markov features to SAC without value decomposition? This might help answer whether the diagnosis results can be applied back to the non-decomposed version.
Limitations
The authors acknowledged the limitations in the paper.
NIPS
Title
Value Function Decomposition for Iterative Design of Reinforcement Learning Agents
Abstract
Designing reinforcement learning (RL) agents is typically a difficult process that requires numerous design iterations. Learning can fail for a multitude of reasons, and standard RL methods provide too few tools to provide insight into the exact cause. In this paper, we show how to integrate value decomposition into a broad class of actor-critic algorithms and use it to assist in the iterative agent-design process. Value decomposition separates a reward function into distinct components and learns value estimates for each. These value estimates provide insight into an agent's learning and decision-making process and enable new training methods to mitigate common problems. As a demonstration, we introduce SAC-D, a variant of soft actor-critic (SAC) adapted for value decomposition. SAC-D maintains similar performance to SAC, while learning a larger set of value predictions. We also introduce decomposition-based tools that exploit this information, including a new reward influence metric, which measures each reward component's effect on agent decision-making. Using these tools, we provide several demonstrations of decomposition's use in identifying and addressing problems in the design of both environments and agents. Value decomposition is broadly applicable and easy to incorporate into existing algorithms and workflows, making it a powerful tool in an RL practitioner's toolbox.
1 Introduction
Deep reinforcement-learning (RL) approaches have achieved successes in a range of application areas such as gaming [5, 32, 36, 38], robotics [22], and the natural sciences [25, 35]. Despite these successes, applying RL techniques to complex control problems remains a daunting undertaking, where initial attempts often result in underwhelming performance. Unfortunately, there are many reasons why an agent may fail to learn a good policy, making it difficult to diagnose which reason(s) caused a particular agent to fail. For example: an agent may fail because the state features were insufficient to make accurate predictions, different task objectives defining the reward function may be imbalanced, the agent may fail to sufficiently explore the state-action space, values may not accurately propagate to more distant states, the neural network may not have sufficient capacity to approximate the policy or value function(s), or there may be subtle differences between training and evaluation environments.
Without a way to diagnose the causes of poor performance or to recognize when a problem has been remedied, practitioners typically engage in a long trial-and-error design process until an agent reaches a desired level of performance. Frustrations with this trial-and-error process have been expressed in other work [16]. We describe how value decomposition, a simple, broadly-applicable technique, can address these application challenges. In RL, the agent receives a reward that is often a sum of many reward components, each designed to encode some aspect of the desired agent behavior. From this composite reward, it learns a single composite value function. Using value decomposition, an agent learns a component value function for each reward component.
To perform policy optimization, the composite value function is recovered by taking a weighted sum of the component value functions. While prior work has proposed value decomposition methods for discrete-action Q-learning [19, 29, 31], we show how value decomposition can be incorporated into a broad class of actor-critic (AC) methods. In addition, we introduce SAC-D, a version of soft actor-critic (SAC) [13, 14] with value decomposition, and explore its use in multi-dimensional continuous-action environments. We also introduce the influence metric, which measures how much an agent's decisions are affected by each reward component. While earlier work focuses on its use in reward design [16, 19], value decomposition can facilitate diagnosis of a wide range of issues and enable new training methodologies. To demonstrate its utility, in Sec. 6 we show how to use it to: (1) diagnose insufficient state features; (2) diagnose value prediction errors and exploit the decomposed structure to inject background knowledge; and (3) identify reward components that are inhibiting exploration and mitigate the effect by gradually incorporating component predictions into policy optimization.
Value decomposition's additional diagnostic and training capabilities come at the cost of a more challenging prediction problem: instead of learning a single value function, many must be learned. To investigate if this difficulty negatively impacts agent performance, we compare the average performance of SAC-D to SAC on benchmark environments. We find that a naive implementation of SAC-D underperforms SAC and then show how to improve SAC-D so that it matches and sometimes exceeds SAC's performance. These improvements may also be applied to value decomposition for other AC algorithms. While variations of value decomposition have been explored extensively in past work (see Sec. 7), in this paper we make the following contributions. (1) We show how to integrate value decomposition into a broad class of actor-critic algorithms. (2) We analyze the performance of different implementations of value decomposition for SAC on a range of benchmark continuous-action environments. (3) We introduce the influence metric: a novel value decomposition metric for measuring how much each reward component affects decision-making. (4) We provide a set of illustrative examples of how value decomposition and influence can be used to diagnose various kinds of learning challenges. (5) We describe new training methods that exploit the value decomposition structure and can be used to mitigate different learning challenges.
2 Background
2.1 MDPs and Q-functions
In RL, an agent's interaction with the environment is modeled as a Markov Decision Process (MDP) $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$, where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of actions, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is a state transition probability function $P(s, a, s') = \Pr(S_{t+1} = s' \mid S_t = s, A_t = a)$, $R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is a reward function $R(s, a) = \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a]$, and $\gamma \in [0, 1]$ discounts future rewards.⁴ The goal of an agent is to learn a policy $\pi(a|s)$ that maps states to an action probability distribution that maximizes the sum of future rewards. The agent is trained to maximize the discounted return $\mathbb{E}\left[\sum_t \gamma^t R(s_t, a_t)\right]$. The Q-function maps state-action pairs to the expected cumulative discounted reward when starting in state s, taking action a, and then following policy π thereafter:

$$Q^\pi(s, a) \triangleq \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\Big|\, \pi, s_0 = s, a_0 = a\right]. \quad (1)$$

⁴For continuing tasks, γ must be < 1, and we only consider algorithms for γ < 1.
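To make the discount arithmetic in Eq. (1) concrete, here is a minimal Python sketch (ours, not from the paper) of the discounted return whose expectation the Q-function represents; the rewards and discount are illustrative.

```python
def discounted_return(rewards, gamma=0.99):
    """Computes sum_t gamma^t * r_t for one trajectory. Q^pi(s, a) is the
    expectation of this quantity over trajectories that start with (s, a)
    and then follow pi."""
    g = 0.0
    for r in reversed(rewards):  # fold in the discounted tail, back to front
        g = r + gamma * g
    return g

# Example: a sparse reward of 1.0 arriving after three zero-reward steps.
print(discounted_return([0.0, 0.0, 0.0, 1.0]))  # 0.99**3 ~= 0.9703
```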
2.2 Soft actor-critic
Soft actor-critic (SAC) [13, 14] is an off-policy actor-critic algorithm parameterized with five neural networks: a stochastic policy network π with parameters φ, and two pairs of Q-functions and target Q-functions with parameters (θ₁, θ₂) and (θ̄₁, θ̄₂), respectively. As with other actor-critic algorithms, SAC has two main steps: policy evaluation (in which it estimates the Q-function for policy π) and policy improvement (in which it optimizes the policy to maximize its Q-function estimates). Unlike other actor-critic algorithms, SAC optimizes a maximum-entropy formulation of the MDP, in which rewards are augmented with policy entropy bonuses that prevent premature policy collapse. To perform policy evaluation and improvement, SAC minimizes the following loss functions simultaneously:

$$L_{Q_i} = \mathbb{E}\left[\tfrac{1}{2}\left(Q(s, a; \theta_i) - y\right)^2\right] \;\text{ for } i \in \{1, 2\}, \quad (2a)$$
$$L_\pi = \mathbb{E}\left[\alpha \log \pi(u|s; \phi) - \min_{j \in \{1,2\}} Q(s, u; \theta_j)\right], \quad (2b)$$

where $(s, a, r, s')$ transitions are drawn from an experience replay buffer, $y := r + \gamma\left(\min_{j \in \{1,2\}} Q(s', a'; \bar{\theta}_j) - \alpha \log \pi(a'|s'; \phi)\right)$, $a' \sim \pi(\cdot|s'; \phi)$, $u \sim \pi(\cdot|s; \phi)$, and α is an (optionally learned) entropy regularization parameter. The min of Q-function pairs addresses overestimation bias in value function estimation [11, 15]. The parameters θ̄₁ and θ̄₂ are updated toward θ₁ and θ₂ via an exponential moving average at each step.
2.3 Environments
Throughout the paper, we italicize descriptive names for the components of the continuous-action Lunar Lander (LL), Bipedal Walker (BW), and Bipedal Walker Hardcore (BWH) environments. In LL, an agent must land a spacecraft in the center of a landing zone using as little fuel as possible. The reward components include: a reward for successful landing; penalties for crashing (crash) and engine usage (main, side); and shaping rewards used to encourage the agent to stay upright (angle), move towards the center of the landing pad (position) with low velocity (velocity), and land with both legs (right leg, left leg). In BW, an agent learns to make a 2-legged robot walk. The reward components include: a reward for forward progress, a penalty for falling (failure), a cost for actions (control), and a shaping reward to discourage head movement (head). BWH is identical to BW, but adds additional obstacles for the agent to navigate.
3 Value decomposition for actor-critic methods
Most RL algorithms estimate the value function and use it to improve the policy. Unfortunately, value functions and policies provide little insight into the agent's decision-making. However, reward functions are often composite functions of multiple component state-action signals. By learning a value function estimate for the current policy for each component, practitioners gain insight into what the agent expects to happen and how these reward components interact. Naturally, policy improvement still requires the composite value function. In Sec. 3.1 we show how the composite Q-function can be recovered from the component Q-functions. From this property, a range of actor-critic algorithms can be adapted to use value decomposition by following the template below.⁵
1. Alter Q-function networks to have m outputs instead of 1, where m is the number of reward components.
2. Use the base algorithm's Q-function update for each of the m components, replacing the composite reward term with the respective component reward term.
3. Apply the base algorithm's policy improvement step by first recovering the composite Q-function.
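A minimal PyTorch sketch of template steps 1 and 3 (class and function names are ours, not the authors' code): the critic produces one value per reward component, and the composite Q used for policy improvement is recovered as a weighted sum (the property established in Sec. 3.1 below).

```python
import torch
import torch.nn as nn

class DecomposedQNetwork(nn.Module):
    """Template step 1: a Q-network with one output head per reward component."""
    def __init__(self, obs_dim, act_dim, n_components, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_components),  # Q_1 ... Q_m
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))  # shape (batch, m)

def composite_q(q_components, w):
    """Template step 3: recover Q(s, a) = sum_i w_i Q_i(s, a) for policy improvement."""
    return (q_components * w).sum(dim=-1)
```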
For example, this template can be applied to algorithms that use TD(0) [33], or ones that use Retrace [27]. It works with algorithms that improve the policy by differentiating through the Q-function [11, 14, 23], and ones that fit it toward non-parametric target action distributions [1].
⁵Composite state value functions can similarly be recovered; however, Q-functions allow for deeper introspection, so we focus on that setting in this work.

Algorithm 1 SAC-D and SAC-D-CAGrad Update
Require: Experience replay buffer B; twin Q-function parameters θ₁, θ₂ (with Θ = θ₁ ∪ θ₂) and target parameters θ̄₁, θ̄₂; policy parameters φ; discount factor γ; entropy parameter α; reward weights w ∈ ℝ^{m+1}; learning rates λ_q, λ_π; target network step size η; Boolean use_cagrad for SAC-D-CAGrad or SAC-D.
1: Sample transition (minibatch) (s, a, r, s') ∼ B    ▷ r ∈ ℝ^m is a vector of m reward components
2: Sample policy actions a' ∼ π(·|s'; φ) and u ∼ π(·|s; φ)
3: r_{m+1} ← −α log π(a'|s'; φ)    ▷ Extend reward vector to include entropy reward
4: j ← argmin_{j∈{1,2}} Σ_{i}^{m+1} w_i Q_i(s', a'; θ̄_j)    ▷ Find target network by minimum composite Q-value
5: y_i ← r_i + γ Q_i(s', a'; θ̄_j)
6: L_{Q_i} ← Σ_{j=1}^{2} ½ (Q_i(s, a; θ_j) − y_i)²
7: L_π ← α log π(u|s; φ) − min_{j∈{1,2}} Σ_{i}^{m+1} w_i Q_i(s, u; θ_j)
8: if use_cagrad then
9:   Θ ← Θ − λ_q CAGRAD(J_{L_Q}, Θ)
10: else
11:   Θ ← Θ − λ_q ∇_Θ (1/(m+1)) Σ_{i}^{m+1} L_{Q_i}
12: end if
13: φ ← φ − λ_π ∇_φ L_π
14: Update target networks: Θ̄ ← (1 − η)Θ̄ + ηΘ

Although this template is conceptually simple, learning component Q-functions poses a more difficult prediction problem: multiple predictions must be learned instead of one composite prediction. Ideally, this increased difficulty would not negatively impact agent performance. In Sec. 3.2 we introduce SAC-D, an adaptation of SAC to use value decomposition, and describe additions we made to the above template to maintain performance parity with conventional SAC. Although these additions are contextualised to SAC, they are general and can be used when adapting other actor-critic algorithms.
3.1 Recovering the composite Q-function
We assume the environment's reward function is a linear combination of m components: $R(s, a) \triangleq \sum_{i}^{m} w_i R_i(s, a)$, where $w_i \in \mathbb{R}$ is a scalar component weight for the i-th component and $R_i : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function of the i-th component for state-action pair (s, a). Applying the linearity of expectation, we find the Q-function inherits the linear structure from the reward⁶:

$$Q^\pi(s, a) = \sum_{i}^{m} w_i Q_i^\pi(s, a), \quad (3)$$

where we define the i-th component Q-function as $Q_i^\pi(s, a) \triangleq \mathbb{E}\left[\sum_t \gamma^t R_i(s_t, a_t) \mid \pi, s_0 = s, a_0 = a\right]$. Unless otherwise specified, we assume w_i = 1 for all i. Because the component weights are factored out of the component Q-functions, they may be varied without changing the component prediction target, allowing the policy to be evaluated for any weight combination. Although the assumption of linearity may seem restrictive, note that each reward component may be a non-linear function of state variables, allowing for very expressive environment rewards. Furthermore, many environments, including all the environments we investigate in this paper, are naturally structured as a sum of (non-linear) reward components.
3.2 SAC with value decomposition
Here we introduce SAC-D (Alg. 1), an adaptation of SAC to use value decomposition. Adapting SAC only requires one additional consideration beyond our template: the entropy bonus reward term SAC adds is treated as an (m+1)-th reward component: $R_{m+1}(s') \triangleq -\alpha \log \pi(a'|s'; \phi)$ (line 3). However, this approach, which we refer to as SAC-D-Naive, underperforms SAC in many settings.
We found two additional modifications essential to match the performance of SAC. The first concerns how we apply the twin-network minimum of Eq. 2 in the context of value decomposition. The second is to use Conflict-Averse Gradient descent (CAGrad) [24] to address optimization problems that arise when training multi-headed neural networks. We refer to SAC with value decomposition and the twin-network correction as SAC-D, and the variant with twin-network correction and CAGrad as SAC-D-CAGrad.
⁶See Theorem A.1 for the proof. This linear decomposition property of value functions has been explored elsewhere [3, 9], but in different contexts and with different motivations. See Sec. 7 for more information.

Twin-network minimums in value decomposition: In SAC, the Q-value target is the minimum of two Q-function networks (Eq. 2). Using the same Q-value update rule for each component, as described in our template, suggests using a minimum for each component target: $q_i := \min_{j \in \{1,2\}} Q_i(s, a; \bar{\theta}_j)$. However, this is not a good choice in practice. The purpose of the twin-network minimum is to mitigate overestimation bias from the feedback loop of the policy optimizing the Q-function. Because the policy optimizes the composite Q-function, a better approach is to use all the predictions from the network with the minimum composite Q-function (Alg. 1, lines 5-6). This approach reduces underestimation bias and improves performance compared to an element-wise minimum (see Sec. 4).
Mediating the difficulty of multi-objective optimization: Even though the scalar values of the composite Q-function are identical to those used in SAC, simultaneous optimization of all $Q_i^\pi$ components may introduce training problems common in multi-objective optimization: conflicting gradients, high curvature, and large differences in gradient magnitudes [39].⁷ The CAGrad method, designed for the multi-task RL setting, addresses these issues by replacing the gradient of a multi-task objective with a weighted sum of per-task loss gradients. This updated gradient step maximizes the improvement of the worst-performing task on each optimization step, and still converges to a minimum of the unmodified loss. We incorporate CAGrad into SAC-D by treating each component as a "task", and update the gradient vector accordingly (Alg. 1, line 9).
4 Robustness experiments
We benchmark SAC-D, SAC-D-CAGrad, and SAC-D-Naive against SAC on a selection of continuous-action Gym [7] environments. For each environment, we exposed existing additive reward components without altering the behavior of the environments or their composite rewards. That is, these environments already implemented their reward functions as a linear combination of separate reward components and we simply exposed that information to the algorithm (for details, see App. B). As outlined in App. C, we used hyperparameters previously published for use with SAC [14] for all experiments. We tied SAC-D's hyperparameters to SAC's because our goal is for value decomposition to be a drop-in addition without significant loss in agent performance. However, it is possible better performance could be reached with tuning. Figure 1(a) shows the performance of each algorithm aggregated across all environments and all experimental runs. Figure 1(b) shows the same information, but highlights the distribution of scores across experimental runs. In aggregate, SAC-D-CAGrad slightly outperforms SAC, although it has a broader range of performance scores. SAC-D-Naive significantly underperforms SAC.
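As a concrete illustration of the twin-network correction above, a sketch with our own names (not the authors' code): the target network is chosen once per transition by its composite value, and all component targets are then taken from that single network.

```python
import torch

def component_targets(q1_targ, q2_targ, w, r, gamma):
    """q1_targ, q2_targ: (batch, m) component predictions at (s', a') from the
    two target networks; w: (m,) weights; r: (batch, m) component rewards.
    Selects the network with the smaller *composite* value (Alg. 1, line 4)
    rather than taking an element-wise minimum per component, which would
    compound underestimation."""
    comp1 = (q1_targ * w).sum(dim=-1)           # composite Q from network 1
    comp2 = (q2_targ * w).sum(dim=-1)           # composite Q from network 2
    use_first = (comp1 <= comp2).unsqueeze(-1)  # (batch, 1), broadcasts over m
    q_sel = torch.where(use_first, q1_targ, q2_targ)
    return r + gamma * q_sel                    # per-component targets y_i
```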
⁷Multi-headed prediction can also improve representation learning, as in work on auxiliary tasks [18, 26].
We provide training curves for the 8 environments investigated in App. D, but highlight the atypical training curves for BWH (Fig. 1(c)) and Ant (Fig. 1(d)). In the case of BWH, SAC-D-CAGrad significantly outperforms SAC. It was not the goal of this work for SAC-D to improve on SAC. Rather, we sought to provide more insights into the learning process. As such, we make no strong claims about when SAC-D can be expected to outperform SAC. Nevertheless, this result does suggest that SAC-D may sometimes benefit from auxiliary task learning. We leave this question as a subject for future investigation. In Ant, SAC-D (without CAGrad) underperforms all other methods. We found that infrequent environment termination causes large Q-function errors and leads to catastrophic gradient conflicts. Further analysis of termination issues and CAGrad behavior is described in Appendices E and F, respectively.
5 Reward component influence
It can be difficult to understand how an agent's predictions interact to affect decision-making. We now introduce the reward influence metric, which indicates how much each component contributes to an agent's decision. Intuitively, low influence means that removing a component would not alter decision-making; high influence means that removing it would significantly alter decision-making. For multi-dimensional continuous actions, we define the optimal influence of component i in state s by how much the optimal policy π* in state s differs from the optimal policy when component i is removed:

$$I_i^*(s) \triangleq \|\pi^*(s) - \pi^*_{\neg i}(s)\|_2, \quad \text{where } \pi^*_{\neg i} \triangleq \arg\max_\pi \mathbb{E}\Big[\sum_{t}^{\infty} \gamma^t \sum_{j \neq i} w_j R_j(s_t, a_t) \,\Big|\, \pi\Big].^{8}$$

In practice, we apply two approximations to $I_i^*(s)$ for computational efficiency. First, rather than compare the difference of optimal policies, we compare the difference between one step of policy improvement: $I_i^\pi(s) \triangleq \|\arg\max_a Q^\pi(s, a) - \arg\max_a Q^\pi_{\neg i}(s, a)\|_2$, where $Q^\pi_{\neg i}(s, a) \triangleq \sum_{j \neq i} w_j Q_j^\pi(s, a)$. Second, since argmax is computationally demanding and sensitive to statistical noise, we replace the $\arg\max_a Q(s, a)$ policy improvement operator with a policy gradient-step operator (typical in RL algorithms like SAC): $\bar{a} + \lambda \nabla_{\bar{a}} Q(s, \bar{a})$, where $\bar{a}$ is a deterministic policy action selection (such as the mode) and $\lambda \in (0, 1)$ is a step size. When taking the difference of the gradient-step operator applied to the $Q^\pi$ and $Q^\pi_{\neg i}$ surfaces, the $\bar{a}$ terms cancel, and $\lambda$ can be factored out of the norm. The result is the influence metric (Fig. 2):

$$I_i^\pi(s; \theta) \triangleq \|\nabla_{\bar{a}} Q^\pi(s, \bar{a}; \theta) - \nabla_{\bar{a}} Q^\pi_{\neg i}(s, \bar{a}; \theta)\|_2. \quad (4)$$

⁸Discrete-action spaces could use a probability distance measure, but we focus on continuous-action spaces.
The raw magnitudes of component influence can be informative by themselves; for example, a sharper Q-function surface leads to larger influence values. However, to compare influence values across components, we typically compute the fractional influence by normalizing the (always non-negative) influence: $\hat{I}_i^\pi(s; \theta) \triangleq I_i^\pi(s; \theta) \big/ \sum_{j}^{m} I_j^\pi(s; \theta)$. We use several techniques to visualize the fractional influence. For trajectories, plotting influences over timesteps may help identify and explain key decision points (Fig. 3(a)). When studying an agent's behavior across training, we maintain summary statistics of fractional influence. We visualize mean fractional influence across all components as a stack plot, sorted so that the component with the maximum influence at the end of training is at the bottom.
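Before turning to the stack-plot example, a sketch of how Eq. (4) can be computed with automatic differentiation (PyTorch; names ours). Because gradients are linear and $Q^\pi - Q^\pi_{\neg i} = w_i Q_i^\pi$, each component's influence reduces to the norm of $\nabla_{\bar{a}}(w_i Q_i)$.

```python
import torch

def fractional_influence(q_net, s, a_bar, w):
    """q_net(s, a) returns (batch, m) component Q-values; a_bar is the policy's
    deterministic action (e.g., the mode). Returns (batch, m) fractional
    influences: Eq. (4) normalized across components."""
    a = a_bar.detach().requires_grad_(True)
    q = q_net(s, a)                               # (batch, m)
    norms = []
    for i in range(q.shape[-1]):
        # grad_a Q - grad_a Q_{not i} = grad_a (w_i * Q_i) by linearity
        g, = torch.autograd.grad((w[i] * q[:, i]).sum(), a, retain_graph=True)
        norms.append(g.norm(dim=-1))              # (batch,)
    influence = torch.stack(norms, dim=-1)        # (batch, m)
    return influence / influence.sum(dim=-1, keepdim=True)
```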
Figure 3(b) shows such a diagram for the Lunar Lander environment; we provide figures for all environments in App. G.
6 Value decomposition: strategies for agent design
Agent design is often a brute-force process of trial and error: when an agent doesn't perform as expected, we choose some aspect of the agent's design to vary, and then we train again. Although this approach can succeed, it can be expensive in both time and computation. In this section, we illustrate a different approach, showing how a decomposed reward helps break from trial-and-error by encouraging the designer to consider an agent's point of view. In three examples, each using either the LL or BWH environments (Sec. 2.3), we draw on value decomposition tools to: (1) identify learning problems by comparing components' empirical returns to their predictions; (2) constrain component value estimation; (3) identify adverse reward interactions with the influence metric; and (4) dynamically re-weight reward components. These examples are not just meant to demonstrate these specific techniques, nor to demonstrate significant performance improvements on these well-tested benchmark environments (in fact, SAC-D eventually produces good policies for both environments without additional tuning). Rather, the goal is to showcase a general approach to agent design for novel applications in which iteration and failure are costly. Combined with statistical tools that allow us to reason with small-sample statistics, value decomposition provides a vocabulary to describe an agent's behavior and interpretable tools for targeted interventions. Even these small examples are more intuitive, and less computationally demanding, than they would be with a trial-and-error approach. For all examples, training parameters are identical to those used for the robustness results of Sec. 4.
Diagnosing and improving insufficient features: We analyze the behavior of an agent trained on Lunar Lander by comparing each component's empirical return to the agent's value predictions, $Q_i$. The agent is trained to land successfully, and generally the component value predictions match their empirical returns well. Curiously, all component predictions are flat near the end of the episode. For most components, these flat predictions are a good match for their returns, but not for landing. Investigating the landing dynamics, we found the simulator waits many steps after touchdown before producing a landing reward. During this period, the observations are constant, suggesting the features are inadequate to represent landing's return. To make the observations Markov, we introduce a new feature to the agent's observations that indicates the duration since the agent's velocity (horizontal, vertical, and angular) went to zero: $V_0^{\text{trace}}(t) = V_0^{\text{steps}}(t)/c$, where $V_0^{\text{steps}}(t)$ is the number of time steps since all the velocities dropped below a threshold and c is a fixed normalizing constant. With this feature, post-landing predictions show a marked improvement (see Figure 4(a) and Appendices C, H); a sketch of the feature appears below.
Diagnosing and mitigating value errors using domain knowledge: Lunar Lander's design makes clear that certain reward components are always non-positive (crash, main, side) while others are always non-negative (landing). However, we observe that the agent's decomposed value predictions do not always match these bounds. In particular, value predictions of crash have a tendency to oscillate about 0 after the agent learns to land.
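Before continuing to the crash-prediction fix, here is a minimal sketch of the stillness feature described above, written as a gym-style observation wrapper. The velocity indices, threshold, and normalizing constant are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

class StillnessFeatureWrapper:
    """Appends V0_trace(t) = V0_steps(t) / c to a 1-D observation, where
    V0_steps counts consecutive steps with all tracked velocities below
    `thresh`. Assumes the classic 4-tuple gym step API."""
    def __init__(self, env, vel_indices=(2, 3, 5), thresh=1e-3, c=100.0):
        self.env, self.vel_indices = env, list(vel_indices)
        self.thresh, self.c, self.steps_still = thresh, c, 0

    def _augment(self, obs):
        vels = np.abs(np.asarray(obs)[self.vel_indices])
        self.steps_still = self.steps_still + 1 if np.all(vels < self.thresh) else 0
        return np.append(obs, self.steps_still / self.c)

    def reset(self):
        self.steps_still = 0
        return self._augment(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._augment(obs), reward, done, info
```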
Value decomposition allows us to explicitly enforce a sign constraint on crash (Figure 4(b); see App. I for details). In this particular example, constraints do not alter policy learning performance, but the resulting predictions are easier to interpret, and the same technique may improve performance in more complex environments.
Mitigating an adverse reward with component weight scheduling: Under the BWH reward function, a random policy is far more likely to experience an unsuccessful outcome (falling over) than a successful one (walking forward). This bias can inhibit agent exploration early in training, causing an agent's policy to fall into a local minimum (the agent stands still). Here, we diagnose this dynamic using component predictions and influence metrics, and remedy it by varying a single component weight during training. We find that the failure component's fractional influence dominates the forward component's fractional influence early in training, and that this relationship reverses as agent performance improves (Fig. 5(a)). The forward component's near-zero value predictions (Fig. 5(b)) early in learning indicate that in many episodes, the agent neither moves nor expects to move (Fig. 5(c)). Early dominance by the easy-to-observe failure suggests that it is inhibiting exploration. To mitigate this problem, we vary the failure component weight, $w_{\text{failure}}$, from 0.01 to 1 over the course of learning according to the schedule described in App. J. This schedule significantly increases the agents' forward progress (Fig. 5(c)) and accelerates learning.
7 Related work
Our work builds upon earlier studies of value decomposition in the explainable reinforcement learning (XRL) literature: DrQ [19], DrSARSA [29], and HRA [31].⁹ Like our approach, these methods learn separate value function estimates for each term of a linear reward function. DrQ and HRA are off-policy Q-learning-like methods, while DrSARSA is an on-policy method. HRA does not converge to a globally optimal policy, as each value function is only locally optimal for the reward component it measures. The RDX and MSX metrics proposed by DrQ could be adopted in our setting, but the influence metric is easier to use with continuous actions and to aggregate with summary statistics. Our approach improves upon these prior contributions by: (1) working in both continuous-action and discrete-action environments; (2) allowing for and demonstrating dynamic re-weighting of reward components during training; and (3) being applicable to a family of actor-critic methods. While these approaches and ours explore using value decomposition for explainability, those works focus on describing to users why an agent took certain actions, whereas we focus on how to use value decomposition to diagnose and remedy learning challenges. The Horde architecture [34] and UVFA [30], methods for multi-goal learning, also employ multiple value functions (one for each goal). UVFAs use a parameterized continuous space of goals, while Horde makes multiple discrete value function predictions. The value functions in our work are conditioned on a policy that optimizes the global reward, whereas in Horde and UVFAs the value functions are conditioned on independent policies that greedily optimize local goals (similar to HRA). Additionally, in our approach, the composite value function can be recovered from the components. Other value decomposition work includes Empathic Q-learning [21] and Orchestrated Value Mapping [10].
The primary difference between our work and these other approaches regards the motivation and application of value decomposition. These works focus on how value decomposition can improve sample efficiency, generalization, or other aspects of the core learning problem, rather than how to diagnose and remedy problems. Mathematically, value decomposition bears resemblance to work on successor features, which has focused primarily on state representation and transfer learning [3, 4, 6, 12]. Methods for multi-objective RL [16, 28] learn sets of policies, each with a distinct linear weight over multiple reward objectives. Our work also recovers the value function for different linear reward preferences, but uses this capability to diagnose behavior and learning problems rather than to learn multiple policies.
⁹For a broad overview of XRL, we direct the reader to Heuillet et al. [17].
8 Concluding remarks
We have argued that the iterative design of reinforcement learning agents can be improved through the use of value decomposition, in which we keep individual reward components separate and learn value estimates of each. We provided a simple prescription for deriving value decomposition algorithms from actor-critic methods, and applied it to SAC to derive the SAC-D algorithm. Combined with the CAGrad method, SAC-D meets or exceeds SAC's performance in all environments we tested. We introduced the influence metric, and demonstrated its use in measuring each reward component's effect on an agent's decisions. Finally, we provided several examples of how value decomposition can diagnose and remedy agent learning problems. Although value decomposition is a simple and broadly applicable tool, we note the following limitations. (1) Our method requires a composite reward function of multiple components. (2) We only study linear reward decomposition. (3) Component predictions only describe the agent's expectations under the single learned policy; changing the weights does not tell you what to expect after re-optimizing the policy for them. (4) Our approach benefits most when used with methods that learn Q-functions. Methods that optimize the policy with empirical returns have a weaker link between agent expectations (component Q-functions) and policy decisions. The influence metric also requires a Q-function model. Our method presents the same societal benefits and risks as other RL methods. However, we believe this technology has a net beneficial impact, because making agent decisions more introspectable enables developers to catch problematic behavior before deploying the technology. One particular concern, however, is that value estimates represent an agent's beliefs, not ground truth; this should be kept in mind when such predictions are used in real-world decision-making, as they may reinforce biases or lead to incorrect conclusions.
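As a coda to the Sec. 6 weight-scheduling example: a minimal sketch of a linear ramp for the failure weight from 0.01 to 1 (the paper's actual schedule is in App. J; the linear shape here is our assumption).

```python
def failure_weight(step, total_steps, w_start=0.01, w_end=1.0):
    """Linearly ramps the failure component weight over training. The returned
    value scales that component when the composite Q is recovered for policy
    improvement; the component prediction targets themselves are unchanged
    because the weights factor out of the component Q-functions (Eq. 3)."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return w_start + frac * (w_end - w_start)
```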
1. What is the main contribution of the paper regarding generic actor-critic methods? 2. What are the strengths and weaknesses of the proposed approach, particularly in its mathematical analysis and experimental results? 3. Do you have any concerns regarding the impact of value decomposition on AC methods, such as stability, convergence, and policy changes? 4. How does the reviewer assess the presentation and organization of the paper, including the omission of key references and unclear descriptions of some components? 5. Are there any specific issues or limitations that the reviewer agrees with, beyond their rejection of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes to use value decomposition for the critic of generic actor-critic (AC) methods, and then applies it to soft actor-critic (SAC) as a particular case. The idea of value decomposition is not new and has been studied in various prior work over the past 20 years or so. Thus, the actual contribution of the paper is simply to use value decomposition in the context of AC methods.
Strengths And Weaknesses
Strengths: Nothing in particular.
Weaknesses: As the main idea of the paper is quite simple, I expected a neat mathematical analysis. For example, what is the impact of value decomposition on AC methods, and why should it be used? When and how does it change the bias/variance of the base AC method? Does using value decomposition influence convergence? etc. Re-weighting the reward components may make the algorithm completely unstable and even non-convergent. A clear analysis is required, beyond the few presented experimental results. The presentation may also be improved. This paper mostly reads like a report written around some experimental results. Minor: some missing key references for value decomposition.
Questions
The paper lacks any thorough analysis of possible issues and meaningful solutions with formal grounding. This is essential.
L127-128: "Because the component weights are factored out of the component Q-functions, they may be varied without changing the component prediction target." This is not correct. First, changing the weights will change Q and hence may change j at line 4 of Algorithm 1. Second, changing Q induces a change of policy. Notice that your value learning is on-policy, and changing Q can break that because the policy that collects the transitions becomes different from the learning policy. At the very least this can make the learning process unstable/slow, depending on how much and how fast the weights are changing. This is not a trivial issue. Note also that if Q-learning is used instead of SARSA to make it off-policy, then the entire algorithm will NOT converge to optimality due to Jensen's inequality. See reference [3] below for a detailed analysis.
L186: It is not clear why a norm is used (nor which norm it is). Is your action space multi-dimensional? If yes, you may need to describe it properly in Section 2.1.
Some references:
[1] van Seijen, H., Fatemi, M., Romoff, J. & Laroche, R. Separation of Concerns in Reinforcement Learning. arXiv [cs.LG] (2016).
[2] Fatemi, M. & Tavakoli, A. Orchestrated Value Mapping for Reinforcement Learning. In International Conference on Learning Representations (2022).
[3] Laroche, R., Fatemi, M., Romoff, J. & van Seijen, H. Multi-Advisor Reinforcement Learning. arXiv [cs.LG] (2017).
Limitations
The authors counted four limitations, which I agree with in general. However, my rejection of the paper is not grounded on these limitations.
NIPS
Title
Improved Training of Wasserstein GANs
Abstract
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.†
1 Introduction
Generative Adversarial Networks (GANs) [9] are a powerful class of generative models that cast generative modeling as a game between two networks: a generator network produces synthetic data given some noise source and a discriminator network discriminates between the generator's output and true data. GANs can produce very visually appealing samples, but are often hard to train, and much of the recent work on the subject [22, 18, 2, 20] has been devoted to finding ways of stabilizing training. Despite this, consistently stable training of GANs remains an open problem. In particular, [1] provides an analysis of the convergence properties of the value function being optimized by GANs. Their proposed alternative, named Wasserstein GAN (WGAN) [2], leverages the Wasserstein distance to produce a value function which has better theoretical properties than the original. WGAN requires that the discriminator (called the critic in that work) must lie within the space of 1-Lipschitz functions, which the authors enforce through weight clipping.
Our contributions are as follows:
1. On toy datasets, we demonstrate how critic weight clipping can lead to undesired behavior.
2. We propose gradient penalty (WGAN-GP), which does not suffer from the same problems.
3. We demonstrate stable training of varied GAN architectures, performance improvements over weight clipping, high-quality image generation, and a character-level GAN language model without any discrete sampling.
*Now at Google Brain
†Code for our models is available at https://github.com/igul222/improved_wgan_training.
2 Background
2.1 Generative adversarial networks
The GAN training strategy is to define a game between two competing networks. The generator network maps a source of noise to the input space. The discriminator network receives either a generated sample or a true data sample and must distinguish between the two. The generator is trained to fool the discriminator. Formally, the game between the generator G and the discriminator D is the minimax objective:

$$\min_G \max_D \; \mathbb{E}_{x \sim P_r}[\log(D(x))] + \mathbb{E}_{\tilde{x} \sim P_g}[\log(1 - D(\tilde{x}))], \quad (1)$$

where $P_r$ is the data distribution and $P_g$ is the model distribution implicitly defined by $\tilde{x} = G(z)$, $z \sim p(z)$ (the input z to the generator is sampled from some simple noise distribution, such as the uniform distribution or a spherical Gaussian distribution).
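A minimal logit-space sketch of the Eq. (1) losses (PyTorch; function and variable names are ours): the discriminator's loss, the saturating generator loss implied by the minimax objective, and the non-saturating variant discussed next.

```python
import torch
import torch.nn.functional as F

def gan_losses(d_real_logits, d_fake_logits):
    """Returns (d_loss, g_loss_saturating, g_loss_nonsaturating) for Eq. (1),
    written with logits for numerical stability."""
    ones = torch.ones_like(d_real_logits)
    zeros = torch.zeros_like(d_fake_logits)
    # D maximizes log D(x) + log(1 - D(G(z))): minimize the two BCE terms.
    d_loss = (F.binary_cross_entropy_with_logits(d_real_logits, ones)
              + F.binary_cross_entropy_with_logits(d_fake_logits, zeros))
    # Saturating G loss from Eq. (1): minimize log(1 - D(G(z))).
    g_sat = -F.binary_cross_entropy_with_logits(d_fake_logits, zeros)
    # Non-saturating alternative: maximize log D(G(z)).
    g_nonsat = F.binary_cross_entropy_with_logits(d_fake_logits, ones)
    return d_loss, g_sat, g_nonsat
```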
If the discriminator is trained to optimality before each generator parameter update, then minimizing the value function amounts to minimizing the Jensen-Shannon divergence between $P_r$ and $P_g$ [9], but doing so often leads to vanishing gradients as the discriminator saturates. In practice, [9] advocates that the generator be instead trained to maximize $\mathbb{E}_{\tilde{x} \sim P_g}[\log(D(\tilde{x}))]$, which goes some way to circumvent this difficulty. However, even this modified loss function can misbehave in the presence of a good discriminator [1].
2.2 Wasserstein GANs
[2] argues that the divergences which GANs typically minimize are potentially not continuous with respect to the generator's parameters, leading to training difficulty. They propose instead using the Earth-Mover (also called Wasserstein-1) distance W(q, p), which is informally defined as the minimum cost of transporting mass in order to transform the distribution q into the distribution p (where the cost is mass times transport distance). Under mild assumptions, W(q, p) is continuous everywhere and differentiable almost everywhere. The WGAN value function is constructed using the Kantorovich-Rubinstein duality [24] to obtain

$$\min_G \max_{D \in \mathcal{D}} \; \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] \quad (2)$$

where $\mathcal{D}$ is the set of 1-Lipschitz functions and $P_g$ is once again the model distribution implicitly defined by $\tilde{x} = G(z)$, $z \sim p(z)$. In that case, under an optimal discriminator (called a critic in the paper, since it's not trained to classify), minimizing the value function with respect to the generator parameters minimizes $W(P_r, P_g)$. The WGAN value function results in a critic function whose gradient with respect to its input is better behaved than its GAN counterpart, making optimization of the generator easier. Additionally, WGAN has the desirable property that its value function correlates with sample quality, which is not the case for GANs. To enforce the Lipschitz constraint on the critic, [2] propose to clip the weights of the critic to lie within a compact space $[-c, c]$. The set of functions satisfying this constraint is a subset of the k-Lipschitz functions for some k which depends on c and the critic architecture. In the following sections, we demonstrate some of the issues with this approach and propose an alternative.
2.3 Properties of the optimal WGAN critic
In order to understand why weight clipping is problematic in a WGAN critic, as well as to motivate our approach, we highlight some properties of the optimal critic in the WGAN framework. We prove these in the Appendix.
Proposition 1. Let $P_r$ and $P_g$ be two distributions in $\mathcal{X}$, a compact metric space. Then, there is a 1-Lipschitz function $f^*$ which is the optimal solution of $\max_{\|f\|_L \leq 1} \mathbb{E}_{y \sim P_r}[f(y)] - \mathbb{E}_{x \sim P_g}[f(x)]$. Let π be the optimal coupling between $P_r$ and $P_g$, defined as the minimizer of $W(P_r, P_g) = \inf_{\pi \in \Pi(P_r, P_g)} \mathbb{E}_{(x,y) \sim \pi}[\|x - y\|]$, where $\Pi(P_r, P_g)$ is the set of joint distributions $\pi(x, y)$ whose marginals are $P_r$ and $P_g$, respectively. Then, if $f^*$ is differentiable‡, $\pi(x = y) = 0$§, and $x_t = t x + (1 - t) y$ with $0 \leq t \leq 1$, it holds that

$$\mathbb{P}_{(x,y) \sim \pi}\left[\nabla f^*(x_t) = \frac{y - x_t}{\|y - x_t\|}\right] = 1.$$

Corollary 1. $f^*$ has gradient norm 1 almost everywhere under $P_r$ and $P_g$.
3 Difficulties with weight constraints
We find that weight clipping in WGAN leads to optimization difficulties, and that even when optimization succeeds the resulting critic can have a pathological value surface.
We explain these problems below and demonstrate their effects; however, we do not claim that each one always occurs in practice, nor that they are the only such mechanisms. Our experiments use the specific form of weight constraint from [2] (hard clipping of the magnitude of each weight), but we also tried other weight constraints (L2 norm clipping, weight normalization), as well as soft constraints (L1 and L2 weight decay), and found that they exhibit similar problems. To some extent these problems can be mitigated with batch normalization in the critic, which [2] use in all of their experiments. However, even with batch normalization, we observe that very deep WGAN critics often fail to converge.
3.1 Capacity underuse
Implementing a k-Lipschitz constraint via weight clipping biases the critic towards much simpler functions. As stated previously in Corollary 1, the optimal WGAN critic has unit gradient norm almost everywhere under $P_r$ and $P_g$; under a weight-clipping constraint, we observe that our neural network architectures which try to attain their maximum gradient norm k end up learning extremely simple functions. To demonstrate this, we train WGAN critics with weight clipping to optimality on several toy distributions, holding the generator distribution $P_g$ fixed at the real distribution plus unit-variance Gaussian noise. We plot value surfaces of the critics in Figure 1a. We omit batch normalization in the critic. In each case, the critic trained with weight clipping ignores higher moments of the data distribution and instead models very simple approximations to the optimal functions. In contrast, our approach does not suffer from this behavior.
‡We can actually assume much less, and talk only about directional derivatives in the direction of the line, which we show in the proof always exist. This would imply that in every point where $f^*$ is differentiable (and thus we can take gradients in a neural network setting) the statement holds.
§This assumption is in order to exclude the case when the matching point of sample x is x itself. It is satisfied in the case that $P_r$ and $P_g$ have supports that intersect in a set of measure 0, such as when they are supported by two low dimensional manifolds that don't perfectly align [1].

Algorithm 1 WGAN with gradient penalty. We use default values of λ = 10, n_critic = 5, α = 0.0001, β₁ = 0, β₂ = 0.9.
Require: The gradient penalty coefficient λ, the number of critic iterations per generator iteration n_critic, the batch size m, Adam hyperparameters α, β₁, β₂.
Require: initial critic parameters w₀, initial generator parameters θ₀.
1: while θ has not converged do
2:   for t = 1, ..., n_critic do
3:     for i = 1, ..., m do
4:       Sample real data x ∼ P_r, latent variable z ∼ p(z), a random number ε ∼ U[0, 1].
5:       x̃ ← G_θ(z)
6:       x̂ ← εx + (1 − ε)x̃
7:       L⁽ⁱ⁾ ← D_w(x̃) − D_w(x) + λ(‖∇_x̂ D_w(x̂)‖₂ − 1)²
8:     end for
9:     w ← Adam(∇_w (1/m) Σᵢ₌₁ᵐ L⁽ⁱ⁾, w, α, β₁, β₂)
10:   end for
11:   Sample a batch of latent variables {z⁽ⁱ⁾}ᵢ₌₁ᵐ ∼ p(z).
12:   θ ← Adam(∇_θ (1/m) Σᵢ₌₁ᵐ −D_w(G_θ(z⁽ⁱ⁾)), θ, α, β₁, β₂)
13: end while

3.2 Exploding and vanishing gradients
We observe that the WGAN optimization process is difficult because of interactions between the weight constraint and the cost function, which result in either vanishing or exploding gradients without careful tuning of the clipping threshold c.
To demonstrate this, we train WGAN on the Swiss Roll toy dataset, varying the clipping threshold c in $[10^{-1}, 10^{-2}, 10^{-3}]$, and plot the norm of the gradient of the critic loss with respect to successive layers of activations. Both generator and critic are 12-layer ReLU MLPs without batch normalization. Figure 1b shows that for each of these values, the gradient either grows or decays exponentially as we move farther back in the network. We find our method results in more stable gradients that neither vanish nor explode, allowing training of more complicated networks.
4 Gradient penalty
We now propose an alternative way to enforce the Lipschitz constraint. A differentiable function is 1-Lipschitz if and only if it has gradients with norm at most 1 everywhere, so we consider directly constraining the gradient norm of the critic's output with respect to its input. To circumvent tractability issues, we enforce a soft version of the constraint with a penalty on the gradient norm for random samples $\hat{x} \sim P_{\hat{x}}$. Our new objective is

$$L = \underbrace{\mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] - \mathbb{E}_{x \sim P_r}[D(x)]}_{\text{Original critic loss}} + \underbrace{\lambda \, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\left[\left(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1\right)^2\right]}_{\text{Our gradient penalty}}. \quad (3)$$

Sampling distribution. We implicitly define $P_{\hat{x}}$ by sampling uniformly along straight lines between pairs of points sampled from the data distribution $P_r$ and the generator distribution $P_g$. This is motivated by the fact that the optimal critic contains straight lines with gradient norm 1 connecting coupled points from $P_r$ and $P_g$ (see Proposition 1). Given that enforcing the unit gradient norm constraint everywhere is intractable, enforcing it only along these straight lines seems sufficient and experimentally results in good performance.
Penalty coefficient. All experiments in this paper use λ = 10, which we found to work well across a variety of architectures and datasets ranging from toy tasks to large ImageNet CNNs.
No critic batch normalization. Most prior GAN implementations [21, 22, 2] use batch normalization in both the generator and the discriminator to help stabilize training, but batch normalization changes the form of the discriminator's problem from mapping a single input to a single output to mapping from an entire batch of inputs to a batch of outputs [22]. Our penalized training objective is no longer valid in this setting, since we penalize the norm of the critic's gradient with respect to each input independently, and not the entire batch. To resolve this, we simply omit batch normalization in the critic in our models, finding that they perform well without it. Our method works with normalization schemes which don't introduce correlations between examples. In particular, we recommend layer normalization [3] as a drop-in replacement for batch normalization.
Two-sided penalty. We encourage the norm of the gradient to go towards 1 (two-sided penalty) instead of just staying below 1 (one-sided penalty). Empirically this seems not to constrain the critic too much, likely because the optimal WGAN critic anyway has gradients with norm 1 almost everywhere under $P_r$ and $P_g$ and in large portions of the region in between (see subsection 2.3). In our early observations we found this to perform slightly better, but we don't investigate this fully. We describe experiments on the one-sided penalty in the appendix.
5 Experiments
5.1 Training random architectures within a set
We experimentally demonstrate our model's ability to train a large number of architectures which we think are useful to be able to train.
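Before the architecture study, a minimal PyTorch sketch of the penalty from Eq. (3) and Algorithm 1, lines 4-7 (function and variable names are ours; the authors' official code is at the repository linked in the footnote).

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """Computes lam * E[(||grad_xhat D(xhat)||_2 - 1)^2], with xhat sampled
    uniformly on straight lines between real and fake samples."""
    eps_shape = (real.size(0),) + (1,) * (real.dim() - 1)
    eps = torch.rand(eps_shape, device=real.device)   # one epsilon per sample
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_hat = critic(x_hat)
    # create_graph=True so the penalty itself can be backpropagated through.
    grad, = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)
    grad_norm = grad.flatten(start_dim=1).norm(dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```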
Starting from the DCGAN architecture, we define a set of architecture variants by changing model settings to random corresponding values in Table 1. We believe that reliable training of many of the architectures in this set is a useful goal, but we do not claim that our set is an unbiased or representative sample of the whole space of useful architectures: it is designed to demonstrate a successful regime of our method, and readers should evaluate whether it contains architectures similar to their intended application. From this set, we sample 200 architectures and train each on 32×32 ImageNet with both WGAN-GP and the standard GAN objectives. Table 2 lists the number of instances where either: only the standard GAN succeeded, only WGAN-GP succeeded, both succeeded, or both failed, where success is defined as inception score > min score. For most choices of score threshold, WGAN-GP successfully trains many architectures from this set which we were unable to train with the standard GAN objective.
5.2 Training varied architectures on LSUN bedrooms
To demonstrate our model's ability to train many architectures with its default settings, we train six different GAN architectures on the LSUN bedrooms dataset [30]. In addition to the baseline DCGAN architecture from [21], we choose six architectures whose successful training we demonstrate: (1) no BN and a constant number of filters in the generator, as in [2]; (2) 4-layer 512-dim ReLU MLP generator, as in [2]; (3) no normalization in either the discriminator or generator; (4) gated multiplicative nonlinearities, as in [23]; (5) tanh nonlinearities; and (6) 101-layer ResNet generator and discriminator. Although we do not claim it is impossible without our method, to the best of our knowledge this is the first time very deep residual networks were successfully trained in a GAN setting. For each architecture, we train models using four different GAN methods: WGAN-GP, WGAN with weight clipping, DCGAN [21], and Least-Squares GAN [17]. For each objective, we used the default set of optimizer hyperparameters recommended in that work (except LSGAN, where we searched over learning rates). For WGAN-GP, we replace any batch normalization in the discriminator with layer normalization (see Section 4). We train each model for 200K iterations and present samples in Figure 2. We only succeeded in training every architecture with a shared set of hyperparameters using WGAN-GP. For every other training method, some of these architectures were unstable or suffered from mode collapse.
5.3 Improved performance over weight clipping
One advantage of our method over weight clipping is improved training speed and sample quality. To demonstrate this, we train WGANs with weight clipping and our gradient penalty on CIFAR-10 [13] and plot Inception scores [22] over the course of training in Figure 3. For WGAN-GP, we train one model with the same optimizer (RMSProp) and learning rate as WGAN with weight clipping, and another model with Adam and a higher learning rate. Even with the same optimizer, our method converges faster and to a better score than weight clipping. Using Adam further improves performance. We also plot the performance of DCGAN [21] and find that our method converges more slowly (in wall-clock time) than DCGAN, but its score is more stable at convergence.
5.4 Sample quality on CIFAR-10 and LSUN bedrooms
For equivalent architectures, our method achieves comparable sample quality to the standard GAN objective.
However, the increased stability allows us to improve sample quality by exploring a wider range of architectures. To demonstrate this, we find an architecture which establishes a new state-of-the-art Inception score on unsupervised CIFAR-10 (Table 3). When we add label information (using the method in [19]), the same architecture outperforms all other published models except for SGAN. We also train a deep ResNet on 128×128 LSUN bedrooms and show samples in Figure 4. We believe these samples are at least competitive with the best reported so far on any resolution for this dataset.
5.5 Modeling discrete data with a continuous generator
To demonstrate our method's ability to model degenerate distributions, we consider the problem of modeling a complex discrete distribution with a GAN whose generator is defined over a continuous space. As an instance of this problem, we train a character-level GAN language model on the Google Billion Word dataset [6]. Our generator is a simple 1D CNN which deterministically transforms a latent vector into a sequence of 32 one-hot character vectors through 1D convolutions. We apply a softmax nonlinearity at the output, but use no sampling step: during training, the softmax output is passed directly into the critic (which, likewise, is a simple 1D CNN). When decoding samples, we just take the argmax of each output vector. We present samples from the model in Table 4. Our model makes frequent spelling errors (likely because it has to output each character independently) but nonetheless manages to learn quite a lot about the statistics of language. We were unable to produce comparable results with the standard GAN objective, though we do not claim that doing so is impossible.
Table 4: Samples from a WGAN character-level language model trained with our method on sentences from the Billion Word dataset, truncated to 32 characters. The model learns to directly output one-hot character embeddings from a latent vector without any discrete sampling step. We were unable to achieve comparable results with the standard GAN objective and a continuous generator.
WGAN with gradient penalty (1D CNN)
Busino game camperate spent odea
Solice Norkedin pring in since
In the bankaway of smarling the
ThiS record ( 31. ) UBS ) and Ch
SingersMay , who kill that imvic
It was not the annuas were plogr
Keray Pents of the same Reagun D
This will be us , the ect of DAN
Manging include a tudancs shat "
These leaded as most-worsd p2 a0
His Zuith Dudget , the Denmbern
The time I paidOa South Cubry i
In during the Uitational questio
Dour Fraps higs it was these del
Divos from The ' noth ronkies of
This year out howneed allowed lo
She like Monday , of macunsuer S
Kaulna Seto consficutes to repor
The difference in performance between WGAN and other GANs can be explained as follows. Consider the simplex $\Delta_n = \{p \in \mathbb{R}^n : p_i \geq 0, \sum_i p_i = 1\}$, and the set of vertices on the simplex (or one-hot vectors) $V_n = \{p \in \mathbb{R}^n : p_i \in \{0, 1\}, \sum_i p_i = 1\} \subseteq \Delta_n$. If we have a vocabulary of size n and we have a distribution $P_r$ over sequences of size T, we have that $P_r$ is a distribution on $V_n^T = V_n \times \cdots \times V_n$. Since $V_n^T$ is a subset of $\Delta_n^T$, we can also treat $P_r$ as a distribution on $\Delta_n^T$ (by assigning zero probability mass to all points not in $V_n^T$). $P_r$ is discrete (or supported on a finite number of elements, namely $V_n^T$) on $\Delta_n^T$, but $P_g$ can easily be a continuous distribution over $\Delta_n^T$. The KL divergences between two such distributions are infinite, and so the JS divergence is saturated.
In practice, this means a discriminator might quickly learn to reject all samples that don’t lie on $V_n^T$ (sequences of one-hot vectors) and give meaningless gradients to the generator. However, it is easily seen that the conditions of Theorem 1 and Corollary 1 of [2] are satisfied even on this non-standard learning scenario with $\mathcal{X} = \Delta_n^T$. This means that $W(P_r, P_g)$ is still well defined, continuous everywhere and differentiable almost everywhere, and we can optimize it just like in any other continuous variable setting. The way this manifests is that in WGANs, the Lipschitz constraint forces the critic to provide a linear gradient from all of $\Delta_n^T$ towards the real points in $V_n^T$. Other attempts at language modeling with GANs [31, 14, 29, 5, 15, 10] typically use discrete models and gradient estimators [27, 12, 16]. Our approach is simpler to implement, though whether it scales beyond a toy language model is unclear.

5.6 Meaningful loss curves and detecting overfitting

An important benefit of weight-clipped WGANs is that their loss correlates with sample quality and converges toward a minimum. To show that our method preserves this property, we train a WGAN-GP on the LSUN bedrooms dataset [30] and plot the negative of the critic’s loss in Figure 5a. We see that the loss converges as the generator minimizes $W(P_r, P_g)$. GANs, like all models trained on limited data, will eventually overfit. To explore the loss curve’s behavior when the network overfits, we train large unregularized WGANs on a random 1000-image subset of MNIST and plot the negative critic loss on both the training and validation sets in Figure 5b. In both WGAN and WGAN-GP, the two losses diverge, suggesting that the critic overfits and provides an inaccurate estimate of $W(P_r, P_g)$, at which point all bets are off regarding correlation with sample quality. However, in WGAN-GP, the training loss gradually increases even while the validation loss drops. [28] also measure overfitting in GANs by estimating the generator’s log-likelihood. Compared to that work, our method detects overfitting in the critic (rather than the generator) and measures overfitting against the same loss that the network minimizes.

6 Conclusion

In this work, we demonstrated problems with weight clipping in WGAN and introduced an alternative in the form of a penalty term in the critic loss which does not exhibit the same problems. Using our method, we demonstrated strong modeling performance and stability across a variety of architectures. Now that we have a more stable algorithm for training GANs, we hope our work opens the path for stronger modeling performance on large-scale image datasets and language. Another interesting direction is adapting our penalty term to the standard GAN objective function, where it might stabilize training by encouraging the discriminator to learn smoother decision boundaries.

Acknowledgements

We would like to thank Mohamed Ishmael Belghazi, Léon Bottou, Zihang Dai, Stefan Doerr, Ian Goodfellow, Kyle Kastner, Kundan Kumar, Luke Metz, Alec Radford, Sai Rajeshwar, Aditya Ramesh, Tom Sercu, Zain Shah and Jake Zhao for insightful comments.
1. What are the strengths and weaknesses of the paper regarding its exposition, claims, and technical contribution?
2. How can the authors improve the evaluation of their approach, particularly in terms of numerical assessment and a user study?
3. What are your concerns regarding the misleading claims in the paper, and how can the authors address them?
4. How can the authors effectively respond to the criticism regarding the second derivative of ReLUs?
5. Can the authors provide more concrete evidence to support their argument that prettier pictures are better?
Review
Review
+ good results
+ pretty pictures
- exposition could be improved
- claims are not backed up
- minor technical contribution
- insufficient evaluation

Let me first say that the visual results of this paper are great. The proposed algorithm produces pretty pictures of bedrooms and tiny CIFAR images. Unfortunately the rest of the paper needs to be significantly improved for acceptance at NIPS.

The exposition and structure of the paper are below NIPS standards. For example L3: "..., but can still generate low-quality samples or fail to converge in some settings." I'm assuming the authors mean "can only..."? L6: "pathological behavior". Pathological seems the wrong word here... Sections 3 and 4 should be flipped. This way the exposition doesn't need to refer to a method that was not introduced yet.

Secondly, the paper is full of claims that are not backed up. The reviewer recommends that the authors either remove these claims or experimentally or theoretically back them up. Here are a few: L101: Prove that only simple functions have gradient norm 1 almost everywhere; note that "almost everywhere" is important here. There can be exponentially many parts of the landscape that are disconnected by gradient norm != 1 parts. If a proof is not possible, please remove this claim. L144: Two-sided penalty works better. Please show this in experiments.

The actual technical contribution seems like a small modification of WGAN. The box constraint is relaxed to an L2 norm on gradients.

Finally the evaluation is insufficient. The paper shows some pretty generation results for various hyper-parameter settings, but doesn't show any concrete numeric evaluation, apart from the easy-to-cheat and fairly meaningless Inception score. The authors should perform a thorough user study to validate their approach, especially in light of a slim technical contribution.

Minor: How is (3) efficiently evaluated? Did the authors have to modify specific deep learning packages to compute the gradient of the gradient? How is this second derivative defined for ReLUs?

Post rebuttal: The authors answered some of my concerns in the rebuttal. I'm glad the authors promised to improve the writing and remove the misleading claims. However the rebuttal didn't fully convince me, see below. I'm still a bit hesitant to accept the argument that prettier pictures are better; the authors' rebuttal didn't change much in that regard. I'm sure some image filter or denoising CNN could equally improve the image quality. I would strongly suggest that the authors find a way to quantitatively evaluate their method (other than the Inception score). I'm also fairly certain that the argument about the second derivative of ReLUs is wrong. The second derivative of a ReLU is not defined, as far as I know. The authors mention it's all zero, but that would make it a linear function (and a ReLU is not linear)... The reviewers had an extensive discussion on this and we agree that for ReLU networks the objective is not continuous, hence the derivative of the objective does not point downhill at all times. There are a few ways the authors should address this:
1. Try twice-differentiable non-linearities (elu, tanh, or log(1+exp(x))).
2. Show that treating the ReLU as a linear function (second derivative of zero) does give a good enough gradient estimate, for example through finite differences, or by verifying how often the "gradient" points downhill.
NIPS
Title
Improved Training of Wasserstein GANs

Abstract
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.†

1 Introduction

Generative Adversarial Networks (GANs) [9] are a powerful class of generative models that cast generative modeling as a game between two networks: a generator network produces synthetic data given some noise source and a discriminator network discriminates between the generator’s output and true data. GANs can produce very visually appealing samples, but are often hard to train, and much of the recent work on the subject [22, 18, 2, 20] has been devoted to finding ways of stabilizing training. Despite this, consistently stable training of GANs remains an open problem. In particular, [1] provides an analysis of the convergence properties of the value function being optimized by GANs. Their proposed alternative, named Wasserstein GAN (WGAN) [2], leverages the Wasserstein distance to produce a value function which has better theoretical properties than the original. WGAN requires that the discriminator (called the critic in that work) must lie within the space of 1-Lipschitz functions, which the authors enforce through weight clipping.

Our contributions are as follows:
1. On toy datasets, we demonstrate how critic weight clipping can lead to undesired behavior.
2. We propose gradient penalty (WGAN-GP), which does not suffer from the same problems.
3. We demonstrate stable training of varied GAN architectures, performance improvements over weight clipping, high-quality image generation, and a character-level GAN language model without any discrete sampling.

*Now at Google Brain
†Code for our models is available at https://github.com/igul222/improved_wgan_training.

2 Background

2.1 Generative adversarial networks

The GAN training strategy is to define a game between two competing networks. The generator network maps a source of noise to the input space. The discriminator network receives either a generated sample or a true data sample and must distinguish between the two. The generator is trained to fool the discriminator. Formally, the game between the generator $G$ and the discriminator $D$ is the minimax objective:

$$\min_G \max_D \; \mathbb{E}_{x \sim P_r}[\log(D(x))] + \mathbb{E}_{\tilde{x} \sim P_g}[\log(1 - D(\tilde{x}))], \quad (1)$$

where $P_r$ is the data distribution and $P_g$ is the model distribution implicitly defined by $\tilde{x} = G(z)$, $z \sim p(z)$ (the input $z$ to the generator is sampled from some simple noise distribution, such as the uniform distribution or a spherical Gaussian distribution).
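As a minimal illustration of Eq. (1) (our own sketch, not code from the paper), both sides of the minimax game can be expressed with binary cross-entropy over raw discriminator logits; the function and variable names here are our choices:

```python
import torch
import torch.nn.functional as F

def gan_minimax_losses(d_real_logits, d_fake_logits):
    # Discriminator: maximize log D(x) + log(1 - D(G(z))), i.e. minimize the
    # negative, which is exactly BCE with targets 1 (real) and 0 (fake).
    ones, zeros = torch.ones_like(d_real_logits), torch.zeros_like(d_fake_logits)
    d_loss = (F.binary_cross_entropy_with_logits(d_real_logits, ones)
              + F.binary_cross_entropy_with_logits(d_fake_logits, zeros))
    # Generator, saturating form of Eq. (1): minimize log(1 - D(G(z))),
    # which equals the negative of BCE against the "fake" target.
    g_loss = -F.binary_cross_entropy_with_logits(d_fake_logits, zeros)
    return d_loss, g_loss
```

In practice the non-saturating generator loss discussed next (maximizing $\log D(\tilde{x})$ instead) is usually substituted for the generator term.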
If the discriminator is trained to optimality before each generator parameter update, then minimizing the value function amounts to minimizing the Jensen-Shannon divergence between $P_r$ and $P_g$ [9], but doing so often leads to vanishing gradients as the discriminator saturates. In practice, [9] advocates that the generator be instead trained to maximize $\mathbb{E}_{\tilde{x} \sim P_g}[\log(D(\tilde{x}))]$, which goes some way to circumvent this difficulty. However, even this modified loss function can misbehave in the presence of a good discriminator [1].

2.2 Wasserstein GANs

[2] argues that the divergences which GANs typically minimize are potentially not continuous with respect to the generator’s parameters, leading to training difficulty. They propose instead using the Earth-Mover (also called Wasserstein-1) distance $W(q, p)$, which is informally defined as the minimum cost of transporting mass in order to transform the distribution $q$ into the distribution $p$ (where the cost is mass times transport distance). Under mild assumptions, $W(q, p)$ is continuous everywhere and differentiable almost everywhere. The WGAN value function is constructed using the Kantorovich-Rubinstein duality [24] to obtain

$$\min_G \max_{D \in \mathcal{D}} \; \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] \quad (2)$$

where $\mathcal{D}$ is the set of 1-Lipschitz functions and $P_g$ is once again the model distribution implicitly defined by $\tilde{x} = G(z)$, $z \sim p(z)$. In that case, under an optimal discriminator (called a critic in the paper, since it’s not trained to classify), minimizing the value function with respect to the generator parameters minimizes $W(P_r, P_g)$. The WGAN value function results in a critic function whose gradient with respect to its input is better behaved than its GAN counterpart, making optimization of the generator easier. Additionally, WGAN has the desirable property that its value function correlates with sample quality, which is not the case for GANs. To enforce the Lipschitz constraint on the critic, [2] propose to clip the weights of the critic to lie within a compact space $[-c, c]$. The set of functions satisfying this constraint is a subset of the $k$-Lipschitz functions for some $k$ which depends on $c$ and the critic architecture. In the following sections, we demonstrate some of the issues with this approach and propose an alternative.

2.3 Properties of the optimal WGAN critic

In order to understand why weight clipping is problematic in a WGAN critic, as well as to motivate our approach, we highlight some properties of the optimal critic in the WGAN framework. We prove these in the Appendix.

Proposition 1. Let $P_r$ and $P_g$ be two distributions in $\mathcal{X}$, a compact metric space. Then, there is a 1-Lipschitz function $f^*$ which is the optimal solution of $\max_{\|f\|_L \le 1} \mathbb{E}_{y \sim P_r}[f(y)] - \mathbb{E}_{x \sim P_g}[f(x)]$. Let $\pi$ be the optimal coupling between $P_r$ and $P_g$, defined as the minimizer of $W(P_r, P_g) = \inf_{\pi \in \Pi(P_r, P_g)} \mathbb{E}_{(x,y) \sim \pi}[\|x - y\|]$, where $\Pi(P_r, P_g)$ is the set of joint distributions $\pi(x, y)$ whose marginals are $P_r$ and $P_g$, respectively. Then, if $f^*$ is differentiable‡, $\pi(x = y) = 0$§, and $x_t = t x + (1 - t) y$ with $0 \le t \le 1$, it holds that $\mathbb{P}_{(x,y) \sim \pi}\big[\nabla f^*(x_t) = \frac{y - x_t}{\|y - x_t\|}\big] = 1$.

Corollary 1. $f^*$ has gradient norm 1 almost everywhere under $P_r$ and $P_g$.

3 Difficulties with weight constraints

We find that weight clipping in WGAN leads to optimization difficulties, and that even when optimization succeeds the resulting critic can have a pathological value surface.
We explain these problems below and demonstrate their effects; however we do not claim that each one always occurs in practice, nor that they are the only such mechanisms. Our experiments use the specific form of weight constraint from [2] (hard clipping of the magnitude of each weight), but we also tried other weight constraints (L2 norm clipping, weight normalization), as well as soft constraints (L1 and L2 weight decay) and found that they exhibit similar problems. To some extent these problems can be mitigated with batch normalization in the critic, which [2] use in all of their experiments. However, even with batch normalization, we observe that very deep WGAN critics often fail to converge.

3.1 Capacity underuse

Implementing a $k$-Lipschitz constraint via weight clipping biases the critic towards much simpler functions. As stated previously in Corollary 1, the optimal WGAN critic has unit gradient norm almost everywhere under $P_r$ and $P_g$; under a weight-clipping constraint, we observe that our neural network architectures which try to attain their maximum gradient norm $k$ end up learning extremely simple functions. To demonstrate this, we train WGAN critics with weight clipping to optimality on several toy distributions, holding the generator distribution $P_g$ fixed at the real distribution plus unit-variance Gaussian noise. We plot value surfaces of the critics in Figure 1a. We omit batch normalization in the critic. In each case, the critic trained with weight clipping ignores higher moments of the data distribution and instead models very simple approximations to the optimal functions. In contrast, our approach does not suffer from this behavior.

‡ We can actually assume much less, and talk only about directional derivatives on the direction of the line, which we show in the proof always exist. This would imply that in every point where $f^*$ is differentiable (and thus we can take gradients in a neural network setting) the statement holds.
§ This assumption is in order to exclude the case when the matching point of sample $x$ is $x$ itself. It is satisfied in the case that $P_r$ and $P_g$ have supports that intersect in a set of measure 0, such as when they are supported by two low dimensional manifolds that don’t perfectly align [1].

Algorithm 1 WGAN with gradient penalty. We use default values of $\lambda = 10$, $n_{\text{critic}} = 5$, $\alpha = 0.0001$, $\beta_1 = 0$, $\beta_2 = 0.9$.
Require: The gradient penalty coefficient $\lambda$, the number of critic iterations per generator iteration $n_{\text{critic}}$, the batch size $m$, Adam hyperparameters $\alpha$, $\beta_1$, $\beta_2$.
Require: Initial critic parameters $w_0$, initial generator parameters $\theta_0$.
1: while $\theta$ has not converged do
2:   for $t = 1, \ldots, n_{\text{critic}}$ do
3:     for $i = 1, \ldots, m$ do
4:       Sample real data $x \sim P_r$, latent variable $z \sim p(z)$, a random number $\epsilon \sim U[0, 1]$.
5:       $\tilde{x} \leftarrow G_\theta(z)$
6:       $\hat{x} \leftarrow \epsilon x + (1 - \epsilon)\tilde{x}$
7:       $L^{(i)} \leftarrow D_w(\tilde{x}) - D_w(x) + \lambda (\|\nabla_{\hat{x}} D_w(\hat{x})\|_2 - 1)^2$
8:     end for
9:     $w \leftarrow \mathrm{Adam}(\nabla_w \frac{1}{m} \sum_{i=1}^m L^{(i)}, w, \alpha, \beta_1, \beta_2)$
10:   end for
11:   Sample a batch of latent variables $\{z^{(i)}\}_{i=1}^m \sim p(z)$.
12:   $\theta \leftarrow \mathrm{Adam}(\nabla_\theta \frac{1}{m} \sum_{i=1}^m -D_w(G_\theta(z^{(i)})), \theta, \alpha, \beta_1, \beta_2)$
13: end while

3.2 Exploding and vanishing gradients

We observe that the WGAN optimization process is difficult because of interactions between the weight constraint and the cost function, which result in either vanishing or exploding gradients without careful tuning of the clipping threshold $c$.
To demonstrate this, we train WGAN on the Swiss Roll toy dataset, varying the clipping threshold $c$ in $[10^{-1}, 10^{-2}, 10^{-3}]$, and plot the norm of the gradient of the critic loss with respect to successive layers of activations. Both generator and critic are 12-layer ReLU MLPs without batch normalization. Figure 1b shows that for each of these values, the gradient either grows or decays exponentially as we move farther back in the network. We find our method results in more stable gradients that neither vanish nor explode, allowing training of more complicated networks.

4 Gradient penalty

We now propose an alternative way to enforce the Lipschitz constraint. A differentiable function is 1-Lipschitz if and only if it has gradients with norm at most 1 everywhere, so we consider directly constraining the gradient norm of the critic’s output with respect to its input. To circumvent tractability issues, we enforce a soft version of the constraint with a penalty on the gradient norm for random samples $\hat{x} \sim P_{\hat{x}}$. Our new objective is

$$L = \underbrace{\mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] - \mathbb{E}_{x \sim P_r}[D(x)]}_{\text{Original critic loss}} + \underbrace{\lambda\, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]}_{\text{Our gradient penalty}}. \quad (3)$$

Sampling distribution. We implicitly define $P_{\hat{x}}$ by sampling uniformly along straight lines between pairs of points sampled from the data distribution $P_r$ and the generator distribution $P_g$. This is motivated by the fact that the optimal critic contains straight lines with gradient norm 1 connecting coupled points from $P_r$ and $P_g$ (see Proposition 1). Given that enforcing the unit gradient norm constraint everywhere is intractable, enforcing it only along these straight lines seems sufficient and experimentally results in good performance.

Penalty coefficient. All experiments in this paper use $\lambda = 10$, which we found to work well across a variety of architectures and datasets ranging from toy tasks to large ImageNet CNNs.

No critic batch normalization. Most prior GAN implementations [21, 22, 2] use batch normalization in both the generator and the discriminator to help stabilize training, but batch normalization changes the form of the discriminator’s problem from mapping a single input to a single output to mapping from an entire batch of inputs to a batch of outputs [22]. Our penalized training objective is no longer valid in this setting, since we penalize the norm of the critic’s gradient with respect to each input independently, and not the entire batch. To resolve this, we simply omit batch normalization in the critic in our models, finding that they perform well without it. Our method works with normalization schemes which don’t introduce correlations between examples. In particular, we recommend layer normalization [3] as a drop-in replacement for batch normalization.

Two-sided penalty. We encourage the norm of the gradient to go towards 1 (two-sided penalty) instead of just staying below 1 (one-sided penalty). Empirically this seems not to constrain the critic too much, likely because the optimal WGAN critic anyway has gradients with norm 1 almost everywhere under $P_r$ and $P_g$ and in large portions of the region in between (see subsection 2.3). In our early observations we found this to perform slightly better, but we don’t investigate this fully. We describe experiments on the one-sided penalty in the appendix.
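The following is an illustrative PyTorch sketch of the full critic loss in Eq. (3), not the authors' released implementation: `critic` stands for any callable network, the interpolation realizes the sampling distribution $P_{\hat{x}}$, and `create_graph=True` makes the penalty itself differentiable (the "gradient of the gradient") so it can be backpropagated through.

```python
import torch

def critic_loss_with_gp(critic, real, fake, lam=10.0):
    # Original critic loss: E[D(x~)] - E[D(x)]. For critic updates, `fake`
    # is assumed to be detached from the generator graph.
    wgan_term = critic(fake).mean() - critic(real).mean()

    # Sample x_hat uniformly on straight lines between paired real/fake points,
    # one interpolation coefficient per sample (line 6 of Algorithm 1).
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).detach().requires_grad_(True)

    # Gradient of the critic's output with respect to its input.
    grads = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)

    # Two-sided penalty: push the gradient norm towards 1, not merely below it.
    return wgan_term + lam * ((grad_norm - 1) ** 2).mean()
```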
5 Experiments

5.1 Training random architectures within a set

We experimentally demonstrate our model’s ability to train a large number of architectures which we think are useful to be able to train. Starting from the DCGAN architecture, we define a set of architecture variants by changing model settings to random corresponding values in Table 1. We believe that reliable training of many of the architectures in this set is a useful goal, but we do not claim that our set is an unbiased or representative sample of the whole space of useful architectures: it is designed to demonstrate a successful regime of our method, and readers should evaluate whether it contains architectures similar to their intended application. From this set, we sample 200 architectures and train each on 32×32 ImageNet with both WGAN-GP and the standard GAN objectives. Table 2 lists the number of instances where either: only the standard GAN succeeded, only WGAN-GP succeeded, both succeeded, or both failed, where success is defined as Inception score > min_score. For most choices of score threshold, WGAN-GP successfully trains many architectures from this set which we were unable to train with the standard GAN objective.

5.2 Training varied architectures on LSUN bedrooms

To demonstrate our model’s ability to train many architectures with its default settings, we train six different GAN architectures on the LSUN bedrooms dataset [30]. In addition to the baseline DCGAN architecture from [21], we choose six architectures whose successful training we demonstrate: (1) no BN and a constant number of filters in the generator, as in [2], (2) 4-layer 512-dim ReLU MLP generator, as in [2], (3) no normalization in either the discriminator or generator, (4) gated multiplicative nonlinearities, as in [23], (5) tanh nonlinearities, and (6) 101-layer ResNet generator and discriminator. Although we do not claim it is impossible without our method, to the best of our knowledge this is the first time very deep residual networks were successfully trained in a GAN setting. For each architecture, we train models using four different GAN methods: WGAN-GP, WGAN with weight clipping, DCGAN [21], and Least-Squares GAN [17]. For each objective, we used the default set of optimizer hyperparameters recommended in that work (except LSGAN, where we searched over learning rates). For WGAN-GP, we replace any batch normalization in the discriminator with layer normalization (see Section 4). We train each model for 200K iterations and present samples in Figure 2. We only succeeded in training every architecture with a shared set of hyperparameters using WGAN-GP. For every other training method, some of these architectures were unstable or suffered from mode collapse.

5.3 Improved performance over weight clipping

One advantage of our method over weight clipping is improved training speed and sample quality. To demonstrate this, we train WGANs with weight clipping and our gradient penalty on CIFAR-10 [13] and plot Inception scores [22] over the course of training in Figure 3. For WGAN-GP, we train one model with the same optimizer (RMSProp) and learning rate as WGAN with weight clipping, and another model with Adam and a higher learning rate. Even with the same optimizer, our method converges faster and to a better score than weight clipping. Using Adam further improves performance. We also plot the performance of DCGAN [21] and find that our method converges more slowly (in wall-clock time) than DCGAN, but its score is more stable at convergence.

5.4 Sample quality on CIFAR-10 and LSUN bedrooms

For equivalent architectures, our method achieves comparable sample quality to the standard GAN objective.
However, the increased stability allows us to improve sample quality by exploring a wider range of architectures. To demonstrate this, we find an architecture which establishes a new state-of-the-art Inception score on unsupervised CIFAR-10 (Table 3). When we add label information (using the method in [19]), the same architecture outperforms all other published models except for SGAN. We also train a deep ResNet on 128×128 LSUN bedrooms and show samples in Figure 4. We believe these samples are at least competitive with the best reported so far on any resolution for this dataset.

5.5 Modeling discrete data with a continuous generator

To demonstrate our method’s ability to model degenerate distributions, we consider the problem of modeling a complex discrete distribution with a GAN whose generator is defined over a continuous space. As an instance of this problem, we train a character-level GAN language model on the Google Billion Word dataset [6]. Our generator is a simple 1D CNN which deterministically transforms a latent vector into a sequence of 32 one-hot character vectors through 1D convolutions. We apply a softmax nonlinearity at the output, but use no sampling step: during training, the softmax output is passed directly into the critic (which, likewise, is a simple 1D CNN). When decoding samples, we just take the argmax of each output vector. We present samples from the model in Table 4. Our model makes frequent spelling errors (likely because it has to output each character independently) but nonetheless manages to learn quite a lot about the statistics of language. We were unable to produce comparable results with the standard GAN objective, though we do not claim that doing so is impossible.

Table 4: Samples from a WGAN character-level language model trained with our method on sentences from the Billion Word dataset, truncated to 32 characters. The model learns to directly output one-hot character embeddings from a latent vector without any discrete sampling step. We were unable to achieve comparable results with the standard GAN objective and a continuous generator.

WGAN with gradient penalty (1D CNN)
Busino game camperate spent odea
Solice Norkedin pring in since
In the bankaway of smarling the
ThiS record ( 31. ) UBS ) and Ch
SingersMay , who kill that imvic
It was not the annuas were plogr
Keray Pents of the same Reagun D
This will be us , the ect of DAN
Manging include a tudancs shat "
These leaded as most-worsd p2 a0
His Zuith Dudget , the Denmbern
The time I paidOa South Cubry i
In during the Uitational questio
Dour Fraps higs it was these del
Divos from The ’ noth ronkies of
This year out howneed allowed lo
She like Monday , of macunsuer S
Kaulna Seto consficutes to repor

The difference in performance between WGAN and other GANs can be explained as follows. Consider the simplex $\Delta_n = \{p \in \mathbb{R}^n : p_i \ge 0, \sum_i p_i = 1\}$, and the set of vertices on the simplex (or one-hot vectors) $V_n = \{p \in \mathbb{R}^n : p_i \in \{0, 1\}, \sum_i p_i = 1\} \subseteq \Delta_n$. If we have a vocabulary of size $n$ and we have a distribution $P_r$ over sequences of size $T$, we have that $P_r$ is a distribution on $V_n^T = V_n \times \cdots \times V_n$. Since $V_n^T$ is a subset of $\Delta_n^T$, we can also treat $P_r$ as a distribution on $\Delta_n^T$ (by assigning zero probability mass to all points not in $V_n^T$). $P_r$ is discrete (or supported on a finite number of elements, namely $V_n^T$) on $\Delta_n^T$, but $P_g$ can easily be a continuous distribution over $\Delta_n^T$. The KL divergences between two such distributions are infinite, and so the JS divergence is saturated.
In practice, this means a discriminator might quickly learn to reject all samples that don’t lie on $V_n^T$ (sequences of one-hot vectors) and give meaningless gradients to the generator. However, it is easily seen that the conditions of Theorem 1 and Corollary 1 of [2] are satisfied even on this non-standard learning scenario with $\mathcal{X} = \Delta_n^T$. This means that $W(P_r, P_g)$ is still well defined, continuous everywhere and differentiable almost everywhere, and we can optimize it just like in any other continuous variable setting. The way this manifests is that in WGANs, the Lipschitz constraint forces the critic to provide a linear gradient from all of $\Delta_n^T$ towards the real points in $V_n^T$. Other attempts at language modeling with GANs [31, 14, 29, 5, 15, 10] typically use discrete models and gradient estimators [27, 12, 16]. Our approach is simpler to implement, though whether it scales beyond a toy language model is unclear.

5.6 Meaningful loss curves and detecting overfitting

An important benefit of weight-clipped WGANs is that their loss correlates with sample quality and converges toward a minimum. To show that our method preserves this property, we train a WGAN-GP on the LSUN bedrooms dataset [30] and plot the negative of the critic’s loss in Figure 5a. We see that the loss converges as the generator minimizes $W(P_r, P_g)$. GANs, like all models trained on limited data, will eventually overfit. To explore the loss curve’s behavior when the network overfits, we train large unregularized WGANs on a random 1000-image subset of MNIST and plot the negative critic loss on both the training and validation sets in Figure 5b. In both WGAN and WGAN-GP, the two losses diverge, suggesting that the critic overfits and provides an inaccurate estimate of $W(P_r, P_g)$, at which point all bets are off regarding correlation with sample quality. However, in WGAN-GP, the training loss gradually increases even while the validation loss drops. [28] also measure overfitting in GANs by estimating the generator’s log-likelihood. Compared to that work, our method detects overfitting in the critic (rather than the generator) and measures overfitting against the same loss that the network minimizes.

6 Conclusion

In this work, we demonstrated problems with weight clipping in WGAN and introduced an alternative in the form of a penalty term in the critic loss which does not exhibit the same problems. Using our method, we demonstrated strong modeling performance and stability across a variety of architectures. Now that we have a more stable algorithm for training GANs, we hope our work opens the path for stronger modeling performance on large-scale image datasets and language. Another interesting direction is adapting our penalty term to the standard GAN objective function, where it might stabilize training by encouraging the discriminator to learn smoother decision boundaries.

Acknowledgements

We would like to thank Mohamed Ishmael Belghazi, Léon Bottou, Zihang Dai, Stefan Doerr, Ian Goodfellow, Kyle Kastner, Kundan Kumar, Luke Metz, Alec Radford, Sai Rajeshwar, Aditya Ramesh, Tom Sercu, Zain Shah and Jake Zhao for insightful comments.
1. What is the contribution of the paper regarding Wasserstein GAN?
2. What are the concerns regarding the presentation and explanation of the results?
3. How does the reviewer assess the effectiveness of the proposed alternative to weight clipping?
4. What additional experiments or analyses are suggested by the reviewer to strengthen the claims made in the paper?
5. Are there any misunderstandings or errors in the authors' response to the reviewer's feedback?
Review
Review

The authors suggest a simple alternative to the weight clipping of Wasserstein GAN. Although the presentation could be greatly improved, it is a decent contribution with practical use. Please implement the following changes for reviewers to better evaluate the soundness of the claims. The result given in Section 2.3 is not sufficiently explained. Please elaborate on how this result helps explain the problem with weight clipping. Also, the claim in Line 85 is not implied by the paragraph above. It may help to formalize the paragraph statement in a Proposition and write Line 85 as a corollary. Line 99 can then refer to this corollary. The same claim is also used in Line 130. Hence it is very important to clarify this claim. It would be useful to see the effect of batch normalization on the Swiss roll data in Figure 1b. Is the result as good as gradient penalty? Please consider adding this to the figure.

AFTER AUTHOR FEEDBACK AND REBUTTAL: The authors have made the mistake of claiming (in their feedback) that the second derivative is nonzero because they do not take the second derivative of ReLU wrt a variable but first wrt input variables and then wrt weights. Unfortunately this is not exactly correct. Thanks to the careful observation of R3, in a simple 1-layer ReLU network, the second derivative of ReLU wrt its input comes up as an additive term even then. This term will be zero in almost all local neighborhoods, but more importantly, it will not be defined, simply because the second derivative of ReLU wrt its input does not exist since the first derivative is not continuous. It is not clear what will happen with more layers and I think this analysis would be a nice future direction. I suggest acceptance of the paper. But I ask the authors to implement the following change: Present your results using tanh as the nonlinearity, which lets you define first and second order derivatives without any mathematical mistake. Later, explain to the audience that the same result holds for ReLU networks in practice even though the second derivative is not defined, and try to justify why (for example, local derivatives should work unless you land on finitely many "bad" points in practice). Adding the results showing the nonzero ReLU gradient which the authors referred to in their feedback could help.
NIPS
Title
Self-Supervised Relationship Probing

Abstract
Structured representations of images that model visual relationships are beneficial for many vision and vision-language applications. However, current human-annotated visual relationship datasets suffer from the long-tailed predicate distribution problem which limits the potential of visual relationship models. In this work, we introduce a self-supervised method that implicitly learns the visual relationships without relying on any ground-truth visual relationship annotations. Our method relies on 1) intra- and inter-modality encodings to respectively model relationships within each modality separately and jointly, and 2) relationship probing, which seeks to discover the graph structure within each modality. By leveraging masked language modeling, contrastive learning, and dependency tree distances for self-supervision, our method learns better object features as well as implicit visual relationships. We verify the effectiveness of our proposed method on various vision-language tasks that benefit from improved visual relationship understanding.

1 Introduction

Visual relationships that describe object relationships in images have become more and more important for high-level computer vision (CV) tasks that need complex reasoning [1, 2, 3, 4]. They are often organized in a structured graph representation called a scene graph, where nodes represent objects and edges represent relationships between objects. In recent years, we have witnessed great progress with visual relationship datasets such as Visual Genome [5] and the application of scene graphs to various CV reasoning tasks such as image captioning [6, 7], image retrieval [8], and visual reasoning [9]. Despite this, current visual relationship models still rely on human-annotated relationship labels. Due to the combinatorics involved — two objects and one relationship between them, where objects and relationships each have different types — relationships are numerous and have a long-tailed distribution and, thus, it is difficult to collect enough annotations to sufficiently represent important but less frequently observed relationships. Consequently, current visual relationship models tend to focus on modeling only a few relationships that have a large number of human annotations [10], and they ignore relationship categories with few annotations. We have seen some research attempts that use external knowledge databases to help enrich visual relationships; however, the total number of relationships modeled is still relatively small [11]. On the other hand, in the past few years, we have seen significant progress in natural language processing (NLP) towards building contextualized language models with self-supervised pretraining objectives [12, 13]. The removal of human annotators from the training loop has enabled training on massive unlabeled datasets, leading to significant advances in NLP performance [14, 15]. These trends have also brought significant advances in vision-language (VL) pretraining tasks [16, 17, 18, 19, 20]. Most existing VL pretraining methods concatenate visual objects and the corresponding sentences as one input and adopt the Transformer [21] as the core module to learn contextualized multi-modal representations in a self-supervised manner via self- and cross-attentions. These models rely heavily
on the multi-head attention layers to explore implicit relations, or they directly rely on attention distributions to explain the relations between objects [17, 22]. However, different layers vary in their behaviors [23, 24], and it has been shown that attention alone can be deceiving when used for interpretability and explanation [25]. Thus, existing VL pretraining algorithms suffer from two problems: discovered relationships are not modeled explicitly, but are instead expected to be implicitly represented as transformer weights; and the concatenation of multimodal inputs at training time restricts the model to require multimodal inputs at prediction time, as well. Motivated by textual relation mining work in NLP [26], we propose a novel framework that discovers dependencies between objects from the model’s representation space, which addresses the problems highlighted above. Our approach is based on two simple observations: (1) when we slightly change the images, the relative visual relationships in those images remain unchanged; (2) relationships mentioned in image descriptions are visually observable in the corresponding image. Our approach relies on three modules, each consisting of a set of layers. In the first module, implicit intra-modal relationships are modeled using transformer encoders. In the second module, cross-modal learning allows for implicit relationship information to be leveraged across modalities. In the third module, relationships between visual and linguistic entities are represented explicitly as latent variables via a technique we call relationship probe. All modules are trained using self-supervision, with a first stage relying on masked language modeling to train the first two modules, and a second stage relying on contrastive learning and linguistic dependency trees as supervisory signals to train the relationship probe network. Our main contribution is a novel self-supervised relationship probing (SSRP) framework for finding dependencies in visual objects or textual entities that addresses issues with existing visual relationship models: it relies on self-supervision rather than explicit supervision, it explicitly models relationships as latent variables, and it leverages cross-modal learning but allows a single modality as input at prediction time. We conduct extensive experiments to demonstrate that our method can benefit both vision and VL understanding tasks.

2 Background

Visual relationships. It has been demonstrated that visual relationships between objects can help improve performance on many CV tasks [8, 27, 28, 29, 30, 31]. Most of these methods assume a known explicit graph structure, and limit the graph to the most frequently occurring predicate categories while ignoring others that do not have enough labeled examples. Relaxing this assumption, some works transfer the object representations learned with predicate functions to rare predicates in few-shot scene graph generation [32, 33, 34]. Other works capture the relations via attention mechanisms [35, 36, 37, 38]. However, unlike object detectors that are trained on unambiguous and objectively defined object class labels, visual relationships are subjective and it is hard to exhaustively annotate all possible relationships between objects. Thus, we do not explicitly define or label visual relationship classes, but instead we discover the implicit visual relationships via the accompanying captions. We call our method SSRP in the sense that we do not use any explicit predicate labels.

Pretraining.
Motivated by the huge success of BERT [13] in NLP, there is a growing interest in pretraining generic models to solve a variety of VL problems [39, 40, 22, 40, 18]. These methods generally employ BERT-like objectives to learn cross-modal representations from visual region features and word embeddings. They use self- and cross-attention mechanisms to learn joint representations that are appropriately contextualized in both modalities. However, most of the VL pretraining works heavily rely on massive amounts of visual-linguistic corpus [19, 17]. Moreover, although huge multi-modal training datasets enable pretraining methods to learn good representations for downstream multi-modal VL tasks, they usually do not benefit visual tasks that only deal with a single visual modality during inference. We overcome this problem with a new approach that enables the generation of implicit visual object relationships even with only visual inputs during inference, while benefiting greatly from the cross-modality learning objectives during training. We would like to point out that several works focus on investigating the representations learned by transformer-based pretraining models [41, 42]. Their findings suggest that BERT-based network pretraining learns a rich set of intermediate representations of both semantic and syntactic information, which can be used to unearth the representations of dependency grammar relations. An interesting finding in [26] shows that BERT can recover dependency parse trees that have not been encountered during training. Coenen et al. [43] further present empirical descriptions of syntactic representations in BERT. These results in NLP motivate us to exploit BERT to find visual relationships between image regions without explicitly training on relationship annotations.

3 Method

Fig. 1 gives an overview of three variants of our method: SSRPShare, SSRPVisual and SSRPCross. Each variant consists of three modules: intra-modality encoder, inter-modality encoder and relationship probe. The main difference among the three SSRP variants lies in the inter-modality encoding process. The intra-modality and inter-modality encoders are BERT-like encoders that respectively capture implicit single-modality relations and cross-modality relations among the entities (image objects and textual tokens) and output contextual representations. The relationship probe generates relationship graphs for each modality from the encoded contextual representations in a self-supervised way. In the following, we first briefly describe BERT [13] since our approach is based on the BERT architecture, and then we describe the individual modules of our SSRP frameworks as well as the learning process.

3.1 Revisiting BERT

BERT uses Masked Language Modeling (MLM), a self-supervised pretraining objective that allows a transformer encoder [21] to encode a sequence from both directions simultaneously. Specifically, for an input sequence $S = \{w_1, \ldots, w_{N_w}\}$ of $N_w$ tokens, BERT first randomly masks out 15% of the tokens and then predicts the masked tokens in the output. The masked tokens in the input sequence are represented by a special symbol [MASK] and fed into a multi-layer transformer encoder. Let $H^l = \{h_1, \ldots, h_{N_w}\}$ be the encoded features at the $l$-th transformer layer, with $H^0$ being the input layer.
The features at the $(l+1)$-th layer are obtained by applying a transformer block defined as:

$$H^{l+1} = \mathrm{LN}\Big(\mathrm{LN}\big(H^l + f^l_{\text{Self-Att}}(H^l)\big) + f^l_{\text{FF}}\big(\mathrm{LN}(H^l + f^l_{\text{Self-Att}}(H^l))\big)\Big) \quad (1)$$

where LN stands for layer normalization [44], $f^l_{\text{Self-Att}}(\cdot)$ is a multi-headed self-attention sub-layer, and $f_{\text{FF}}(\cdot)$ is a feed-forward sub-layer composed of two fully-connected (FC) layers, wrapped in residual connection [45] with an LN as specified in Eq. 1. The token representations in the final layer are used to predict the masked tokens independently.

3.2 Model architecture

Input embeddings. The input to the three SSRP pretraining models includes both visual and textual elements, where the former is defined as regions-of-interest (RoIs) in an image and the latter is defined as the tokens in a caption. Specifically, given an image $I$, we use Faster-RCNN [46] to detect RoIs $\{v_1, \ldots, v_{N_v}\}$ and take the feature vector prior to the output layer of each RoI as the visual feature embedding. For a caption $S$, we insert the special tokens [CLS] and [SEP] before and after the sentence, and use the WordPiece tokenizer [47] to split it into tokens $\{w_1, \ldots, w_{N_w}\}$. Apart from token and visual feature embeddings, we also add positional encoding to represent tokens. In particular, for token $w_i$, its input representation $\tilde{w}_i$ is the sum of its trainable token embedding, positional embedding (index in the sequence) and segment (image/text) embedding, followed by an LN layer. Each object $v_i$ is represented by its positional feature (normalized top-left and bottom-right coordinates) and its 2048-dimensional RoI feature, both of which are transformed through FC+LN layers to obtain the position-aware object-level embedding $\tilde{v}_i$.

Intra-modality encoding. The purpose of intra-modality encoding is to model the intra-relations of the encoded representations in one modality via self-attention, same as that in BERT. Specifically, we randomly mask out $\tilde{v}_i$ and $\tilde{w}_j$ with a fixed probability, and feed the masked object-level embeddings $\tilde{V} = \{\tilde{v}_1, \ldots, \tilde{v}_{\backslash i}, \ldots, \tilde{v}_{N_v}\}$ and word-level embeddings $\tilde{W} = \{\tilde{w}_1, \ldots, \tilde{w}_{\backslash j}, \ldots, \tilde{w}_{N_w}\}$ into two intra-modality encoders ($f^{V \leftrightarrow V}_{\text{Intra}}$ and $f^{S \leftrightarrow S}_{\text{Intra}}$) separately. Each layer in the intra-modality encoders contains a self-attention sub-layer and an FF sub-layer (Eq. 1).

Inter-modality encoding. The inter-modality encoder models the cross-modality relationships between image and textual entities. The three proposed SSRP pretraining models use different inter-modality encoding schemes as illustrated in Fig. 1. In SSRPShare, the inter-modality encoding is done with a single encoder $f^{VS}_{\text{Inter}}$ that is shared between the two modalities, and $f^{VS}_{\text{Inter}}$ consists of a shared self-attention sub-layer wrapped in residual connection with an LN. The shared weights connect the two modalities by causing the projections of the two input modalities to align in the query, key, and value spaces. In SSRPVisual, the textual features attend to visual features to connect the two modalities. In contrast to SSRPShare, we keep $f^{VS}_{\text{Inter}}$ for the visual branch, which contains a self-attention sub-layer and an FF sub-layer, while using $f^{S \rightarrow V}_{\text{Inter}}$ for the textual branch, which consists of a self-attention sub-layer, one unidirectional cross-attention sub-layer, and an FF sub-layer. Finally, SSRPCross uses an inter-modality bidirectional cross-attention encoder $f^{V \leftrightarrow S}_{\text{Inter}}$, where both textual and visual features attend to each other. Following [17], each layer in $f^{V \leftrightarrow S}_{\text{Inter}}$ consists of two self-attention sub-layers, one bi-directional cross-attention sub-layer, and two FF sub-layers.
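All of these encoder variants are built from the transformer block in Eq. (1). Below is a minimal PyTorch sketch of that block (our illustration, not the authors' code): the 768-dim hidden size and 12 heads follow the paper's setting, while the feed-forward width and GELU activation are our assumptions.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    # Post-LN layer following Eq. (1): A = LN(H + SelfAtt(H)); H' = LN(A + FF(A)).
    def __init__(self, dim=768, heads=12, ff_dim=3072):
        super().__init__()
        self.att = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln1 = nn.LayerNorm(dim)
        self.ln2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, ff_dim), nn.GELU(), nn.Linear(ff_dim, dim))

    def forward(self, h):                        # h: (batch, seq_len, dim)
        a = self.ln1(h + self.att(h, h, h, need_weights=False)[0])
        return self.ln2(a + self.ff(a))
```

A cross-attention sub-layer differs only in that the key and value inputs come from the other modality's features.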
Relationship probing. The purpose of the relationship probing is to model the implicit relations among visual or textual entities. Specifically, we build a latent relationship graph $G_v$ for the objects in an image and a latent relationship graph $G_w$ for the tokens in a caption, based on the unmasked contextual object representations $V = \{v_1, \ldots, v_{N_v}\}$ and token representations $W = \{w_1, \ldots, w_{N_w}\}$, which are the output feature vectors of the inter-modality encoders. Inspired by [26], we use a visual probe and a textual probe to compute the distances for each object pair $(v_i, v_j) \in G_v$ and each token pair $(w_i, w_j) \in G_w$, respectively. The distance for an object/token pair is defined as:

$$d_{B_u}(u_i, u_j)^2 = (B_u(u_i - u_j))^\top (B_u(u_i - u_j)) \quad (2)$$

where $u \in \{v, w\}$, $i$ and $j$ are the object/token indices, and $B_u$ are the parameters for the probe layer. The learning goal of a structural probe (Sec. 3.3) is to determine the edge distances between all pairs of nodes. The outputs of the visual probe and the textual probe layer are respectively the distance matrices $R_v = (d_{B_v}(v_i, v_j)^2) \in \mathbb{R}^{N_v \times N_v}$ and $R_w = (d_{B_w}(w_i, w_j)^2) \in \mathbb{R}^{N_w \times N_w}$, which capture implicit relations between visual/textual entities.
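A minimal sketch of such a probe layer implementing Eq. (2) (our code, not the authors'; the rank of $B_u$ is an assumed hyperparameter):

```python
import torch
import torch.nn as nn

class RelationshipProbe(nn.Module):
    # Squared probe distance of Eq. (2): d(u_i, u_j)^2 = ||B (u_i - u_j)||^2.
    def __init__(self, dim=768, rank=128):
        super().__init__()
        self.B = nn.Parameter(torch.randn(dim, rank) * 0.01)

    def forward(self, U):                        # U: (N, dim) contextual features
        diff = U.unsqueeze(1) - U.unsqueeze(0)   # (N, N, dim) pairwise differences
        proj = diff @ self.B                     # (N, N, rank) projected differences
        return proj.pow(2).sum(-1)               # (N, N) distance matrix R
```

Applied to $V$ and $W$, two such probes yield the matrices $R_v$ and $R_w$ above.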
3.3 Learning

We employ two learning stages in our method. In the first stage, we train the BERT encoders, including the intra-modality encoders and the inter-modality encoders, to obtain the contextual object representations $V$ and the token representations $W$. In the second stage, with these contextual representations, we freeze the BERT encoders and train the two probe layers to generate implicit relationship matrices $R_v$ and $R_w$. Fig. 2 shows a schematic diagram of our learning framework.

3.3.1 Stage 1: Training BERT encoders

Masked language modeling with RoI feature reconstruction. We train the BERT encoders with the MLM objective to predict masked RoI feature $v_i$ and masked token $w_j$ given their surroundings $I_{\backslash i}$ and $S_{\backslash j}$. We also include an L1 reconstruction smoothing loss [48] for the grounding of visual features. We minimize the following loss:

$$L_{\text{MLM}} = -\mathbb{E}_{I,S \sim D}\Big[\log p(v_i \mid I_{\backslash i}, \tilde{S}) + \log p(w_j \mid S_{\backslash j}, \tilde{I}) - \sum_i L_1\big(v_i - g(v_i \mid I_{\backslash i}, \tilde{S})\big)\Big] \quad (3)$$

where $\tilde{I}$ and $\tilde{S}$ are the image regions and input words with random masking, $g(\cdot)$ outputs the unmasked visual feature, $p(v_i \mid I_{\backslash i}, \tilde{S})$ and $p(w_j \mid S_{\backslash j}, \tilde{I})$ are respectively the predicted probabilities for the target object label and word given the masked inputs, and $I$ and $S$ are sampled from the training set $D$. Note that here we reuse the symbols $v$ and $w$ to represent both the visual features and the label/word for simplicity.

Image-text matching. An additional loss is added to perform the instance-level alignment between an image and its caption. Both positive ($y = 1$) and negative ($y = 0$) image-sentence pairs are sampled and the model learns to align with a binary cross-entropy loss:

$$L_{\text{Match}} = -\mathbb{E}_{I,S \sim D}[y \log p(f_{\text{align}}) + (1 - y) \log(1 - p(f_{\text{align}}))] \quad (4)$$

where $p(f_{\text{align}})$ is the output probability of a binary classifier and $f_{\text{align}}$ is the visual-textual alignment representation. For SSRPShare and SSRPVisual, $f_{\text{align}}$ is computed as $g_{\text{align}}([\bar{v}; w_{\text{CLS}}])$, where $\bar{v} = \sum_i v_i / N_v$ is the visual representation averaged over the contextual features of all the visual elements $V$, $w_{\text{CLS}}$ is the contextual representation of the special token [CLS], and $g_{\text{align}}(\cdot)$ is a non-linear mapping function (see supplementary for details). For SSRPCross, we define $f_{\text{align}} = g_{\text{align}}(w_{\text{CLS}})$. Essentially, we force $w_{\text{CLS}}$ to model either the aggregated textual or visual-textual information. The overall training loss for the first-stage pretraining becomes: $L_{\text{Stage1}} = L_{\text{MLM}} + L_{\text{Match}}$.

3.3.2 Stage 2: Training relationship probes

In the second stage, the relationship probe layers are learned via a probe loss $L_{\text{SProbe}}$ and a contrastive loss $L_{\text{CL-all}}$, where the former ensures that the learned textual relationship matrix $R_w$ is structurally consistent with a dependency tree and the latter ensures that the learned relationships $R_v$ and $R_w$ remain stable across different data augmentations. In particular, on the language side, we use a pre-parsed dependency tree $G_w$ for each sentence [49] to guide the textual relationship probe learning with $L_{\text{SProbe}}$ defined as:

$$L_{\text{SProbe}} = \frac{1}{N_w^2} \sum_{i,j} \big| d_{G_w}(w_i, w_j) - d_{B_w}(w_i, w_j)^2 \big| \quad (5)$$

where $d_{G_w}(w_i, w_j)$ is the distance between tokens $w_i$ and $w_j$ in the dependency tree $G_w$. For the contrastive loss, we adopt stochastic data augmentation methods to transform an original image (or sentence) into semantics-preserving data samples, and treat them as positive pairs; see Fig. 2, where $I_i \sim T_I$ and $S_i \sim T_S$ denote image and sentence augmentations, respectively.¹ For the data augmentation details, please refer to Sec. 4.1. Specifically, we sample a minibatch of $N_c$ image-caption pairs and apply two separate augmentation strategies to each modality, resulting in $2N_c$ image-caption pairs. For every positive pair, its negative pairs are not sampled explicitly; instead we take the other $2(N_c - 1)$ augmented image-caption pairs within a minibatch as negatives. We adapt the contrastive loss introduced in [50, 51] to our cross-modal scenario. The single-modality contrastive loss $L_{\text{SCL}}(i, j)$ and cross-modality contrastive loss $L_{\text{XCL}}(i, j)$ for a positive image-caption pair $\langle \{I_i, I_j\}, \{S_i, S_j\} \rangle$ are defined as:

$$L_{\text{SCL}}(i, j) = -\log \frac{e^{Z^{v,v}_{i,j}}}{\sum_{k=1}^{2N_c} \mathbb{1}_{[k \ne i]} e^{Z^{v,v}_{i,k}}} - \log \frac{e^{Z^{w,w}_{i,j}}}{\sum_{k=1}^{2N_c} \mathbb{1}_{[k \ne i]} e^{Z^{w,w}_{i,k}}} \quad (6)$$

$$L_{\text{XCL}}(i, j) = -\sum_{m \in \{i,j\}} \sum_{n \in \{i,j\}} \left( \log \frac{e^{Z^{v,w}_{m,n}}}{\sum_{k=1}^{2N_c} \mathbb{1}_{[k \ne m]} e^{Z^{v,w}_{m,k}}} + \log \frac{e^{Z^{w,v}_{m,n}}}{\sum_{k=1}^{2N_c} \mathbb{1}_{[k \ne m]} e^{Z^{w,v}_{m,k}}} \right) \quad (7)$$

where $\mathbb{1}_{[k \ne i]} \in \{0, 1\}$ is an indicator function, $Z^{x,y}_{i,j} = \big((z^x_i)^\top z^y_j / (\|z^x_i\| \|z^y_j\|)\big) / \tau$ denotes the scaled cosine similarity between $z^x_i$ and $z^y_j$, $z^v$ and $z^w$ are the nonlinear projections of the vectorized relationship matrices $R_v$ and $R_w$ projected using an MLP projection head [50], and $\tau$ is a temperature hyperparameter [52]. The final loss is computed across all positive image-caption pairs in a mini-batch: $L_{\text{CL-all}} = \frac{1}{2N_c} \sum_{i,j} [L_{\text{SCL}}(i, j) + L_{\text{SCL}}(j, i) + L_{\text{XCL}}(i, j)]$. Note that $L_{\text{XCL}}$ is invariant to the order of sample indices $(i, j)$ and thus is included just once in $L_{\text{CL-all}}$. In this stage, the overall training objective is: $L_{\text{Stage2}} = L_{\text{SProbe}} + L_{\text{CL-all}}$.

¹ Note that in the interest of coherence, we describe data augmentation with contrastive learning in Stage 2; the augmented data can be used to train BERT encoders in Stage 1.
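The single-modality terms in Eq. (6) have the NT-Xent form of [50]. A minimal sketch for one modality (our code; the MLP projection head and batching are assumed to happen upstream, and the temperature value is an assumption):

```python
import torch
import torch.nn.functional as F

def nt_xent(z_a, z_b, tau=0.1):
    # z_a, z_b: (N_c, d) projected relationship vectors for two augmented views.
    z = F.normalize(torch.cat([z_a, z_b], dim=0), dim=1)  # 2*N_c unit vectors
    sim = (z @ z.t()) / tau                               # Z_{i,k}: scaled cosine sims
    sim.fill_diagonal_(float('-inf'))                     # drop the k == i terms
    n = z_a.size(0)
    # The positive for row i is its augmented counterpart n rows away.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)                  # mean of -log softmax terms
```

The cross-modality loss in Eq. (7) follows the same pattern, with the image-side and text-side projections playing the roles of the two views.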
4 Experiments

4.1 Datasets and implementation details

Pretraining corpus. To enlarge the training data, recent VL pretraining works [17, 16, 53, 18] use combined pretraining corpora such as Conceptual Captions (CC) [54], SBU captions [55], MSCOCO [56, 57, 58], Flickr30K [59], VQA [1], GQA [2], VG [5], BooksCorpus (BC) [60], and English Wikipedia (EW), etc. In contrast, we only aggregate pretraining data from the train (113k) and validation (5k) splits of MSCOCO [58]. Specifically, with each MSCOCO image associated with five independent caption annotations, MSCOCO provides us an aligned VL dataset of 591K image-and-sentence pairs on 118K distinct images. Table 1 summarizes the corpus used by different pretraining methods.

Data augmentation. Instead of combining the existing VL datasets, we expand the pretraining corpus with data augmentation on both images and sentences, as shown in Table 2. For data augmentation on images, we employ horizontal flipping (HFlip) at the image level and a few augmentations at the RoI feature level, including HFlip, rotations (90°, 180°, and 270°) and bounding box jittering (with scale factors selected from the range of [0.8, 1.2]). We enrich the training sentences through two pretrained back-translators [61]: English→German→English (En-De-En) and English→Russian→English (En-Ru-En). Our augmentation strategies can generate significantly more training samples: 1.65M at the RoI level and 1.77M at the sentence level, while largely preserving the semantic information.
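As one concrete example, the bounding-box jitter can be implemented as below. This is our sketch under stated assumptions: the (x1, y1, x2, y2) box format and center-preserving rescaling are our choices; only the [0.8, 1.2] scale range comes from the text above.

```python
import random

def jitter_box(box, scale_range=(0.8, 1.2)):
    # box: (x1, y1, x2, y2). Rescale width and height by a random factor
    # drawn from scale_range, keeping the box center fixed.
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    s = random.uniform(*scale_range)
    half_w, half_h = (x2 - x1) * s / 2.0, (y2 - y1) * s / 2.0
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```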
Pretraining setting. We pretrain the three SSRP variants shown in Fig. 1. We set the numbers of layers of the intra-modality encoders $f^{S\leftrightarrow S}_{\text{Intra}}$ and $f^{V\leftrightarrow V}_{\text{Intra}}$ to 9 and 5, respectively, and the number of layers of the inter-modality encoders $f^{V\rightarrow S}_{\text{Inter}}$, $f^{S\rightarrow V}_{\text{Inter}}$, and $f^{V\leftrightarrow S}_{\text{Inter}}$ to 5. For each transformer block, we set the hidden size to 768 and the number of heads to 12. To keep the relationship matrices the same size, the maximum numbers of words and objects are both set to 36. Pretraining is divided into two stages. In Stage 1, we train with $\mathcal{L}_{\text{Stage1}}$. At each iteration, we randomly mask input words and RoIs with a probability of 0.15. All models are initialized with pretrained BERT weights, and the respective pretraining corpora are listed in Table 2. For cross-modality matching, we replace each sentence with a mismatched one with a probability of 0.5. We use the Adam optimizer [62] with a linear learning-rate schedule [13] and a peak learning rate of 1e−4. Training is carried out on four Tesla V100 GPUs with a batch size of 128 for 10 epochs. After Stage 1, we freeze the parameters of the intra-modality and inter-modality encoders and further train the relationship probes with $\mathcal{L}_{\text{Stage2}}$. The syntactic dependency tree for each sentence is built with [49]. All SSRP variants are trained for 30 epochs with Adam, a batch size of 512, and a learning rate of 5e−5.

Fine-tuning tasks. We fine-tune the pretrained models on multiple downstream tasks: three VL understanding tasks (NLVR2 [63], VQA [1], and GQA [2]) and a generation task (image captioning), following the standard fine-tuning settings for downstream tasks in [17, 53]. For the VL understanding tasks, we use the linearly fused probed relationships and the visual-textual alignment prediction $f_{\text{align}}$ in Eq. 4 as features. For image captioning, we adopt the Up-Down [64] framework and incorporate the refined object features learned by SSRPVisual. The captioning model is first trained with a cross-entropy loss and then with a reinforcement learning loss [65].

4.2 Experimental results & analysis

We first perform ablation experiments over several design choices of our method on NLVR2, and then report comparison results on VQA, GQA, and image captioning.

Effect of data augmentation. Table 3 shows the ablation study results. In the 'Raw' setting, we pretrain our models only on the original corpus, while in the 'Aug.' setting we augment the original corpus with the techniques listed in Table 2. It is evident that our data augmentation strategy improves the performance of all three models. Note that we employ data augmentation only during pretraining, not during fine-tuning.

Effect of attention. Comparing the three variants with different attention settings in Table 3, we observe that SSRPCross performs best, and SSRPVisual is better than SSRPShare. This confirms the benefit of cross-attention structures that enable the features of one modality to attend to the other.

Effect of relationship probing. To analyze the effectiveness of the visual and textual relationships learned via pretraining, we concatenate the visual-textual alignment representation $f_{\text{align}}$ and the relationships (Rel.) to form a relationship-aware feature vector for answer prediction (sketched below). Table 3 shows that using language relationships $R_w$ leads to better results than using visual relationships $R_v$; this is because the available dependency tree supervises the language model during training, while the visual relationships are learned in a completely self-supervised way. Combining visual and textual relationships achieves the best results. Our SSRPCross (75.71) outperforms LXMERT (74.9) and VisualBERT (67.4) on the NLVR2 dev set, demonstrating that the probed relationships are beneficial for the reasoning task.
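As one possible realization of this fusion, the sketch below vectorizes the relationship matrices, projects them, and concatenates them with $f_{\text{align}}$ before a task classifier. The module name (RelAwareHead), the layer sizes, and the answer-vocabulary size are hypothetical choices of ours, not specified in the paper.

```python
import torch
import torch.nn as nn

class RelAwareHead(nn.Module):
    """Fuse f_align with vectorized relationship matrices R_v, R_w for answer prediction."""
    def __init__(self, d_align=768, n=36, d_rel=256, n_answers=3129):
        super().__init__()
        self.proj_v = nn.Linear(n * n, d_rel)   # vectorized R_v -> d_rel
        self.proj_w = nn.Linear(n * n, d_rel)   # vectorized R_w -> d_rel
        self.classifier = nn.Sequential(
            nn.Linear(d_align + 2 * d_rel, 1024), nn.GELU(),
            nn.Linear(1024, n_answers))

    def forward(self, f_align, R_v, R_w):
        b = f_align.size(0)
        z_v = self.proj_v(R_v.reshape(b, -1))   # (b, n*n) -> (b, d_rel)
        z_w = self.proj_w(R_w.reshape(b, -1))
        return self.classifier(torch.cat([f_align, z_v, z_w], dim=-1))

# Usage sketch
head = RelAwareHead()
logits = head(torch.randn(2, 768), torch.randn(2, 36, 36), torch.randn(2, 36, 36))
```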
Results on VQA & GQA. Table 4 reports the performance of our SSRPCross on VQA and GQA. Our method outperforms ViLBERT and VisualBERT, while remaining highly competitive with the best method, which is trained on considerably larger corpora.

Results on image captioning. Unlike recent VL pretraining methods, which cannot be applied to single-modality vision tasks such as image captioning because of the cross-attention used in pretraining, our SSRPShare and SSRPVisual models have no such limitation. We therefore apply the stronger model, SSRPVisual, to image captioning using its refined object features and the learned implicit visual relationships. Table 5 shows the quantitative results, where SSRPVisual outperforms the baselines, indicating that the learned relationship-aware image representations benefit image captioning. Note that the online results of BUTD are achieved with a model ensemble, whereas we use a single model.

Results on the online MSCOCO test server:

Model | BLEU-1 | BLEU-4 | METEOR | CIDEr
BUTD [64] (c5) | 80.2 | 36.9 | 27.6 | 117.9
SSRPVisual (c5) | 81.5 | 37.5 | 28.3 | 119.8
BUTD [64] (c40) | 95.2 | 68.5 | 36.7 | 120.5
SSRPVisual (c40) | 95.3 | 68.6 | 37.2 | 122.4

Figure 3: Examples of generated relationships for differently augmented images and sentences. The bottom part shows the dependency trees resulting from SSRPCross outputs: black edges above each sentence form the gold tree provided by Stanza [49], and red edges are produced by our SSRPCross.

Figure 4: A visualization of images retrieved on the MSCOCO validation set. The 'Obj.' method averages object features and computes cosine similarities between images; the 'Obj. + Rel.' method enhances the object features according to the predicted relationships.

What do probes learn during training? To answer this, Fig. 3 visualizes the heat maps of several relationship examples generated by SSRPCross, where a darker color indicates a closer relationship. The first row shows example images and their augmented counterparts, each containing objects and their probed visual relationships, drawn as straight lines with varying color intensities. The second row presents the visual relationship distance graphs for the corresponding images. The bottom rows show the distance graphs and dependency trees for the augmented captions. Fig. 3 shows that the probed dependency trees closely resemble the gold dependency trees. In addition, the distance graphs of the original data samples and their augmented counterparts, for both sentences and images, are close to each other, validating our assumption that visual/linguistic relationships should be preserved under data augmentation. Remarkably, the learned implicit relationships between objects are stable across differently augmented images, despite the fact that no gold visual relationships are provided during training.

Are visual relationships useful for visual tasks? To further verify the benefit of implicit visual relationships for single-modality visual tasks, we perform image retrieval on MSCOCO with SSRPVisual. Fig. 4 shows the top-2 retrieval results: 'Obj. + Rel.' retrieves better visually matching images that are consistent with the object relationships in the query images. For example, in the third example, the person in the top-1 retrieved image is next to a pizza, as in the query image. This suggests that our model can capture the complex underlying visual relationships.
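To make the 'Obj.' versus 'Obj. + Rel.' settings concrete, below is a minimal sketch of the two retrieval scores. The particular way the probed relationship matrix re-weights object features here is our own illustrative assumption; the paper only states that object features are enhanced according to the predicted relationships.

```python
import torch
import torch.nn.functional as F

def image_embedding(obj_feats, rel=None):
    """obj_feats: (n_obj, d) RoI features; rel: optional (n_obj, n_obj) probed
    relationship matrix. Returns a unit-norm (d,) embedding for retrieval."""
    if rel is not None:
        w = F.softmax(rel, dim=-1)              # assumed relationship weighting
        obj_feats = obj_feats + w @ obj_feats   # 'Obj. + Rel.': relation-enhanced features
    return F.normalize(obj_feats.mean(dim=0), dim=0)

def retrieve(query_emb, gallery_embs, k=2):
    """Rank gallery images by cosine similarity (embeddings are unit-norm)."""
    sims = gallery_embs @ query_emb             # (m,)
    return sims.topk(k).indices

# Usage sketch
feats = [torch.randn(36, 2048) for _ in range(5)]
rels = [torch.randn(36, 36) for _ in range(5)]
gallery = torch.stack([image_embedding(f, r) for f, r in zip(feats, rels)])
top2 = retrieve(image_embedding(feats[0], rels[0]), gallery, k=2)
```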
5 Conclusion

We have proposed a self-supervised visual relationship probing method that implicitly learns visual relationships without training on ground-truth relationship annotations. Our method transfers the textual relationships of image descriptions to image objects and explores visual relationships by maximizing the agreement between differently augmented images via contrastive learning. Through our relationship probes, we have demonstrated that relationship structures in images and sentences can be discovered with well-designed distance and contrastive learning objectives. We believe such implicit relationships in images and language can help improve many existing vision-language tasks, especially in scenarios with limited annotations.

Broader Impact

Current representation learning models such as BERT and its variants follow a similar structure. We think it is important to discover, or probe, the implicit knowledge these models capture about language and vision. Our research on self-supervised relationship probing is a push in that direction and can be used for grounding the relationships expressed in language. In this paper, we have introduced SSRP, a self-supervised relationship probing method for visual and textual relationship extraction. Our research could be used to enrich current scene graph generation methods and to complete missing relationships between objects. The visual relationships generated by our method could be applied to a wide range of vision and vision-language applications, including image captioning, image retrieval, object detection, visual question answering, visual reasoning, and visual-textual cross-modal retrieval. Here, we discuss the broader impact on two important example applications, image retrieval and image captioning, which can benefit greatly from the implicit relationships obtained with our method.

By performing image retrieval with the implicit visual relationships discovered by our method, visual search engines can return higher-quality results that better respect the visual relationships contained in query images. This provides a smoother visual search experience and helps users find their desired images. For image captioning, the implicit visual relationships generated by our method enable richer and more accurate descriptions of the scenes in images. This can help blind or visually impaired people [66] 'see' their surrounding environments better.

In terms of technical impact, our method opens a new direction for modeling visual object relationships that is fundamentally different from current visual relation models, which rely heavily on human-annotated explicit visual relation labels. Annotating visual relationships is a highly subjective process in which different annotators are likely to annotate quite differently; relations are also very diverse, and there is no clear definition of them. Our approach bypasses these annotation challenges by discovering rich implicit relations directly from natural images and their textual descriptions in a self-supervised manner, without using any explicit relation annotations. It thus leads to richer and fairer visual relation models. Moreover, in terms of data, our method goes beyond current pretraining models that combine more and more datasets for self-supervised training: it is designed to work effectively with augmented data that can be obtained cheaply with the proposed augmentation strategies and integrated naturally into the self-supervision objectives. Overall, our method makes VL pretraining and visual relationship modeling more broadly accessible.
Summary and Contributions

The authors develop a novel architecture for vision-language tasks such as VQA, GQA, and caption generation. In the first stage, they use Faster R-CNN to encode images and a WordPiece tokenizer for the words. Both modalities use BERT to learn good representations, and a contrastive loss is used to match the two modalities. Three different encoders are compared: (i) a shared BERT encoder across both text and images; (ii) a BERT encoder that can attend to the visual part when predicting text; (iii) a cross-modal BERT encoder that can attend to both sides. The second stage is the main contribution: they learn a relationship probe by computing pairwise distances between the embeddings in both modalities and making sure the resulting matrices are consistent across different data augmentations. Conventional data augmentation is used for images, while for text they use pretrained translators for En-De-En and En-Ru-En round trips. For text, a supplementary supervised loss is used to align the relations with a pre-parsed dependency tree. The ablation study shows that the cross-modality encoder provides much better accuracy on NLVR2, and the relation probes provide a mild improvement over the cross-modal encoder. Other benchmarks show consistent improvements over baselines.

Strengths

This work addresses an interesting challenge: learning relationships between different parts of images and objects in an unsupervised fashion, to target the long tail of rare relationships. The self-supervised relationship probing is novel to my knowledge and is clever.

Weaknesses

A big weakness of the paper is readability. The resulting algorithm has many parts, two training stages, and a total of seven different loss components, which makes it really hard to follow and evaluate. It is also hard to relate to existing architectures; e.g., I believe the current architecture (besides relationship probing) is very similar to LXMERT [15], but it is not exactly clear what the differences are. Another weakness is the experimental evaluation. While the ablation study is useful, it still does not show whether the model actually learns useful relationships; the gain could simply come from the data augmentation provided by the pretrained translators or from the parser. One of the authors' contributions is achieving competitive performance without ancillary corpora by using the parser and translator; while interesting, this makes comparisons to baselines very difficult. Also, the qualitative evaluation in Figure 4 is highly unconvincing.

========= POST REBUTTAL ==========

The rebuttal is well made and addresses my concerns. Table A shows a good ablation study, and Table B makes a clever comparison of BLEU scores in their query matching. I have increased my score to 6.